Re: proposal - log_full_scan
From | Justin Pryzby
---|---
Subject | Re: proposal - log_full_scan
Date |
Msg-id | 20210417150931.GN3315@telsasoft.com
In reply to | proposal - log_full_scan (Pavel Stehule <pavel.stehule@gmail.com>)
Responses | Re: proposal - log_full_scan
List | pgsql-hackers
On Sat, Apr 17, 2021 at 04:36:52PM +0200, Pavel Stehule wrote:
> today I worked on a postgres server used for a critical service. Because the
> application is very specific, we had to do the final tuning on the production
> server. I fixed a lot of queries, but I am not able to detect fast queries
> that do a full scan of middle-sized tables - up to 1M rows. Surely I wouldn't
> log all queries. Right now these queries run at a frequency of about 10 per
> second.
>
> It would be nice to have the possibility to log queries that do a full scan
> and read more tuples than a specified limit, or that do a full scan of
> specified tables.
>
> What do you think about the proposed feature?

Are you able to use auto_explain with auto_explain.log_min_duration ?

Then you can search for query logs with
message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'

Or can you use pg_stat_all_tables.seq_scan ?

But it seems to me that filtering on the duration would be both a more
important criterion and a more general one than "seq scan with number of rows".

| (split_part(message, ' ', 2)::float/1000 AS duration ..) WHERE duration>2222;

--
Justin
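PS: To make that concrete, here's a minimal sketch. It assumes auto_explain is
loaded and that the server log has been imported into a table I'm calling
logs(message text); the table name and the thresholds are only illustrative.

-- postgresql.conf (shared_preload_libraries requires a restart):
--   shared_preload_libraries = 'auto_explain'
--   auto_explain.log_min_duration = '10ms'
--   auto_explain.log_analyze = on

-- Find logged plans whose Seq Scan node returned 100k+ rows (6+ digits),
-- parsing the duration in ms that auto_explain puts at the start of the message.
SELECT split_part(message, ' ', 2)::float/1000 AS duration_s,
       message
FROM logs
WHERE message ~ 'Seq Scan .* \(actual time=[.0-9]* rows=[0-9]{6,} loops=[0-9]*\)'
  AND split_part(message, ' ', 2)::float/1000 > 2
ORDER BY 1 DESC;

-- Or skip log parsing entirely and look at the cumulative statistics views:
SELECT relname, seq_scan, seq_tup_read
FROM pg_stat_all_tables
ORDER BY seq_tup_read DESC
LIMIT 20;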