Re: detecting poor query plans
From | Tom Lane
---|---
Subject | Re: detecting poor query plans
Date |
Msg-id | 18130.1069884876@sss.pgh.pa.us
In reply to | Re: detecting poor query plans (Greg Stark <gsstark@mit.edu>)
Responses | Re: detecting poor query plans
List | pgsql-hackers
Greg Stark <gsstark@mit.edu> writes:
> That's a valid point. The ms/cost factor may not be constant over time.
> However I think in the normal case this number will tend towards a fairly
> consistent value across queries and over time. It will be influenced somewhat
> by things like cache contention with other applications though.

I think it would be interesting to collect the numbers over a long period
of time and try to learn something from the averages. The real hole in
Neil's original suggestion was that it assumed that comparisons based on
just a single query would be meaningful enough to pester the user about.

> On further thought the real problem is that these numbers are only available
> when running with "explain" on. As shown recently on one of the lists, the
> cost of the repeated gettimeofday calls can be substantial. It's not really
> feasible to suggest running all queries with that profiling.

Yeah. You could imagine a simplified-stats mode that only collects the
total runtime (two gettimeofday's per query is nothing) and the row counts
(shouldn't be impossibly expensive either, especially if we merged the
needed fields into PlanState instead of requiring a separately allocated
node). Not sure if that's as useful though.

			regards, tom lane
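The idea Tom sketches above, averaging the observed ms-per-cost-unit ratio over many queries and only flagging a query when its own ratio strays far from that long-run average, could look roughly like the following. This is a hypothetical illustration, not PostgreSQL code: the names (`QueryStats`, `record_query`, `run_timed`) and the 10x deviation threshold are all invented for the sketch, and Python stands in for the backend's C.

```python
# Hypothetical sketch of the "learn from the averages" idea: keep a running
# average of the observed ms/cost ratio across queries and flag a query only
# when its own ratio deviates strongly from that average. All names and the
# 10x threshold are invented for illustration; this is not PostgreSQL code.
import time


class QueryStats:
    def __init__(self):
        self.total_ratio = 0.0   # sum of observed ms/cost ratios
        self.nqueries = 0        # number of queries observed

    def record_query(self, runtime_ms, est_cost):
        """Record one query; return True if its ms/cost ratio looks suspicious."""
        ratio = runtime_ms / est_cost
        suspicious = False
        if self.nqueries > 0:
            avg = self.total_ratio / self.nqueries
            # Arbitrary threshold: 10x off the running average, either way.
            suspicious = ratio > 10 * avg or ratio < avg / 10
        self.total_ratio += ratio
        self.nqueries += 1
        return suspicious


def run_timed(fn):
    """Two clock reads per query -- the cheap measurement Tom describes
    (the Python analogue of a pair of gettimeofday() calls)."""
    start = time.monotonic()
    fn()
    return (time.monotonic() - start) * 1000.0  # elapsed milliseconds


stats = QueryStats()
# Simulated (runtime_ms, estimated_cost) pairs; the last one models a plan
# whose actual runtime is wildly out of line with its estimated cost.
for i, (ms, cost) in enumerate([(12, 120), (50, 480), (8, 85), (900, 100)]):
    if stats.record_query(ms, cost):
        print(f"query {i}: suspicious plan (ratio {ms / cost:.2f})")
# → query 3: suspicious plan (ratio 9.00)
```

Because only two timestamps and one division are needed per query, this avoids the per-plan-node gettimeofday overhead that makes full EXPLAIN ANALYZE profiling too expensive to leave on all the time.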