Re: Update on high concurrency OLTP application and Postgres
From: Cosimo Streppone
Subject: Re: Update on high concurrency OLTP application and Postgres
Date:
Msg-id: 45144C10.2070501@streppone.it
In reply to: Update on high concurrency OLTP application and Postgres 8 tuning (Cosimo Streppone <cosimo@streppone.it>)
Replies: Re: Update on high concurrency OLTP application and Postgres
List: pgsql-performance
Christian Storm wrote:

>> At the moment, my rule of thumb is to check out the ANALYZE VERBOSE
>> messages to see if all table pages are being scanned.
>>
>>   INFO: "mytable": scanned xxx of yyy pages, containing ...
>>
>> If xxx = yyy, then I keep statistics at the current level.
>> When xxx is way less than yyy, I increase the numbers a bit
>> and retry.
>>
>> It's probably primitive, but it seems to work well.
>
> What heuristic do you use to up the statistics for such a table?

No heuristics, just try and see.
For tables of ~ 10k pages, I set statistics to 100/200.
For ~ 100k pages, I set them to 500 or more.
I don't know the exact relation.

> Once you've changed it, what metric do you use to
> see if it helps or was effective?

I rerun an analyze and see the results... :-)
If you mean checking the usefulness, I can see it only under heavy
load, if particular db queries run in the order of a few milliseconds.

If I see normal queries that take longer and longer, or they even
appear in the server's log (> 500 ms), then I know an analyze is
needed, or statistics should be set higher.

--
Cosimo
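For readers following along, the workflow above can be sketched in SQL against a Postgres database (table and column names here are hypothetical, not from the original thread):

```sql
-- Check how many pages ANALYZE actually sampled; look for the
--   INFO: "mytable": scanned xxx of yyy pages, containing ...
-- line in the output.
ANALYZE VERBOSE mytable;

-- If xxx is well below yyy, raise the per-column statistics target
-- (default is governed by default_statistics_target) and re-analyze.
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 200;
ANALYZE VERBOSE mytable;
```

Slow queries exceeding a threshold can be surfaced in the server log by setting `log_min_duration_statement = 500` (milliseconds) in postgresql.conf, which matches the "> 500 ms" check mentioned above.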