Re: Update on high concurrency OLTP application and Postgres
From | Jim Nasby
---|---
Subject | Re: Update on high concurrency OLTP application and Postgres
Date |
Msg-id | CAA8A155-C735-405D-8155-7FC79010E186@nasby.net
In reply to | Re: Update on high concurrency OLTP application and Postgres (Cosimo Streppone <cosimo@streppone.it>)
List | pgsql-performance
Have you ever done any testing to see if just setting default_statistics_target to 500 has a negative impact on the system?

On Sep 22, 2006, at 4:48 PM, Cosimo Streppone wrote:

> Christian Storm wrote:
>
>>> At the moment, my rule of thumb is to check out the ANALYZE VERBOSE
>>> messages to see if all table pages are being scanned.
>>>
>>>   INFO: "mytable": scanned xxx of yyy pages, containing ...
>>>
>>> If xxx = yyy, then I keep statistics at the current level.
>>> When xxx is way less than yyy, I increase the numbers a bit
>>> and retry.
>>>
>>> It's probably primitive, but it seems to work well.
>
>> What heuristic do you use to up the statistics for such a table?
>
> No heuristics, just try and see.
> For tables of ~ 10k pages, I set statistics to 100/200.
> For ~ 100k pages, I set them to 500 or more.
> I don't know the exact relation.
>
>> Once you've changed it, what metric do you use to
>> see if it helps or was effective?
>
> I rerun an analyze and see the results... :-)
> If you mean checking the usefulness, I can see it only
> under heavy load, if particular db queries run in the order
> of a few milliseconds.
>
> If I see normal queries that take longer and longer, or
> they even appear in the server's log (> 500 ms), then
> I know an analyze is needed, or statistics should be set higher.

--
Jim Nasby                                            jim@nasby.net
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
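[Editor's note: the tuning steps discussed in this thread, raising the statistics target per column rather than globally, then re-running ANALYZE VERBOSE to check page coverage, can be sketched as the following PostgreSQL commands. The table and column names (`mytable`, `mycol`) are placeholders, not from the thread; a live server is needed to run this.]

```sql
-- Global knob Jim asks about (per-session here; set it in
-- postgresql.conf to apply server-wide):
SET default_statistics_target = 500;

-- Cosimo's per-table approach: raise the target only for a
-- column on a large (~100k-page) table, then re-analyze.
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 500;

-- Re-run and check the INFO line: if "scanned xxx of yyy pages"
-- shows xxx well below yyy, increase the target and retry.
ANALYZE VERBOSE mytable;
```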