Re: default_statistics_target
From | Albe Laurenz |
---|---|
Subject | Re: default_statistics_target |
Date | |
Msg-id | D960CB61B694CF459DCFB4B0128514C2039381F9@exadv11.host.magwien.gv.at |
In reply to | default_statistics_target ("Carlo Stonebanks" <stonec.register@sympatico.ca>) |
List | pgsql-performance |
Carlo Stonebanks wrote:
> The whole topic of messing with stats makes my head spin, but I am
> concerned about some horridly performing queries that have had bad row
> estimates and others which always choose seq scans when indexes are
> available. Reading up on how to improve planner estimates, I have seen
> references to default_statistics_target being changed from the default
> of 10 to 100.
>
> Our DB is large, with thousands of tables, but the core schema has
> about 100 tables and the typical row counts are in the millions of
> rows for the whole table. We have been playing endless games with
> tuning this server, but with all of the suggestions, I don't think the
> issue of changing default_statistics_target has ever come up.
> Realizing that there is a performance hit associated with ANALYZE, are
> there any other downsides to increasing this value to 100, and is this
> a common setting for large DBs?

From PostgreSQL 8.3 to 8.4, the default value for default_statistics_target was changed from 10 to 100. I would take that as a very strong indication that 100 is perceived to be a reasonable value by many knowledgeable people.

High values of that parameter are advisable if good performance of nontrivial queries is the most important thing in your database (like in a data warehouse) and the cost of ANALYZE is only secondary.

Yours,
Laurenz Albe
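[As a sketch of how the advice above can be applied: the target can be raised globally in postgresql.conf, or raised only for individual columns whose estimates are bad; either way the new, larger sample is gathered only when ANALYZE next runs. The table and column names below are hypothetical, used purely for illustration:]

```sql
-- In postgresql.conf (applies to columns without a per-column setting):
--   default_statistics_target = 100

-- Raise the target only for a column with bad row estimates;
-- "big_table" and "status" are hypothetical names for illustration:
ALTER TABLE big_table ALTER COLUMN status SET STATISTICS 500;

-- The larger statistics sample is collected only when ANALYZE runs again:
ANALYZE big_table;
```

[A per-column override like this limits the extra ANALYZE and planning cost to the columns that actually need better estimates.]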
In the pgsql-performance list, by date: