Re: Should we increase the default vacuum_cost_limit?
From: Andrew Dunstan
Subject: Re: Should we increase the default vacuum_cost_limit?
Msg-id: 13d143dd-0a44-0c16-218c-bc924b5922f0@2ndQuadrant.com
In reply to: Re: Should we increase the default vacuum_cost_limit? (David Rowley <david.rowley@2ndquadrant.com>)
Responses: Re: Should we increase the default vacuum_cost_limit?
List: pgsql-hackers
On 3/8/19 6:47 PM, David Rowley wrote:
> On Sat, 9 Mar 2019 at 07:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Jeff Janes <jeff.janes@gmail.com> writes:
>>> Now that this is done, the default value is only 5x below the hard-coded
>>> maximum of 10,000.
>>> This seems a bit odd, and not very future-proof. Especially since the
>>> hard-coded maximum appears to have no logic to it anyway, at least none
>>> that is documented. Is it just mindless nannyism?
>>
>> Hm. I think the idea was that rather than setting it to "something very
>> large", you'd want to just disable the feature via vacuum_cost_delay.
>> But I agree that the threshold for what is ridiculously large probably
>> ought to be well more than 5x the default, and maybe it is just mindless
>> nannyism to have a limit less than what the implementation can handle.
>
> Yeah, +1 to increasing it. I imagine that the 10,000 limit would not
> allow people to explore the upper limits of a modern PCI-E SSD with
> the standard delay time and dirty/miss scores. Also, it doesn't seem
> entirely unreasonable that someone somewhere might also want to
> fine-tune the hit/miss/dirty scores so that they're some larger factor
> apart from each other than the standard scores are. The 10,000 limit
> does not allow much wiggle room for that.

Increase it to what?

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
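For reference, David's point about SSD headroom can be made concrete with the then-current defaults (vacuum_cost_page_miss = 10, vacuum_cost_delay = 20ms): a budget of 10,000 allows at most 10,000 / 10 = 1,000 page misses per 20ms nap, i.e. 50,000 reads/s, or roughly 400 MB/s with 8kB pages, a fraction of what a modern PCI-E SSD can deliver. A minimal sketch of the two knobs under discussion, with illustrative values not taken from the thread:

    -- Raise the limit to the current hard-coded ceiling of 10,000 ...
    ALTER SYSTEM SET vacuum_cost_limit = 10000;
    -- ... or sidestep the ceiling entirely via the escape hatch Tom
    -- mentions: a delay of 0 disables cost-based vacuum throttling.
    ALTER SYSTEM SET vacuum_cost_delay = 0;
    SELECT pg_reload_conf();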