Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
From: Alvaro Herrera
Subject: Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
Date:
Msg-id: 20160224165403.GA413518@alvherre.pgsql
In reply to: Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size? (Joe Conway <mail@joeconway.com>)
Responses: Re: Allow to specify (auto-)vacuum cost limits relative to the database/cluster size?
List: pgsql-hackers
Joe Conway wrote:
> In my experience it is almost always best to run autovacuum very often
> and very aggressively. That generally means tuning scaling factor and
> thresholds as well, such that there are never more than say 50-100k dead
> rows. Then running vacuum with no delays or limits runs quite fast and is
> generally not noticeable/impactful.
>
> However that strategy does not work well for vacuums which run long,
> such as an anti-wraparound vacuum. So in my opinion we need to think
> about this as at least two distinct cases requiring different solutions.

With the freeze map there is no need for anti-wraparound vacuums to be
terribly costly, since they don't need to scan the whole table each time.
That patch probably changes things a lot in this area.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
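As an illustration of the kind of per-table tuning Joe describes, the settings below are one plausible sketch; the table name and the specific numbers are assumptions for the example, not values taken from this thread:

    -- Hypothetical reloptions for a heavily updated table, chosen so that
    -- autovacuum fires well before ~50-100k dead rows accumulate and then
    -- runs without cost-based throttling on this table.
    ALTER TABLE some_busy_table SET (
        autovacuum_vacuum_scale_factor = 0.01,  -- trigger at ~1% dead rows (default 0.2)
        autovacuum_vacuum_threshold    = 1000,  -- plus a small fixed floor
        autovacuum_vacuum_cost_delay   = 0      -- "no delays or limits" for this table
    );

With the scale factor lowered this far, each autovacuum run has little work to do, which is what makes running it undelayed tolerable; the long-running anti-wraparound case in the second quoted paragraph is what such per-table settings do not address.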