Re: Per table autovacuum vacuum cost limit behaviour strange
From | Mark Kirkwood
Subject | Re: Per table autovacuum vacuum cost limit behaviour strange
Date |
Msg-id | 54000317.6030604@catalyst.net.nz
In response to | Re: Per table autovacuum vacuum cost limit behaviour strange (Alvaro Herrera <alvherre@2ndquadrant.com>)
Responses | Re: Per table autovacuum vacuum cost limit behaviour strange
List | pgsql-hackers
On 29/08/14 08:56, Alvaro Herrera wrote:
> Robert Haas wrote:
>
>> I agree that you might not like that. But you might not like having
>> the table vacuumed slower than the configured rate, either. My
>> impression is that the time between vacuums isn't really all that
>> negotiable for some people. I had one customer who had horrible bloat
>> issues on a table that was vacuumed every minute; when we changed the
>> configuration so that it was vacuumed every 15 seconds, those problems
>> went away.
>
> Wow, that's extreme. For that case you can set
> autovacuum_vacuum_cost_limit to 0, which disables the whole thing and
> lets vacuum run at full speed -- no throttling at all. Would that
> satisfy the concern?
>

Well, no - you might have a whole lot of big tables on which you don't want vacuum to get too aggressive, but a few small tables that are highly volatile. You want *those* vacuumed really fast, to stop them becoming huge tables containing only a few live rows, yet your system might not be able to handle *all* your tables being vacuumed at full speed.

This is a fairly common scenario for (several) web CMS systems, which tend to have session and/or cache tables that are small and extremely volatile, plus the rest of the (real) data, which is bigger and vastly less volatile. While there is a valid objection along the lines of "don't use a database instead of memcache", it does seem reasonable that Postgres should be able to cope with this type of workload.

Cheers

Mark
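For concreteness, the per-table knobs under discussion are ordinary storage parameters. A minimal sketch of the mixed setup described above (the table names and values are hypothetical, purely illustrative) could look like:

    -- Hypothetical small, highly volatile table (e.g. web sessions):
    -- disable cost-based sleeping so autovacuum runs flat out on it,
    -- and trigger vacuum after relatively few row changes.
    ALTER TABLE sessions SET (
        autovacuum_vacuum_cost_delay   = 0,    -- no throttling for this table
        autovacuum_vacuum_threshold    = 50,
        autovacuum_vacuum_scale_factor = 0.0
    );

    -- Hypothetical large, slowly changing table: keep (or tighten) the
    -- cost limit so vacuuming it does not swamp the I/O subsystem.
    ALTER TABLE big_history SET (
        autovacuum_vacuum_cost_limit = 200
    );

The point of contention is how the balancing logic divides the global cost budget among concurrently running workers when some tables carry per-table settings like these.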