Re: autovacuum scheduling starvation and frenzy
From | Alvaro Herrera
---|---
Subject | Re: autovacuum scheduling starvation and frenzy
Date |
Msg-id | 20140930215915.GQ5311@eldon.alvh.no-ip.org
In reply to | Re: autovacuum scheduling starvation and frenzy (Jeff Janes <jeff.janes@gmail.com>)
Responses | Re: autovacuum scheduling starvation and frenzy, Re: autovacuum scheduling starvation and frenzy
List | pgsql-hackers
Jeff Janes wrote:

> > I think that instead of trying to get a single target database in that
> > foreach loop, we could try to build a prioritized list
> > (in-wraparound-danger first, then in-multixid-wraparound danger, then
> > the one with the oldest autovac time of all the ones that remain); then
> > recheck the wrap-around condition by seeing whether there are other
> > workers in that database that started after the wraparound condition
> > appeared.
>
> I think we would want to check for one worker that is still running, and
> at least one other worker that started and completed since the wraparound
> threshold was exceeded.  If there are multiple tables in the database
> that need full scanning, it would make sense to have multiple workers.
> But if a worker already started and finished without increasing the
> frozenxid, another attempt probably won't accomplish much either.  But I
> have no idea how to do that bookkeeping, or how much of an improvement it
> would be over something simpler.

How about something like this:

* if autovacuum is disabled, then don't check these conditions; the only
  reason we're in do_start_worker() in that case is that somebody
  signalled postmaster that some database needs a for-wraparound emergency
  vacuum.

* if autovacuum is on, and the database was processed less than
  autovac_naptime/2 ago, and there are no workers running in that database
  now, then ignore the database.  Otherwise, consider it for
  xid-wraparound vacuuming.

So if we launched a worker recently, but it already finished, we would
start another one.  (If the worker finished, the database should not be in
need of a for-wraparound vacuum again, so this seems sensible.)  Also, we
give priority to a database in danger sooner than the full autovac_naptime
period; not immediately after the previous worker started, which should
give room for other databases to be processed.

The attached patch implements that.  I only tested it on HEAD, but AFAICS
it applies cleanly to 9.4 and 9.3; fairly sure it won't apply to 9.2.
Given the lack of complaints, I'm unsure about backpatching further back
than 9.3 anyway.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
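A minimal, self-contained C sketch of the priority order described in the
quoted proposal, under invented stand-in types (DbCandidate and its fields
are assumptions for illustration, not the real avw_dbase structure from
autovacuum.c, and this is not the attached patch):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /*
     * Stand-in for the per-database entry that do_start_worker() loops
     * over.  The struct and field names are invented for this example.
     */
    typedef struct DbCandidate
    {
        const char *name;
        bool        xid_wrap_danger;   /* past the xid wraparound limit? */
        bool        mxid_wrap_danger;  /* past the multixact limit? */
        time_t      last_autovac;      /* last time a worker visited it */
    } DbCandidate;

    /*
     * qsort comparator for the priority order in the quoted proposal:
     * xid-wraparound danger first, then multixid-wraparound danger, then
     * the database with the oldest autovacuum time.
     */
    static int
    db_priority_cmp(const void *a, const void *b)
    {
        const DbCandidate *da = (const DbCandidate *) a;
        const DbCandidate *db = (const DbCandidate *) b;

        if (da->xid_wrap_danger != db->xid_wrap_danger)
            return da->xid_wrap_danger ? -1 : 1;
        if (da->mxid_wrap_danger != db->mxid_wrap_danger)
            return da->mxid_wrap_danger ? -1 : 1;
        if (da->last_autovac != db->last_autovac)
            return (da->last_autovac < db->last_autovac) ? -1 : 1;
        return 0;
    }

    int
    main(void)
    {
        DbCandidate dbs[] = {
            {"app",     false, false, 1000},
            {"reports", false, true,  2000},
            {"archive", true,  false, 3000},
            {"scratch", false, false,  500},
        };
        size_t      ndbs = sizeof(dbs) / sizeof(dbs[0]);

        qsort(dbs, ndbs, sizeof(DbCandidate), db_priority_cmp);

        /* Expected order: archive, reports, scratch, app */
        for (size_t i = 0; i < ndbs; i++)
            printf("%s\n", dbs[i].name);

        return 0;
    }

The recheck against running workers and the autovac_naptime/2 window is
not shown here, since it depends on shared autovacuum worker state that
the actual patch consults.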
Attachments