Re: We probably need autovacuum_max_wraparound_workers
From: Tom Lane
Subject: Re: We probably need autovacuum_max_wraparound_workers
Msg-id: 3185.1340863343@sss.pgh.pa.us
In reply to: Re: We probably need autovacuum_max_wraparound_workers (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: We probably need autovacuum_max_wraparound_workers
List: pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> It's just ridiculous to assert that it doesn't matter if all the
> anti-wraparound vacuums start simultaneously.  It does matter.  For
> one thing, once every single autovacuum worker is pinned down doing an
> anti-wraparound vacuum of some table, then a table that needs an
> ordinary vacuum may have to wait quite some time before a worker is
> available.

Well, that's a fair point, but I don't think it has anything to do with
Josh's complaint --- which AFAICT is about imposed load, not about
failure to vacuum things that need vacuuming.  Any scheme you care to
design will sometimes be running max_workers workers at once, and if
that's too much load there will be trouble.  I grant that there can be
value in a more complex strategy for when to schedule vacuuming
activities, but I don't think that it has a lot to do with solving the
present complaint.

> Parallelism is not free, ever, and particularly not here, where it has
> the potential to yank the disk head around between five different
> files, seeking like crazy, instead of a nice sequential I/O pattern on
> each file in turn.

Interesting point.  Maybe what's going on here is that
autovac_balance_cost() is wrong to suppose that N workers can each have
1/N of the I/O bandwidth that we'd consider okay for a single worker to
eat.  Maybe extra seek costs mean we have to derate that curve by some
large factor.  1/(N^2), perhaps?  I bet the nature of the disk subsystem
affects this a lot, though.

			regards, tom lane
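[Editor's note: the derating idea above can be sketched numerically. This is a hypothetical illustration, not PostgreSQL's actual autovac_balance_cost() code; the function name and parameters here are invented for the example. It contrasts the linear 1/N split of the cost limit with the steeper 1/(N^2) curve Tom speculates about.]

```python
def per_worker_cost_limit(base_limit: int, n_workers: int,
                          derate_quadratically: bool = False) -> int:
    """Share a global vacuum cost limit among n_workers.

    The linear split (base_limit / N) assumes N concurrent workers can
    each use 1/N of the I/O bandwidth a single worker could.  The
    conjecture in the mail above is that seeking between tables makes
    that optimistic, so a steeper curve such as base_limit / N^2 might
    be closer to reality on some disk subsystems.
    """
    if n_workers < 1:
        raise ValueError("need at least one worker")
    divisor = n_workers ** 2 if derate_quadratically else n_workers
    # Never let a worker's budget round down to zero.
    return max(1, base_limit // divisor)


# With a base limit of 200 (the default vacuum_cost_limit) and 5 workers:
# linear split gives each worker 40; quadratic derating gives each only 8.
print(per_worker_cost_limit(200, 5))        # 40
print(per_worker_cost_limit(200, 5, True))  # 8
```

Whether 1/(N^2) is the right exponent would, as the mail notes, depend heavily on the storage hardware; on SSDs the seek penalty largely disappears and the linear split may be fine.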