Re: Block level parallel vacuum WIP
From | Andres Freund |
---|---|
Subject | Re: Block level parallel vacuum WIP |
Date | |
Msg-id | 20160823164836.naody2ht6cutioiz@alap3.anarazel.de |
In reply to | Re: Block level parallel vacuum WIP (Robert Haas <robertmhaas@gmail.com>) |
List | pgsql-hackers |
On 2016-08-23 12:17:30 -0400, Robert Haas wrote:
> On Tue, Aug 23, 2016 at 11:17 AM, Alvaro Herrera
> <alvherre@2ndquadrant.com> wrote:
> > Robert Haas wrote:
> >> 2. When you finish the heap scan, or when the array of dead tuple IDs
> >> is full (or very nearly full?), perform a cycle of index vacuuming.
> >> For now, have each worker process a separate index; extra workers just
> >> wait. Perhaps use the condition variable patch that I posted
> >> previously to make the workers wait. Then resume the parallel heap
> >> scan, if not yet done.
> >
> > At least btrees should easily be scannable in parallel, given that we
> > process them in physical order rather than logically walk the tree. So
> > if there are more workers than indexes, it's possible to put more than
> > one worker on the same index by carefully indicating each to stop at a
> > predetermined index page number.
>
> Well that's fine if we figure it out, but I wouldn't try to include it
> in the first patch. Let's make VACUUM parallel one step at a time.

Given that index scan(s) are, in my experience, way more often the
bottleneck than the heap-scan(s), I'm not sure that order is the best.
The heap-scan benefits from the VM, the index scans don't.