Re: [HACKERS] Block level parallel vacuum
From | Robert Haas
---|---
Subject | Re: [HACKERS] Block level parallel vacuum
Date |
Msg-id | CA+TgmobkRtLb5frmEF5t9U=d+iV9c5emtN+NrRS_xrHaH1Z20A@mail.gmail.com
In response to | Re: [HACKERS] Block level parallel vacuum (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>)
Responses | Re: [HACKERS] Block level parallel vacuum
List | pgsql-hackers
On Tue, Mar 19, 2019 at 3:59 AM Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> The leader doesn't continue the heap scan while index vacuuming is
> running, and the index-page scan seems to eat up CPU easily. If
> index vacuuming could run simultaneously with the next heap-scan
> phase, the index scan would finish at almost the same time as the
> next round of heap scan. That would reduce the (possible) CPU
> contention. But this requires twice as much shared memory as the
> current implementation.

I think you're approaching this from the wrong point of view. If we have a certain amount of memory available, is it better to (a) fill the entire thing with dead tuples once, or (b) fill half of it with dead tuples, start index vacuuming, and then fill the other half with dead tuples for the next index-vacuum cycle while the current one is running? I think the answer is that (a) is clearly better, because it results in half as many index vacuum cycles.

We can't really ask the user how much memory it's OK to use and then use twice as much. But if we could, what you're proposing here is probably still not the right way to use it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
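[Editor's note: to make the trade-off concrete, here is a minimal back-of-the-envelope sketch, not taken from the thread or from PostgreSQL source. The byte-per-TID figure, the dead-tuple count, and the `passes()` helper are all hypothetical; the point is only that halving the per-pass buffer doubles the number of index-vacuum passes for the same workload.]

```c
/* Illustrative only: count index-vacuum passes for a fixed memory budget.
 * Strategy (a) buffers dead tuples in the whole budget before each pass;
 * strategy (b) splits the budget in half so a pass can overlap the heap scan.
 * All numbers below are made up for the sake of the comparison.
 */
#include <stdio.h>

static long
passes(long total_dead_tuples, long budget_bytes, long bytes_per_tid)
{
    long tuples_per_pass = budget_bytes / bytes_per_tid;

    /* ceiling division: every partial buffer still costs a full pass */
    return (total_dead_tuples + tuples_per_pass - 1) / tuples_per_pass;
}

int
main(void)
{
    long budget = 64L * 1024 * 1024;    /* e.g. a 64MB memory budget        */
    long dead   = 100L * 1000 * 1000;   /* 100M dead tuples (hypothetical)  */
    long tid    = 6;                    /* assumed bytes per stored TID     */

    printf("(a) whole budget per pass: %ld index-vacuum passes\n",
           passes(dead, budget, tid));
    printf("(b) half budget per pass:  %ld index-vacuum passes\n",
           passes(dead, budget / 2, tid));
    return 0;
}
```

With these made-up numbers, (a) needs 9 passes and (b) needs 18: even if each half-sized pass overlaps the heap scan, every index is walked twice as often, which is the cost being pointed out above.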