Re: [HACKERS] Increase Vacuum ring buffer.

From:        Jeff Janes
Subject:     Re: [HACKERS] Increase Vacuum ring buffer.
Date:
Msg-id:      CAMkU=1wdhazYp6x60_qYSzhg=ohPaKkSWKZs6xoK9Ap7W3yWww@mail.gmail.com
In reply to: Re: [HACKERS] Increase Vacuum ring buffer. (Tom Lane <tgl@sss.pgh.pa.us>)
List:        pgsql-hackers
On Thu, Jul 20, 2017 at 12:51 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> I think that's a valid point.  There are also other concerns here -
>> e.g. whether instead of adopting the patch as proposed we ought to (a)
>> use some smaller size, or (b) keep the size as-is but reduce the
>> maximum fraction of shared_buffers that can be consumed, or (c) divide
>> the ring buffer size through by autovacuum_max_workers.  Personally,
>> of those approaches, I favor (b).  I think a 16MB ring buffer is
>> probably just fine if you've got 8GB of shared_buffers but I'm
>> skeptical about it when you've got 128MB of shared_buffers.
>
> WFM.  I agree with *not* dividing the basic ring buffer size by
> autovacuum_max_workers.  If you have allocated more AV workers, I think
> you expect AV to go faster, not for the workers to start fighting among
> themselves.
But fighting among themselves is just what they do regarding the autovacuum_vacuum_cost_limit, so I don't see why it should be one way there but different here. The reason for setting autovacuum_max_workers to N is so that small tables aren't completely starved of vacuuming even if N-1 larger tables are already being vacuumed simultaneously. Now the small tables get vacuumed at speed 1/N, which kind of sucks, but that is the mechanism we currently have.
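To put a rough number on that 1/N speed, here is a small, hypothetical C calculation (not taken from the PostgreSQL sources) assuming every worker runs with the stock cost settings of this era (vacuum_cost_limit = 200, autovacuum_vacuum_cost_delay = 20 ms, vacuum_cost_page_miss = 10); the real balancing in autovacuum.c is proportional to each worker's own settings and more involved than an even split:

```c
/*
 * Hypothetical illustration (not PostgreSQL source): how the shared
 * autovacuum cost budget throttles each worker when N workers are
 * active at once.  Assumes an even split of the budget and that every
 * page read is a cache miss.
 */
#include <stdio.h>

int
main(void)
{
	int		cost_limit = 200;		/* vacuum_cost_limit default */
	double	cost_delay_ms = 20.0;	/* autovacuum_vacuum_cost_delay default */
	int		page_miss_cost = 10;	/* vacuum_cost_page_miss default */

	for (int nworkers = 1; nworkers <= 4; nworkers++)
	{
		/* each active worker gets roughly an equal share of the budget */
		double	per_worker_limit = (double) cost_limit / nworkers;

		/* pages readable per sleep interval, extrapolated to MB/s */
		double	pages_per_interval = per_worker_limit / page_miss_cost;
		double	mb_per_sec = pages_per_interval * 8192.0
			* (1000.0 / cost_delay_ms) / (1024.0 * 1024.0);

		printf("%d worker(s): ~%.1f MB/s of page misses per worker\n",
			   nworkers, mb_per_sec);
	}
	return 0;
}
```

With one worker this works out to roughly 8 MB/s of page misses; with four workers sharing the budget, each one is throttled to about 2 MB/s.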
Of course, just because we are in a hole with vacuum_cost_limit doesn't mean we should dig ourselves deeper, but then we are being inconsistent.
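For comparison, Robert's option (b) upthread amounts to keeping the nominal ring size but clamping it harder against shared_buffers. Here is a hypothetical sketch of such a clamp; the vacuum_ring_size_buffers helper and the 1/32 fraction are made up for illustration (the existing code caps a ring at NBuffers / 8):

```c
/*
 * Hypothetical sketch of option (b): keep the nominal ring size but
 * clamp it to a smaller fraction of shared_buffers.  The helper and
 * the 1/32 fraction are illustrative, not the actual freelist.c code.
 */
#include <stdio.h>

#define BLCKSZ 8192

static int
vacuum_ring_size_buffers(int nbuffers, int nominal_kb)
{
	int		ring = nominal_kb * 1024 / BLCKSZ;	/* requested ring, in buffers */
	int		cap = nbuffers / 32;				/* option (b): a tighter cap */

	return ring < cap ? ring : cap;
}

int
main(void)
{
	/* 128MB vs 8GB of shared_buffers, 16MB nominal ring */
	int			small_buffers = 128 * 1024 * 1024 / BLCKSZ;
	long long	large_buffers = 8LL * 1024 * 1024 * 1024 / BLCKSZ;

	printf("128MB shared_buffers -> ring of %d buffers\n",
		   vacuum_ring_size_buffers(small_buffers, 16 * 1024));
	printf("8GB shared_buffers   -> ring of %d buffers\n",
		   vacuum_ring_size_buffers((int) large_buffers, 16 * 1024));
	return 0;
}
```

With those numbers the 8GB case keeps the full 16MB (2048-buffer) ring, while the 128MB case is clamped down to 4MB.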
Cheers,
Jeff