Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock
From: Andrey M. Borodin
Subject: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock
Date:
Msg-id: 532DE4E9-F02A-4984-A0D0-A6CBA82B60A8@yandex-team.ru
In reply to: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock
List: pgsql-hackers
> On 18 Dec 2023, at 22:30, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Mon, Dec 18, 2023 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:
>> certain sense they are competing for the same job. However, they do
>> aim to alleviate different TYPES of contention: the group XID update
>> stuff should be most valuable when lots of processes are trying to
>> update the same page, and the banks should be most valuable when there
>> is simultaneous access to a bunch of different pages. So I'm not
>> convinced that this patch is a reason to remove the group XID update
>> mechanism, but someone might argue otherwise.
>
> Hmm, but, on the other hand:
>
> Currently all readers and writers are competing for the same LWLock.
> But with this change, the readers will (mostly) no longer be competing
> with the writers. So, in theory, that might reduce lock contention
> enough to make the group update mechanism pointless.

One page still accommodates 32K transaction statuses under one lock. That feels like a lot: roughly one second of transactions on a typical installation (see the arithmetic sketch below).

When the group commit was committed, did we have a benchmark to estimate the efficiency of this technology? Can we repeat that test again?

Best regards, Andrey Borodin.
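The 32K figure can be cross-checked with a minimal sketch, assuming the default 8 kB block size and 2 status bits per transaction as in PostgreSQL's access/transam/clog.c; the 30K TPS rate is purely an illustrative assumption, not something measured in this thread:

```c
/*
 * Back-of-the-envelope check of the "32K statuses per page" figure.
 * Assumes the stock 8 kB BLCKSZ and 2 status bits per transaction,
 * mirroring the constants in PostgreSQL's access/transam/clog.c.
 */
#include <stdio.h>

#define BLCKSZ              8192                            /* default page size */
#define CLOG_BITS_PER_XACT  2                               /* 4 possible statuses */
#define CLOG_XACTS_PER_BYTE (8 / CLOG_BITS_PER_XACT)        /* 4 statuses per byte */
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)  /* 32768 statuses per page */

int
main(void)
{
	/* hypothetical throughput, used only to put the figure in perspective */
	double tps = 30000.0;

	printf("xact statuses per CLOG page: %d\n", CLOG_XACTS_PER_PAGE);
	printf("seconds of transactions per page at %.0f TPS: %.2f\n",
		   tps, CLOG_XACTS_PER_PAGE / tps);
	return 0;
}
```

At 30K TPS this prints about 1.09 seconds of transactions per page, which is the "about 1 second" estimate above.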