Re: Separating Buffer LWlocks
From: Andres Freund
Subject: Re: Separating Buffer LWlocks
Date:
Msg-id: 20150907175909.GD5084@alap3.anarazel.de
In reply to: Re: Separating Buffer LWlocks (Andres Freund <andres@anarazel.de>)
Responses: Re: Separating Buffer LWlocks
List: pgsql-hackers
On 2015-09-06 15:28:40 +0200, Andres Freund wrote:
> Hm. I found that the buffer content lwlocks can actually also be a
> significant source of contention - I'm not sure reducing padding for
> those is going to be particularly nice. I think we should rather move
> the *content* lock inline into the buffer descriptor. The io lock
> doesn't matter and can be as small as possible.

POC patch along those lines attached. This way the content locks have
full 64-byte alignment *without* any additional memory usage, because
buffer descriptors are already padded to 64 bytes. I had to reorder the
BufferDesc contents a bit and reduce the width of usagecount to 8 bits
(which is fine, given that 5 is our highest value) to make enough room.

I've experimented with reducing the padding of the IO locks to nothing,
since they're not that often contended at the CPU level. But even on my
laptop that led to a noticeable regression for a read-only pgbench
workload where the dataset fit into the OS page cache but not into s_b
(shared_buffers).

> Additionally I think we should increase the lwlock padding to 64byte
> (i.e. the by far most common cacheline size). In the past I've seen
> that to be rather beneficial.

You'd already done that...

Benchmarking this on my 4-core/8-thread laptop I see a very slight
performance increase - which is about what we'd expect, since this
really only should affect multi-socket machines.

Greetings,

Andres Freund
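[Editor's note: to visualize the layout change the mail describes, here is a
rough C sketch of the idea - a content LWLock embedded directly in the
(already cacheline-padded) buffer descriptor, with the usage count narrowed
to 8 bits to make room. All struct and field names below are illustrative
assumptions, not the actual PostgreSQL definitions or the attached patch.]

```c
/*
 * Illustrative sketch only -- NOT the real PostgreSQL definitions or
 * the contents of the attached POC patch.  It shows the technique
 * described above: embed the buffer *content* LWLock in the buffer
 * descriptor (already padded to a 64-byte cacheline) and shrink the
 * usage count to 8 bits so everything still fits.
 */
#include <stdint.h>

#define CACHELINE_SIZE 64

/* Stand-in for PostgreSQL's LWLock; real bookkeeping elided. */
typedef struct LWLockSketch
{
    uint32_t state;             /* lock state word */
} LWLockSketch;

/* Stand-in for BufferTag (relation, fork, block number). */
typedef struct BufferTagSketch
{
    uint32_t rel[4];
    uint32_t block_num;
} BufferTagSketch;

typedef struct BufferDescSketch
{
    BufferTagSketch tag;        /* which page this buffer holds */
    int32_t  buf_id;            /* buffer's index number */
    uint16_t flags;             /* dirty, valid, io-in-progress, ... */
    uint8_t  usage_count;       /* 8 bits suffice: the max value is 5 */
    int32_t  refcount;          /* number of pins */
    int32_t  wait_backend_pid;  /* backend waiting to pin this buffer */

    /*
     * The content lock lives inline, so it inherits the descriptor's
     * cacheline alignment at no extra memory cost.  The rarely
     * CPU-contended IO lock can live in a separate, unpadded array.
     */
    LWLockSketch content_lock;
    int32_t      io_lock_id;    /* index into that separate array */
} BufferDescSketch;

/* Descriptors are padded to a full cacheline, as in the existing code. */
typedef union BufferDescPaddedSketch
{
    BufferDescSketch desc;
    char             pad[CACHELINE_SIZE];
} BufferDescPaddedSketch;

/* Compile-time check that the reordered descriptor still fits. */
_Static_assert(sizeof(BufferDescSketch) <= CACHELINE_SIZE,
               "buffer descriptor must fit in one cacheline");
```

[Whether padding is done with a union, as sketched here, or with compiler
alignment attributes is an implementation detail; the point of the approach
is that the content lock gets full 64-byte alignment for free, while the IO
lock, which rarely sees CPU-level contention, stays as small as possible.]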
Attachments