Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline
From | Andres Freund |
---|---|
Subject | Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline |
Date | |
Msg-id | 20130924104811.GA11964@awork2.anarazel.de |
In reply to | Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: Reasoning behind LWLOCK_PADDED_SIZE/increase it to a full cacheline |
List | pgsql-hackers |
On 2013-09-24 12:39:39 +0200, Tom Lane wrote:
> Andres Freund <andres@2ndquadrant.com> writes:
> > So, what we do is we guarantee that LWLocks are aligned to 16 or 32byte
> > boundaries. That means that on x86-64 (64byte cachelines, 24bytes
> > unpadded lwlock) two lwlocks share a cacheline.
> > In my benchmarks changing the padding to 64byte increases performance in
> > workloads with contended lwlocks considerably.
>
> At a huge cost in RAM. Remember we make two LWLocks per shared buffer.
> I think that rather than using a blunt instrument like that, we ought to
> see if we can identify pairs of hot LWLocks and make sure they're not
> adjacent.

That's a good point. What about making all but the shared buffer lwlocks
64bytes? It seems hard to analyze the interactions between all the locks
and keep it maintained.

Greetings,

Andres Freund

--
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
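[Editor's note: for readers not familiar with the mechanism under discussion, PostgreSQL pads LWLocks to LWLOCK_PADDED_SIZE via a union so that every lock starts on a fixed boundary. The sketch below is illustrative only: the LWLock fields are stand-ins chosen to match the 24-byte unpadded size mentioned in the thread, not a copy of any particular PostgreSQL version, and the 64-byte variant is the hypothetical change Andres is benchmarking. It shows why a 24-byte lock padded to 32 bytes puts two locks in one 64-byte cacheline, and how padding to 64 bytes gives each lock its own line.]

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for an unpadded LWLock (~24 bytes on x86-64);
 * the real definition lives in src/include/storage/lwlock.h. */
typedef struct LWLock
{
    char        exclusive;      /* # of exclusive holders */
    int         shared;         /* # of shared holders */
    void       *head;           /* head of wait queue */
    void       *tail;           /* tail of wait queue */
} LWLock;

/* Scheme discussed in the thread: pad to 16 or 32 bytes, so two
 * 32-byte slots end up sharing one 64-byte x86-64 cacheline. */
#define LWLOCK_PADDED_SIZE  (sizeof(LWLock) <= 16 ? 16 : 32)

/* Hypothetical alternative from this mail: pad to a full cacheline
 * so two contended locks can never share a line (false sharing). */
#define LWLOCK_CACHELINE_PADDED_SIZE  64

typedef union LWLockPadded
{
    LWLock      lock;
    char        pad[LWLOCK_PADDED_SIZE];
} LWLockPadded;

typedef union LWLockCachelinePadded
{
    LWLock      lock;
    char        pad[LWLOCK_CACHELINE_PADDED_SIZE];
} LWLockCachelinePadded;

int
main(void)
{
    /* With 32-byte padding, locks [0] and [1] of an array share a
     * cacheline: a write to one evicts the other from remote caches. */
    printf("32-byte padded: %zu locks per cacheline\n",
           (size_t) 64 / sizeof(LWLockPadded));

    /* With 64-byte padding each lock owns a full cacheline, at the
     * cost of doubling the memory used by the lock array. */
    printf("64-byte padded: %zu locks per cacheline\n",
           (size_t) 64 / sizeof(LWLockCachelinePadded));
    return 0;
}
```

[Editor's note: the memory cost Tom points out comes from the two LWLocks allocated per shared buffer; doubling the padding from 32 to 64 bytes doubles that portion of shared memory, which is why Andres's follow-up proposes widening only the non-buffer locks.]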