Re: WIP: "More fair" LWLocks
From: Alexander Korotkov
Subject: Re: WIP: "More fair" LWLocks
Date:
Msg-id: CAPpHfduV3v3EG7K74-9htBZz_mpE993zGz-=2k5RNA3tqabUAA@mail.gmail.com
In reply to: Re: WIP: "More fair" LWLocks (Alexander Korotkov <a.korotkov@postgrespro.ru>)
List: pgsql-hackers
On Mon, Oct 15, 2018 at 7:06 PM Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
> I'm going to continue my experiments.  I would like to have something
> like the 4th version of the patch, but without extra atomic instructions.
> Maybe by placing the number of sequential shared lockers into a separate
> (non-atomic) variable.  Feedback is welcome.

I tried this in the 6th version of the patch.  The number of shared
lwlocks taken in a row is now stored in the usagecount field of the
LWLock struct.  Thus, this patch doesn't introduce any more atomic
operations, because the usagecount field is not atomic.  Also, the size
of the LWLock struct didn't grow, because usagecount occupies what was
previously struct padding.

Since usagecount is not atomic, an increment and a set-to-zero might
overlap, in which case the set-to-zero can appear to be lost.  But that's
not catastrophic: in that case the LWLock will just switch to fair mode
sooner than it normally would.

Also, I turned the number of sequential shared lwlocks taken before
switching to fair mode into a lwlock_shared_limit GUC.  Zero disables
fair mode completely.

Results of the pgbench scalability benchmark are attached.  With
lwlock_shared_limit = 16, no LWLocks switch to fair mode in this
benchmark.  However, there is still a small overhead.  I think it's
related to the extra cacheline invalidation caused by accessing the
usagecount variable.  So, I've probably done my best in this direction.
For now, I don't have any idea how to make the overhead of "fair mode"
availability any lower.

We (Postgres Pro) found this patch useful and integrated it into our
proprietary fork.  In my opinion, PostgreSQL would also benefit from this
patch, because it can dramatically improve the situation on some NUMA
systems.  Also, the feature is controlled by a GUC, so
lwlock_shared_limit = 0 completely disables it, with no measurable
overhead.

Any thoughts?

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company