Re: Thoughts about NUM_BUFFER_PARTITIONS
From:        Tomas Vondra
Subject:     Re: Thoughts about NUM_BUFFER_PARTITIONS
Date:
Msg-id:      8c4a5f06-7476-4646-bb8a-6581a26b0650@enterprisedb.com
In reply to: Re: Thoughts about NUM_BUFFER_PARTITIONS (Li Japin <japinli@hotmail.com>)
Responses:   Re: Thoughts about NUM_BUFFER_PARTITIONS
List:        pgsql-hackers
On 2/18/24 03:30, Li Japin wrote:
>
>> On Feb 10, 2024, at 20:15, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:
>>
>> On 2/8/24 14:27, wenhui qiu wrote:
>>> Hi Heikki Linnakangas
>>>    I think the larger the shared buffers, the higher the probability of
>>> multiple backend processes accessing the same bucket slot's
>>> BufMappingLock simultaneously (InitBufTable(NBuffers +
>>> NUM_BUFFER_PARTITIONS)). When I have free time, I want to do this test.
>>> I have seen some tests, but the result report is in Chinese.
>>>
>>
>> I think Heikki is right that this is unrelated to the amount of RAM. The
>> partitions are meant to reduce the number of lock collisions when
>> multiple processes try to map a buffer concurrently. But the machines
>> got much larger in this regard too - in 2006 the common CPUs had maybe
>> 2-4 cores, now it's common to have CPUs with ~100 cores, and systems
>> with multiples of them. OTOH the time spent holding the partition lock
>> should be pretty low - IIRC we pretty much just pin the buffer before
>> releasing it, and the backend should do plenty of other expensive stuff.
>> So who knows how many backends end up doing the locking at the same
>> time.
>>
>> OTOH, with 128 partitions it takes just 14 backends to have a 50% chance
>> of a conflict, so with enough cores ... But how many partitions would be
>> enough? With 1024 partitions it still takes only 38 backends to get a
>> 50% chance of a collision. Better, but considering we now have hundreds
>> of cores, I'm not sure that's sufficient.
>>
>> (Obviously, we probably want a much lower probability of a collision; I
>> only used 50% to illustrate the changes.)
>>
>
> It seems we need to change MAX_SIMUL_LWLOCKS if we enlarge
> NUM_BUFFER_PARTITIONS. I didn't find any comments describing the relation
> between MAX_SIMUL_LWLOCKS and NUM_BUFFER_PARTITIONS - am I missing
> something?

IMHO the relationship is pretty simple - MAX_SIMUL_LWLOCKS needs to be
higher than NUM_BUFFER_PARTITIONS, so that the backend can acquire all the
partition locks if needed. There are other places that acquire a bunch of
locks, and all of them need to be careful not to exceed MAX_SIMUL_LWLOCKS.
For example, gist has GIST_MAX_SPLIT_PAGES.

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
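
The 50%-collision figures quoted above (14 backends for 128 partitions, 38
for 1024) follow from the classic birthday-problem product. The standalone
C sketch below is an illustration only, not PostgreSQL code; it assumes
each backend's buffer lookup hashes uniformly and independently to one of
the partitions:

    #include <stdio.h>

    /* probability that at least two of nbackends pick the same partition */
    static double
    collision_probability(int nbackends, int npartitions)
    {
        double      no_collision = 1.0;
        int         i;

        for (i = 1; i < nbackends; i++)
            no_collision *= 1.0 - (double) i / npartitions;

        return 1.0 - no_collision;
    }

    int
    main(void)
    {
        int         partitions[] = {128, 1024};
        int         p,
                    n;

        for (p = 0; p < 2; p++)
        {
            /* find the smallest backend count reaching a 50% collision chance */
            for (n = 2; n <= 1000; n++)
            {
                if (collision_probability(n, partitions[p]) >= 0.5)
                {
                    printf("%d partitions: %d backends reach a 50%% collision chance\n",
                           partitions[p], n);
                    break;
                }
            }
        }

        return 0;
    }

Run as-is, this prints 14 backends for 128 partitions and 38 backends for
1024 partitions, matching the numbers in the message.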
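
The MAX_SIMUL_LWLOCKS constraint described above can be restated as a
compile-time check. The sketch below is standalone and illustrative only;
the constant values shown (128 and 200) match current PostgreSQL sources
(src/include/storage/lwlock.h and src/backend/storage/lmgr/lwlock.c) but
may differ between versions, and inside PostgreSQL itself one would use
StaticAssertDecl from c.h rather than a bare _Static_assert:

    /*
     * Illustration only: the dependency between the two constants, expressed
     * as a compile-time check. Values copied from current sources; verify
     * against your tree before relying on them.
     */
    #define NUM_BUFFER_PARTITIONS   128     /* src/include/storage/lwlock.h */
    #define MAX_SIMUL_LWLOCKS       200     /* src/backend/storage/lmgr/lwlock.c */

    _Static_assert(NUM_BUFFER_PARTITIONS < MAX_SIMUL_LWLOCKS,
                   "raising NUM_BUFFER_PARTITIONS also requires raising MAX_SIMUL_LWLOCKS");

    int
    main(void)
    {
        return 0;
    }

The point being that a backend may need to hold every buffer partition lock
at once, so any patch enlarging NUM_BUFFER_PARTITIONS has to keep it safely
below MAX_SIMUL_LWLOCKS (or raise that limit as well).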