Re: BufferAlloc: don't take two simultaneous locks
| From | Kyotaro Horiguchi |
|---|---|
| Subject | Re: BufferAlloc: don't take two simultaneous locks |
| Date | |
| Msg-id | 20220311.153030.1881495127262295773.horikyota.ntt@gmail.com |
| Reply to | Re: BufferAlloc: don't take two simultaneous locks (Yura Sokolov <y.sokolov@postgrespro.ru>) |
| Responses | Re: BufferAlloc: don't take two simultaneous locks |
| | Re: BufferAlloc: don't take two simultaneous locks |
| List | pgsql-hackers |
At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in

> On Tue, 01/03/2022 at 10:24 +0300, Yura Sokolov wrote:
> > Ok, here is v4.
>
> And here is v5.
>
> First, there was a compilation error in an Assert in dynahash.c.
> Excuse me for not checking before sending the previous version.
>
> Second, I added a third commit that reduces the HASHHDR allocation
> size for non-partitioned dynahash:
> - moved freeList to the last position
> - alloc and memset offsetof(HASHHDR, freeList[1]) bytes for
>   non-partitioned hash tables.
> I didn't benchmark it, but I will be surprised if it matters much
> in a performance sense.
>
> Third, I put all three commits into a single file so as not to
> confuse the commitfest application.

Thanks! I looked into the dynahash part.

 struct HASHHDR
 {
-	/*
-	 * The freelist can become a point of contention in high-concurrency hash

Why did you move the freeList around?

-	long		nentries;		/* number of entries in associated buckets */
+	long		nfree;			/* number of free entries in the list */
+	long		nalloced;		/* number of entries initially allocated for

Why do we need nfree? HASH_ASSIGN should do the same thing as
HASH_REMOVE. Maybe the reason is that the code tries to put the
detached bucket onto a different free list, but we can just remember
the freelist_idx for the detached bucket as we do for hashp. I think
that should largely reduce the footprint of this patch.

-static void hdefault(HTAB *hashp);
+static void hdefault(HTAB *hashp, bool partitioned);

That optimization may help a bit, but isn't it irrelevant to this
patch?

+		case HASH_REUSE:
+			if (currBucket != NULL)
+			{
+				/* check there is no unfinished HASH_REUSE+HASH_ASSIGN pair */
+				Assert(DynaHashReuse.hashp == NULL);
+				Assert(DynaHashReuse.element == NULL);

I think all cases in the switch (action) other than HASH_ASSIGN need
this assertion, and there is no need to check both; checking only
element would probably be enough.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
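[Editor's illustration] A minimal standalone sketch of the arrangement suggested in the last review comment: every action other than HASH_ASSIGN asserts that no HASH_REUSE+HASH_ASSIGN pair is left unfinished, and only the element field is checked. This is not the patch's code; the HASHACTION enum, the HASHELEMENT type, and the freelist_idx field are simplified stand-ins for the real dynahash definitions.

/*
 * Sketch only: hypothetical stand-ins for the dynahash structures,
 * showing the assertion hoisted out of the HASH_REUSE case so that it
 * guards every action except HASH_ASSIGN, and checks only "element".
 */
#include <assert.h>
#include <stddef.h>

typedef enum
{
    HASH_FIND,
    HASH_ENTER,
    HASH_REMOVE,
    HASH_REUSE,                 /* detach an element for later reuse */
    HASH_ASSIGN                 /* attach the previously detached element */
} HASHACTION;

typedef struct HASHELEMENT
{
    struct HASHELEMENT *link;
} HASHELEMENT;

/* per-backend stash for the element detached by HASH_REUSE */
static struct
{
    void        *hashp;         /* table the element came from */
    HASHELEMENT *element;       /* the detached element itself */
    int          freelist_idx;  /* hypothetical: freelist it came from */
} DynaHashReuse = {NULL, NULL, 0};

static void
sketch_hash_search(HASHACTION action, HASHELEMENT *currBucket)
{
    /*
     * Any action except HASH_ASSIGN must not see a dangling detached
     * element.  Checking element alone is enough, since hashp and
     * freelist_idx are only meaningful while element is set.
     */
    if (action != HASH_ASSIGN)
        assert(DynaHashReuse.element == NULL);

    switch (action)
    {
        case HASH_REUSE:
            if (currBucket != NULL)
            {
                /* detach: remember the element (and, in the real code,
                 * presumably hashp and its freelist_idx as well) */
                DynaHashReuse.element = currBucket;
            }
            break;

        case HASH_ASSIGN:
            /* presumably expects a stashed element to reattach */
            assert(DynaHashReuse.element != NULL);
            DynaHashReuse.element = NULL;
            DynaHashReuse.hashp = NULL;
            break;

        default:
            break;
    }
}

int
main(void)
{
    HASHELEMENT elem = {NULL};

    sketch_hash_search(HASH_REUSE, &elem);      /* detach */
    sketch_hash_search(HASH_ASSIGN, NULL);      /* reattach, stash cleared */
    sketch_hash_search(HASH_FIND, NULL);        /* assertion holds again */
    return 0;
}

The single early assertion expresses the invariant once for all non-HASH_ASSIGN actions instead of repeating it inside one case, which is the footprint reduction the review is asking about.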