Re: Patch: fix lock contention for HASHHDR.mutex
From: Robert Haas
Subject: Re: Patch: fix lock contention for HASHHDR.mutex
Date:
Msg-id: CA+TgmoavsNuBNQf6ROn020F6g9gXJjtMrSkSFZ=RL1t2eJHQ5Q@mail.gmail.com
In reply to: Re: Patch: fix lock contention for HASHHDR.mutex (Andres Freund <andres@anarazel.de>)
Responses: Re: Patch: fix lock contention for HASHHDR.mutex
List: pgsql-hackers
On Tue, Dec 15, 2015 at 7:25 AM, Andres Freund <andres@anarazel.de> wrote:
> On 2015-12-11 17:00:01 +0300, Aleksander Alekseev wrote:
>> The problem is that code between LWLockAcquire (lock.c:881) and
>> LWLockRelease (lock.c:1020) can _sometimes_ run up to 3-5 ms. Using
>> the good old gettimeofday-and-logging method I managed to find a
>> bottleneck:
>>
>> -- proclock = SetupLockInTable [lock.c:892]
>>    `-- proclock = (PROCLOCK *) hash_search_with_hash_value [lock.c:1105]
>>        `-- currBucket = get_hash_entry(hashp) [dynahash.c:985]
>>            `-- SpinLockAcquire(&hctl->mutex) [dynahash.c:1187]
>>
>> If my measurements are not wrong (I didn't place gettimeofday between
>> SpinLockAcquire/SpinLockRelease, etc.) we sometimes spend about 3 ms
>> here waiting for a spinlock, which doesn't seem right.
>
> Well, it's quite possible that your process was scheduled out while
> waiting for that spinlock. That'd make 3 ms pretty normal.
>
> I'd consider using an LWLock instead of a spinlock here. I've seen this
> contended in a bunch of situations, and the queued behaviour, combined
> with directed wakeups at the OS level, ought to improve the worst-case
> behaviour measurably.

Amit had the idea a while back of trying to replace the HASHHDR mutex
with something based on atomic ops. It seems hard to avoid the attendant
A-B-A problems, but maybe there's a way.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
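[Editorial note: to make the A-B-A hazard mentioned above concrete, here
is a minimal C11 sketch of the naive "replace the mutex with one
compare-and-swap on the freelist head" approach. This is illustrative
only, not PostgreSQL code: real backends would use the port/atomics.h
layer rather than <stdatomic.h>, and FreeListEntry is a made-up
stand-in for dynahash's element chain.]

/*
 * Illustrative sketch of the A-B-A problem in a lock-free freelist pop.
 * FreeListEntry and naive_pop are hypothetical names, not dynahash code.
 */
#include <stdatomic.h>
#include <stddef.h>

typedef struct FreeListEntry
{
    struct FreeListEntry *next;
} FreeListEntry;

static _Atomic(FreeListEntry *) freelist_head;

static FreeListEntry *
naive_pop(void)
{
    /* Load the current list head; CAS failure below reloads it. */
    FreeListEntry *head = atomic_load(&freelist_head);

    while (head != NULL)
    {
        FreeListEntry *next = head->next;

        /*
         * A-B-A window: between the loads above and the CAS below,
         * another backend may pop "head", pop "next", and push "head"
         * back.  The CAS still succeeds -- the head pointer holds the
         * same value "A" again -- but "next" now refers to an entry
         * that has already been handed out, corrupting the freelist.
         */
        if (atomic_compare_exchange_weak(&freelist_head, &head, next))
            return head;
        /* CAS failed: "head" now holds the fresh value; retry. */
    }
    return NULL;        /* freelist empty */
}

The usual mitigations (a generation counter packed next to the pointer
in a double-width CAS, or hazard-pointer-style deferred reuse) are
presumably the complexity Robert is alluding to when he says it seems
hard to avoid.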