Re: spinlock contention
From | Robert Haas
---|---
Subject | Re: spinlock contention
Date |
Msg-id | CA+TgmoYthwiV32XjUsDHMkVRsccp5eV54sj0cGBJymU0r-5oPg@mail.gmail.com
In reply to | Re: spinlock contention (Florian Pflug <fgp@phlo.org>)
Replies | Re: spinlock contention
List | pgsql-hackers
On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug <fgp@phlo.org> wrote:
> In effect, the resulting thing is an LWLock with a partitioned shared
> counter. The partition one backend operates on for shared locks is
> determined by its backend id.
>
> I've added the implementation to the lock benchmarking tool at
> https://github.com/fgp/lockbench
> and also pushed a patched version of postgres to
> https://github.com/fgp/postgres/tree/lwlock_part
>
> The number of shared counter partitions is currently 4, but can easily
> be adjusted in lwlock.h. The code uses GCC's atomic fetch-and-add
> intrinsic if available; otherwise it falls back to using a
> per-partition spin lock.

I think this is probably a good trade-off for locks that are most
frequently taken in shared mode (like SInvalReadLock), but it seems like
it could be a very bad trade-off for locks that are frequently taken in
both shared and exclusive mode (e.g. ProcArrayLock, BufMappingLocks).

I don't want to fiddle with your git repo, but if you attach a patch
that applies to the master branch, I'll give it a spin if I have time.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
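For readers following the thread, here is a minimal C sketch of the scheme Florian describes: a shared counter split across partitions, with each backend bumping only the partition chosen by its backend id, using GCC's `__sync` atomic builtins. All names here (`PartitionedLWLock`, `lw_shared_acquire`, and so on) and the exact acquire/release protocol are hypothetical illustrations, not the actual code in the lwlock_part branch; real code would use proper wait queues rather than busy spinning.

```c
#include <stdint.h>

#define NUM_SHARED_PARTITIONS 4     /* compile-time knob, as in lwlock.h */

typedef struct
{
    volatile int exclusive;         /* nonzero while an exclusive holder runs */

    /* One counter per partition, padded so that concurrent shared
     * lockers bump counters on different cache lines. */
    struct
    {
        volatile uint32_t count;
        char pad[64 - sizeof(uint32_t)];
    } shared[NUM_SHARED_PARTITIONS];
} PartitionedLWLock;

/* Shared acquire: each backend touches only the partition selected by
 * its backend id, so shared lockers do not contend with one another. */
static void
lw_shared_acquire(PartitionedLWLock *lock, int backend_id)
{
    int part = backend_id % NUM_SHARED_PARTITIONS;

    for (;;)
    {
        /* __sync builtins act as full memory barriers on GCC */
        __sync_fetch_and_add(&lock->shared[part].count, 1);
        if (!lock->exclusive)
            return;                 /* no exclusive holder: lock granted */

        /* An exclusive holder is active: back out and wait. */
        __sync_fetch_and_sub(&lock->shared[part].count, 1);
        while (lock->exclusive)
            ;                       /* real code would sleep on a queue */
    }
}

static void
lw_shared_release(PartitionedLWLock *lock, int backend_id)
{
    __sync_fetch_and_sub(&lock->shared[backend_id % NUM_SHARED_PARTITIONS].count, 1);
}

/* Exclusive acquire: claim the flag, then wait for *every* partition to
 * drain.  This O(partitions) scan is the cost that makes the scheme a
 * poor fit for locks that are frequently taken exclusively. */
static void
lw_exclusive_acquire(PartitionedLWLock *lock)
{
    int i;

    while (!__sync_bool_compare_and_swap(&lock->exclusive, 0, 1))
        ;                           /* another exclusive holder: spin */

    for (i = 0; i < NUM_SHARED_PARTITIONS; i++)
        while (lock->shared[i].count != 0)
            ;                       /* wait for shared holders to drain */
}

static void
lw_exclusive_release(PartitionedLWLock *lock)
{
    __sync_lock_release(&lock->exclusive);  /* release barrier + store 0 */
}
```

Note how the exclusive path has to visit every partition's cache line before it can proceed, while the shared path touches just one. That asymmetry is exactly why the trade-off favors locks like SInvalReadLock, which are taken almost entirely in shared mode, and could hurt ProcArrayLock or the BufMappingLocks, where exclusive acquisition is routine.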