Re: Exclusive Locks on Insert into large Logging tables
From | Tom Lane |
---|---|
Subject | Re: Exclusive Locks on Insert into large Logging tables |
Date | |
Msg-id | 16805.1550541452@sss.pgh.pa.us |
In reply to | Re: Exclusive Locks on Insert into large Logging tables (Laurenz Albe <laurenz.albe@cybertec.at>) |
Responses | RE: Exclusive Locks on Insert into large Logging tables |
List | pgsql-novice |
Laurenz Albe <laurenz.albe@cybertec.at> writes:
> Peter Neave wrote:
>> I've been trying to get down to the source of some locks. ...
>> LOG: process 116665 acquired ExclusiveLock on extension of relation 59049887 of database 59049867 after 5838.015 ms).

> Exclusive locks conflict with everything but SELECTs, see
> https://www.postgresql.org/docs/current/explicit-locking.html#TABLE-LOCK-COMPATIBILITY

True, but not very relevant, because:

> This particular exclusive lock is taken when the relation has to be extended
> with new blocks because there is no more room for the new row in the
> existing blocks of the table.

Yeah.  This is an implementation-level lock that is independent of anything
you can take at the SQL level.  It's needed because only one backend can
extend (physically add space to) a table at a time.

If you have lots of transactions all trying to insert into one table
concurrently, some contention for that lock is to be expected.  Blocking
for ~6 sec is pretty darn horrid though --- the expected lock hold time
is only what's needed to write out a page full of zeroes.

We have heard reports of excessive contention for relation extension
locks before, and there's been work done to mitigate that by extending
the relation more than one block at a time.  Maybe you're hitting a case
where that doesn't work well?  Or maybe the problem is just that you've
got under-specced mass storage?  If the filesystem goes out to lunch when
asked to do a lot of writing, it'd very possibly result in this symptom.

			regards, tom lane
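[Editor's note, not part of the original message: the extension lock Tom describes can be observed directly. A minimal diagnostic sketch, assuming PostgreSQL 9.6 or later, where a backend waiting on relation extension shows up with `wait_event = 'extend'` in `pg_stat_activity` and `locktype = 'extend'` in `pg_locks`:]

```sql
-- Log any lock wait that exceeds deadlock_timeout (default 1s).
-- This is what produced the "acquired ExclusiveLock on extension of
-- relation ... after 5838.015 ms" message quoted in the thread.
SET log_lock_waits = on;

-- See live sessions currently waiting on a relation extension lock:
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
  AND wait_event = 'extend';

-- Or look at the lock table itself:
SELECT pid, relation, granted
FROM pg_locks
WHERE locktype = 'extend';
```

[If these queries regularly return rows under concurrent insert load, that corroborates the extension-lock contention diagnosed above.]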