Re: this is in plain text (row level locks)
From | Sailesh Krishnamurthy |
---|---|
Subject | Re: this is in plain text (row level locks) |
Date | |
Msg-id | bxy8yqo1kz0.fsf@datafix.cs.berkeley.edu |
In reply to | Re: this is in plain text (row level locks) (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: this is in plain text (row level locks) |
List | pgsql-hackers |
>>>>> "Tom" == Tom Lane <tgl@sss.pgh.pa.us> writes:

    Tom> That doesn't work, unless you insist that the first backend
    Tom> can't exit its transaction until all the other ones are done.
    Tom> Which introduces its own possibilities for deadlock --- but
    Tom> even worse, how does the first backend *know* that the other
    Tom> ones are done?  You're right back where you started: it has
    Tom> to be possible to tell which backends have share-locked a
    Tom> particular row.

Is a count a solution? The first backend gets the S lock on the row - I'm assuming you plan to do it by recording it on the tuple and not in a shared-memory lock table, which means that you might have to unnecessarily write an unmodified page if its buffer-pool frame is stolen.

The problem is that at commit time, you must carefully decrement the count of shared locks on any tuple that you own. This can be accomplished by having each backend keep track of the list of files and TIDs for any rows for which it acquired S locks. Is this the same way that pgsql releases its X locks?

Bruce, I don't disagree that MVCC has the very nice property that writers don't block readers. However, I don't buy that two-phase locking with lock escalation is either unworkable because of too many locks, or causes any extra pain for the user application (apart from the fact that writers not blocking readers gives you more concurrency, at the very minor cost of not being strictly serializable).

-- 
Pip-pip
Sailesh
http://www.cs.berkeley.edu/~sailesh