Re: Deadlock bug
From        | Josh Berkus
Subject     | Re: Deadlock bug
Date        |
Msg-id      | 4C6ED55E.9040708@agliodbs.com
In reply to | Re: Deadlock bug  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses   | Re: Deadlock bug
List        | pgsql-hackers
> It *is* allowed to, and in fact has already done so.  The problem is
> that it now needs a sharelock on the referenced row in order to ensure
> that the FK constraint remains satisfied, ie, nobody deletes the
> referenced row before we commit the update.  In the general case where
> the referencing row is new (or has a new FK value) in the current
> transaction, such a lock is necessary for correctness.  Your case would
> work if we could optimize away the FK check, but with only a limited
> view of what's happened in the current transaction, it's not always
> possible to optimize away the check.

Hmmm.  It seems to me that we'd need a sharelock on the referenced row
both times.  Is the sequence below missing something?

process 1         process 1 locks                 process 2        process 2 locks

update session;   exclusive lock session row;

                                                  update orders;   exclusive lock orders row;
                                                                   share lock session row;

update orders;    exclusive lock requested on
                  orders row (blocks);
                  share lock session row;

                                                  update orders;   exclusive lock orders row;
                                                                   share lock session row;

(In this example, there is an fk orders.sessionid --> session.id.)

It certainly seems that process 2 is acquiring exactly the same locks
twice, since the referenced value is never changed.  So why does it
need a share lock the second time and not the first?  Or is the
sharelock in the first cycle being optimized away improperly?

--
  -- Josh Berkus
     PostgreSQL Experts Inc.
     http://www.pgexperts.com
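A minimal reproduction of the sequence above, run as two concurrent psql
sessions.  The schema is a sketch: only the fk orders.sessionid -->
session.id is given in the thread, so the other column names (modified,
status) and the sample values are assumptions.

    -- Assumed schema; only the fk orders.sessionid --> session.id
    -- comes from the message above.
    CREATE TABLE "session" (
        id       bigint PRIMARY KEY,
        modified timestamptz        -- hypothetical column
    );
    CREATE TABLE orders (
        id        bigint PRIMARY KEY,
        sessionid bigint REFERENCES "session" (id),
        status    text              -- hypothetical column
    );
    INSERT INTO "session" VALUES (1, now());
    INSERT INTO orders VALUES (1, 1, 'new');

    -- process 1:
    BEGIN;
    UPDATE "session" SET modified = now() WHERE id = 1;

    -- process 2:
    BEGIN;
    UPDATE orders SET status = 'filled' WHERE id = 1;
    -- succeeds: takes an exclusive lock on the orders row only

    -- process 1:
    UPDATE orders SET status = 'filled' WHERE id = 1;
    -- blocks: process 2 holds the exclusive lock on the orders row

    -- process 2:
    UPDATE orders SET status = 'shipped' WHERE id = 1;
    -- requests a share lock on the session row, which process 1 holds
    -- exclusively -> deadlock detected

On the 8.x/9.0 servers under discussion here the last statement
deadlocks; releases from 9.3 on take only a FOR KEY SHARE lock for the
FK check, so a non-key update of the session row no longer conflicts.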