On Sun, Dec 19, 2010 at 4:02 AM, Florian Pflug <fgp@phlo.org> wrote:
> Yes. Otherwise, B cannot verify that the database is consistent.
>
> Note that it's sufficient to check if B can see the effects of the
> *latest* locker of T. If it can see those, it must also see the
> effects of any previous locker. But because of this, B cannot
> distinguish different lock strengths on T - even if A locked T
> in exclusive mode, some transaction A2 may lock T in shared mode
> after A has committed but before B inspects T.
This seems to point to a rather serious problem, though. If B sees
that the last locker A1 aborted, correctness requires that it roll
back B, because there might have been some other transaction A2 which
locked T and committed before A1 touched it. Implementing that
behavior could lead to a lot of spurious rollbacks, but NOT
implementing it isn't fully correct.
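To make the hazard concrete, here is a hypothetical schedule (the table t and the transaction names are illustrative, not taken from the original discussion):

```sql
-- A2 locks the row in shared mode and commits:
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;
COMMIT;

-- A1 then locks the same row, but aborts:
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;
ROLLBACK;

-- B now inspects the row. It can only see that the *latest* locker
-- (A1) aborted; it has no way to tell whether some earlier locker
-- (A2) committed before A1 touched the row. To be correct it must
-- assume one did, and roll itself back -- even when, as here, A2's
-- commit was already visible and the rollback is spurious.
```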
> For serializable transactions, everything is as you explained it. Taking a
> lock amounts to saying "If you can't see what I did, leave this tuple alone".
> A read-committed transaction, though, sees different things at different
> times since it takes a new snapshot for every statement. Since we cannot
> raise a serialization error in a read-committed transaction, an UPDATE
> or DELETE statement within a read-committed transaction may very well
> modify a previously locked row, even if it *doesn't* see the effects of some
> concurrent locker. Any snapshot taken after the UPDATE or DELETE hit the
> locked row, however, *will* see those changes. This includes the snapshot
> taken within any AFTER trigger fired on updating the locked row. Thus,
> things for one fine for RI constraints enforced by triggers.
I can't parse the last sentence of this paragraph. I think there is a
word or two wrong.
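For what it's worth, the read-committed behavior described in the first part of that paragraph can be sketched as follows (sessions and table are illustrative):

```sql
-- Session A (READ COMMITTED): lock a row without modifying it.
BEGIN;
SELECT * FROM t WHERE id = 1 FOR SHARE;

-- Session B (READ COMMITTED), concurrently:
BEGIN;
UPDATE t SET val = val + 1 WHERE id = 1;
-- The UPDATE blocks until A commits, then proceeds without raising
-- a serialization error -- even though B's statement snapshot was
-- taken before A committed, i.e. B didn't "see" A's lock.
-- Any snapshot B takes after this point (including the one used by
-- an AFTER trigger fired for the UPDATE) does see A as committed.
COMMIT;
```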
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company