Thread: long-standing data loss bug in initial sync of logical replication


long-standing data loss bug in initial sync of logical replication

From: Tomas Vondra
Date:
Hi,

It seems there's a long-standing data loss issue related to the initial
sync of tables in the built-in logical replication (publications etc.).
I can reproduce it fairly reliably, but I haven't figured out all the
details yet and I'm a bit out of ideas, so I'm sharing what I know with
the hope someone takes a look and either spots the issue or has some
other insight ...

On the pgsql-bugs list, Depesz reported [1] cases where tables are
added to a publication but end up missing rows on the subscriber. I
didn't know what the issue might be, but given his experience I decided
to make some blind attempts to reproduce it.

I'm not going to repeat all the details from the pgsql-bugs thread, but
I ended up writing a script that does a randomized stress test of
tablesync under concurrent load. Attached are two scripts:
crash-test.sh does the main work, while run.sh drives the test - it
executes crash-test.sh in a loop and generates random parameters for it.

The run.sh generates the number of tables, the refresh interval (after
how many tables we refresh the subscription) and how long to sleep
between steps (to allow pgbench to do more work).

The crash-test.sh then does this:

  1) initializes two clusters (expects $PATH to have pg_ctl etc.)

  2) configures them for logical replication (wal_level, ...)

  3) creates publication and subscription on the nodes

  4) creates a bunch of tables

  5) starts a pgbench that inserts data into the tables

  6) adds the tables to the publication one by one, occasionally
     refreshing the subscription

  7) waits for tablesync of all the tables to complete (so that the
     tables get into the 'r' state, thus replicating normally)

  8) stops the pgbench

  9) waits for the subscriber to fully catch up

  10) compares the tables on the publisher/subscriber nodes (a rough
      SQL sketch of the checks behind steps 7 and 10 follows)
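
Here is that sketch - the table and column names are placeholders, not
what the script actually uses:

  -- step 7, on the subscriber: wait until every table reaches the 'r'
  -- (ready) state, i.e. tablesync finished and normal apply took over
  SELECT srrelid::regclass, srsubstate, srsublsn
    FROM pg_subscription_rel
   WHERE srsubstate <> 'r';
  -- (loop until this returns no rows)

  -- step 10, on both nodes: compute a simple aggregate for each table
  -- and compare the results
  SELECT count(*), min(id), max(id) FROM t;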

To run this, just make sure PATH includes pg, and do e.g.

   ./run.sh 10

which does 10 runs of crash-test.sh with random parameters. Each run can
take a couple minutes, depending on the parameters, hardware etc.


Obviously, we expect the tables to match on the two nodes, but the
script regularly detects cases where the subscriber is missing some of
the rows. The script dumps those tables, and the rows contain timestamps
and LSNs to allow "rough correlation" (imperfect thanks to concurrency).

Depesz reported "gaps" in the data, i.e. a missing chunk of rows with
the following rows seemingly replicated fine. I did see such cases too,
but most of the time I see a missing chunk of rows at the end (though
maybe if the test continued a bit longer, it'd replicate some of those
rows).

The report talks about replication from pg12 to pg14, but I don't think
the cross-version part is necessary - I'm able to reproduce the issue on
every individual version since 12 (e.g. 12->12; I haven't tried 11, but
I'd be surprised if it wasn't affected too).

The rows include `pg_current_wal_lsn()` to roughly track the LSN where
each row was inserted, and the "gap" of missing rows for each table
seems to line up with pg_subscription_rel.srsublsn, i.e. the LSN up to
which tablesync copied the data and after which the table should be
replicated as usual.
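
A sketch of how I correlate the two (column/table names and the LSN
value are placeholders - the LSN column is wherever the pgbench script
stores the pg_current_wal_lsn() value):

  -- on the subscriber: the LSN up to which tablesync copied each table
  SELECT srrelid::regclass AS tbl, srsubstate, srsublsn
    FROM pg_subscription_rel;

  -- on the publisher: rows inserted past that point, i.e. rows that
  -- should have been replicated by the regular apply path
  SELECT count(*), min(lsn), max(lsn)
    FROM t
   WHERE lsn > '0/12345678'::pg_lsn;  -- srsublsn of "t" from the query above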

Another interesting observation is that the issue only happens for "bulk
insert" transactions, i.e.

  BEGIN;
  ... INSERT into all tables ...
  COMMIT;

but not when each insert is a separate transaction. A bit strange.


After quite a bit of debugging, I came to the conclusion this happens
because we fail to invalidate caches on the publisher, so it does not
realize it should start sending rows for that table.

In particular, we initially build RelationSyncEntry when the table is
not yet included in the publication, so we end up with pubinsert=false,
thus not replicating the inserts. Which makes sense, but we then seem
to fail to invalidate the entry after the table is added to the
publication.

The other problem is that even if we do invalidate the entry, we then
call GetRelationPublications(), and even if that happens long after the
table got added to the publication (both in time and LSN terms), it
still returns NIL as if the table was in no publication. So we end up
with pubinsert=false again, skipping the inserts.

Attached are three patches against master. 0001 adds some debug logging
that I found useful when investigating the issue. 0002 illustrates the
issue by forcefully invalidating the entry for each change, and
implementing a non-syscache variant of GetRelationPublications().
This makes the code unbearably slow, but with both changes in place I
can no longer reproduce the issue. Undoing either of the two changes
makes it reproducible again. (I'll talk about 0003 later.)

I suppose timing matters, so it's possible it gets "fixed" simply
because of that, but I find that unlikely given the number of runs I did
without observing any failure.

Overall, this looks, walks and quacks like a cache invalidation issue,
likely a missing invalidation somewhere in the ALTER PUBLICATION code.
If we fail to invalidate the pg_publication_rel syscache somewhere, that
would obviously explain why GetRelationPublications() returns stale data, but
it would also explain why the RelationSyncEntry is not invalidated, as
that happens in a syscache callback.

But I tried to do various crazy things in the ALTER PUBLICATION code,
and none of that worked, so I'm a bit confused/lost.


However, while randomly poking at different things, I realized that if I
change the lock obtained on the relation in OpenTableList() from
ShareUpdateExclusiveLock to ShareRowExclusiveLock, the issue goes away.
I don't know why it works, and I don't even recall what exactly led me
to the idea of changing it.

This is what 0003 does - it reverts 0002 and changes the lock level.

AFAIK the logical decoding code doesn't actually acquire locks on the
decoded tables, so why would this change matter? The only place that
does lock the relation is the tablesync, which gets RowExclusiveLock on
it. And it's interesting that RowExclusiveLock does not conflict with
ShareUpdateExclusiveLock, but does with ShareRowExclusiveLock. But why
would this even matter, when the tablesync can only touch the table
after it gets added to the publication?
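
Just for reference, the conflict difference is easy to see with two
sessions (the table here is arbitrary):

  session 1: CREATE TABLE t (a int);
  session 1: BEGIN; INSERT INTO t VALUES (1);  -- holds RowExclusiveLock on t
  session 2: BEGIN;
  session 2: LOCK TABLE t IN SHARE UPDATE EXCLUSIVE MODE;  -- granted at once
  session 2: ROLLBACK; BEGIN;
  session 2: LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;     -- waits for session 1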


regards

[1] https://www.postgresql.org/message-id/ZTu8GTDajCkZVjMs@depesz.com

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments

Re: long-standing data loss bug in initial sync of logical replication

From: Andres Freund
Date:
Hi,

On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:
> It seems there's a long-standing data loss issue related to the initial
> sync of tables in the built-in logical replication (publications etc.).

:(


> Overall, this looks, walks and quacks like a cache invalidation issue,
> likely a missing invalidation somewhere in the ALTER PUBLICATION code.

It could also be that pgoutput doesn't have sufficient invalidation
handling.


One thing that looks bogus on the DDL side is how the invalidation handling
interacts with locking.


For tables etc the invalidation handling works because we hold a lock on the
relation before modifying the catalog and don't release that lock until
transaction end. That part is crucial: We queue shared invalidations at
transaction commit, *after* the transaction is marked as visible, but *before*
locks are released. That guarantees that any backend processing invalidations
will see the new contents.  However, if the lock on the modified object is
released before transaction commit, other backends can build and use a cache
entry that hasn't processed invalidations (invalidations are processed when
acquiring locks).

While there is such a lock for publications, it seems to be acquired too
late to actually do much good in a number of paths, and not acquired at
all in others.

E.g.:

    pubform = (Form_pg_publication) GETSTRUCT(tup);

    /*
     * If the publication doesn't publish changes via the root partitioned
     * table, the partition's row filter and column list will be used. So
     * disallow using WHERE clause and column lists on partitioned table in
     * this case.
     */
    if (!pubform->puballtables && publish_via_partition_root_given &&
        !publish_via_partition_root)
        {
        /*
         * Lock the publication so nobody else can do anything with it. This
         * prevents concurrent alter to add partitioned table(s) with WHERE
         * clause(s) and/or column lists which we don't allow when not
         * publishing via root.
         */
        LockDatabaseObject(PublicationRelationId, pubform->oid, 0,
                           AccessShareLock);

a) Another session could have modified the publication and made puballtables out-of-date
b) The LockDatabaseObject() uses AccessShareLock, so others can get past this
   point as well

b) seems like a copy-paste bug or such?


I don't see any locking of the publication around RemovePublicationRelById(),
for example.

I might just be misunderstanding the way publication locking is
intended to work.





> However, while randomly poking at different things, I realized that if I
> change the lock obtained on the relation in OpenTableList() from
> ShareUpdateExclusiveLock to ShareRowExclusiveLock, the issue goes away.

That's odd. There's cases where changing the lock level can cause invalidation
processing to happen because there is no pre-existing lock for the "new" lock
level, but there was for the old. But OpenTableList() is used when altering
the publications, so I don't see how that connects.

Greetings,

Andres Freund



Re: long-standing data loss bug in initial sync of logical replication

From: Andres Freund
Date:
Hi,

On 2023-11-17 17:54:43 -0800, Andres Freund wrote:
> On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:
> > Overall, this looks, walks and quacks like a cache invalidation issue,
> > likely a missing invalidation somewhere in the ALTER PUBLICATION code.

I can confirm that something is broken with invalidation handling.

To test this I just used pg_recvlogical to stdout. It's just interesting
whether something arrives, that's easy to discern even with binary output.

CREATE PUBLICATION pb;
src/bin/pg_basebackup/pg_recvlogical --plugin=pgoutput --start --slot test \
  -d postgres -o proto_version=4 -o publication_names=pb \
  -o messages=true -f -

S1: CREATE TABLE d(data text not null);
S1: INSERT INTO d VALUES('d1');
S2: BEGIN; INSERT INTO d VALUES('d2');
S1: ALTER PUBLICATION pb ADD TABLE d;
S2: COMMIT
S2: INSERT INTO d VALUES('d3');
S1: INSERT INTO d VALUES('d4');
RL: <nothing>

Without the 'd2' insert in an in-progress transaction, pgoutput *does* react
to the ALTER PUBLICATION.
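
For contrast, a sketch of the same sequence without the open transaction
at the time of the ALTER - in that variant the later inserts do show up
in the pg_recvlogical output:

S1: CREATE TABLE d(data text not null);
S1: INSERT INTO d VALUES('d1');
S1: ALTER PUBLICATION pb ADD TABLE d;
S2: INSERT INTO d VALUES('d3');
S1: INSERT INTO d VALUES('d4');
RL: change records for 'd3' and 'd4'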

I think the problem here is insufficient locking. The ALTER PUBLICATION pb ADD
TABLE d basically modifies the catalog state of 'd', without a lock preventing
other sessions from having a valid cache entry that they could continue to
use. Due to this, decoding S2's transaction, which started before S1's
ALTER committed, will populate the cache entry with the state as of the
time of S1's last action, i.e. with no need to output the change.

The reason this can happen is because OpenTableList() uses
ShareUpdateExclusiveLock. That allows the ALTER PUBLICATION to happen while
there's an ongoing INSERT.

I think this isn't just a logical decoding issue. S2's cache state just after
the ALTER PUBLICATION is going to be wrong - the table is already locked,
therefore further operations on the table don't trigger cache invalidation
processing - but the catalog state *has* changed.  It's a bigger problem for
logical decoding though, as it's a bit more lazy about invalidation processing
than normal transactions, allowing the problem to persist for longer.


I guess it's not really feasible to just increase the lock level here though
:(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL
would perhaps lead to new deadlocks and such? But it also seems quite wrong.


We could brute force this in the logical decoding infrastructure, by
distributing invalidations from catalog modifying transactions to all
concurrent in-progress transactions (like already done for historic catalog
snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()).  But I think that'd
be a fairly significant increase in overhead.



Greetings,

Andres Freund



Re: long-standing data loss bug in initial sync of logical replication

From: Tomas Vondra
Date:
On 11/18/23 02:54, Andres Freund wrote:
> Hi,
> 
> On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:
>> It seems there's a long-standing data loss issue related to the initial
>> sync of tables in the built-in logical replication (publications etc.).
> 
> :(
> 

Yeah :-(

> 
>> Overall, this looks, walks and quacks like a cache invalidation issue,
>> likely a missing invalidation somewhere in the ALTER PUBLICATION code.
> 
> It could also be that pgoutput doesn't have sufficient invalidation
> handling.
> 

I'm not sure about the details, but it can't be just about pgoutput
failing to react to some syscache invalidation. As described, just
resetting the RelationSyncEntry doesn't fix the issue - it's the
syscache that's not invalidated, IMO. But maybe that's what you mean.

> 
> One thing that looks bogus on the DDL side is how the invalidation handling
> interacts with locking.
> 
> 
> For tables etc the invalidation handling works because we hold a lock on the
> relation before modifying the catalog and don't release that lock until
> transaction end. That part is crucial: We queue shared invalidations at
> transaction commit, *after* the transaction is marked as visible, but *before*
> locks are released. That guarantees that any backend processing invalidations
> will see the new contents.  However, if the lock on the modified object is
> released before transaction commit, other backends can build and use a cache
> entry that hasn't processed invalidations (invalidations are processed when
> acquiring locks).
> 

Right.

> While there is such a lock for publications, it seems to be acquired too
> late to actually do much good in a number of paths, and not acquired at
> all in others.
> 
> E.g.:
> 
>     pubform = (Form_pg_publication) GETSTRUCT(tup);
> 
>     /*
>      * If the publication doesn't publish changes via the root partitioned
>      * table, the partition's row filter and column list will be used. So
>      * disallow using WHERE clause and column lists on partitioned table in
>      * this case.
>      */
>     if (!pubform->puballtables && publish_via_partition_root_given &&
>         !publish_via_partition_root)
>         {
>         /*
>          * Lock the publication so nobody else can do anything with it. This
>          * prevents concurrent alter to add partitioned table(s) with WHERE
>          * clause(s) and/or column lists which we don't allow when not
>          * publishing via root.
>          */
>         LockDatabaseObject(PublicationRelationId, pubform->oid, 0,
>                            AccessShareLock);
> 
> a) Another session could have modified the publication and made puballtables out-of-date
> b) The LockDatabaseObject() uses AccessShareLock, so others can get past this
>    point as well
> 
> b) seems like a copy-paste bug or such?
> 
> 
> I don't see any locking of the publication around RemovePublicationRelById(),
> for example.
> 
> I might just be misunderstanding the way publication locking is
> intended to work.
> 

I've been asking similar questions while investigating this, but the
interactions with logical decoding (which kinda happens concurrently in
terms of WAL, but not concurrently in terms of time), historical
snapshots etc. make my head spin.

> 
>> However, while randomly poking at different things, I realized that if I
>> change the lock obtained on the relation in OpenTableList() from
>> ShareUpdateExclusiveLock to ShareRowExclusiveLock, the issue goes away.
> 
> That's odd. There's cases where changing the lock level can cause invalidation
> processing to happen because there is no pre-existing lock for the "new" lock
> level, but there was for the old. But OpenTableList() is used when altering
> the publications, so I don't see how that connects.
> 

Yeah, I had the idea that maybe the transaction already holds the lock
on the table, and changing this to ShareRowExclusiveLock makes it
different, possibly triggering a new invalidation or something. But I
did check with gdb, and if I set a breakpoint at OpenTableList, there
are no locks on the table.

But the effect is hard to deny - if I run the test 100 times, with the
ShareUpdateExclusiveLock I get maybe 80 failures. After changing it to
ShareRowExclusiveLock I get 0. Sure, there's some randomness for cases
like this, but this is pretty unlikely.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: long-standing data loss bug in initial sync of logical replication

From: Tomas Vondra
Date:

On 11/18/23 03:54, Andres Freund wrote:
> Hi,
> 
> On 2023-11-17 17:54:43 -0800, Andres Freund wrote:
>> On 2023-11-17 15:36:25 +0100, Tomas Vondra wrote:
>>> Overall, this looks, walks and quacks like a cache invalidation issue,
>>> likely a missing invalidation somewhere in the ALTER PUBLICATION code.
> 
> I can confirm that something is broken with invalidation handling.
> 
> To test this I just used pg_recvlogical to stdout. It's just interesting
> whether something arrives, that's easy to discern even with binary output.
> 
> CREATE PUBLICATION pb;
> src/bin/pg_basebackup/pg_recvlogical --plugin=pgoutput --start --slot test \
>   -d postgres -o proto_version=4 -o publication_names=pb \
>   -o messages=true -f -
> 
> S1: CREATE TABLE d(data text not null);
> S1: INSERT INTO d VALUES('d1');
> S2: BEGIN; INSERT INTO d VALUES('d2');
> S1: ALTER PUBLICATION pb ADD TABLE d;
> S2: COMMIT
> S2: INSERT INTO d VALUES('d3');
> S1: INSERT INTO d VALUES('d4');
> RL: <nothing>
> 
> Without the 'd2' insert in an in-progress transaction, pgoutput *does* react
> to the ALTER PUBLICATION.
> 
> I think the problem here is insufficient locking. The ALTER PUBLICATION pb ADD
> TABLE d basically modifies the catalog state of 'd', without a lock preventing
> other sessions from having a valid cache entry that they could continue to
> use. Due to this, decoding S2's transaction, which started before S1's
> ALTER committed, will populate the cache entry with the state as of the
> time of S1's last action, i.e. with no need to output the change.
> 
> The reason this can happen is because OpenTableList() uses
> ShareUpdateExclusiveLock. That allows the ALTER PUBLICATION to happen while
> there's an ongoing INSERT.
> 

I guess this would also explain why changing the lock mode from
ShareUpdateExclusiveLock to ShareRowExclusiveLock changes the behavior.
INSERT acquires RowExclusiveLock, which conflicts with the latter but
not with the former.

> I think this isn't just a logical decoding issue. S2's cache state just after
> the ALTER PUBLICATION is going to be wrong - the table is already locked,
> therefore further operations on the table don't trigger cache invalidation
> processing - but the catalog state *has* changed.  It's a bigger problem for
> logical decoding though, as it's a bit more lazy about invalidation processing
> than normal transactions, allowing the problem to persist for longer.
> 

Yeah. I'm wondering if there's some other operation acquiring a lock
weaker than RowExclusiveLock that might be affected by this. Because
then we'd need to get an even stronger lock ...

> 
> I guess it's not really feasible to just increase the lock level here though
> :(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL
> would perhaps lead to new deadlocks and such? But it also seems quite wrong.
> 

If this really is about the lock being too weak, then I don't see why
it would be wrong? If it's required for correctness, it's not really
wrong, IMO. Sure, stronger locks are not great ...

I'm not sure about the risk of deadlocks. If you do

    ALTER PUBLICATION ... ADD TABLE

it's not holding many other locks. It essentially just gets a lock on
the pg_publication catalog, and then on the publication row. That's it.
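
A rough way to check what it actually holds is to keep the ALTER open in
one session and look at pg_locks from another (publication/table names
arbitrary):

  S1: BEGIN;
  S1: ALTER PUBLICATION p ADD TABLE t;
  S2: SELECT locktype, relation::regclass, classid::regclass, objid, mode
        FROM pg_locks
       WHERE pid <> pg_backend_pid() AND granted;
  S1: ROLLBACK;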

If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,
we're making it conflict with RowExclusive. Which is just DML, and I
think we need to do that.

So maybe that's fine? For me, a detected deadlock is better than
silently missing some of the data.

> 
> We could brute force this in the logical decoding infrastructure, by
> distributing invalidations from catalog modifying transactions to all
> concurrent in-progress transactions (like already done for historic catalog
> snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()).  But I think that'd
> be a fairly significant increase in overhead.
> 

I have no idea what the overhead would be - perhaps not too bad,
considering catalog changes are not too common (I'm sure there are
extreme cases). And maybe we could even restrict this only to
"interesting" catalogs, or something like that? (However I hate those
weird differences in behavior, it can easily lead to bugs.)

But it feels more like a band-aid than actually fixing the issue.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: long-standing data loss bug in initial sync of logical replication

From: Andres Freund
Date:
Hi,

On 2023-11-18 11:56:47 +0100, Tomas Vondra wrote:
> > I guess it's not really feasible to just increase the lock level here though
> > :(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL
> > would perhaps lead to new deadlocks and such? But it also seems quite wrong.
> > 
> 
> If this really is about the lock being too weak, then I don't see why
> it would be wrong?

Sorry, that was badly formulated. The wrong bit is the use of
ShareUpdateExclusiveLock.


> If it's required for correctness, it's not really wrong, IMO. Sure, stronger
> locks are not great ...
> 
> I'm not sure about the risk of deadlocks. If you do
> 
>     ALTER PUBLICATION ... ADD TABLE
> 
> it's not holding many other locks. It essentially just gets a lock on
> the pg_publication catalog, and then on the publication row. That's it.
> 
> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,
> we're making it conflict with RowExclusive. Which is just DML, and I
> think we need to do that.

From what I can tell it needs to be an AccessExclusiveLock. Completely
independent of logical decoding. The way the cache stays coherent is catalog
modifications conflicting with anything that builds cache entries. We have a
few cases where we do use lower level locks, but for those we have explicit
analysis for why that's ok (see e.g. reloptions.c) or we block until nobody
could have an old view of the catalog (various CONCURRENTLY operations).


> So maybe that's fine? For me, a detected deadlock is better than
> silently missing some of the data.

That certainly is true.


> > We could brute force this in the logical decoding infrastructure, by
> > distributing invalidations from catalog modifying transactions to all
> > concurrent in-progress transactions (like already done for historic catalog
> > snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()).  But I think that'd
> > be a fairly significant increase in overhead.
> > 
> 
> I have no idea what the overhead would be - perhaps not too bad,
> considering catalog changes are not too common (I'm sure there are
> extreme cases). And maybe we could even restrict this only to
> "interesting" catalogs, or something like that? (However I hate those
> weird differences in behavior, it can easily lead to bugs.)
>
> But it feels more like a band-aid than actually fixing the issue.

Agreed.

Greetings,

Andres Freund



Re: long-standing data loss bug in initial sync of logical replication

From: Tomas Vondra
Date:
On 11/18/23 19:12, Andres Freund wrote:
> Hi,
> 
> On 2023-11-18 11:56:47 +0100, Tomas Vondra wrote:
>>> I guess it's not really feasible to just increase the lock level here though
>>> :(. The use of ShareUpdateExclusiveLock isn't new, and suddenly using AEL
>>> would perhaps lead to new deadlocks and such? But it also seems quite wrong.
>>>
>>
>> If this really is about the lock being too weak, then I don't see why
>> it would be wrong?
> 
> Sorry, that was badly formulated. The wrong bit is the use of
> ShareUpdateExclusiveLock.
> 

Ah, you meant the current lock mode seems wrong, not that changing the
locks seems wrong. Yeah, true.

> 
>> If it's required for correctness, it's not really wrong, IMO. Sure, stronger
>> locks are not great ...
>>
>> I'm not sure about the risk of deadlocks. If you do
>>
>>     ALTER PUBLICATION ... ADD TABLE
>>
>> it's not holding many other locks. It essentially just gets a lock on
>> the pg_publication catalog, and then on the publication row. That's it.
>>
>> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,
>> we're making it conflict with RowExclusive. Which is just DML, and I
>> think we need to do that.
> 
> From what I can tell it needs to be an AccessExclusiveLock. Completely
> independent of logical decoding. The way the cache stays coherent is catalog
> modifications conflicting with anything that builds cache entries. We have a
> few cases where we do use lower level locks, but for those we have explicit
> analysis for why that's ok (see e.g. reloptions.c) or we block until nobody
> could have an old view of the catalog (various CONCURRENTLY operations).
> 

Yeah, I got too focused on the issue I triggered, which seems to be
fixed by using SRE (still don't understand why ...). But you're probably
right there may be other cases where SRE would not be sufficient, I
certainly can't prove it'd be safe.

> 
>> So maybe that's fine? For me, a detected deadlock is better than
>> silently missing some of the data.
> 
> That certainly is true.
> 
> 
>>> We could brute force this in the logical decoding infrastructure, by
>>> distributing invalidations from catalog modifying transactions to all
>>> concurrent in-progress transactions (like already done for historic catalog
>>> snapshot, c.f. SnapBuildDistributeNewCatalogSnapshot()).  But I think that'd
>>> be a fairly significant increase in overhead.
>>>
>>
>> I have no idea what the overhead would be - perhaps not too bad,
>> considering catalog changes are not too common (I'm sure there are
>> extreme cases). And maybe we could even restrict this only to
>> "interesting" catalogs, or something like that? (However I hate those
>> weird differences in behavior, it can easily lead to bugs.)
>>
>> But it feels more like a band-aid than actually fixing the issue.
> 
> Agreed.
> 

... and it would not fix the other places outside logical decoding.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: long-standing data loss bug in initial sync of logical replication

From: Andres Freund
Date:
Hi,

On 2023-11-18 21:45:35 +0100, Tomas Vondra wrote:
> On 11/18/23 19:12, Andres Freund wrote:
> >> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,
> >> we're making it conflict with RowExclusive. Which is just DML, and I
> >> think we need to do that.
> > 
> > From what I can tell it needs to be an AccessExclusiveLock. Completely
> > independent of logical decoding. The way the cache stays coherent is catalog
> > modifications conflicting with anything that builds cache entries. We have a
> > few cases where we do use lower level locks, but for those we have explicit
> > analysis for why that's ok (see e.g. reloptions.c) or we block until nobody
> > could have an old view of the catalog (various CONCURRENTLY operations).
> > 
> 
> Yeah, I got too focused on the issue I triggered, which seems to be
> fixed by using SRE (still don't understand why ...). But you're probably
> right there may be other cases where SRE would not be sufficient, I
> certainly can't prove it'd be safe.

I think it makes sense here: SRE prevents the problematic "scheduling" in your
test - with SRE no DML started before ALTER PUB ... ADD can commit after.

I'm not sure there are any cases where using SRE instead of AE would cause
problems for logical decoding, but it seems very hard to prove. I'd be very
surprised if just using SRE would not lead to corrupted cache contents in some
situations. The cases where a lower lock level is ok are ones where we just
don't care that the cache is coherent in that moment.

In a way, the logical decoding cache-invalidation situation is a lot more
atomic than the "normal" situation. During normal operation locking is
strictly required to prevent incoherent states when building a cache entry
after a transaction committed, but before the sinval entries have been
queued. But in the logical decoding case that window doesn't exist.

Greetings,

Andres Freund



Re: long-standing data loss bug in initial sync of logical replication

From: Tomas Vondra
Date:

On 11/18/23 22:05, Andres Freund wrote:
> Hi,
> 
> On 2023-11-18 21:45:35 +0100, Tomas Vondra wrote:
>> On 11/18/23 19:12, Andres Freund wrote:
>>>> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,
>>>> we're making it conflict with RowExclusive. Which is just DML, and I
>>>> think we need to do that.
>>>
>>> From what I can tell it needs to be an AccessExclusiveLock. Completely
>>> independent of logical decoding. The way the cache stays coherent is catalog
>>> modifications conflicting with anything that builds cache entries. We have a
>>> few cases where we do use lower level locks, but for those we have explicit
>>> analysis for why that's ok (see e.g. reloptions.c) or we block until nobody
>>> could have an old view of the catalog (various CONCURRENTLY operations).
>>>
>>
>> Yeah, I got too focused on the issue I triggered, which seems to be
>> fixed by using SRE (still don't understand why ...). But you're probably
>> right there may be other cases where SRE would not be sufficient, I
>> certainly can't prove it'd be safe.
> 
> I think it makes sense here: SRE prevents the problematic "scheduling" in your
> test - with SRE no DML started before ALTER PUB ... ADD can commit after.
> 

If I understand correctly, with the current code (which only gets
ShareUpdateExclusiveLock), we may end up in a situation like this
(sessions A and B):

  A: starts "ALTER PUBLICATION p ADD TABLE t" and gets the SUE lock
  A: writes the invalidation message(s) into WAL
  B: inserts into table "t"
  B: commit
  A: commit

With the stronger SRE lock, the commits would have to happen in the
opposite order, because as you say it prevents the bad ordering.

But why would this matter for logical decoding? We accumulate the
invalidations and execute them at transaction commit, or did I miss
something?

So what I think should happen is we get to apply B first, which won't
see the table as part of the publication. It might even build the cache
entries (syscache+relsync), reflecting that. But then we get to execute
A, along with all the invalidations, and that should invalidate them.

I'm clearly missing something, because the SRE lock does change the
behavior, so there has to be a difference (and by my reasoning there
shouldn't be one).

Or maybe it's the other way around? Won't B get the invalidation, but
use a historical snapshot that doesn't yet see the table in publication?

> I'm not sure there are any cases where using SRE instead of AE would cause
> problems for logical decoding, but it seems very hard to prove. I'd be very
> surprised if just using SRE would not lead to corrupted cache contents in some
> situations. The cases where a lower lock level is ok are ones where we just
> don't care that the cache is coherent in that moment.
> 

Are you saying it might break cases that are not corrupted now? How
could obtaining a stronger lock have such effect?

> In a way, the logical decoding cache-invalidation situation is a lot more
> atomic than the "normal" situation. During normal operation locking is
> strictly required to prevent incoherent states when building a cache entry
> after a transaction committed, but before the sinval entries have been
> queued. But in the logical decoding case that window doesn't exist.
> 

Because we apply the invalidations at commit time, so it happens as a
single operation that can't interleave with other sessions?


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: long-standing data loss bug in initial sync of logical replication

From: Andres Freund
Date:
On 2023-11-19 02:15:33 +0100, Tomas Vondra wrote:
> 
> 
> On 11/18/23 22:05, Andres Freund wrote:
> > Hi,
> > 
> > On 2023-11-18 21:45:35 +0100, Tomas Vondra wrote:
> >> On 11/18/23 19:12, Andres Freund wrote:
> >>>> If we increase the locks from ShareUpdateExclusive to ShareRowExclusive,
> >>>> we're making it conflict with RowExclusive. Which is just DML, and I
> >>>> think we need to do that.
> >>>
> >>> From what I can tell it needs to be an AccessExclusiveLock. Completely
> >>> independent of logical decoding. The way the cache stays coherent is catalog
> >>> modifications conflicting with anything that builds cache entries. We have a
> >>> few cases where we do use lower level locks, but for those we have explicit
> >>> analysis for why that's ok (see e.g. reloptions.c) or we block until nobody
> >>> could have an old view of the catalog (various CONCURRENTLY operations).
> >>>
> >>
> >> Yeah, I got too focused on the issue I triggered, which seems to be
> >> fixed by using SRE (still don't understand why ...). But you're probably
> >> right there may be other cases where SRE would not be sufficient, I
> >> certainly can't prove it'd be safe.
> > 
> > I think it makes sense here: SRE prevents the problematic "scheduling" in your
> > test - with SRE no DML started before ALTER PUB ... ADD can commit after.
> > 
> 
> If I understand correctly, with the current code (which only gets
> ShareUpdateExclusiveLock), we may end up in a situation like this
> (sessions A and B):
> 
>   A: starts "ALTER PUBLICATION p ADD TABLE t" and gets the SUE lock
>   A: writes the invalidation message(s) into WAL
>   B: inserts into table "t"
>   B: commit
>   A: commit

I don't think this is the problematic sequence - at least it's not what I had
reproed in
https://postgr.es/m/20231118025445.crhaeeuvoe2g5dv6%40awork3.anarazel.de

Adding line numbers:

1) S1: CREATE TABLE d(data text not null);
2) S1: INSERT INTO d VALUES('d1');
3) S2: BEGIN; INSERT INTO d VALUES('d2');
4) S1: ALTER PUBLICATION pb ADD TABLE d;
5) S2: COMMIT
6) S2: INSERT INTO d VALUES('d3');
7) S1: INSERT INTO d VALUES('d4');
8) RL: <nothing>

The problem with the sequence is that the insert from 3) is decoded *after* 4)
and that to decode the insert (which happened before the ALTER) the catalog
snapshot and cache state is from *before* the ALTER PUBLICATION. Because the
transaction started in 3) doesn't actually modify any catalogs, no
invalidations are executed after decoding it. The result is that the cache
looks like it did at 3), not like after 4). Undesirable timetravel...

It's worth noting that the cache state here is briefly correct after 4);
it's just that after 5) it ends up with the old state again.

If 4) instead uses a SRE lock, then S1 will be blocked until S2 commits, and
everything is fine.



> > I'm not sure there are any cases where using SRE instead of AE would cause
> > problems for logical decoding, but it seems very hard to prove. I'd be very
> > surprised if just using SRE would not lead to corrupted cache contents in some
> > situations. The cases where a lower lock level is ok are ones where we just
> > don't care that the cache is coherent in that moment.

> Are you saying it might break cases that are not corrupted now? How
> could obtaining a stronger lock have such effect?

No, I mean that I don't know if using SRE instead of AE would have negative
consequences for logical decoding. I.e. whether, from a logical decoding POV,
it'd suffice to increase the lock level to just SRE instead of AE.

Since I don't see how it'd be correct otherwise, it's kind of a moot question.


> > In a way, the logical decoding cache-invalidation situation is a lot more
> > atomic than the "normal" situation. During normal operation locking is
> > strictly required to prevent incoherent states when building a cache entry
> > after a transaction committed, but before the sinval entries have been
> > queued. But in the logical decoding case that window doesn't exist.
> > 
> Because we apply the invalidations at commit time, so it happens as a
> single operation that can't interleave with other sessions?

Yea, the situation is much simpler during logical decoding than "originally" -
there's no concurrency.

Greetings,

Andres Freund



Re: long-standing data loss bug in initial sync of logical replication

From: Vadim Lakt
Date:
Hi,

On 19.11.2023 09:18, Andres Freund wrote:
> Yea, the situation is much simpler during logical decoding than "originally" -
> there's no concurrency.
>
> Greetings,
>
> Andres Freund
>
We've encountered a similar error on one of our production servers.

The case: after adding a table to logical replication, the table
initialization proceeds normally, but new data from the publisher's
table does not appear on the subscriber. Right after adding the table
we checked that the data was present on the subscriber and everything
looked normal; we only discovered the error some time later. I have
attached scripts to the email.

The patch from the first message also solves this problem.

-- 
Best regards,
Vadim Lakt

Attachments