Discussion: Issue with logical replication slot during switchover
An issue occurred during the initial switchover using PostgreSQL version 17.5. The setup consists of a cluster with two nodes, managed by Patroni version 4.0.5.
Logical replication is configured on the same instance, and the new feature enabling logical replication slots to be failover-safe in a highly available environment is used. Logical slot management is currently disabled in Patroni.
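For context, the slot-synchronization feature referred to above is driven by a few settings; a minimal sketch (host, user, and slot names are illustrative):

```ini
# postgresql.conf on the standby (PostgreSQL 17)
wal_level              = logical
sync_replication_slots = on          # runs the slot sync worker
hot_standby_feedback   = on          # required for slot synchronization
primary_slot_name      = 'physical_slot'
# primary_conninfo must include dbname= so the sync worker can connect
primary_conninfo = 'host=node_1 user=replicator dbname=postgres'
```

The logical slot itself must be created on the primary with failover enabled, e.g. pg_create_logical_replication_slot('logical_slot', 'pgoutput', false, false, true), or via a subscription created with failover = true.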
Following are some screens captured during the switchover
1. Run the switchover with Patroni
patronictl switchover
Current cluster topology
+ Cluster: ClusterX (7529893278186104053) ----+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+----------+--------------+---------+-----------+----+-----------+
| node_1 | xxxxxxxxxxxx | Leader | running | 4 | |
| node_2 | xxxxxxxxxxxx | Replica | streaming | 4 | 0 |
+----------+--------------+---------+-----------+----+-----------+
I would like to make a proposal to address the issue: since the logical slot is in a failover state on both the primary and the standby, an attempt could be made to resynchronize them.
OK, the check on the failover status of the remote slot is perhaps redundant.
I'm not sure what impact setting the synced flag to true might have. But if you run an additional switchover, it works fine, because the synced flag on the new primary is now set to true.
If we come back to the idea of the GUC or the API, adding an allow_overwrite parameter to the pg_create_logical_replication_slot function, and dropping the existing logical slot when it is set to true, could be a suitable approach.
Regards,
On Thu, Aug 7, 2025 at 6:50 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
>
> An issue occurred during the initial switchover using PostgreSQL version 17.5. The setup consists of a cluster with two nodes, managed by Patroni version 4.0.5.
> Logical replication is configured on the same instance, and the new feature enabling logical replication slots to be failover-safe in a highly available environment is used. Logical slot management is currently disabled in Patroni.
>
> Following are some screens captured during the switchover
>
> 1. Run the switchover with Patroni
>
> patronictl switchover
>
> Current cluster topology
>
> + Cluster: ClusterX (7529893278186104053) ----+----+-----------+
>
> | Member | Host | Role | State | TL | Lag in MB |
>
> +----------+--------------+---------+-----------+----+-----------+
>
> | node_1 | xxxxxxxxxxxx | Leader | running | 4 | |
>
> | node_2 | xxxxxxxxxxxx | Replica | streaming | 4 | 0 |
>
> +----------+--------------+---------+-----------+----+-----------+
>
> 2. Check the slot on the new Primary
>
> select * from pg_replication_slots where slot_type = 'logical';
> +-[ RECORD 1 ]--------+----------------+
> | slot_name | logical_slot |
> | plugin | pgoutput |
> | slot_type | logical |
> | datoid | 25605 |
> | database | db_test |
> | temporary | f |
> | active | t |
> | active_pid | 3841546 |
> | xmin | |
> | catalog_xmin | 10399 |
> | restart_lsn | 0/37002410 |
> | confirmed_flush_lsn | 0/37002448 |
> | wal_status | reserved |
> | safe_wal_size | |
> | two_phase | f |
> | inactive_since | |
> | conflicting | f |
> | invalidation_reason | |
> | failover | t |
> | synced | t |
> +---------------------+----------------+
> Logical replication is active again after the promote
>
> 3. Check the slot on the new standby
> select * from pg_replication_slots where slot_type = 'logical';
> +-[ RECORD 1 ]--------+-------------------------------+
> | slot_name | logical_slot |
> | plugin | pgoutput |
> | slot_type | logical |
> | datoid | 25605 |
> | database | db_test |
> | temporary | f |
> | active | f |
> | active_pid | |
> | xmin | |
> | catalog_xmin | 10397 |
> | restart_lsn | 0/3638F5F0 |
> | confirmed_flush_lsn | 0/3638F6A0 |
> | wal_status | reserved |
> | safe_wal_size | |
> | two_phase | f |
> | inactive_since | 2025-08-05 10:21:03.342587+02 |
> | conflicting | f |
> | invalidation_reason | |
> | failover | t |
> | synced | f |
> +---------------------+-------------------------------+
>
> The synced flag keeps the value false.
> The following error is in the log:
> 2025-06-10 16:40:58.996 CEST [739829]: [1-1] user=,db=,client=,application= LOG: slot sync worker started
> 2025-06-10 16:40:59.011 CEST [739829]: [2-1] user=,db=,client=,application= ERROR: exiting from slot synchronization because same name slot "logical_slot" already exists on the standby
>
> I would like to make a proposal to address the issue:
> Since the logical slot is in a failover state on both the primary and the standby, an attempt could be made to resynchronize them.
> I modify the slotsync.c module
> +++ b/src/backend/replication/logical/slotsync.c
> @@ -649,24 +649,46 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid remote_dbid)
>
> return false;
> }
> -
> - /* Search for the named slot */
> + // Both local and remote slot have the same name
> if ((slot = SearchNamedReplicationSlot(remote_slot->name, true)))
> {
> bool synced;
> + bool failover_status = remote_slot->failover;
>
> SpinLockAcquire(&slot->mutex);
> synced = slot->data.synced;
> SpinLockRelease(&slot->mutex);
> +
> + if (!synced){
> +
> + Assert(!MyReplicationSlot);
> +
> + if (failover_status){
> +
> + ReplicationSlotAcquire(remote_slot->name, true, true);
> +
> + // Put the synced flag to true to attempt resynchronizing failover slot on the standby
> + MyReplicationSlot->data.synced = true;
> +
> + ReplicationSlotMarkDirty();
>
> - /* User-created slot with the same name exists, raise ERROR. */
> - if (!synced)
> - ereport(ERROR,
> + ReplicationSlotRelease();
> +
> + /* Get rid of a replication slot that is no longer wanted */
> + ereport(WARNING,
> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("slot \"%s\" local slot has the same name as remote slot and they are in failover mode, try to synchronize them",
> + remote_slot->name));
> + return false; /* Going back to the main loop after droping the failover slot */
> + }
> + else
> + /* User-created slot with the same name exists, raise ERROR. */
> + ereport(ERROR,
> errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> errmsg("exiting from slot synchronization because same"
> - " name slot \"%s\" already exists on the standby",
> - remote_slot->name));
> -
> + " name slot \"%s\" already exists on the standby",
> + remote_slot->name));
> + }
> /*
> * The slot has been synchronized before.
> *
> This message follows the discussions started in this thread:
> https://www.postgresql.org/message-id/CAA5-nLDvnqGtBsKu4T_s-cS%2BdGbpSLEzRwgep1XfYzGhQ4o65A%40mail.gmail.com
>
> Help would be appreciated to move this point forward
>
Thank you for starting a new thread and working on it.
I think you have given a reference to the wrong thread; the correct
one is [1].
IIUC, the proposed fix checks whether remote_slot is failover-enabled
and a slot with the same name exists locally with 'synced'=false; if
so, it enables the 'synced' flag and proceeds with synchronization
from the next cycle onward, else it errors out. But the remote slot's
failover will always be true if we have reached this stage. Did you
actually mean to check if the local slot is failover-enabled but has
'synced' set to false (indicating it's a new standby after a
switchover)? Even with that check, it might not be correct to
overwrite such a slot internally. I think in [1] we discussed a couple
of ideas related to a GUC, an alter API, and a drop_all_slots API, but
I don't see any of those proposed here. Do we want to try any of that?
[1]:
https://www.postgresql.org/message-id/flat/CAA5-nLD0vKn6T1-OHROBNfN2Pxa17zVo4UoVBdfHn2y%3D7nKixA%40mail.gmail.com
thanks
Shveta
Thanks for sharing this. In fact, I agree: introducing an allow_overwrite
slot property seems cleaner than a GUC for this specific use case.
a) At first, an extension of pg_create_logical_replication_slot()
could be proposed.
b) pg_alter_logical_replication_slot(): that is a neat approach; a generic API for modifying other slot properties seems like a forward-looking design choice. However, I don't know whether slot properties can be modified easily, because when a slot is active, a process is holding them in memory.
c) CREATE SUBSCRIPTION and ALTER SUBSCRIPTION stay untouched.
Regarding the switchover inconsistency, I agree that behavior should be uniform. The current discrepancy between the first and the double switchover is confusing, and having the same error in both cases would improve predictability. It's up to you if you want to open a thread for this.
Regards,
Fabrice
On Fri, Aug 8, 2025 at 7:01 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Thanks Shveta for coming on this point again and fixing the link.
> The idea is to check if the slot has same name to try to resynchronize it with the primary.
> ok the check on the failover status for the remote slot is perhaps redundant.
> I'm not sure what impact setting the synced flag to true might have. But if you run an additional switchover, it works fine because the synced flag on the new primary is set to true now.
> If we come back to the idea of the GUC or the API, adding an allow_overwrite parameter to the pg_create_logical_replication_slot function and removing the logical slot when set to true could be a suitable approach.
>
> What is your opinion?
>
If implemented as a GUC, it would address only a specific corner case,
making it less suitable to be added as a GUC.
OTOH, adding it as a slot's property makes more sense. You can start
with introducing a new slot property, allow_overwrite. By default,
this property will be set to false.
a) The function pg_create_logical_replication_slot() can be extended
to accept this parameter.
b) A new API pg_alter_logical_replication_slot() can be introduced, to
modify this property after slot creation if needed.
c) The commands CREATE SUBSCRIPTION and ALTER SUBSCRIPTION are not
needed to include an allow_overwrite parameter. When CREATE
SUBSCRIPTION creates a slot, it will always set allow_overwrite to
false by default. If users need to change this later, they can use the
new API pg_alter_logical_replication_slot() to update the property.
d) Additionally, pg_alter_logical_replication_slot() can serve as a
generic API to modify other slot properties as well.
This appears to be a reasonable idea with potential use cases beyond
just allowing synchronization post switchover. Thoughts?
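To make the proposal concrete, here is a sketch of what the extended interface might look like. Note that neither the allow_overwrite parameter nor pg_alter_logical_replication_slot() exists in PostgreSQL today; both are proposals under discussion, so this is pseudocode for the suggested design, not working SQL:

```sql
-- (a) hypothetical extension of the existing create function
SELECT pg_create_logical_replication_slot(
           'logical_slot', 'pgoutput',
           temporary => false, twophase => false, failover => true,
           allow_overwrite => true);   -- proposed parameter, default false

-- (b)/(d) hypothetical generic alter API
SELECT pg_alter_logical_replication_slot('logical_slot',
                                         allow_overwrite => true);
```

With allow_overwrite = true, the sync worker would drop and recreate the conflicting local slot instead of erroring out.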
~~~
Another problem as you pointed out is inconsistent behaviour across
switchovers. On the first switchover, we get the error on new standby:
"Exiting from slot synchronization because a slot with the same name
already exists on the standby."
But in the case of a double switchover, this error does not occur.
This is due to the 'synced' flag not set on new standby on first
switchover while set in double switchover. I think the behaviour
should be the same. In both cases, it should emit the same error. We
are thinking of a potential solution here and will start a new thread
if needed.
thanks
Shveta
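The discrepancy described above can be observed directly by running the same catalog query on both nodes after each switchover (this is the view used in the screenshots earlier in the thread):

```sql
SELECT slot_name, failover, synced, inactive_since
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

A synced value of false on the new standby after the first switchover corresponds to the failure mode discussed here.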
On Wed, Aug 13, 2025 at 9:29 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Thanks for sharing this. In fact, I agree, introducing an allow_overwrite slot property makes seems cleaner than a GUC for this specific use case.
>
> a) At first, an extension of pg_create_logical_replication_slot() could be proposed
> b) pg_alter_logical_replication_slot(): that could be is a neat approach, a generic API for modifying other slot properties seems like a forward-looking design choice. However I don't know if slot properties could be modified easily because when slot is active, a process is holding them in memory.
Right, it cannot be. This API can be used when the slot is not
acquired by any other process. We may have use cases for that as well,
similar to the one we have for this switchover case.
thanks
Shveta
On Wed, Aug 27, 2025 at 7:55 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> For the first step (a), the pg_create_logical_replication_slot interface is extended.
> The slot on the new attached standby will be dropped and recreated if the flag allow_overwrite is set to true.
> I tested the modified source; could you please give me feedback on the code changes?
>
Thanks for working on this. Can you please provide a patch instead of
a diff? It will then be easier for reviewers to apply and test it.
thanks
Shveta
On Thu, Aug 28, 2025 at 9:11 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> What is the procedure to create this patch? Thank you for your help.
>
We use 'git format-patch' to generate formatted patches. I have given
a few links ([1], [2], [3]) on the know-how.
I usually use these steps:
git add <file1.c> <adds file to staging area>
git add <file2.c>
<once all the required files are added, create a commit>
git commit <add the required message to describe the patch>
git format-patch -1 -v1 <create the patch>
-1 --> create a patch for the last 1 commit
-v1 --> patch prefix. I have used v1, which indicates version 1; when
we upload the patch a second time, we can use -v2, and so on.
[1]: https://stackoverflow.com/questions/6658313/how-can-i-generate-a-git-patch-for-a-specific-commit
[2]: https://www.geeksforgeeks.org/git/how-to-generate-a-git-patch-for-a-specific-commit/
[3]: https://gist.github.com/ganapathichidambaram/3504e31fc8ba809762b3a0d9d9ef8ec2
thanks
Shveta
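The steps above can be sketched end to end in a throwaway repository (repository, file, and commit names are hypothetical):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git config user.email "dev@example.com"
git config user.name "Dev"

# stage the changed files and commit with a descriptive message
echo "/* demo change */" > file1.c
git add file1.c
git commit -q -m "fix failover slot issue when doing a switchover"

# create a v1 patch for the last commit
git format-patch -1 -v1
ls v1-0001-*.patch
```

The resulting v1-0001-*.patch file is what gets attached to the mailing list; a later revision would be regenerated with -v2.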
On Fri, 29 Aug 2025 at 19:28, Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Thanks Shveta for your procedure to follow, here is the version 1 of the patch
>
> Regards,
>
> Fabrice
Hi Fabrice,
Thanks for providing the patch. I reviewed your patch and have the
following comments:
1. I think we should add a commit message in the patch. It would help
to give an understanding of the patch.
2. I tried applying the patch on HEAD; it generates the following warnings:
Applying: fix failover slot issue when doing a switchover
.git/rebase-apply/patch:31: trailing whitespace.
bool allow_overwrite = false;
.git/rebase-apply/patch:45: trailing whitespace.
bool synced;
.git/rebase-apply/patch:55: trailing whitespace.
.git/rebase-apply/patch:56: trailing whitespace.
if (!synced){
.git/rebase-apply/patch:57: trailing whitespace.
/*
warning: squelched 16 whitespace errors
warning: 21 lines add whitespace errors.
3. I also tried to build the patch on HEAD and it gives the following error:
ubuntu@ubuntu-Virtual-Machine:~/Project/pg/postgres$ ./../../install-clean.sh pg_30_8 > warn.log
launcher.c: In function ‘CreateConflictDetectionSlot’:
launcher.c:1457:9: error: too few arguments to function ‘ReplicationSlotCreate’
 1457 | ReplicationSlotCreate(CONFLICT_DETECTION_SLOT, false, RS_PERSISTENT, false,
      | ^~~~~~~~~~~~~~~~~~~~~
In file included from launcher.c:35:
../../../../src/include/replication/slot.h:307:13: note: declared here
  307 | extern void ReplicationSlotCreate(const char *name, bool db_specific,
      |             ^~~~~~~~~~~~~~~~~~~~~
make[4]: *** [<builtin>: launcher.o] Error 1
make[3]: *** [../../../src/backend/common.mk:37: logical-recursive] Error 2
make[2]: *** [common.mk:37: replication-recursive] Error 2
make[1]: *** [Makefile:42: install-backend-recurse] Error 2
make: *** [GNUmakefile:11: install-src-recurse] Error 2
4. I have some general comments regarding the formatting of the patch.
+ // Both local and remote slot have the same name
We use the following format for single-line comments:
/* comment text */
and for multi-line comments we use the following format:
/*
 * comment text begins here
 * and continues here
 */
5. We should use proper indentation here:
+ elog(LOG, "Logical replication slot %s created with option allow_overwrite to %s",
+ NameStr(slot->data.name),
+ slot->data.allow_overwrite ? "true" : "false");
6.
+ if (!synced){
+ /*
+ * Check if we need to overwrite an existing
+ * logical slot
+ */
We should start the parentheses on the next line.
Also, the indentation of the comment is not correct here.
7.
+ if (allow_overwrite){
+ /*
+ * Get rid of a replication slot that is no
+ *longer wanted
+ */
Similar comment as above.
Please refer to [1] [2] for proper formatting of the patch.
[1]: https://www.postgresql.org/docs/devel/source-format.html
[2]: https://wiki.postgresql.org/wiki/Creating_Clean_Patches
Thanks,
Shlok Kyal
On Sat, Aug 30, 2025 at 11:43 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:
>
> Hi Fabrice,
>
> Thanks for providing the patch. I reviewed your patch and have
> following comment:
>
> 2. I tried applying patch on HEAD, it generates following warnings:
> Applying: fix failover slot issue when doing a switchover
> .git/rebase-apply/patch:31: trailing whitespace.
> [...]
> warning: squelched 16 whitespace errors
> warning: 21 lines add whitespace errors.
'git diff --check' can be used before 'git commit' to get info of the
above issues before creating a patch.
> 5. We should use proper indentation here:
> + elog(LOG, "Logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name),
> + slot->data.allow_overwrite ? "true" : "false");
src/tools/pgindent <file name> can be used to do the indentation
before creating a patch.
I will make the modifications based on the remarks you mentioned.
On Sat, Aug 30, 2025 at 11:43 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:
>
> Hi Fabrice,
>
> Thanks for providing the patch. I reviewed your patch and have
> following comment:
>
> 1. I think we should add a commit message in the patch. It would help
> to give an understanding of the patch.
>
> 2. I tried applying patch on HEAD, it generates following warnings:
> Applying: fix failover slot issue when doing a switchover
> .git/rebase-apply/patch:31: trailing whitespace.
> bool allow_overwrite = false;
> .git/rebase-apply/patch:45: trailing whitespace.
> bool synced;
> .git/rebase-apply/patch:55: trailing whitespace.
>
> .git/rebase-apply/patch:56: trailing whitespace.
> if (!synced){
> .git/rebase-apply/patch:57: trailing whitespace.
> /*
> warning: squelched 16 whitespace errors
> warning: 21 lines add whitespace errors.
'git diff --check' can be used before 'git commit' to get info of
above issues before creating a patch.
> 3. I also tried to build the patch on HEAD and it is giving the following error:
> ubuntu@ubuntu-Virtual-Machine:~/Project/pg/postgres$
> ./../../install-clean.sh pg_30_8 > warn.log
> launcher.c: In function ‘CreateConflictDetectionSlot’:
> launcher.c:1457:9: error: too few arguments to function ‘ReplicationSlotCreate’
> 1457 | ReplicationSlotCreate(CONFLICT_DETECTION_SLOT, false,
> RS_PERSISTENT, false,
> | ^~~~~~~~~~~~~~~~~~~~~
> In file included from launcher.c:35:
> ../../../../src/include/replication/slot.h:307:13: note: declared here
> 307 | extern void ReplicationSlotCreate(const char *name, bool db_specific,
> | ^~~~~~~~~~~~~~~~~~~~~
> make[4]: *** [<builtin>: launcher.o] Error 1
> make[3]: *** [../../../src/backend/common.mk:37: logical-recursive] Error 2
> make[2]: *** [common.mk:37: replication-recursive] Error 2
> make[1]: *** [Makefile:42: install-backend-recurse] Error 2
> make: *** [GNUmakefile:11: install-src-recurse] Error 2
>
> 4. I have some general comments regarding formatting of the patch.
> + // Both local and remote slot have the same name
>
> We use following format for single line comments:
> /* comment text */
> and for multi line comments we use following format:
> /*
> * comment text begins here
> * and continues here
> */
>
> 5. We should use proper indentation here:
> + elog(LOG, "Logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name),
> + slot->data.allow_overwrite ? "true" : "false");
>
src/tools/pgindent <file name>
can be used to do indentation before creating a patch.
> 6.
> + if (!synced){
> + /*
> + * Check if we need to overwrite an existing
> + * logical slot
> + */
> We should start the parentheses from the next line.
> Also, indentation for comment is not correct here.
>
> 7.
> + if (allow_overwrite){
> + /*
> + * Get rid of a replication slot that is no
> + *longer wanted
> + */
> Similar comment as above.
>
> Please refer [1] [2] for proper formatting of the patch.
> [1]: https://www.postgresql.org/docs/devel/source-format.html
> [2]: https://wiki.postgresql.org/wiki/Creating_Clean_Patches
>
> Thanks,
> Shlok Kyal
In the next version of the patch, I think I should change the ReplicationSlotCreate() calls so that the value of the allow_overwrite flag is passed consistently, and modify the interface so that the allow_overwrite attribute can be displayed.
Attachments
On Wed, Aug 13, 2025 at 11:34 AM shveta malik <shveta.malik@gmail.com> wrote:
>
> On Fri, Aug 8, 2025 at 7:01 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Another problem as you pointed out is inconsistent behaviour across
> switchovers. On the first switchover, we get the error on new standby:
> "Exiting from slot synchronization because a slot with the same name
> already exists on the standby."
>
> But in the case of a double switchover, this error does not occur.
> This is due to the 'synced' flag not set on new standby on first
> switchover while set in double switchover. I think the behaviour
> should be the same. In both cases, it should emit the same error. We
> are thinking of a potential solution here and will start a new thread
> if needed.
>
Started a thread [1] for this issue.
[1]: https://www.postgresql.org/message-id/flat/CAJpy0uBJqTQdq%2BUDW55oK2bhxMig%3DFaEYFV%3D%2BZaKmtmgNWwsHw%40mail.gmail.com
thanks
Shveta
On Sun, Sep 7, 2025 at 1:32 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
> Here the version 1 of the patch with the modifications.
> I do a git diff --check until there are no more errors.
> In a next version of the patch, I think I have make change to call ReplicationSlotCreate() function with the value of the flag allow_overwrite be consistent
In which flow? I see all ReplicationSlotCreate() references already
accepting the value.
> and modify the interface ReplicationSlotCreate to display the attribute allow_overwrite.
>
Do you mean to modify pg_replication_slots?
Thank You for the patch. Please find a few comments:
1)
In both create_physical_replication_slot() and
create_physical_replication_slot():
+ false, false,false);
,false --> , false (space after comma is recommended)
2)
+ elog(DEBUG1, "logical replication slot %s created with option
allow_overwrite to %s",
+ NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
IMO, we don't need this as we do not log other properties of the slot as well.
3)
We can have pg_replication_slots (pg_get_replication_slots) modified
to display the new property. Also we can modify docs to have the new
property defined.
4)
+ {
+ /* Check if we need to overwrite an existing logical slot */
+ if (allow_overwrite)
+ {
+ /* Get rid of a replication slot that is no longer wanted */
+ ReplicationSlotDrop(remote_slot->name,true);
+
+ /* Get rid of a replication slot that is no longer wanted */
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but it will be dropped because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
+
+ /* Going back to the main loop after droping the failover slot */
+ return false;
Currently we are returning after dropping the slot. But I think we
shall drop the slot and proceed with new slot creation of the same
name otherwise we may be left with old slot dropped and no new slot
created specially when API is run.
Scenario:
--On primary, create 2 slots: slot1 and slot2.
--On standby, create one slot slot1 with allow_overwrite=true.
--Run 'SELECT pg_sync_replication_slots();' on standby.
At the end of API, expectation is that both slots will be present with
'synced'=true, but only slot2 is present. if we run this API next
time, slot1 is created. It should have been dropped and recreated (or
say overwritten) in the first run itself.
5)
IMO, LOG is sufficient here, as the action aligns with the
user-provided 'allow_overwrite' setting.
thanks
Shveta
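The scenario in comment 4 can be reproduced with statements along these lines. The slot names are from the example above; the failover argument of pg_create_logical_replication_slot() is real PG 17 syntax, while any allow_overwrite option is only what this patch proposes, so it is shown as a hypothetical comment:

```sql
-- On the primary: two failover-enabled logical slots
SELECT pg_create_logical_replication_slot('slot1', 'pgoutput', false, false, true);
SELECT pg_create_logical_replication_slot('slot2', 'pgoutput', false, false, true);

-- On the standby: a user-created slot that collides with the remote slot1
-- (the allow_overwrite flag is the patch's proposal, not released syntax)
SELECT pg_create_logical_replication_slot('slot1', 'pgoutput' /*, allow_overwrite => true */);

-- On the standby: run one synchronization cycle
SELECT pg_sync_replication_slots();

-- Expectation after the fix: both slots present with synced = true
SELECT slot_name, synced FROM pg_replication_slots WHERE slot_type = 'logical';
```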
Here is the version 1 of the patch with corrections
> 1)
> In both create_physical_replication_slot() and
> create_physical_replication_slot():
> + false, false,false);
>
> ,false --> , false (space after comma is recommended)
>
> 2)
> + elog(DEBUG1, "logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
>
> IMO, we don't need this as we do not log other properties of the slot as well.

Point 1-2 done

> 3)
> We can have pg_replication_slots (pg_get_replication_slots) modified
> to display the new property. Also we can modify docs to have the new
> property defined.

Point 3 Not yet implemented

> 4)
> + {
> + /* Check if we need to overwrite an existing logical slot */
> + if (allow_overwrite)
> + {
> + /* Get rid of a replication slot that is no longer wanted */
> + ReplicationSlotDrop(remote_slot->name,true);
> +
> + /* Get rid of a replication slot that is no longer wanted */
> + ereport(WARNING,
> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("slot \"%s\" already exists"
> + " on the standby but it will be dropped because "
> + "flag allow_overwrite is set to true",
> + remote_slot->name));
> +
> + /* Going back to the main loop after droping the failover slot */
> + return false;
>
> Currently we are returning after dropping the slot. But I think we
> shall drop the slot and proceed with new slot creation of the same
> name otherwise we may be left with old slot dropped and no new slot
> created specially when API is run.
>
> Scenario:
> --On primary, create 2 slots: slot1 and slot2.
> --On standby, create one slot slot1 with allow_overwrite=true.
> --Run 'SELECT pg_sync_replication_slots();' on standby.
>
> At the end of API, expectation is that both slots will be present with
> 'synced'=true, but only slot2 is present. if we run this API next
> time, slot1 is created. It should have been dropped and recreated (or
> say overwritten) in the first run itself.

Point 4:
I put the slot creation under the allow_overwrite condition.
After testing with synchronized_standby_slots disabled on the standby, it works. I think the code for the synchronization of the new slot could be factored out.

> 5)
> IMO, LOG is sufficient here, as the action aligns with the
> user-provided 'allow_overwrite' setting.

Point 5
Where is it in the code?
Thanks Shveta for all your comments. I'll prepare a version 2 of the patch including your proposals.
Regards
Fabrice
On Thu, 11 Sep 2025, 07:42 shveta malik <shveta.malik@gmail.com> wrote:
On Sun, Sep 7, 2025 at 1:32 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
> Here the version 1 of the patch with the modifications.
> I do a git diff --check until there are no more errors.
> In a next version of the patch, I think I have make change to call ReplicationSlotCreate() function with the value of the flag allow_overwrite be consistent
In which flow? I see all ReplicationSlotCreate() references already
accepting the value.
> and modify the interface ReplicationSlotCreate to display the attribute allow_overwrite.
>
Do you mean to modify pg_replication_slots?
Thank You for the patch. Please find a few comments:
1)
In both create_physical_replication_slot() and
create_physical_replication_slot():
+ false, false,false);
,false --> , false (space after comma is recommended)
2)
+ elog(DEBUG1, "logical replication slot %s created with option
allow_overwrite to %s",
+ NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
IMO, we don't need this as we do not log other properties of the slot as well.
3)
We can have pg_replication_slots (pg_get_replication_slots) modified
to display the new property. Also we can modify docs to have the new
property defined.
4)
+ {
+ /* Check if we need to overwrite an existing logical slot */
+ if (allow_overwrite)
+ {
+ /* Get rid of a replication slot that is no longer wanted */
+ ReplicationSlotDrop(remote_slot->name,true);
+
+ /* Get rid of a replication slot that is no longer wanted */
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but it will be dropped because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
+
+ /* Going back to the main loop after droping the failover slot */
+ return false;
Currently we are returning after dropping the slot. But I think we
shall drop the slot and proceed with new slot creation of the same
name otherwise we may be left with old slot dropped and no new slot
created specially when API is run.
Scenario:
--On primary, create 2 slots: slot1 and slot2.
--On standby, create one slot slot1 with allow_overwrite=true.
--Run 'SELECT pg_sync_replication_slots();' on standby.
At the end of API, expectation is that both slots will be present with
'synced'=true, but only slot2 is present. if we run this API next
time, slot1 is created. It should have been dropped and recreated (or
say overwritten) in the first run itself.
5)
IMO, LOG is sufficient here, as the action aligns with the
user-provided 'allow_overwrite' setting.
thanks
Shveta
Attachments
On Fri, Sep 26, 2025 at 9:52 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi Shveta,
>
> Here is the version 1 of the patch with corrections
>
Thanks. We can bump the version to v2 now — that is, each time we post
a new version, we can increment the version number.
The patch fails to apply to the latest HEAD. Also it had some
'trailing whitespace' issues. Can you please rebase the patch and post
it again?
> 1)
> In both create_physical_replication_slot() and
> create_physical_replication_slot():
> + false, false,false);
>
> ,false --> , false (space after comma is recommended)
>
> 2)
> + elog(DEBUG1, "logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
>
> IMO, we don't need this as we do not log other properties of the slot as well.
> Point 1-2 done
> 3)
> We can have pg_replication_slots (pg_get_replication_slots) modified
> to display the new property. Also we can modify docs to have the new
> property defined.
>
> Point 3 Not yet implemented
>
> 4)
> + {
> + /* Check if we need to overwrite an existing logical slot */
> + if (allow_overwrite)
> + {
> + /* Get rid of a replication slot that is no longer wanted */
> + ReplicationSlotDrop(remote_slot->name,true);
> +
> + /* Get rid of a replication slot that is no longer wanted */
> + ereport(WARNING,
> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("slot \"%s\" already exists"
> + " on the standby but it will be dropped because "
> + "flag allow_overwrite is set to true",
> + remote_slot->name));
> +
> + /* Going back to the main loop after droping the failover slot */
> + return false;
>
> Currently we are returning after dropping the slot. But I think we
> shall drop the slot and proceed with new slot creation of the same
> name otherwise we may be left with old slot dropped and no new slot
> created specially when API is run.
>
> Scenario:
> --On primary, create 2 slots: slot1 and slot2.
> --On standby, create one slot slot1 with allow_overwrite=true.
> --Run 'SELECT pg_sync_replication_slots();' on standby.
>
> At the end of API, expectation is that both slots will be present with
> 'synced'=true, but only slot2 is present. if we run this API next
> time, slot1 is created. It should have been dropped and recreated (or
> say overwritten) in the first run itself.
>
> Point 4:
> I put the creation slot under the allow_overwrite condition.
> After testing with syn_standby_slots disable on the standby, it works. I think the code for the synchronisation of the new slot could be factorised.
>
> 5)
> IMO, LOG is sufficient here, as the action aligns with the
> user-provided 'allow_overwrite' setting.
>
> Point 5
> Where is it in the code?
>
This one:
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but try to recreate it because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
thanks
Shveta
Attachments
On Tue, Sep 30, 2025 at 12:17 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
>
> Here the generated v2 of the Patch.
>
Thanks. I have refactored the code for synchronize_one_slot() as there
was some code-repetition. Please take it if you find it okay.
Also I felt that when we create a slot through slot-synchronization,
we should create it with allow_overwrite as false. And thus in the
attached patch, I have changed that part as well.
It is a top up patch. Attached it as txt, please rename before
applying atop your changes. Attached the steps for your reference.
thanks
Shveta
Attachments
> On Tue, Sep 30, 2025 at 12:17 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
> >
> > Hi,
> >
> > Here the generated v2 of the Patch.
> >
> Thanks. I have refactored the code for synchronize_one_slot() as there
> was some code-repetition. Please take it if you find it okay.
>
> Also I felt that when we create a slot through slot-synchronization,
> we should create it with allow_overwrite as false. And thus in the
> attached patch, I have changed that part as well.

At the moment we can let the slot creation with allow_overwrite set to false.

> It is a top up patch. Attached it as txt, please rename before
> applying atop your changes. Attached the steps for your reference.
>
> thanks
> Shveta
Attachments
On Tue, Sep 30, 2025 at 3:11 PM shveta malik <shveta.malik@gmail.com> wrote:
>
> On Tue, Sep 30, 2025 at 12:17 PM Fabrice Chapuis
> <fabrice636861@gmail.com> wrote:
> >
> > Hi,
> >
> > Here the generated v2 of the Patch.
> >
>
> Thanks. I have refactored the code for synchronize_one_slot() as there
> was some code-repetition. Please take it if you find it okay.
>
> Also I felt that when we create a slot through slot-synchronization,
> we should create it with allow_overwrite as false. And thus in the
> attached patch, I have changed that part as well.
>
> It is a top up patch. Attached it as txt, please rename before
> applying atop your changes. Attached the steps for your reference.
>
Next, pg_replication_slots (pg_get_replication_slots) can be modified
to display the new property. And the corresponding doc for
pg_replication_slots (doc/src/sgml/system-views.sgml) can be updated.
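Once pg_get_replication_slots() is extended, the new property would presumably surface in the view like this (allow_overwrite is the column name this thread proposes; it does not exist in any released PostgreSQL):

```sql
SELECT slot_name, failover, synced, allow_overwrite
FROM pg_replication_slots
WHERE slot_type = 'logical';
```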
thanks
Shveta