Discussion: Issue with logical replication slot during switchover
An issue occurred during the initial switchover using PostgreSQL version 17.5. The setup consists of a cluster with two nodes, managed by Patroni version 4.0.5.
Logical replication is configured on the same instance, and the new feature enabling logical replication slots to be failover-safe in a highly available environment is used. Logical slot management is currently disabled in Patroni.
Following are some screen captures taken during the switchover:
1. Run the switchover with Patroni
patronictl switchover
Current cluster topology
+ Cluster: ClusterX (7529893278186104053) ----+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+----------+--------------+---------+-----------+----+-----------+
| node_1 | xxxxxxxxxxxx | Leader | running | 4 | |
| node_2 | xxxxxxxxxxxx | Replica | streaming | 4 | 0 |
+----------+--------------+---------+-----------+----+-----------+
On Thu, Aug 7, 2025 at 6:50 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
>
> An issue occurred during the initial switchover using PostgreSQL version 17.5. The setup consists of a cluster with two nodes, managed by Patroni version 4.0.5.
> Logical replication is configured on the same instance, and the new feature enabling logical replication slots to be failover-safe in a highly available environment is used. Logical slot management is currently disabled in Patroni.
>
> Following are some screen captures taken during the switchover
>
> 1. Run the switchover with Patroni
>
> patronictl switchover
>
> Current cluster topology
>
> + Cluster: ClusterX (7529893278186104053) ----+----+-----------+
>
> | Member | Host | Role | State | TL | Lag in MB |
>
> +----------+--------------+---------+-----------+----+-----------+
>
> | node_1 | xxxxxxxxxxxx | Leader | running | 4 | |
>
> | node_2 | xxxxxxxxxxxx | Replica | streaming | 4 | 0 |
>
> +----------+--------------+---------+-----------+----+-----------+
>
> 2. Check the slot on the new Primary
>
> select * from pg_replication_slots where slot_type = 'logical';
> +-[ RECORD 1 ]--------+----------------+
> | slot_name | logical_slot |
> | plugin | pgoutput |
> | slot_type | logical |
> | datoid | 25605 |
> | database | db_test |
> | temporary | f |
> | active | t |
> | active_pid | 3841546 |
> | xmin | |
> | catalog_xmin | 10399 |
> | restart_lsn | 0/37002410 |
> | confirmed_flush_lsn | 0/37002448 |
> | wal_status | reserved |
> | safe_wal_size | |
> | two_phase | f |
> | inactive_since | |
> | conflicting | f |
> | invalidation_reason | |
> | failover | t |
> | synced | t |
> +---------------------+----------------+
> Logical replication is active again after the promote
>
> 3. Check the slot on the new standby
> select * from pg_replication_slots where slot_type = 'logical';
> +-[ RECORD 1 ]--------+-------------------------------+
> | slot_name | logical_slot |
> | plugin | pgoutput |
> | slot_type | logical |
> | datoid | 25605 |
> | database | db_test |
> | temporary | f |
> | active | f |
> | active_pid | |
> | xmin | |
> | catalog_xmin | 10397 |
> | restart_lsn | 0/3638F5F0 |
> | confirmed_flush_lsn | 0/3638F6A0 |
> | wal_status | reserved |
> | safe_wal_size | |
> | two_phase | f |
> | inactive_since | 2025-08-05 10:21:03.342587+02 |
> | conflicting | f |
> | invalidation_reason | |
> | failover | t |
> | synced | f |
> +---------------------+-------------------------------+
>
> The synced flag keeps the value false.
> The following error appears in the log:
> 2025-06-10 16:40:58.996 CEST [739829]: [1-1] user=,db=,client=,application= LOG: slot sync worker started
> 2025-06-10 16:40:59.011 CEST [739829]: [2-1] user=,db=,client=,application= ERROR: exiting from slot synchronization because same name slot "logical_slot" already exists on the standby
>
> I would like to make a proposal to address the issue:
> Since the logical slot is in a failover state on both the primary and the standby, an attempt could be made to resynchronize them.
> I modified the slotsync.c module:
> +++ b/src/backend/replication/logical/slotsync.c
> @@ -649,24 +649,46 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid remote_dbid)
>
> return false;
> }
> -
> - /* Search for the named slot */
> + // Both local and remote slot have the same name
> if ((slot = SearchNamedReplicationSlot(remote_slot->name, true)))
> {
> bool synced;
> + bool failover_status = remote_slot->failover;
>
> SpinLockAcquire(&slot->mutex);
> synced = slot->data.synced;
> SpinLockRelease(&slot->mutex);
> +
> + if (!synced){
> +
> + Assert(!MyReplicationSlot);
> +
> + if (failover_status){
> +
> + ReplicationSlotAcquire(remote_slot->name, true, true);
> +
> + // Put the synced flag to true to attempt resynchronizing failover slot on the standby
> + MyReplicationSlot->data.synced = true;
> +
> + ReplicationSlotMarkDirty();
>
> - /* User-created slot with the same name exists, raise ERROR. */
> - if (!synced)
> - ereport(ERROR,
> + ReplicationSlotRelease();
> +
> + /* Get rid of a replication slot that is no longer wanted */
> + ereport(WARNING,
> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("slot \"%s\" local slot has the same name as remote slot and they are
infailover mode, try to synchronize them",
> + remote_slot->name));
> + return false; /* Going back to the main loop after droping the failover slot */
> + }
> + else
> + /* User-created slot with the same name exists, raise ERROR. */
> + ereport(ERROR,
> errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> errmsg("exiting from slot synchronization because same"
> - " name slot \"%s\" already exists on the standby",
> - remote_slot->name));
> -
> + " name slot \"%s\" already exists on the standby",
> + remote_slot->name));
> + }
> /*
> * The slot has been synchronized before.
> *
> This message follows the discussions started in this thread:
> https://www.postgresql.org/message-id/CAA5-nLDvnqGtBsKu4T_s-cS%2BdGbpSLEzRwgep1XfYzGhQ4o65A%40mail.gmail.com
>
> Help would be appreciated to move this point forward
>
Thank You for starting a new thread and working on it.
I think you have given reference to the wrong thread, the correct one
is [1].
IIUC, the proposed fix is checking if remote_slot is failover-enabled
and the slot with the same name exists locally but has 'synced'=false,
enable the 'synced' flag and proceed with synchronization from next
cycle onward, else error out. But remote-slot's failover will always
be true if we have reached this stage. Did you actually mean to check
if the local slot is failover-enabled but has 'synced' set to false
(indicating it’s a new standby after a switchover)? Even with that
check, it might not be the correct thing to overwrite such a slot
internally. I think in [1], we discussed a couple of ideas related to
a GUC, alter-api, drop_all_slots API, but I don't see any of those
proposed here. Do we want to try any of that?
[1]:
https://www.postgresql.org/message-id/flat/CAA5-nLD0vKn6T1-OHROBNfN2Pxa17zVo4UoVBdfHn2y%3D7nKixA%40mail.gmail.com
thanks
Shveta
OK, the check on the failover status for the remote slot is perhaps redundant.
I'm not sure what impact setting the synced flag to true might have. But if you run an additional switchover, it works fine because the synced flag on the new primary is set to true now.
If we come back to the idea of the GUC or the API, adding an allow_overwrite parameter to the pg_create_logical_replication_slot function and removing the logical slot when set to true could be a suitable approach.
Regards,
Fabrice
Thanks for sharing this. In fact, I agree, introducing an allow_overwrite slot property seems cleaner than a GUC for this specific use case.
a) At first, an extension of pg_create_logical_replication_slot() could be proposed.
b) pg_alter_logical_replication_slot(): that is a neat approach; a generic API for modifying other slot properties seems like a forward-looking design choice. However, I don't know if slot properties can be modified easily, because when a slot is active a process is holding them in memory.
c) CREATE SUBSCRIPTION and ALTER SUBSCRIPTION stay untouched.
Regarding the switchover inconsistency, I agree that behavior should be uniform. The current discrepancy between the first and double switchover is confusing, and having the same error in both cases would improve predictability. It's up to you if you want to open a thread for this.
Regards,
Fabrice
On Fri, Aug 8, 2025 at 7:01 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Thanks Shveta for coming on this point again and fixing the link.
> The idea is to check if the slot has same name to try to resynchronize it with the primary.
> ok the check on the failover status for the remote slot is perhaps redundant.
> I'm not sure what impact setting the synced flag to true might have. But if you run an additional switchover, it works fine because the synced flag on the new primary is set to true now.
> If we come back to the idea of the GUC or the API, adding an allow_overwrite parameter to the pg_create_logical_replication_slot function and removing the logical slot when set to true could be a suitable approach.
>
> What is your opinion?
>
If implemented as a GUC, it would address only a specific corner case,
making it less suitable to be added as a GUC.
OTOH, adding it as a slot's property makes more sense. You can start
with introducing a new slot property, allow_overwrite. By default,
this property will be set to false.
a) The function pg_create_logical_replication_slot() can be extended
to accept this parameter.
b) A new API pg_alter_logical_replication_slot() can be introduced, to
modify this property after slot creation if needed.
c) The commands CREATE SUBSCRIPTION and ALTER SUBSCRIPTION are not
needed to include an allow_overwrite parameter. When CREATE
SUBSCRIPTION creates a slot, it will always set allow_overwrite to
false by default. If users need to change this later, they can use the
new API pg_alter_logical_replication_slot() to update the property.
d) Additionally, pg_alter_logical_replication_slot() can serve as a
generic API to modify other slot properties as well.
This appears to be a reasonable idea with potential use cases beyond
just allowing synchronization post switchover. Thoughts?
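To make the idea concrete, here is a rough sketch of one way the property
could be carried; the field and parameter names below are assumptions for
illustration only, not a settled design:

typedef struct ReplicationSlotPersistentData
{
    /* existing fields (name, database, persistency, two_phase, ...) elided */
    bool        failover;
    bool        synced;
    bool        allow_overwrite;    /* may slot sync drop/recreate this slot? */
} ReplicationSlotPersistentData;

extern void ReplicationSlotCreate(const char *name, bool db_specific,
                                  ReplicationSlotPersistency persistency,
                                  bool two_phase, bool failover, bool synced,
                                  bool allow_overwrite);

pg_create_logical_replication_slot() would accept the new option with a
default of false and pass it down to the creation path, while CREATE
SUBSCRIPTION keeps creating slots with the default.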
~~~
Another problem as you pointed out is inconsistent behaviour across
switchovers. On the first switchover, we get the error on new standby:
"Exiting from slot synchronization because a slot with the same name
already exists on the standby."
But in the case of a double switchover, this error does not occur.
This is due to the 'synced' flag not set on new standby on first
switchover while set in double switchover. I think the behaviour
should be the same. In both cases, it should emit the same error. We
are thinking of a potential solution here and will start a new thread
if needed.
thanks
Shveta
On Wed, Aug 13, 2025 at 9:29 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Thanks for sharing this. In fact, I agree, introducing an allow_overwrite slot property makes seems cleaner than a GUC for this specific use case.
>
> a) At first, an extension of pg_create_logical_replication_slot() could be proposed
> b) pg_alter_logical_replication_slot(): that could be is a neat approach, a generic API for modifying other slot properties seems like a forward-looking design choice. However I don't know if slot properties could be modified easily because when slot is active, a process is holding them in memory.
Right, it cannot be. This API can be used when the slot is not
acquired by any other process. We may have use-cases for that as well,
similar to the one we have for this switchover case.
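As a rough illustration of that precondition (no such function exists yet,
so the names and error texts below are assumptions), a
pg_alter_logical_replication_slot()-style function could start with a check
like:

    slot = SearchNamedReplicationSlot(name, true);
    if (slot == NULL)
        ereport(ERROR,
                errcode(ERRCODE_UNDEFINED_OBJECT),
                errmsg("replication slot \"%s\" does not exist", name));

    if (slot->active_pid != 0)
        ereport(ERROR,
                errcode(ERRCODE_OBJECT_IN_USE),
                errmsg("replication slot \"%s\" is active for PID %d",
                       name, (int) slot->active_pid));

In practice the property change itself would be done while holding the slot
(for example via ReplicationSlotAcquire()), which already errors out when the
slot is in use.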
thanks
Shveta
On Wed, Aug 27, 2025 at 7:55 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> For the first step (a), the pg_create_logical_replication_slot interface is extended.
> The slot on the new attached standby will be dropped and recreated if the flag allow_overwrite is set to true.
> I tested the modified source, could you please give me a feedback on code changes.
>
Thanks for working on this. Can you please provide a patch instead of
diff? It will be easier for reviewers to apply and test it then.
thanks
Shveta
On Thu, Aug 28, 2025 at 9:11 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> What is the procedure to create this patch. Thank you for your help.
>
We use 'git format-patch' to generate formatted patches. I have given
a few links ([1], [2], [3]) on know-how.
I usually use these steps:
git add <file1.c> <adds file to staging area>
git add <file2.c>
<once all the required files are added, create a commit>
git commit <add the required message to describe the patch>
git format-patch -1 -v1 <create the patch>
-1 --> to create a patch for the last 1 commit
-v1 --> patch-prefix. I have used v1 which indicates version-1, when
we upload the patch a second time, we can use -v2 and so on.
[1]: https://stackoverflow.com/questions/6658313/how-can-i-generate-a-git-patch-for-a-specific-commit
[2]: https://www.geeksforgeeks.org/git/how-to-generate-a-git-patch-for-a-specific-commit/
[3]: https://gist.github.com/ganapathichidambaram/3504e31fc8ba809762b3a0d9d9ef8ec2
thanks
Shveta
On Sat, Aug 30, 2025 at 11:43 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:
>
> Hi Fabrice,
>
> Thanks for providing the patch. I reviewed your patch and have
> following comment:
>
> 1. I think we should add a commit message in the patch. It would help
> to give an understanding of the patch.
>
> 2. I tried applying patch on HEAD, it generates following warnings:
> Applying: fix failover slot issue when doing a switchover
> .git/rebase-apply/patch:31: trailing whitespace.
> bool allow_overwrite = false;
> .git/rebase-apply/patch:45: trailing whitespace.
> bool synced;
> .git/rebase-apply/patch:55: trailing whitespace.
>
> .git/rebase-apply/patch:56: trailing whitespace.
> if (!synced){
> .git/rebase-apply/patch:57: trailing whitespace.
> /*
> warning: squelched 16 whitespace errors
> warning: 21 lines add whitespace errors.
'git diff --check' can be used before 'git commit' to get info of
above issues before creating a patch.
> 3. I also tried to build the patch on HEAD and it is giving the following error:
> ubuntu@ubuntu-Virtual-Machine:~/Project/pg/postgres$
> ./../../install-clean.sh pg_30_8 > warn.log
> launcher.c: In function ‘CreateConflictDetectionSlot’:
> launcher.c:1457:9: error: too few arguments to function ‘ReplicationSlotCreate’
> 1457 | ReplicationSlotCreate(CONFLICT_DETECTION_SLOT, false,
> RS_PERSISTENT, false,
> | ^~~~~~~~~~~~~~~~~~~~~
> In file included from launcher.c:35:
> ../../../../src/include/replication/slot.h:307:13: note: declared here
> 307 | extern void ReplicationSlotCreate(const char *name, bool db_specific,
> | ^~~~~~~~~~~~~~~~~~~~~
> make[4]: *** [<builtin>: launcher.o] Error 1
> make[3]: *** [../../../src/backend/common.mk:37: logical-recursive] Error 2
> make[2]: *** [common.mk:37: replication-recursive] Error 2
> make[1]: *** [Makefile:42: install-backend-recurse] Error 2
> make: *** [GNUmakefile:11: install-src-recurse] Error 2
>
> 4. I have some general comments regarding formatting of the patch.
> + // Both local and remote slot have the same name
>
> We use following format for single line comments:
> /* comment text */
> and for multi line comments we use following format:
> /*
> * comment text begins here
> * and continues here
> */
>
> 5. We should use proper indentation here:
> + elog(LOG, "Logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name),
> + slot->data.allow_overwrite ? "true" : "false");
>
src/tools/pgindent <file name>
can be used to do indentation before creating a patch.
> 6.
> + if (!synced){
> + /*
> + * Check if we need to overwrite an existing
> + * logical slot
> + */
> We should start the parentheses from the next line.
> Also, indentation for comment is not correct here.
>
> 7.
> + if (allow_overwrite){
> + /*
> + * Get rid of a replication slot that is no
> + *longer wanted
> + */
> Similar comment as above.
>
> Please refer [1] [2] for proper formatting of the patch.
> [1]: https://www.postgresql.org/docs/devel/source-format.html
> [2]: https://wiki.postgresql.org/wiki/Creating_Clean_Patches
>
> Thanks,
> Shlok Kyal
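For reference, the fragment from points 6 and 7 restyled to the PostgreSQL
conventions mentioned above (block comments, braces on their own line) would
look roughly like this, reusing the calls already present in the patch:

    if (!synced)
    {
        /*
         * Check if we need to overwrite an existing logical slot.
         */
        if (allow_overwrite)
        {
            /* Get rid of a replication slot that is no longer wanted */
            ReplicationSlotDrop(remote_slot->name, true);
        }
    }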
I will make the modifications based on the remarks you mentioned.
In the next version of the patch, I think I have to change the calls to the ReplicationSlotCreate() function so that the value of the allow_overwrite flag is passed consistently, and modify the ReplicationSlotCreate interface to expose the allow_overwrite attribute.
On Wed, Aug 13, 2025 at 11:34 AM shveta malik <shveta.malik@gmail.com> wrote:
>
> On Fri, Aug 8, 2025 at 7:01 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
> >
>
> Another problem as you pointed out is inconsistent behaviour across
> switchovers. On the first switchover, we get the error on new standby:
> "Exiting from slot synchronization because a slot with the same name
> already exists on the standby."
>
> But in the case of a double switchover, this error does not occur.
> This is due to the 'synced' flag not set on new standby on first
> switchover while set in double switchover. I think the behaviour
> should be the same. In both cases, it should emit the same error. We
> are thinking of a potential solution here and will start a new thread
> if needed.
>
Started a thread [1] for this issue.

[1]: https://www.postgresql.org/message-id/flat/CAJpy0uBJqTQdq%2BUDW55oK2bhxMig%3DFaEYFV%3D%2BZaKmtmgNWwsHw%40mail.gmail.com

thanks
Shveta
On Sun, Sep 7, 2025 at 1:32 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
> Here the version 1 of the patch with the modifications.
> I do a git diff --check until there are no more errors.
> In a next version of the patch, I think I have make change to call ReplicationSlotCreate() function with the value of the flag allow_overwrite be consistent
In which flow? I see all ReplicationSlotCreate() references already
accepting the value.
> and modify the interface ReplicationSlotCreate to display the attribute allow_overwrite.
>
Do you mean to modify pg_replication_slots?
Thank You for the patch. Please find a few comments:
1)
In both create_physical_replication_slot() and
create_logical_replication_slot():
+ false, false,false);
,false --> , false (space after comma is recommended)
2)
+ elog(DEBUG1, "logical replication slot %s created with option
allow_overwrite to %s",
+ NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
IMO, we don't need this as we do not log other properties of the slot as well.
3)
We can have pg_replication_slots (pg_get_replication_slots) modified
to display the new property. Also we can modify docs to have the new
property defined.
4)
+ {
+ /* Check if we need to overwrite an existing logical slot */
+ if (allow_overwrite)
+ {
+ /* Get rid of a replication slot that is no longer wanted */
+ ReplicationSlotDrop(remote_slot->name,true);
+
+ /* Get rid of a replication slot that is no longer wanted */
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but it will be dropped because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
+
+ /* Going back to the main loop after droping the failover slot */
+ return false;
Currently we are returning after dropping the slot. But I think we
shall drop the slot and proceed with new slot creation of the same
name otherwise we may be left with old slot dropped and no new slot
created specially when API is run.
Scenario:
--On primary, create 2 slots: slot1 and slot2.
--On standby, create one slot slot1 with allow_overwrite=true.
--Run 'SELECT pg_sync_replication_slots();' on standby.
At the end of API, expectation is that both slots will be present with
'synced'=true, but only slot2 is present. if we run this API next
time, slot1 is created. It should have been dropped and recreated (or
say overwritten) in the first run itself.
5)
IMO, LOG is sufficient here, as the action aligns with the
user-provided 'allow_overwrite' setting.
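To illustrate points 4 and 5 together, a minimal sketch of the intended flow
in synchronize_one_slot(), reusing the names from the posted patch (this is
only an outline, not the final code):

    /* Search for the named slot */
    if ((slot = SearchNamedReplicationSlot(remote_slot->name, true)))
    {
        bool        synced;
        bool        allow_overwrite;

        SpinLockAcquire(&slot->mutex);
        synced = slot->data.synced;
        allow_overwrite = slot->data.allow_overwrite;
        SpinLockRelease(&slot->mutex);

        if (!synced)
        {
            if (allow_overwrite)
            {
                /*
                 * Drop the user-created slot and continue, so that a synced
                 * slot of the same name is created in this same cycle
                 * instead of returning to the main loop.
                 */
                ReplicationSlotDrop(remote_slot->name, true);
                ereport(LOG,
                        errmsg("dropping slot \"%s\" on the standby because allow_overwrite is set",
                               remote_slot->name));
            }
            else
                /* User-created slot with the same name exists, raise ERROR. */
                ereport(ERROR,
                        errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                        errmsg("exiting from slot synchronization because same"
                               " name slot \"%s\" already exists on the standby",
                               remote_slot->name));
        }
    }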
thanks
Shveta
Here is the version 1 of the patch with corrections
1)
In both create_physical_replication_slot() and
create_physical_replication_slot():
+ false, false,false);
,false --> , false (space after comma is recommended)
2)
+ elog(DEBUG1, "logical replication slot %s created with option
allow_overwrite to %s",
+ NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
IMO, we don't need this as we do not log other properties of the slot as well.
Point 1-2 done
3)
We can have pg_replication_slots (pg_get_replication_slots) modified
to display the new property. Also we can modify docs to have the new
property defined.
Point 3: not yet implemented.
+ {
+ /* Check if we need to overwrite an existing logical slot */
+ if (allow_overwrite)
+ {
+ /* Get rid of a replication slot that is no longer wanted */
+ ReplicationSlotDrop(remote_slot->name,true);
+
+ /* Get rid of a replication slot that is no longer wanted */
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but it will be dropped because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
+
+ /* Going back to the main loop after droping the failover slot */
+ return false;
Currently we are returning after dropping the slot. But I think we
shall drop the slot and proceed with new slot creation of the same
name otherwise we may be left with old slot dropped and no new slot
created specially when API is run.
Scenario:
--On primary, create 2 slots: slot1 and slot2.
--On standby, create one slot slot1 with allow_overwrite=true.
--Run 'SELECT pg_sync_replication_slots();' on standby.
At the end of API, expectation is that both slots will be present with
'synced'=true, but only slot2 is present. if we run this API next
time, slot1 is created. It should have been dropped and recreated (or
say overwritten) in the first run itself.
Point 4:
I put the creation slot under the allow_overwrite condition.
After testing with syn_standby_slots disabled on the standby, it works. I think the code for the synchronisation of the new slot could be factorised.
5)
IMO, LOG is sufficient here, as the action aligns with the
user-provided 'allow_overwrite' setting.
Point 5:
Where is it in the code?
Thanks Shveta for all your comments. I'll prepare a version 2 of the patch including your proposals.

Regards,
Fabrice
>
> Hi,
> Here the version 1 of the patch with the modifications.
> I do a git diff --check until there are no more errors.
> In a next version of the patch, I think I have make change to call ReplicationSlotCreate() function with the value of the flag allow_overwrite be consistent
In which flow? I see all ReplicationSlotCreate() references already
accepting the value.
> and modify the interface ReplicationSlotCreate to display the attribute allow_overwrite.
>
Do you mean to modify pg_replication_slots?
Thank You for the patch. Please find a few comments:
1)
In both create_physical_replication_slot() and
create_physical_replication_slot():
+ false, false,false);
,false --> , false (space after comma is recommended)
2)
+ elog(DEBUG1, "logical replication slot %s created with option
allow_overwrite to %s",
+ NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
IMO, we don't need this as we do not log other properties of the slot as well.
3)
We can have pg_replication_slots (pg_get_replication_slots) modified
to display the new property. Also we can modify docs to have the new
property defined.
4)
+ {
+ /* Check if we need to overwrite an existing logical slot */
+ if (allow_overwrite)
+ {
+ /* Get rid of a replication slot that is no longer wanted */
+ ReplicationSlotDrop(remote_slot->name,true);
+
+ /* Get rid of a replication slot that is no longer wanted */
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but it will be dropped because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
+
+ /* Going back to the main loop after droping the failover slot */
+ return false;
Currently we are returning after dropping the slot. But I think we
shall drop the slot and proceed with new slot creation of the same
name otherwise we may be left with old slot dropped and no new slot
created specially when API is run.
Scenario:
--On primary, create 2 slots: slot1 and slot2.
--On standby, create one slot slot1 with allow_overwrite=true.
--Run 'SELECT pg_sync_replication_slots();' on standby.
At the end of API, expectation is that both slots will be present with
'synced'=true, but only slot2 is present. if we run this API next
time, slot1 is created. It should have been dropped and recreated (or
say overwritten) in the first run itself.
5)
IMO, LOG is sufficient here, as the action aligns with the
user-provided 'allow_overwrite' setting.
thanks
Shveta
Вложения
On Fri, Sep 26, 2025 at 9:52 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi Shveta,
>
> Here is the version 1 of the patch with corrections
>
Thanks. We can bump the version to v2 now — that is, each time we post
a new version, we can increment the version number.
The patch fails to apply to the latest HEAD. Also it had some
'trailing whitespace' issues. Can you please rebase the patch and post
it again?
> 1)
> In both create_physical_replication_slot() and
> create_physical_replication_slot():
> + false, false,false);
>
> ,false --> , false (space after comma is recommended)
>
> 2)
> + elog(DEBUG1, "logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
>
> IMO, we don't need this as we do not log other properties of the slot as well.
> Point 1-2 done
> 3)
> We can have pg_replication_slots (pg_get_replication_slots) modified
> to display the new property. Also we can modify docs to have the new
> property defined.
>
> Point 3 Not yet implemented
>
> 4)
> + {
> + /* Check if we need to overwrite an existing logical slot */
> + if (allow_overwrite)
> + {
> + /* Get rid of a replication slot that is no longer wanted */
> + ReplicationSlotDrop(remote_slot->name,true);
> +
> + /* Get rid of a replication slot that is no longer wanted */
> + ereport(WARNING,
> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("slot \"%s\" already exists"
> + " on the standby but it will be dropped because "
> + "flag allow_overwrite is set to true",
> + remote_slot->name));
> +
> + /* Going back to the main loop after droping the failover slot */
> + return false;
>
> Currently we are returning after dropping the slot. But I think we
> shall drop the slot and proceed with new slot creation of the same
> name otherwise we may be left with old slot dropped and no new slot
> created specially when API is run.
>
> Scenario:
> --On primary, create 2 slots: slot1 and slot2.
> --On standby, create one slot slot1 with allow_overwrite=true.
> --Run 'SELECT pg_sync_replication_slots();' on standby.
>
> At the end of API, expectation is that both slots will be present with
> 'synced'=true, but only slot2 is present. if we run this API next
> time, slot1 is created. It should have been dropped and recreated (or
> say overwritten) in the first run itself.
>
> Point 4:
> I put the creation slot under the allow_overwrite condition.
> After testing with syn_standby_slots disable on the standby, it works. I think the code for the synchronisation of the new slot could be factorised.
>
> 5)
> IMO, LOG is sufficient here, as the action aligns with the
> user-provided 'allow_overwrite' setting.
>
> Point 5
> Where is it in the code?
>
This one:
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but try to recreate it because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
thanks
Shveta
On Fri, Sep 26, 2025 at 9:52 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi Shveta,
>
> Here is the version 1 of the patch with corrections
>
Thanks. We can bump the version to v2 now — that is, each time we post
a new version, we can increment the version number.
The patch fails to apply to the latest HEAD. Also it had some
'trailing whitespace' issues. Can you please rebase the patch and post
it again?
> 1)
> In both create_physical_replication_slot() and
> create_physical_replication_slot():
> + false, false,false);
>
> ,false --> , false (space after comma is recommended)
>
> 2)
> + elog(DEBUG1, "logical replication slot %s created with option
> allow_overwrite to %s",
> + NameStr(slot->data.name), slot->data.allow_overwrite ? "true" : "false");
>
> IMO, we don't need this as we do not log other properties of the slot as well.
> Point 1-2 done
> 3)
> We can have pg_replication_slots (pg_get_replication_slots) modified
> to display the new property. Also we can modify docs to have the new
> property defined.
>
> Point 3 Not yet implemented
>
> 4)
> + {
> + /* Check if we need to overwrite an existing logical slot */
> + if (allow_overwrite)
> + {
> + /* Get rid of a replication slot that is no longer wanted */
> + ReplicationSlotDrop(remote_slot->name,true);
> +
> + /* Get rid of a replication slot that is no longer wanted */
> + ereport(WARNING,
> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> + errmsg("slot \"%s\" already exists"
> + " on the standby but it will be dropped because "
> + "flag allow_overwrite is set to true",
> + remote_slot->name));
> +
> + /* Going back to the main loop after droping the failover slot */
> + return false;
>
> Currently we are returning after dropping the slot. But I think we
> shall drop the slot and proceed with new slot creation of the same
> name otherwise we may be left with old slot dropped and no new slot
> created specially when API is run.
>
> Scenario:
> --On primary, create 2 slots: slot1 and slot2.
> --On standby, create one slot slot1 with allow_overwrite=true.
> --Run 'SELECT pg_sync_replication_slots();' on standby.
>
> At the end of API, expectation is that both slots will be present with
> 'synced'=true, but only slot2 is present. if we run this API next
> time, slot1 is created. It should have been dropped and recreated (or
> say overwritten) in the first run itself.
>
> Point 4:
> I put the creation slot under the allow_overwrite condition.
> After testing with syn_standby_slots disable on the standby, it works. I think the code for the synchronisation of the new slot could be factorised.
>
> 5)
> IMO, LOG is sufficient here, as the action aligns with the
> user-provided 'allow_overwrite' setting.
>
> Point 5
> Where is it in the code?
>
This one:
+ ereport(WARNING,
+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("slot \"%s\" already exists"
+ " on the standby but try to recreate it because "
+ "flag allow_overwrite is set to true",
+ remote_slot->name));
thanks
Shveta
Attachments
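For reference, the scenario described above can be reproduced with a sequence along these lines (slot names are just examples, and the allow_overwrite argument only exists with the proposed patch applied):
-- on the primary
SELECT pg_create_logical_replication_slot('slot1', 'pgoutput', failover := true);
SELECT pg_create_logical_replication_slot('slot2', 'pgoutput', failover := true);
-- on the standby, pre-create slot1 with the proposed allow_overwrite option
SELECT pg_create_logical_replication_slot('slot1', 'pgoutput', failover := false, allow_overwrite := true);
-- on the standby, trigger synchronization and check the outcome
SELECT pg_sync_replication_slots();
SELECT slot_name, failover, synced FROM pg_replication_slots WHERE slot_type = 'logical';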
> was some code-repetition. Please take it if you find it okay.
> Also I felt that when we create a slot through slot-synchronization,
> we should create it with allow_overwrite as false. And thus in the
> attached patch, I have changed that part as well.
At the moment we can leave the slot creation with allow_overwrite set to false.
> It is a top up patch. Attached it as txt, please rename before
> applying atop your changes. Attached the steps for your reference.
On Tue, Sep 30, 2025 at 12:17 PM Fabrice Chapuis
<fabrice636861@gmail.com> wrote:
>
> Hi,
>
> Here the generated v2 of the Patch.
>
Thanks. I have refactored the code for synchronize_one_slot() as there
was some code-repetition. Please take it if you find it okay.
Also I felt that when we create a slot through slot-synchronization,
we should create it with allow_overwrite as false. And thus in the
attached patch, I have changed that part as well.
It is a top up patch. Attached it as txt, please rename before
applying atop your changes. Attached the steps for your reference.
thanks
Shveta
Attachments
On Tue, Sep 30, 2025 at 3:11 PM shveta malik <shveta.malik@gmail.com> wrote:
>
> On Tue, Sep 30, 2025 at 12:17 PM Fabrice Chapuis
> <fabrice636861@gmail.com> wrote:
> >
> > Hi,
> >
> > Here the generated v2 of the Patch.
> >
>
> Thanks. I have refactored the code for synchronize_one_slot() as there
> was some code-repetition. Please take it if you find it okay.
>
> Also I felt that when we create a slot through slot-synchronization,
> we should create it with allow_overwrite as false. And thus in the
> attached patch, I have changed that part as well.
>
> It is a top up patch. Attached it as txt, please rename before
> applying atop your changes. Attached the steps for your reference.
>
Next, pg_replication_slots (pg_get_replication_slots) can be modified
to display the new property. And the corresponding doc for
pg_replication_slots (doc/src/sgml/system-views.sgml) can be updated.
thanks
Shveta
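Once the view is extended, the new property could be checked with something like the following (the allow_overwrite column is what the patch proposes; it does not exist in released PostgreSQL versions):
SELECT slot_name, failover, synced, allow_overwrite
FROM pg_replication_slots
WHERE slot_type = 'logical';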
On Fri, Oct 3, 2025 at 1:28 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi Shveta,
> Here is the v4 of the patch with pg_replication_slots view modified to display the field allow_overwrite. Doc was also updated.
>
The patch looks okay. The parameter name is still open for discussion,
and the comments could be improved. But we can focus on these finer
details once more reviewers start reviewing and there’s general
agreement on the concept.
One trivial comment: we can slightly modify the doc to have something like this:
This parameter controls whether an existing logical replication slot
on the standby (with synced=false) can be overwritten during logical
replication slot synchronization (see Section 47.2.3). The default is
false. When true, an existing user slot with the same name on the
standby will be synchronized using the primary’s failover slot.
<please see high-availability.sgml to find how 'Section 47.2.3' can be
referenced in the doc>
~~
The next step will be to provide a way to modify this parameter via an
alter API, say pg_alter_logical_replication_slot(). This API can later
be extended to handle other parameters. This API can be implemented in
patch002 for easier review.
thanks
Shveta
Attachments
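For illustration, a call to the alter API suggested above might look like this (pg_alter_logical_replication_slot() is only a proposed name at this point, not an existing function):
SELECT pg_alter_logical_replication_slot('logical_slot', allow_overwrite := true);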
On Thu, Oct 9, 2025 at 6:26 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
>
>
> Hi,
> Here is the patch V5; I changed it to use your proposed doc text and the link.
There is a trailing whitespace warning while applying the patch.
Regarding:
+ on the standby (with synced=false) can be overwritten during logical
Can we replace 'with synced=false' with:
with <structfield>synced</structfield> field as <literal>false</literal>
> At this stage, the patch can be submitted to the current commit fest for review?
>
Sure. Please register the patch to CF and continue developing it further.
thanks
Shveta
Attachments
On Wed, Oct 15, 2025 at 3:29 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> Hi,
>
> Here is the v6 of the patch that I'll put in the current CF.
>
Okay, sounds good.
thanks
Shveta
Attachments
On Tue, Oct 28, 2025 at 6:33 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
>
> Hi all,
>
> I don't think we really need to have allow_overwrite.
> It is not possible to create logical slots on standby with failover=true, therefore we can safely rely on failover being true to understand that at some point this node was a primary and that this slot is supposed to be synced.
> Please see the patch attached.
>
We had discussed this point in another thread, please see [1]. After
discussion it was decided to not go this way.
[1]: https://www.postgresql.org/message-id/OS0PR01MB57161FF469DE049765DD53A89475A%40OS0PR01MB5716.jpnprd01.prod.outlook.com
thanks
Shveta
Hi,
I indeed proposed a solution at the top of this thread to modify only the value of the synced attribute, but the discussion was redirected to adding an extra parameter to the function pg_create_logical_replication_slot() to overwrite a failover slot.
> We had discussed this point in another thread, please see [1]. After
> discussion it was decided to not go this way.
> [1]: https://www.postgresql.org/message-id/OS0PR01MB57161FF469DE049765DD53A89475A%40OS0PR01MB5716.jpnprd01.prod.outlook.com
Thanks for your appreciation and feedback.
Regards,
Hi,
On Fri, 31 Oct 2025 at 09:16, Fabrice Chapuis <fabrice636861@gmail.com> wrote:
> Hi,
> I indeed proposed a solution at the top of this thread to modify only the value of the synced attribute, but the discussion was redirected to adding an extra parameter to the function pg_create_logical_replication_slot() to overwrite a failover slot.
> > We had discussed this point in another thread, please see [1]. After
> > discussion it was decided to not go this way.
> > [1]: https://www.postgresql.org/message-id/OS0PR01MB57161FF469DE049765DD53A89475A%40OS0PR01MB5716.jpnprd01.prod.outlook.com
I’ve read through the referenced discussion, and my impression is that we might be trying to design a solution around assumptions that are unlikely to hold in practice.
There was an argument that at some point we might allow creating logical failover slots on cascading standbys. However, if we consider all practical scenarios, it seems very unlikely that such a feature could work reliably with the current design.
Let me try to explain why.
Consider the following setup:
node1 - primary
node2 - standby, replicating from node1
node3 - standby, replicating from node1, has logical slot foo (failover=true, synced=false)
node4 - standby, replicating from node3, has logical slot foo (failover=true, synced=true)
1) If node1 fails, we could promote either node2 or node3:
1.a) If we promote node2, we must first create a physical slot for node3, update primary_conninfo on node3 to point to node2, wait until node3 connects, and until catalog_xmin on the physical slot becomes non-NULL. Only then would it be safe to promote node2. This introduces unnecessary steps, complexity, and waiting — increasing downtime, which defeats the goal of high availability.
1.b) If we promote node3, promotion itself is fast, but subscribers will still be using the slot on the original primary. This again defeats the purpose of doing logical replication from a standby, and it won’t be possible to switch subscribers to node4 (see below).
2) If node3 fails, we might want to replace it with node4. But node4 has a slot with failover=true and synced=true, and synced=true prevents it from being used for streaming because it’s a standby.
In other words, with the current design, allowing creation of logical failover slots on standbys doesn’t bring any real benefit — such “synced” slots can’t actually be used later.
One could argue that we could add a function to switch synced=true->false on a standby, but that would just add another workaround on top of an already fragile design, increasing operational complexity without solving the underlying issue.
The same applies to proposals like allow_overwrite. If such a flag is introduced, in practice it will almost always be used unconditionally, e.g.:
SELECT pg_create_logical_replication_slot('<name>', '<plugin>', failover := true, allow_overwrite := true);
Right now, logical failover slots can’t be reused after a switchover, which is a perfectly normal operation.
The only current workaround is to detect standbys with failover=true, synced=false and drop those slots, hoping they’ll be resynchronized. But resynchronization is asynchronous, unpredictable, and may take an unbounded amount of time. If the primary fails during that window, there might be no standby with ready logical slots.
Instead of dropping such slots, what we actually need is a way to safely set synced=false->true and continue operating.
Operating logical replication setups is already extremely complex and error-prone — this is not theoretical, it’s something many of us face daily.
So rather than adding more speculative features or workarounds, I think we should focus on addressing real operational pain points and the inconsistencies in the current design.
A slot created on the primary (which later becomes a standby) with failover=true has a very clear purpose. The failover flag already indicates that purpose; synced shouldn’t override it.
Regards,
--
Alexander Kukushkin
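The workaround described above (detect the blocking slots on the standby, drop them, and wait for them to be resynchronized) can be sketched roughly as follows; the slot name in the drop call is illustrative:
-- run on the standby: list logical slots that are marked failover but were not created by slot synchronization
SELECT slot_name FROM pg_replication_slots
WHERE slot_type = 'logical' AND failover AND NOT synced;
-- drop each slot found above, then rely on the slotsync worker or pg_sync_replication_slots() to recreate it
SELECT pg_drop_replication_slot('logical_slot');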
Thanks for your thorough analysis and explanation, Alexander. If we go in the direction you propose and set aside the option of using a supplementary allow_overwrite flag, may I propose some modifications to the patch v0 you have attached:
Why modify this function?
drop_local_obsolete_slots(List *remote_slot_list)
Hi Fabrice,
On Fri, 31 Oct 2025 at 17:45, Fabrice Chapuis <fabrice636861@gmail.com> wrote:
> Thanks for your thorough analysis and explanation, Alexander. If we go in the direction you propose and set aside the option of using a supplementary allow_overwrite flag, may I propose some modifications to the patch v0 you have attached:
> Why modify this function?
> drop_local_obsolete_slots(List *remote_slot_list)
Think about the following case:
1. We have node1 (primary) and node2 (standby), with slot foo (failover=true)
2. We stop node1 for maintenance and promote node2
3. DROP replication slot on node2
4. Start node1
In this scenario the logical slot "foo" will be left unattended and it needs to be dropped, because it doesn't exist on the primary.
As I said earlier, IMO it is only the "failover" flag that effectively indicates the purpose of the slot. The sync flag is purely informative.
Regards,
--
Alexander Kukushkin
Hi Alexander,
I understand your use case, but I think it won't work like this because of how get_local_synced_slots() is implemented: this function returns only the slots whose synced flag is true.
On Tue, 4 Nov 2025 at 12:15, Fabrice Chapuis <fabrice636861@gmail.com> wrote:
> Hi Alexander,
> I understand your use case, but I think it won't work like this because of how get_local_synced_slots() is implemented: this function returns only the slots whose synced flag is true.
Yeah. Sorry, my bad, you are absolutely right about the get_local_synced_slots() function.
I didn't validate this part of the patch, because first I wanted to get feedback from hackers.
However, it looks like the people who build it never run Postgres in production and therefore don't understand the problem and the pain.
I will probably just add yet another workaround to Patroni and start dropping logical replication slots on standby with failover=true and synced=false.
Regards,
--
Alexander Kukushkin
I propose to modify:
get_local_synced_slots() => to get the local logical slots (the name of the function must be renamed)
and
local_sync_slot_required() => to get also the slots that have the flag synced = false
then in both our use cases the local slot will be dropped and recreated if it must be resynchronized.
Changes in synchronize_one_slot() function are not necessary anymore.

static List *
get_local_synced_slots(void)
{
    List       *local_slots = NIL;

    LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

    for (int i = 0; i < max_replication_slots; i++)
    {
        ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];

        /* Check if it is a synchronized slot */
        if (s->in_use && (s->data.synced || s->data.failover))
        {
            Assert(SlotIsLogical(s));
            local_slots = lappend(local_slots, s);
        }
    }

    LWLockRelease(ReplicationSlotControlLock);

    return local_slots;
}

static bool
local_sync_slot_required(ReplicationSlot *local_slot, List *remote_slots)
{
    bool        remote_exists = false;
    bool        locally_invalidated = false;
    bool        synced_slot = false;

    foreach_ptr(RemoteSlot, remote_slot, remote_slots)
    {
        if (strcmp(remote_slot->name, NameStr(local_slot->data.name)) == 0)
        {
            remote_exists = true;

            /*
             * If remote slot is not invalidated but local slot is marked as
             * invalidated, then set locally_invalidated flag.
             */
            SpinLockAcquire(&local_slot->mutex);
            locally_invalidated =
                (remote_slot->invalidated == RS_INVAL_NONE) &&
                (local_slot->data.invalidated != RS_INVAL_NONE);
            synced_slot = local_slot->data.synced;
            SpinLockRelease(&local_slot->mutex);

            break;
        }
    }

    return (remote_exists && !locally_invalidated && synced_slot);
}

I could propose a new patch in that sense.
Regards,
Fabrice
Attachments
Do you think we must come back to the allow_overwrite flag approach or another solution?
Fabrice
On Fri, Oct 31, 2025 at 2:58 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
>
> Instead of dropping such slots, what we actually need is a way to safely set synced=false->true and continue operating.
>
> Operating logical replication setups is already extremely complex and error-prone — this is not theoretical, it’s something many of us face daily.
> So rather than adding more speculative features or workarounds, I think we should focus on addressing real operational pain points and the inconsistencies in the current design.
>
> A slot created on the primary (which later becomes a standby) with failover=true has a very clear purpose. The failover flag already indicates that purpose; synced shouldn’t override it.
>
I think this is not as clear as you are saying as compared to WAL. In
failover cases, we bump the WAL timelines on new primary and also have
facilities like pg_rewind to ensure that old primary can follow the
new primary after divergence. For slots, there is no such facility,
now, there is an argument that for slots it is sufficient to match
the name and failover to say that it is okay to overwrite the slot on
old primary. However, it is not clear whether it is always safe to do
so, for example, if the old primary ran after divergence for sometime
and one has re-created the slot with same name and failover property,
it will no longer be the same slot. Unlike WAL, we don't maintain the
slot's history, so it is not equally clear that we can overwrite the old
primary's slots as they are.
--
With Regards,
Amit Kapila.
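The point that a recreated slot is no longer the same slot, even when its name and failover property match, can be seen directly from the catalog; for example (slot name illustrative, and this assumes the slot is not currently in use):
SELECT slot_name, restart_lsn, catalog_xmin, confirmed_flush_lsn
FROM pg_replication_slots WHERE slot_name = 'logical_slot';
SELECT pg_drop_replication_slot('logical_slot');
SELECT pg_create_logical_replication_slot('logical_slot', 'pgoutput', failover := true);
-- the recreated slot reports different restart_lsn / catalog_xmin values than before
SELECT slot_name, restart_lsn, catalog_xmin, confirmed_flush_lsn
FROM pg_replication_slots WHERE slot_name = 'logical_slot';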
On Tue, Nov 11, 2025 at 9:27 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:
>
> if I resume your scenario
> 1. A standby S has a failover slot slot1 synchronized with slot1 on primary P
> 2. We promote S
> 3. On P we drop slot1 and create slot1 again with failover mode (a subscriber exists on another instance, for example)
> 4. A rewind is performed on P, the former primary, to rejoin S, the former standby
> 5. On P slot1 is automatically dropped and recreated to be synchronized
>
> In which context could this kind of scenario happen?
>
It is difficult to tell when this can happen, but you detailed that there is a
theoretical possibility of the same. If we had an in-core cluster tool that
manages nodes on its own and doesn't allow such scenarios to happen, then we
could possibly say that using such a tool it is safe to overwrite the old
primary's slots.
> Isn't the goal to find a solution for a switchover which is carried out for maintenance on a Postgres cluster? The aim is to find a compromise to cover the most likely scenarios.
>
Yes, that is why I thought of providing some form of UI that can allow outside
cluster solutions to manage such slots.
> Do you think we must come back to the allow_overwrite flag approach or another solution?
>
We can wait for a few days to see what others think on this matter.
--
With Regards,
Amit Kapila.
On Wed, Nov 12, 2025 at 12:58 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
>
> On Wed, 12 Nov 2025 at 05:22, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > It is difficult to tell when this can happen, but you detailed that there is a
> > theoretical possibility of the same. If we had an in-core cluster tool that
> > manages nodes on its own and doesn't allow such scenarios to happen, then we
> > could possibly say that using such a tool it is safe to overwrite the old
> > primary's slots.
>
> That's a lot of ifs, and none of them could be fulfilled in the foreseeable future.
>
> Situation you describe is impossible.
> When there is a split-brain and someone drops and re-creates logical slots with the same names on the old primary - such a node can't be joined as a standby without pg_rewind.
> In its current state pg_rewind wipes the pg_replslot directory, and therefore there will be no replication slots.
>
Say, the only operations that happened are slot-drop-recreate and or
some operations on unlogged tables. Why then is pg_rewind required?
--
With Regards,
Amit Kapila.
On Thu, Nov 13, 2025 at 2:37 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
>
> Hi Amit,
>
> On Thu, 13 Nov 2025 at 09:42, Amit Kapila <amit.kapila16@gmail.com> wrote:
> >
> > Say, the only operations that happened are slot-drop-recreate and or
> > some operations on unlogged tables. Why then is pg_rewind required?
>
> Clean shut down always writes checkpoint record to WAL and updates pg_control.
>
But the system can die/crash before shutdown.
--
With Regards,
Amit Kapila.
On Thu, Nov 13, 2025 at 6:39 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
>
>
>
>> But the system can die/crash before shutdown.
>
>
> You mean it will not write WAL?
> When a logical replication slot is created we build a snapshot and also write to WAL:
> postgres=# select pg_current_wal_insert_lsn(); select pg_create_logical_replication_slot('foo', 'pgoutput'); select pg_current_wal_insert_lsn();
> pg_current_wal_insert_lsn
> ---------------------------
> 0/37F96F8
> (1 row)
>
> pg_create_logical_replication_slot
> ------------------------------------
> (foo,0/37F9730)
> (1 row)
>
> pg_current_wal_insert_lsn
> ---------------------------
> 0/37F9730
> (1 row)
>
> Only after that slot is marked as persistent.
>
There can be a scenario where a replication slot is dropped and
recreated, and its WAL is also replicated to the standby. However,
before the new slot state can be synchronized via slotsync, the
primary crashes and the standby is promoted. Later, the user manually
reconfigures the old primary to follow the newly promoted standby (no
pg_rewind in play). I was wondering whether in such a case it would
be a good idea to overwrite the newly created slot on old primary with
promoted-standby's synced slot (old one) by default? Thoughts?
thanks
Shveta
On Thu, Nov 13, 2025 at 7:16 PM shveta malik <shveta.malik@gmail.com> wrote:
>
> On Thu, Nov 13, 2025 at 6:39 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
> >
> >
> >
> >> But the system can die/crash before shutdown.
> >
> >
> > You mean it will not write WAL?
> > When a logical replication slot is created we build a snapshot and also write to WAL:
> > postgres=# select pg_current_wal_insert_lsn(); select pg_create_logical_replication_slot('foo', 'pgoutput'); select pg_current_wal_insert_lsn();
> > pg_current_wal_insert_lsn
> > ---------------------------
> > 0/37F96F8
> > (1 row)
> >
> > pg_create_logical_replication_slot
> > ------------------------------------
> > (foo,0/37F9730)
> > (1 row)
> >
> > pg_current_wal_insert_lsn
> > ---------------------------
> > 0/37F9730
> > (1 row)
> >
> > Only after that slot is marked as persistent.
> >
>
> There can be a scenario where a replication slot is dropped and
> recreated, and its WAL is also replicated to the standby. However,
> before the new slot state can be synchronized via slotsync, the
> primary crashes and the standby is promoted. Later, the user manually
> reconfigures the old primary to follow the newly promoted standby (no
> pg_rewind in play). I was wondering whether in such a case it would
> be a good idea to overwrite the newly created slot on old primary with
> promoted-standby's synced slot (old one) by default? Thoughts?
I think it's an extremely rare or a mostly wrong operation that after
failover (i.e., the old primary didn't shutdown gracefully) users have
the old primary rejoin the replication as the new standby without
pg_rewind. I guess that pg_rewind should practically be used unless
the primary server gracefully shutdowns (i.e., in switchover case). In
failover cases, pg_rewind launches the server in single-user mode to
run the crash recovery, advancing its LSN and cleaning all existing
replication slots after rewinding the server. So I think that the
reported issue doesn't happen in failover cases and we can focus on
failover cases.
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
On Fri, Nov 14, 2025 at 12:42 AM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
>
> Hi Masahiko,
>
> On Fri, 14 Nov 2025 at 07:10, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > So I think that the
> > reported issue doesn't happen in failover cases and we can focus on
> > failover cases.
>
> I think you wanted to say "focus on *switchover* cases"?
Oops, yes, "we can focus on switchover cases" is what I wanted to say.
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
On Fri, Nov 14, 2025 at 11:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Thu, Nov 13, 2025 at 7:16 PM shveta malik <shveta.malik@gmail.com> wrote:
> >
> > On Thu, Nov 13, 2025 at 6:39 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
> > >
> > >
> > >
> > >> But the system can die/crash before shutdown.
> > >
> > >
> > > You mean it will not write WAL?
> > > When a logical replication slot is created we build a snapshot and also write to WAL:
> > > postgres=# select pg_current_wal_insert_lsn(); select pg_create_logical_replication_slot('foo', 'pgoutput'); select pg_current_wal_insert_lsn();
> > > pg_current_wal_insert_lsn
> > > ---------------------------
> > > 0/37F96F8
> > > (1 row)
> > >
> > > pg_create_logical_replication_slot
> > > ------------------------------------
> > > (foo,0/37F9730)
> > > (1 row)
> > >
> > > pg_current_wal_insert_lsn
> > > ---------------------------
> > > 0/37F9730
> > > (1 row)
> > >
> > > Only after that slot is marked as persistent.
> > >
> >
> > There can be a scenario where a replication slot is dropped and
> > recreated, and its WAL is also replicated to the standby. However,
> > before the new slot state can be synchronized via slotsync, the
> > primary crashes and the standby is promoted. Later, the user manually
> > reconfigures the old primary to follow the newly promoted standby (no
> > pg_rewind in play). I was wondering whether in such a case it would
> > be a good idea to overwrite the newly created slot on old primary with
> > promoted-standby's synced slot (old one) by default? Thoughts?
>
> I think it's an extremely rare or a mostly wrong operation that after
> failover (i.e., the old primary didn't shutdown gracefully) users have
> the old primary rejoin the replication as the new standby without
> pg_rewind. I guess that pg_rewind should practically be used unless
> the primary server gracefully shutdowns (i.e., in switchover case). In
> failover cases, pg_rewind launches the server in single-user mode to
> run the crash recovery, advancing its LSN and cleaning all existing
> replication slots after rewinding the server. So I think that the
> reported issue doesn't happen in failover cases and we can focus on
> failover cases.
>
The point is quite fundamental, do you think we can sync to a
pre-existing slot with the same name and failover marked as true after
the first time the node joins a new primary? We don't provide any
switchover tools/utilities, so it doesn't appear straight-forward that
we can perform re-sync. If we would have a switchover tool, I think
one may have removed all existing slots before the old primary joins
the new primary because otherwise, there is always a chance that there
remain redundant slots which will prevent resource removal. Consider a
case where after switchover, the old primary decides to join a
different standby (new primary) than where slot-sync was earlier
happening. Now, it is possible that the old primary may have some
slots which should be removed.
--
With Regards,
Amit Kapila.
On Fri, Nov 14, 2025 at 2:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Fri, Nov 14, 2025 at 11:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Thu, Nov 13, 2025 at 7:16 PM shveta malik <shveta.malik@gmail.com> wrote:
> > >
> > > On Thu, Nov 13, 2025 at 6:39 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:
> > > >
> > > >
> > > >
> > > >> But the system can die/crash before shutdown.
> > > >
> > > >
> > > > You mean it will not write WAL?
> > > > When a logical replication slot is created we build a snapshot and also write to WAL:
> > > > postgres=# select pg_current_wal_insert_lsn(); select pg_create_logical_replication_slot('foo', 'pgoutput'); select pg_current_wal_insert_lsn();
> > > > pg_current_wal_insert_lsn
> > > > ---------------------------
> > > > 0/37F96F8
> > > > (1 row)
> > > >
> > > > pg_create_logical_replication_slot
> > > > ------------------------------------
> > > > (foo,0/37F9730)
> > > > (1 row)
> > > >
> > > > pg_current_wal_insert_lsn
> > > > ---------------------------
> > > > 0/37F9730
> > > > (1 row)
> > > >
> > > > Only after that slot is marked as persistent.
> > > >
> > >
> > > There can be a scenario where a replication slot is dropped and
> > > recreated, and its WAL is also replicated to the standby. However,
> > > before the new slot state can be synchronized via slotsync, the
> > > primary crashes and the standby is promoted. Later, the user manually
> > > reconfigures the old primary to follow the newly promoted standby (no
> > > pg_rewind in play). I was wondering whether in such a case it would
> > > be a good idea to overwrite the newly created slot on old primary with
> > > promoted-standby's synced slot (old one) by default? Thoughts?
> >
> > I think it's an extremely rare or a mostly wrong operation that after
> > failover (i.e., the old primary didn't shutdown gracefully) users have
> > the old primary rejoin the replication as the new standby without
> > pg_rewind. I guess that pg_rewind should practically be used unless
> > the primary server gracefully shutdowns (i.e., in switchover case). In
> > failover cases, pg_rewind launches the server in single-user mode to
> > run the crash recovery, advancing its LSN and cleaning all existing
> > replication slots after rewinding the server. So I think that the
> > reported issue doesn't happen in failover cases and we can focus on
> > failover cases.
> >
>
> The point is quite fundamental, do you think we can sync to a
> pre-existing slot with the same name and failover marked as true after
> the first time the node joins a new primary?
Given the current behavior that we cannot create a logical slot with
failover=true on the standby, it makes sense to me that we overwrite
the pre-existing slot (with synced=false and failover=true) on the old
primary by the slot (with synced=true and failover=true) on the new
primary if their names, plugin and other properties match and the
pre-existing slot has lesser LSNs and XIDs than the one on the new
primary. But at the same time, we need to consider the possible future
changes that allow users to create a slot with failover=true also on
the standby.
Alexander pointed out[1] that allowing to create a slot with
failover=true on the standby won't work with the current
implementation. I agree with his analysis, and I guess we would need
more changes than simply allowing it, regardless of accepting the
proposed change. We might need to introduce a replication slot origin
or a generation.
> We don't provide any
> switchover tools/utilities, so it doesn't appear straight-forward that
> we can perform re-sync. If we would have a switchover tool, I think
> one may have removed all existing slots before the old primary joins
> the new primary because otherwise, there is always a chance that there
> remain redundant slots which will prevent resource removal. Consider a
> case where after switchover, the old primary decides to join a
> different standby (new primary) than where slot-sync was earlier
> happening. Now, it is possible that the old primary may have some
> slots which should be removed.
Right, there are cases where users need to remove existing slots
before allowing the old primary to join the new primary. While
removing all slots would be the simplest solution that works in all
cases, IIUC the proposal here is to reuse pre-existing slots in
scenarios involving a switchover. The idea itself seems reasonable and
would help reduce the time spent in degraded operation mode (other
benefits too?).
Regards,
[1] https://www.postgresql.org/message-id/CAFh8B%3DkH4Nwd_69fWP4VxK9tjxxBUzdxEZLd%2BLCby9tbSMTcRA%40mail.gmail.com
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
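The properties mentioned above (name, plugin, other options, and the relative position of LSNs and XIDs) can be compared manually by running a query like this on both nodes before deciding whether the pre-existing slot may be overwritten:
SELECT slot_name, plugin, two_phase, failover, synced,
       restart_lsn, confirmed_flush_lsn, catalog_xmin
FROM pg_replication_slots
WHERE slot_name = 'logical_slot';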
Given the current behavior that we cannot create a logical slot with
failover=true on the standby, it makes sense to me that we overwrite
the pre-existing slot (with synced=false and failover=true) on the old
primary by the slot (with synced=true and failover=true) on the new
primary if their names, plugin and other properties match and the
pre-existing slot has lesser LSNs and XIDs than the one on the new
primary.
remote_slot->two_phase != slot->data.two_phase ||
remote_slot->failover != slot->data.failover ||
strcmp(remote_slot->plugin, NameStr(slot->data.plugin)) != 0 ||
remote_slot->two_phase_at != slot->data.two_phase_at)
{
NameData plugin_name;
/* Avoid expensive operations while holding a spinlock. */
namestrcpy(&plugin_name, remote_slot->plugin);
SpinLockAcquire(&slot->mutex);
slot->data.plugin = plugin_name;
slot->data.database = remote_dbid;
slot->data.two_phase = remote_slot->two_phase;
slot->data.two_phase_at = remote_slot->two_phase_at;
slot->data.failover = remote_slot->failover;
SpinLockRelease(&slot->mutex);