Discussion: WAL replay is too slow on secondary server
On Thu, 2025-10-30 at 10:06 +0530, OMPRAKASH SAHU wrote:
> We have a PostgreSQL cluster set up using Patroni.
> The DB is used by a heavy transactional application, and the problem is that WAL replay on the replica server is too slow.
> We have increased the IOPS to 6k and throughput to 600 on the NVMe EBS volume of the WAL directory, and to 10k & 800 on the data directory.
>
> But WAL keeps accumulating on the replica, and applying WAL shows no improvement.
> We changed maintenance_io_concurrency on the replica to 32.
> CPU utilization is at most 20%, and RAM utilization is also at most 20%.

Is the disk saturated?

> I would request your thoughts and suggestions on how we can get rid of this slowness and gain some speed.

WAL replay during streaming replication is single-threaded, so you can only use a faster CPU or a faster disk, depending on which is the bottleneck.
Perhaps your shared buffers are too small and you have cache contention.

Yours,
Laurenz Albe
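As a quick cross-check of where the lag sits, the standby can compare how much WAL it has received with how much it has replayed. A sketch (the column aliases are illustrative; the functions themselves are standard since PostgreSQL 10):

SELECT pg_last_wal_receive_lsn()                 AS received_lsn,    -- how far the walreceiver has gotten
       pg_last_wal_replay_lsn()                  AS replayed_lsn,    -- how far replay has gotten
       pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                       pg_last_wal_replay_lsn()) AS unapplied_bytes, -- received but not yet applied
       now() - pg_last_xact_replay_timestamp()   AS apply_delay;     -- wall-clock replay lag

If received_lsn keeps advancing while unapplied_bytes grows, the network is delivering WAL just fine and replay itself is the bottleneck.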
Hi Team,

Greetings!!

We have a PostgreSQL cluster set up using Patroni.
The DB is used by a heavy transactional application, and the problem is that WAL replay on the replica server is too slow.
We have increased the IOPS to 6k and throughput to 600 on the NVMe EBS volume of the WAL directory, and to 10k & 800 on the data directory.

But WAL keeps accumulating on the replica, and applying WAL shows no improvement.
We changed maintenance_io_concurrency on the replica to 32.
CPU utilization is at most 20%, and RAM utilization is also at most 20%.

See the PostgreSQL logs below, which show around 2 hours of lag:

tail -f /var/log/postgresql/postgresql.log
2025-10-30 09:02:08 IST [27125]: user=,db=,app=,client=LOG: recovery restart point at 5B65/F1DAFA20
2025-10-30 09:02:08 IST [27125]: user=,db=,app=,client=DETAIL: Last completed transaction was at log time 2025-10-30 07:16:40.115131+05:30.
2025-10-30 09:08:23 IST [27125]: user=,db=,app=,client=LOG: restartpoint starting: time
2025-10-30 09:12:53 IST [27125]: user=,db=,app=,client=LOG: restartpoint complete: wrote 44067 buffers (2.1%); 1 WAL file(s) added, 73 removed, 0 recycled; write=269.362 s, sync=0.042 s, total=269.633 s; sync files=142, longest=0.005 s, average=0.001 s; distance=1197052 kB, estimate=1587336 kB; lsn=5B66/6C3082F8, redo lsn=5B66/3AEAEA60
2025-10-30 09:12:53 IST [27125]: user=,db=,app=,client=LOG: recovery restart point at 5B66/3AEAEA60
2025-10-30 09:12:53 IST [27125]: user=,db=,app=,client=DETAIL: Last completed transaction was at log time 2025-10-30 07:21:47.56674+05:30.

recovery_prefetch output:

postgres=# select * from pg_stat_recovery_prefetch;
           stats_reset            | prefetch |    hit    | skip_init | skip_new | skip_fpw | skip_rep  | wal_distance | block_distance | io_depth
----------------------------------+----------+-----------+-----------+----------+----------+-----------+--------------+----------------+----------
 2025-10-29 23:02:21.396179+05:30 |   182762 | 251000856 |   3841721 |  1100099 |  3520777 | 137392573 |         8984 |             80 |        0
(1 row)

I would request your thoughts and suggestions on how we can get rid of this slowness and gain some speed.

Regards,
OM
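Note that these prefetch counters are cumulative since stats_reset. To sample the prefetcher's current behaviour rather than a day's average, one option (a sketch; assumes PostgreSQL 15 or later, where this view and reset target exist) is to reset the shared counters and read them again after a short interval:

-- zero the recovery prefetch counters
SELECT pg_stat_reset_shared('recovery_prefetch');
-- let replay run for a minute or two, then re-read the view
SELECT * FROM pg_stat_recovery_prefetch;

In the numbers above, skip_rep dominates, i.e. most referenced blocks were skipped because they had been prefetched or read only recently, which suggests prefetching is not the limiting factor here.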
On Thu, 2025-10-30 at 17:08 +0530, Shubhang Joshi wrote:
> On Thu, 30 Oct, 2025, 10:07 am OMPRAKASH SAHU, <sahuop2121@gmail.com> wrote:
> > We have a PostgreSQL cluster set up using Patroni.
> > The DB is used by a heavy transactional application, and the problem is that WAL replay on the replica server is too slow.
> > We have increased the IOPS to 6k and throughput to 600 on the NVMe EBS volume of the WAL directory, and to 10k & 800 on the data directory.
> >
> > But WAL keeps accumulating on the replica, and applying WAL shows no improvement.
>
> Please check the network speed; we faced a similar issue earlier, and it turned out to be related to network performance.
> Kindly verify the network latency with your network team as well.

If WAL is piling up on the standby, how can network speed be the problem?

Yours,
Laurenz Albe
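The pile-up itself is easy to quantify on the standby. A sketch (pg_ls_waldir() requires superuser or pg_monitor membership):

-- count and size the WAL segments currently held in pg_wal
SELECT count(*)                  AS wal_segments,
       pg_size_pretty(sum(size)) AS wal_total
FROM   pg_ls_waldir();

If this number grows steadily while the walreceiver stays connected, delivery is keeping up and apply is not.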
Hi OM,
Hi Laurenz,
Thank you for your insights.
I apologize for my previous suggestion regarding network speed; upon further review, it was not the correct cause in this scenario.
Based on the current observations and system metrics, the accumulation of WAL on the standby server points to disk I/O limitations during replay, not network speed. CPU and RAM usage remain low, and WAL traffic is reaching the replica without delay, but replaying it to disk is slow.
The root cause appears to be disk subsystem performance combined with the single-threaded nature of WAL replay in PostgreSQL recovery. Optimizing disk throughput or reconfiguring memory may help; network latency does not seem to be a factor in this scenario.
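For completeness, the standby-side settings that most often influence replay speed could be adjusted along these lines (a sketch only: the values are placeholders that need sizing for the actual machine, and recovery_prefetch and wal_decode_buffer_size exist from PostgreSQL 15 on):

ALTER SYSTEM SET shared_buffers = '16GB';          -- placeholder; takes effect only after a restart
ALTER SYSTEM SET maintenance_io_concurrency = 64;  -- deeper read-ahead queue during recovery
ALTER SYSTEM SET recovery_prefetch = 'try';        -- prefetch blocks referenced in WAL ('try' is the v15+ default)
ALTER SYSTEM SET wal_decode_buffer_size = '2MB';   -- how far ahead recovery can look; needs a restart
SELECT pg_reload_conf();                           -- picks up the reloadable settings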
Regards,
Shubhang