On 2023-Dec-07, M Sarwar wrote:
> I agree with Tom. This makes a difference; I have run into this scenario several times in the past.
> But the whole database becomes slow while the dump is running.
For large databases with a very high rate of updates, a running pg_dump
can prevent vacuum from removing old versions of rows: the dump holds a
single snapshot for its entire duration, and vacuum cannot remove tuples
that are still visible to that snapshot. This can make operations slower
because of the accumulation of bloat.
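One way to see this (just a sketch, not verified against your setup) is
to look at which backends are pinning the oldest snapshot; a
long-running pg_dump usually shows up with application_name 'pg_dump'
and a large backend_xmin age:

  -- backends holding back the transaction horizon, oldest first
  SELECT pid, application_name, state,
         age(backend_xmin) AS xmin_age,
         now() - xact_start AS xact_duration
  FROM pg_stat_activity
  WHERE backend_xmin IS NOT NULL
  ORDER BY age(backend_xmin) DESC;

As long as that xmin_age keeps growing, vacuum cannot reclaim row
versions newer than it.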
For such situations, pg_dump is not really recommended. It's better to
use a physical backup (say, pgbarman), or, if you really need a pg_dump
output file for some reason, create a replica (with hot_standby_feedback
kept _off_) and run pg_dump there.
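A rough sketch of that second approach (host and database names here are
only placeholders): hot_standby_feedback is off by default, so it's
mostly a matter of confirming it hasn't been enabled on the standby and
then pointing pg_dump at it:

  psql -h standby.example.com -d mydb -c 'SHOW hot_standby_feedback;'
  pg_dump -h standby.example.com -d mydb -Fc -f mydb.dump

With the feedback off, the dump's snapshot does not hold back vacuum on
the primary.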
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"I'm always right, but sometimes I'm more right than other times."
(Linus Torvalds)
https://lore.kernel.org/git/Pine.LNX.4.58.0504150753440.7211@ppc970.osdl.org/