Discussion: pg_dump
When you create 5000 schemas, each containing 100 tables with 10 different data types, and execute pg_dump -a --inserts -t schema1.table2 dbname, it takes around 2 minutes. How can I make it faster?
On Friday, May 8, 2020, Volodymyr Blahoi <vblagoi@gmail.com> wrote:
> When you create 5000 schemas, each containing 100 tables with 10 different data types, and execute pg_dump -a --inserts -t schema1.table2 dbname, it takes around 2 minutes. How can I make it faster?
This isn’t a bug... and anyway you didn’t specify the important bit, which is how big that table is... but probably “get better disk drive hardware” is an answer.
David J.
On 08/05/2020 12:18, Volodymyr Blahoi wrote:

> When you create 5000 schemas, each containing 100 tables with 10 different data types, and execute pg_dump -a --inserts -t schema1.table2 dbname, it takes around 2 minutes. How can I make it faster?

This is not the right ML to ask on; you might want to write to the "performance" ML instead.

About your problem: one solution might be to make sure you are writing your dump to a separate set of disks from the ones your database reads its data from.

regards,

fabio pardi
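Taken together, the replies amount to a few standard pg_dump tuning steps. A hedged sketch of the relevant commands (dbname, the output paths, and the -j worker count are placeholders, not taken from the thread):

```shell
# The slow case from the thread: data-only dump emitting INSERT statements.
# --inserts generates one INSERT per row, which is considerably slower to
# produce (and to restore) than the default COPY format.
pg_dump -a --inserts -t schema1.table2 dbname > /backup/table2.sql

# 1. Drop --inserts unless you specifically need INSERT statements
#    (e.g. for loading the data into a non-PostgreSQL database):
pg_dump -a -t schema1.table2 dbname > /backup/table2.sql

# 2. Write the dump to a different disk than the one the database reads
#    from, so dump writes and table reads do not compete (Fabio's point):
pg_dump -a -t schema1.table2 dbname > /mnt/backup_disk/table2.sql

# 3. For dumps of many tables, the directory format supports parallel
#    workers; -j 4 here is a placeholder, tune it to your CPU/disk setup:
pg_dump -Fd -j 4 -f /mnt/backup_disk/dumpdir dbname
```

One caveat: with 5000 schemas of 100 tables each, part of the 2 minutes may be pg_dump reading the large system catalog rather than dumping the one table's data, and none of these flags reduce that, so it is worth measuring where the time actually goes before tuning.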