Re: pg_dump with 1100 schemas being a bit slow
From | Bill Moran |
---|---|
Subject | Re: pg_dump with 1100 schemas being a bit slow |
Date | |
Msg-id | 20091007115454.5b5e369a.wmoran@potentialtech.com |
In reply to | Re: pg_dump with 1100 schemas being a bit slow ("Loic d'Anterroches" <diaeresis@gmail.com>) |
Responses | Re: pg_dump with 1100 schemas being a bit slow |
List | pgsql-general |
In response to "Loic d'Anterroches" <diaeresis@gmail.com>: > On Wed, Oct 7, 2009 at 4:23 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote: > > "Loic d'Anterroches" <diaeresis@gmail.com> writes: > >> Each night I am running: > >> pg_dump --blobs --schema=%s --no-acl -U postgres indefero | gzip > > >> /path/to/backups/%s/%s-%s.sql.gz > >> this for each installation, so 1100 times. Substitution strings are to > >> timestamp and get the right schema. Have you tested the speed without the gzip? We found that compressing the dump takes considerably longer than pg_dump does, but pg_dump can't release its locks until gzip has completely processed all of the data, because of the pipe. By doing the pg_dump in a different step than the compression, we were able to eliminate our table locking issues, i.e.: pg_dump --blobs --schema=%s --no-acl -U postgres indefero > /path/to/backups/%s/%s-%s.sql && gzip /path/to/backups/%s/%s-%s.sql Of course, you'll need enough disk space to store the uncompressed dump while gzip works. -- Bill Moran http://www.potentialtech.com http://people.collaborativefusion.com/~wmoran/