Re: pg_dump slower than pg_restore
From: David Wall
Subject: Re: pg_dump slower than pg_restore
Msg-id: 53B5EC6A.9050806@computer.org
In response to: Re: pg_dump slower than pg_restore (Bosco Rama <postgres@boscorama.com>)
Responses: Re: pg_dump slower than pg_restore; Re: pg_dump slower than pg_restore
List: pgsql-general
On 7/3/2014 10:36 AM, Bosco Rama wrote:
> If those large objects are 'files' that are already compressed (e.g.
> most image files and pdf's) you are spending a lot of time trying to
> compress the compressed data ... and failing.
>
> Try setting the compression factor to an intermediate value, or even
> zero (i.e. no dump compression). For example, to get the 'low hanging
> fruit' compressed:
>
>   $ pg_dump -Z1 -Fc ...
>
> IIRC, the default value of '-Z' is 6.
>
> As usual your choice will be a run-time vs file-size trade-off so try
> several values for '-Z' and see what works best for you.

That's interesting. Since I gzip the resulting output, I'll give -Z0 a try. I didn't realize that any compression was on by default. Thanks for the tip...
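The effect Bosco describes can be demonstrated outside pg_dump as well. A minimal sketch (my own illustration, not part of the thread; filenames are made up) uses incompressible random bytes as a stand-in for already-compressed large objects, and shows that gzip spends CPU but cannot shrink them:

```shell
# Illustration only, not pg_dump: incompressible data (like JPEGs or
# PDFs stored as large objects) does not shrink when recompressed.
head -c 1000000 /dev/urandom > random.bin   # stand-in for compressed LOBs
gzip -1 -c random.bin > random.bin.gz       # level 1, output to a new file
wc -c random.bin random.bin.gz              # the .gz is typically slightly larger
```

This is why dumping with `-Z0` and letting a single external `gzip` pass handle whatever is still compressible can beat the default `-Z6` on blob-heavy databases.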