Re: Performance of pg_dump on PGSQL 8.0
From:        Jim C. Nasby
Subject:     Re: Performance of pg_dump on PGSQL 8.0
Date:
Msg-id:      20060614212507.GR34196@pervasive.com
In reply to: Re: Performance of pg_dump on PGSQL 8.0 ("John Vincent" <pgsql-performance@lusis.org>)
Responses:   Re: Performance of pg_dump on PGSQL 8.0
List:        pgsql-performance
On Wed, Jun 14, 2006 at 05:18:14PM -0400, John Vincent wrote:
> On 6/14/06, Jim C. Nasby <jnasby@pervasive.com> wrote:
> >
> > On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
> > > Out of curiosity, does anyone have any idea what the ratio of actual
> > > datasize to backup size is if I use the custom format with -Z 0
> > > compression or the tar format?
> >
> > -Z 0 should mean no compression.
>
> But the custom format is still a binary backup, no?

I fail to see what that has to do with anything...

> > Something you can try is piping the output of pg_dump to gzip/bzip2. On
> > some OSes, that will let you utilize 1 CPU for just the compression. If
> > you wanted to get even fancier, there is a parallelized version of bzip2
> > out there, which should let you use all your CPUs.
> >
> > Or if you don't care about disk IO bandwidth, just compress after the
> > fact (though, that could just put you in a situation where pg_dump
> > becomes bandwidth constrained).
>
> Unfortunately if we working with our current source box, the 1 CPU is
> already the bottleneck in regards to compression. If I run the pg_dump from
> the remote server though, I might be okay.

Oh, right, forgot about that. Yeah, your best bet could be to use an
external machine for the dump.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
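For reference, a minimal sketch of the piping approach described above,
assuming a database named mydb, a remote dump host named dbserver, and that
gzip and the parallelized bzip2 (pbzip2) are installed; the database, host,
and file names are placeholders, not taken from the thread:

    # Custom-format dump with internal compression turned off; gzip runs in
    # its own process, so compression can use a second CPU.
    pg_dump -Fc -Z 0 mydb | gzip > mydb.dump.gz

    # Plain-format dump piped through pbzip2 to spread compression across
    # all available CPUs.
    pg_dump mydb | pbzip2 > mydb.sql.bz2

    # Run the dump from an external machine so that neither pg_dump's
    # client side nor the compression competes with the database server's
    # single busy CPU.
    pg_dump -h dbserver -Fc -Z 0 mydb | gzip > mydb.dump.gz

With -Z 0 the custom-format archive itself is left uncompressed, so the
external gzip or pbzip2 is not redundant with pg_dump's built-in compression.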