Re: pg_dump far too slow
From | Scott Carey
---|---
Subject | Re: pg_dump far too slow
Date |
Msg-id | 5F148C16-4368-4D50-A0E9-324AA8A056A5@richrelevance.com
In reply to | Re: pg_dump far too slow (David Newall <postgresql@davidnewall.com>)
List | pgsql-performance
On Mar 21, 2010, at 8:50 AM, David Newall wrote:

> Tom Lane wrote:
>> I would bet that the reason for the slow throughput is that gzip
>> is fruitlessly searching for compressible sequences. It won't find many.
>
> Indeed, I didn't expect much reduction in size, but I also didn't expect
> a twenty-fold increase in run-time (i.e. output at 10MB/second going down
> to 500KB/second), particularly as my estimate was based on gzipping a
> previously gzipped file. I think it's probably pathological data, as it
> were. Might even be of interest to gzip's maintainers.

gzip -9 is known to be very inefficient. It is hardly ever more compact
than -7, and is often 2x slower or worse. It is almost never worth using
unless you don't care how long the compression takes.

Try -Z1: at level 1 compression, the output will often be good enough at
rather fast speeds. It is about 6x as fast as gzip -9 and typically
creates result files about 10% larger.

For some compression/decompression speed benchmarks, see:

http://tukaani.org/lzma/benchmarks.html
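For concreteness, a minimal sketch of the two low-compression-level approaches discussed above; the database name mydb and the file names are placeholders, and timings will vary with your data:

```sh
# Let pg_dump compress the custom-format archive itself at level 1:
pg_dump -Fc -Z1 mydb > mydb.dump

# Or write an uncompressed plain dump and pipe it through gzip at level 1:
pg_dump -Fp mydb | gzip -1 > mydb.sql.gz

# Rough timing comparison of gzip levels on an existing plain dump:
time gzip -1 < mydb.sql > /dev/null
time gzip -9 < mydb.sql > /dev/null
```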