Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
From | Matthew Hall
---|---
Subject | Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
Date |
Msg-id | C5EC3C9D-755B-4356-B1EB-7822030D03A2@mhcomputing.net
In reply to | Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade ("Henrik Cednert (Filmlance)" <henrik.cednert@filmlance.se>)
Responses | Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
 | Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
List | pgsql-performance
> On Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance) <henrik.cednert@filmlance.se> wrote:
>
> What's the normal way to deal with compression? Dump uncompressed and use something that threads better to compress the dump?

I would say most likely your zlib is screwed up somehow, like maybe it didn't get optimized right by the C compiler, or something else is off with the compression settings. The CPU should easily blast away at that faster than the disks can read.

I did some studies of this some years ago, and I found gzip -6 offered the best ratio between size reduction and CPU time out of a very wide range of formats, but at the time xz was not yet available.

If I were you, I would first pipe the uncompressed output through a separate compression command. Then you can experiment with the flags and threads, and you already get another separate process for the kernel to put on other CPUs, as an automatic bonus for multi-core with minimal work.

After that, xz is GNU standard now and has xz -T for cranking up threads, with little extra effort for the user. But it can be kind of slow, so you'll probably need to lower the compression level somewhat, depending a bit on some timed testing. I would try it on some medium-sized DB table, a bit over the size of system RAM, instead of dumping this great big DB, in order to benchmark a couple of times until it looks happy.

Matthew
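As a rough sketch of what I mean (the database name, output file, thread count, and compression level here are only placeholders to tune, not tested against your setup):

    # dump with pg_dump's own compression turned off, compress in a separate process
    pg_dump -Fc -Z0 mydb | xz -T4 -3 > mydb.dump.xz

    # same idea with gzip, for comparison against the built-in zlib path
    pg_dump -Fc -Z0 mydb | gzip -6 > mydb.dump.gz

Since the compressor runs as its own process, the kernel can schedule it on a different core than pg_dump; to restore you would decompress the file first and feed it to pg_restore.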