Re: Performance of pg_basebackup
From: Magnus Hagander
Subject: Re: Performance of pg_basebackup
Msg-id: CABUevEx7Y5RjpJHUR20Xe0OpxD=0QKghu2tyx11EJPk0-15Q_g@mail.gmail.com
In reply to: Performance of pg_basebackup (Shaun Thomas <sthomas@optionshouse.com>)
Responses: Re: Performance of pg_basebackup
List: pgsql-performance
On Tue, Jun 12, 2012 at 4:54 PM, Shaun Thomas <sthomas@optionshouse.com> wrote:
> Hey everyone,
>
> I was wondering if anyone has found a way to get pg_basebackup to be...
> faster. Currently we do our backups something like this:
>
> tar -c -I pigz -f /db/backup_yyyy-mm-dd.tar.gz -C /db pgdata
>
> Which basically calls pigz to do parallel compression because with RAIDs and
> ioDrives all over the place, it's the compression that's the bottleneck.
> Otherwise, only one of our 24 CPUs is actually doing anything.
>
> I can't seem to find anything like this for pg_basebackup. It just uses its
> internal compression method. I could see this being the case for pg_dump,
> but pg_basebackup just produces regular tar.gz files. Is there any way to
> either fake a parallel compression here, or should this be a feature request
> for pg_basebackup?

If you have a single tablespace you can have pg_basebackup write the
output to stdout and then pipe that through pigz.

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
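Magnus's suggestion could be sketched roughly as below. This assumes a single tablespace and a server the current user can connect to; the thread count (`-p 24`, matching Shaun's 24 CPUs) and the output path are illustrative, not from the original post:

```shell
# Stream the base backup as a tar to stdout (-D - requires tar format, -Ft),
# then compress it in parallel with pigz instead of pg_basebackup's
# built-in single-threaded gzip.
pg_basebackup -D - -Ft | pigz -p 24 > /db/backup_$(date +%F).tar.gz
```

With multiple tablespaces, pg_basebackup produces one tar per tablespace and cannot write them all to a single stdout stream, which is why the single-tablespace caveat matters.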