Re: pg_dump's over 2GB

From: Steve Wolfe
Subject: Re: pg_dump's over 2GB
Date:
Msg-id: 004101c02a33$1913ca80$50824e40@iboats.com
In reply to: pg_dump's over 2GB  ("Bryan White" <bryan@arcamax.com>)
List: pgsql-general
> My current backups made with pg_dump are currently 1.3GB.  I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the
> output exceeds 2GB?

  There are ways around the 2GB limit if your programs are built with
large-file support, but I'm not sure whether that works with shell
redirects...
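
  One workaround that doesn't depend on large-file support is to split the
dump into pieces below the limit.  A rough sketch (the "pgdump.part."
prefix is just an example name):

pg_dumpall | split -b 1024m - pgdump.part.

  To restore, concatenate the pieces back into psql:

cat pgdump.part.* | psql template1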

> Currently the dump file is later fed to a 'tar cvfz'.  I am thinking that
> instead I will need to pipe pg_dumps output into gzip thus avoiding the
> creation of a file of that size.

   Why not just pump the data right into gzip?  Something like:

pg_dumpall | gzip --stdout > pgdump.gz

  (I'm sure that the more efficient shell scripters will know a better way)
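
  Restoring later just streams the other way; a sketch, assuming the dump
came from pg_dumpall as above (template1 is simply an existing database
for the script to connect to first):

gunzip --stdout pgdump.gz | psql template1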

  If your data is anything like ours, you will get at least a 5:1
compression ratio, meaning you can actually dump around 10 gigs of data
before you hit the 2 gig file limit.
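
  And if even the compressed dump approaches 2 gigs, the two tricks
combine; again only a sketch, with an arbitrary chunk size and prefix:

pg_dumpall | gzip --stdout | split -b 1024m - pgdump.gz.
cat pgdump.gz.* | gunzip --stdout | psql template1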

steve

