Re: [HACKERS] tables > 1 gig

From: Hannu Krosing
Subject: Re: [HACKERS] tables > 1 gig
Date:
Msg-id: 376B6492.8BE29B7F@trust.ee
In response to: Re: [HACKERS] tables > 1 gig  (Ole Gjerde <gjerde@icebox.org>)
List: pgsql-hackers
Ole Gjerde wrote:
> 
> On Fri, 18 Jun 1999, Bruce Momjian wrote:
> [snip - mdtruncate patch]
> 
> While talking about this whole issue, there is one piece missing.
> Currently there is no way to dump a database/table over 2 GB.
> When it hits the 2GB OS limit, it just silently stops and gives no
> indication that it didn't finish.
> 
> It's not a problem for me yet, but I'm getting very close.  I have one
> database with 3 tables over 2GB (in postgres space), but they still come
> out under 2GB after a dump.  I can't do a pg_dump on the whole database
> however, which would be very nice.
> 
> I suppose it wouldn't be overly hard to have pg_dump/pg_dumpall do
> something similar to what postgres does with segments.  I haven't looked
> at it yet however, so I can't say for sure.
> 
> Comments?

As pg_dump writes to stdout, you can just use standard *nix tools:

1. use compressed dumps

pg_dump really_big_db | gzip > really_big_db.dump.gz

reload with

gunzip -c really_big_db.dump.gz | psql newdb
or
cat really_big_db.dump.gz | gunzip | psql newdb

2. use split

pg_dump really_big_db | split -b 1m - really_big_db.dump.

reload with

cat really_big_db.dump.* | psql newdb
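
3. combine both, in case even the compressed dump might grow past 2 GB

This is just a sketch along the same lines, assuming gzip and split are
available (the chunk size is an arbitrary example):

pg_dump really_big_db | gzip | split -b 1000m - really_big_db.dump.gz.

reload with

cat really_big_db.dump.gz.* | gunzip | psql newdb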

-----------------------
Hannu

