Re: copy a large table raises out of memory exception
From | Tomasz Ostrowski
---|---
Subject | Re: copy a large table raises out of memory exception
Date | |
Msg-id | 20071213145007.GA22414@batory.org.pl
In response to | copy a large table raises out of memory exception ("A. Ozen Akyurek" <ozen@ventura.com.tr>)
List | pgsql-general
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:

> We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
> which is exported to a binary file.

How was it exported? With "COPY tablename TO 'filename' WITH BINARY"?

"The BINARY key word causes all data to be stored/read as binary format
rather than as text. It is somewhat faster than the normal text mode, but
a binary-format file is less portable across machine architectures and
PostgreSQL versions."
http://www.postgresql.org/docs/8.2/static/sql-copy.html

Maybe you are bitten by this "less portable".

> When we run "copy tablename from filepath" command, (...) and
> postgre raises exception "out of memory".

I'd try to use pg_dump/pg_restore in custom format, like this:

pg_dump -a -Fc -Z1 -f [filename] -t [tablename] [olddatabasename]
pg_restore -1 -a -d [newdatabasename] [filename]

Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when
you were...
                                                  Winnie the Pooh
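
As a concrete sketch of the suggested round trip, with hypothetical names
(src_db, dst_db, big_table, big_table.dump) standing in for the bracketed
placeholders above; none of these names come from the original thread:

    # dump only the data of one table, custom format, light compression
    # (src_db, big_table and big_table.dump are hypothetical names)
    pg_dump -a -Fc -Z1 -f big_table.dump -t big_table src_db

    # restore that data into the target database as a single transaction
    # (dst_db is a hypothetical name; the table must already exist there)
    pg_restore -1 -a -d dst_db big_table.dump

The custom-format dump sidesteps the portability caveat of COPY ... WITH
BINARY, and pg_restore streams the rows rather than needing the whole file
in memory at once.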