Re: pg_dump and large files - is this a problem?
From        | Philip Warner
Subject     | Re: pg_dump and large files - is this a problem?
Date        |
Msg-id      | 5.1.0.14.0.20021003230559.032fd028@mail.rhyme.com.au
In reply to | pg_dump and large files - is this a problem? (Philip Warner <pjw@rhyme.com.au>)
Responses   | Re: pg_dump and large files - is this a problem?
List        | pgsql-hackers
At 11:06 AM 2/10/2002 -0400, Tom Lane wrote:
>It needs to get done; AFAIK no one has stepped up to do it.  Do you want
>to?

My limited reading of off_t stuff now suggests that it would be brave to
assume it is even a simple 64-bit number (or even 3 32-bit numbers).

One alternative, which I am not terribly fond of, is to have pg_dump write
multiple files - when we get to 1 or 2GB, we just open another file, and
record our file positions as a (file number, file position) pair. Low
tech, but at least we know it would work.

Unless anyone knows of a documented way to get 64-bit uint/int file
offsets, I don't see we have much choice.

----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.B.N. 75 008 659 498)          |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/