Re: pg_dump / copy bugs with "big lines" ?
From | Craig Ringer
---|---
Subject | Re: pg_dump / copy bugs with "big lines" ?
Date |
Msg-id | CAMsr+YHpGUVURXGFYV8Fje7KsniDPzJVUV=MU0Hxz3rWbQkKMg@mail.gmail.com
In reply to | Re: pg_dump / copy bugs with "big lines" ? ("Daniel Verite" <daniel@manitou-mail.org>)
Responses | Re: pg_dump / copy bugs with "big lines" ?
List | pgsql-hackers
On 24 March 2016 at 01:14, Daniel Verite <daniel@manitou-mail.org> wrote:
> It provides a useful mitigation to dump/reload databases having
> rows in the 1GB-2GB range, but it works under these limitations:
> - no single field has a text representation exceeding 1GB.
> - no row as text exceeds 2GB (\copy from fails beyond that. AFAICS we
>   could push this to 4GB with limited changes to libpq, by
>   interpreting the Int32 field in the CopyData message as unsigned).
This seems like a worthwhile mitigation for an issue that multiple people have already hit in the wild, and more will hit over time.
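For reference, here is a minimal standalone sketch of what "interpret the Int32 length as unsigned" amounts to at the protocol level. This is not libpq's actual implementation; the helper name and the fake header buffer are made up purely for illustration.

```c
/*
 * Illustrative only, not libpq code: parse a protocol-v3 CopyData ('d')
 * message header and take the 4-byte length as unsigned, which is the idea
 * behind pushing the per-row limit from 2GB toward 4GB.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Read a network-order (big-endian) 32-bit length without sign extension. */
static uint32_t
read_msg_len_unsigned(const unsigned char *buf)
{
	return ((uint32_t) buf[0] << 24) |
		   ((uint32_t) buf[1] << 16) |
		   ((uint32_t) buf[2] << 8) |
		   ((uint32_t) buf[3]);
}

int
main(void)
{
	/*
	 * A fake CopyData header: type byte 'd', then a length of 0x90000000
	 * (~2.25GB). As a signed int32 this is negative and looks invalid;
	 * as unsigned it is a usable length below the 4GB ceiling.
	 */
	unsigned char hdr[5] = {'d', 0x90, 0x00, 0x00, 0x00};

	uint32_t	as_unsigned = read_msg_len_unsigned(hdr + 1);
	int32_t		as_signed;

	memcpy(&as_signed, &as_unsigned, sizeof(as_signed));

	printf("signed:   %d\n", as_signed);	/* negative, rejected today */
	printf("unsigned: %u\n", as_unsigned);	/* ~2.25GB, still < 4GB     */
	return 0;
}
```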
Giving Pg more generally graceful handling of big individual datums is going to be a bit of work, though. It means support for wide-row, big-Datum COPY in and out; efficient lazy fetching of large TOASTed data by follow-up client requests; and range fetching of large compressed TOASTed values (possibly at the price of worse compression) without having to decompress everything up to the start of the desired range. Lots of fun.
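One way a client can already approximate the "lazy fetch in slices" idea today is substring() on the big column, since an external, uncompressed TOAST value can be sliced without fetching the whole thing. A rough libpq sketch, with made-up table and column names (big_table, payload, id) and assuming a single-byte-per-character text column:

```c
/*
 * Sketch: pull one huge text column in 64MB slices with substring(),
 * so no single row or field ever approaches the 1GB/2GB limits.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");		/* connection params from env */
	const long	chunk = 64L * 1024 * 1024;	/* 64MB per round trip */
	long		offset = 1;					/* substring() is 1-based */

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	for (;;)
	{
		char		off_s[32], len_s[32];
		const char *params[2] = {off_s, len_s};
		PGresult   *res;
		int			got;

		snprintf(off_s, sizeof(off_s), "%ld", offset);
		snprintf(len_s, sizeof(len_s), "%ld", chunk);

		res = PQexecParams(conn,
						   "SELECT substring(payload FROM $1::int FOR $2::int)"
						   "  FROM big_table WHERE id = 1",
						   2, NULL, params, NULL, NULL, 0);
		if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) != 1)
		{
			fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
			PQclear(res);
			break;
		}

		got = PQgetlength(res, 0, 0);
		if (got == 0)			/* past the end of the value: done */
		{
			PQclear(res);
			break;
		}
		fwrite(PQgetvalue(res, 0, 0), 1, got, stdout);
		PQclear(res);
		offset += chunk;
	}

	PQfinish(conn);
	return 0;
}
```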
At least we have lob / pg_largeobject.
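For completeness, a minimal sketch of the large-object route: stream the data into pg_largeobject in chunks via libpq's lo_* calls instead of building one huge column value. Error handling is omitted to keep it short.

```c
/*
 * Sketch: store oversized data as a large object rather than a single
 * datum, writing it in 8KB chunks inside a transaction.
 */
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>		/* INV_READ / INV_WRITE */

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");		/* connection params from env */
	char		buf[8192];
	size_t		n;
	Oid			loid;
	int			fd;

	if (PQstatus(conn) != CONNECTION_OK)
		return 1;

	/* Large-object operations must run inside a transaction. */
	PQclear(PQexec(conn, "BEGIN"));

	loid = lo_creat(conn, INV_READ | INV_WRITE);
	fd = lo_open(conn, loid, INV_WRITE);

	/* Stream stdin into the large object; no giant in-memory datum is built. */
	while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
		lo_write(conn, fd, buf, n);

	lo_close(conn, fd);
	PQclear(PQexec(conn, "COMMIT"));

	printf("stored as large object OID %u\n", loid);

	PQfinish(conn);
	return 0;
}
```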