Re: pg_dump slow with bytea data
From | Merlin Moncure |
---|---|
Subject | Re: pg_dump slow with bytea data |
Date | |
Msg-id | AANLkTik0KHvwH1TbztVZvqq52S15wh1R7OEZT1k6YVQY@mail.gmail.com |
In reply to | Re: pg_dump slow with bytea data (Merlin Moncure <mmoncure@gmail.com>) |
List | pgsql-general |
On Mon, Mar 7, 2011 at 8:52 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
> Well, that's a pretty telling case, although I'd venture to say not
> typical. In average databases, I'd expect a 10-50% range of improvement
> going from text->binary, which is often not enough to justify the
> compatibility issues. Does it justify a 'binary' switch to pg_dump?
> I'd say so -- as long as the changes required aren't too extensive
> (although you can expect disagreement on that point). hm. i'll take a
> look...

The changes don't look too bad, but they are not trivial. On the backup side, it just does a text/binary-agnostic copy direct to stdout. You'd need to create a switch, of course, and I'm assuming add an isbinary flag to ArchiveHandle, and possibly a stream length to the tocEntry for each table (or should that just be a header on the binary stream?).

On the restore side it's a bit more complicated -- the current code is a complete text monster, grepping each line for an unquoted newline, assuming ASCII '0' is the end of the data, etc. You would need a completely separate code path for binary, but it would be much smaller and simpler (and faster!). There might be some other issues too; I just did a cursory scan of the code.

merlin
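[To illustrate the "smaller and simpler" binary restore path discussed above, here is a minimal sketch of a parser for PostgreSQL's documented binary COPY stream format (11-byte signature, flags and header-extension words, then length-prefixed fields). It is not pg_restore code; the function name and structure are my own, and error handling is minimal.]

```python
import struct

# 11-byte signature that opens every binary COPY stream.
SIGNATURE = b"PGCOPY\n\xff\r\n\x00"

def parse_binary_copy(buf):
    """Parse a binary COPY stream into a list of row tuples.

    Each field is returned as raw bytes, or None for SQL NULL.
    Unlike the text format, no quoting or newline scanning is needed:
    every field carries an explicit length prefix.
    """
    if buf[:11] != SIGNATURE:
        raise ValueError("not a binary COPY stream")
    pos = 11
    # 32-bit flags field, then 32-bit header-extension length (skipped).
    flags, ext_len = struct.unpack_from(">iI", buf, pos)
    pos += 8 + ext_len
    rows = []
    while True:
        # Each tuple starts with a 16-bit field count; -1 marks the trailer.
        (nfields,) = struct.unpack_from(">h", buf, pos)
        pos += 2
        if nfields == -1:
            break
        row = []
        for _ in range(nfields):
            # 32-bit field length; -1 means SQL NULL, otherwise raw bytes follow.
            (flen,) = struct.unpack_from(">i", buf, pos)
            pos += 4
            if flen == -1:
                row.append(None)
            else:
                row.append(buf[pos:pos + flen])
                pos += flen
        rows.append(tuple(row))
    return rows
```

Note that bytea values (the case that started this thread) pass through untouched here, whereas the text path has to escape and later unescape every byte, which is where much of the dump/restore overhead comes from.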