Re: [PATCH] COPY .. COMPRESSED
From | Claudio Freire |
---|---|
Subject | Re: [PATCH] COPY .. COMPRESSED |
Date | |
Msg-id | CAGTBQpb8YbVX=BHyDPXQRcQMtbn4Ai2nYCXYwNWE7wNHPNjUaw@mail.gmail.com |
In reply to | Re: [PATCH] COPY .. COMPRESSED (Robert Haas <robertmhaas@gmail.com>) |
List | pgsql-hackers |
On Wed, Jan 16, 2013 at 8:19 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Jan 15, 2013 at 4:50 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I find the argument that this supports compression-over-the-wire to be
>> quite weak, because COPY is only one form of bulk data transfer, and
>> one that a lot of applications don't ever use. If we think we need to
>> support transmission compression for ourselves, it ought to be
>> integrated at the wire protocol level, not in COPY.
>>
>> Just to not look like I'm rejecting stuff without proposing
>> alternatives, here is an idea about a backwards-compatible design for
>> doing that: we could add an option that can be set in the connection
>> request packet. Say, "transmission_compression = gzip".
>
> But presumably this would transparently compress at one end and
> decompress at the other end, which is again a somewhat different use
> case. To get compressed output on the client side, you have to
> decompress and recompress. Maybe that's OK, but it's not quite the
> same thing.

Well, libpq could give some access to the raw compressed stream, but really, even with double compression on the client, protocol-level compression solves the bandwidth problem, and not only for pg_dump, pg_restore, and COPY, but for all other transfer-intensive applications as well. I do think it's the best option.
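To make the proposal concrete, here is a minimal sketch of what a client might write if Tom's suggested "transmission_compression" startup-packet option existed. The option name comes from this thread and is purely hypothetical; current libpq and servers do not accept it, so this is an illustration of the intended usage, not working code.

```c
/*
 * Sketch only: assumes a hypothetical "transmission_compression" connection
 * option (proposed in this thread, not implemented). With protocol-level
 * compression, the COPY below would be compressed on the wire transparently;
 * the client still receives ordinary uncompressed rows.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn;
    PGresult *res;
    char     *buf;
    int       len;

    /* Hypothetical parameter: ask for gzip compression of the whole session. */
    conn = PQconnectdb("dbname=test transmission_compression=gzip");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* COPY itself is unchanged from the client's point of view. */
    res = PQexec(conn, "COPY mytable TO STDOUT");
    if (PQresultStatus(res) == PGRES_COPY_OUT)
    {
        while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
        {
            fwrite(buf, 1, len, stdout);
            PQfreemem(buf);
        }
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

The point of the sketch is that the compression lives entirely below the COPY protocol: to get a compressed file on the client you would still recompress locally, which is the trade-off Robert raises above.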