Re: raw output from copy
From | Ants Aasma
Subject | Re: raw output from copy
Date |
Msg-id | CA+CSw_s0=-QtC_c3_-MbQA_2dVGz3DyuRZSXzn7ZNK5pWRd+8w@mail.gmail.com
In reply to | raw output from copy (Pavel Stehule <pavel.stehule@gmail.com>)
Responses | Re: raw output from copy
List | pgsql-hackers
On 8 Apr 2016 9:14 pm, "Pavel Stehule" <pavel.stehule@gmail.com> wrote:
> 2016-04-08 20:54 GMT+02:00 Andrew Dunstan <andrew@dunslane.net>:
>> I should add that I've been thinking about this some more, and that I now agree that something should be done to support this at the SQL level, mainly so that clients can manage very large pieces of data in a stream-oriented fashion rather than having to marshall the data in memory to load/unload via INSERT/SELECT. Anything that is client-side only is likely to have this memory issue.
>>
>> At the same time I'm still not entirely convinced that COPY is a good vehicle for this. It's designed for bulk records, and already quite complex. Maybe we need something new that uses the COPY protocol but is more specifically tailored for loading or sending large singleton pieces of data.
>
> Now there is a little more time to think about it. But it is hard to design anything simpler than the COPY syntax that will support both directions.

Sorry for arriving late and adding to the bikeshedding. Maybe the answer is to make COPY pluggable. It seems to me that it would be relatively straightforward to add an extension mechanism for COPY output and input plugins that could support any format expressible as a binary stream. Raw output would then be an almost trivial plugin. Others could implement JSON, protocol buffers, Redis bulk load, BSON, ASN.1, or whatever other serialisation format du jour. It will still have the same backwards compatibility issues as adding the raw output, but the payoff is greater.

Regards,
Ants Aasma