Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION
| From | Pavel Stehule |
|---|---|
| Subject | Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION |
| Date | |
| Msg-id | 162867790911242135q4acfac9xf861814b5a889339@mail.gmail.com |
| In reply to | Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION (Daniel Farina <drfarina@gmail.com>) |
| Responses | Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION |
| | Re: [PATCH 4/4] Add tests to dblink covering use of COPY TO FUNCTION |
| List | pgsql-hackers |
2009/11/25 Daniel Farina <drfarina@gmail.com>:
> On Tue, Nov 24, 2009 at 8:45 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>> It depends on design. I don't think internal is necessary. It is
>> just wrong design.
>
> Depends on how lean you want to be when doing large COPY... right now
> the cost is restricted to having to call a function pointer and a few
> branches. If you want to take SQL values, then the semantics of
> function calling over a large number of rows is probably notably more
> expensive, although I make no argument against the fact that the
> non-INTERNAL version would give a lot more people more utility.

I believe using "internal" minimizes the changes needed in the COPY
implementation. Using the funcapi needs more work inside COPY - you have
to move some functionality from COPY into the stream functions. Probably
the slowest operation is parsing - calling the input functions - and that
happens exactly once either way. The second slowest operation is reading
from the network, which is also the same in both cases. So I don't see
many reasons why a non-internal implementation has to be significantly
slower than your current implementation; I am sure it needs more work,
though. What is significant: when COPY is joined directly with a
streaming function, I don't need to use a tuplestore or SRF functions -
COPY reads the data directly.

>
> fdr
>
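To make the per-row cost argument concrete, here is a rough sketch (not taken from the actual patch) of how COPY could hand each raw line to an ordinary SQL-callable function through the fmgr instead of an internal-typed stream. The struct and function names (CopyStreamTarget, stream_target_init, stream_target_send_row) and the assumption of a single text argument are invented for illustration only.

```c
/*
 * Hypothetical sketch: a non-"internal" streaming target for COPY.
 * The user-supplied function is looked up once and then invoked once
 * per row via the ordinary function manager.
 */
#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"

typedef struct CopyStreamTarget
{
    FmgrInfo    flinfo;         /* target function, e.g. f(text) */
} CopyStreamTarget;

static void
stream_target_init(CopyStreamTarget *target, Oid funcoid)
{
    /* look up the user-supplied function once, before the row loop */
    fmgr_info(funcoid, &target->flinfo);
}

static void
stream_target_send_row(CopyStreamTarget *target, const char *line)
{
    /*
     * One fmgr call per row.  The expensive steps - reading from the
     * network and running the datatype input functions - happen
     * elsewhere in COPY and are identical for the "internal" variant,
     * which is the point of the argument above.
     */
    FunctionCall1(&target->flinfo, CStringGetTextDatum(line));
}
```

Under this assumption the only extra per-row cost of the non-internal path is the fmgr call overhead, while the tuplestore/SRF machinery is avoided entirely because COPY pushes each row directly to the target function.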