Re: FDW for PostgreSQL
From: Shigeru Hanada
Subject: Re: FDW for PostgreSQL
Date:
Msg-id: CAEZqfEcvQQjot68R5BEjUzKZnzNAC6jTDGF6E0EwxfWus9Wdog@mail.gmail.com
In reply to: Re: FDW for PostgreSQL (Kohei KaiGai <kaigai@kaigai.gr.jp>)
Responses: Re: FDW for PostgreSQL
List: pgsql-hackers
On Wed, Nov 21, 2012 at 7:31 PM, Kohei KaiGai <kaigai@kaigai.gr.jp> wrote:
>>> At execute_query(), it stores all the retrieved rows into the tuplestore
>>> festate->tuples at once.  Doesn't it cause problems when the remote
>>> table has a very large number of rows?
>>
>> No.  postgres_fdw uses libpq's single-row processing mode when
>> retrieving query results in execute_query, so memory usage will
>> stay stable at a certain level.
>
> IIRC, the previous code used the cursor feature to fetch a set of rows
> at a time, to avoid over-consumption of local memory.  Is there some
> restriction if we fetch a certain number of rows with FETCH?
> It seems to me we could fetch, say, 1000 rows and tentatively
> store them in the tuplestore within one PG_TRY() block (so there is no
> need to worry about PQclear() timing), then fetch remote
> rows again when IterateForeignScan reaches the end of the tuplestore.
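(Purely for illustration, a batched FETCH along those lines might look roughly like the sketch below.  The function name, cursor name, and batch size are invented for the example; this is not the actual postgres_fdw code.)

#include "postgres.h"
#include "funcapi.h"
#include "utils/tuplestore.h"
#include "libpq-fe.h"

/*
 * Illustrative sketch: fetch up to 1000 rows from an already-declared
 * remote cursor and append them to the tuplestore.  Returns true if more
 * rows may remain.  The PGresult is cleared in both the normal and the
 * error path, so the whole batch can live inside a single PG_TRY() block.
 */
static bool
fetch_batch_with_cursor(PGconn *conn, const char *fetch_sql,
                        AttInMetadata *attinmeta, Tuplestorestate *tupstore)
{
    PGresult   *volatile res = NULL;
    int         ntuples = 0;

    PG_TRY();
    {
        res = PQexec(conn, fetch_sql);      /* e.g. "FETCH 1000 FROM c1" */
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            elog(ERROR, "remote FETCH failed: %s", PQerrorMessage(conn));

        ntuples = PQntuples(res);
        for (int i = 0; i < ntuples; i++)
        {
            int     nfields = PQnfields(res);
            char  **values = (char **) palloc(nfields * sizeof(char *));

            for (int j = 0; j < nfields; j++)
                values[j] = PQgetisnull(res, i, j) ? NULL
                                                   : PQgetvalue(res, i, j);
            tuplestore_puttuple(tupstore,
                                BuildTupleFromCStrings(attinmeta, values));
            pfree(values);
        }
        PQclear(res);
    }
    PG_CATCH();
    {
        if (res != NULL)
            PQclear(res);                   /* no leaked PGresult on error */
        PG_RE_THROW();
    }
    PG_END_TRY();

    /* a short batch means the cursor is exhausted */
    return ntuples == 1000;
}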
As you say, postgres_fdw used to use a cursor to avoid possible memory
exhaustion on large result sets.  I switched to single-row processing mode
(which could be called a "protocol-level cursor"), added in 9.2,
because it accomplishes the same task with fewer SQL calls than a cursor.
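Reduced to bare libpq calls, the single-row processing mode I mean works roughly as in the sketch below (error handling and the FDW plumbing are mostly omitted, and the function name is invented for the example):

#include <stdio.h>
#include <libpq-fe.h>

/*
 * Stream a query result row by row.  PQsetSingleRowMode() must be called
 * right after PQsendQuery() and before the first PQgetResult(); each
 * PGresult then carries a single row, so memory use stays flat no matter
 * how large the remote result set is.
 */
static void
stream_query(PGconn *conn, const char *sql)
{
    PGresult   *res;

    if (!PQsendQuery(conn, sql))
    {
        fprintf(stderr, "could not send query: %s", PQerrorMessage(conn));
        return;
    }
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enter single-row mode\n");

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        {
            /* exactly one row in this result; process it here */
            if (PQnfields(res) > 0)
                printf("%s\n", PQgetvalue(res, 0, 0));
        }
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

        PQclear(res);       /* free each one-row result immediately */
    }
}

Note that the whole result arrives from the one query that was sent; there is no per-batch DECLARE/FETCH/CLOSE exchange, which is where the saving in SQL calls comes from.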
Regards,
Shigeru HANADA