Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
| From | Tom Lane |
|---|---|
| Subject | Re: [PERFORM] Correct use of cursors for very large result sets in Postgres |
| Date | |
| Msg-id | 17679.1487683929@sss.pgh.pa.us |
| In reply to | Re: [PERFORM] Correct use of cursors for very large result sets in Postgres (Mike Beaton <mjsbeaton@gmail.com>) |
| Responses | Re: [PERFORM] Correct use of cursors for very large result sets in Postgres |
| List | pgsql-performance |
Mike Beaton <mjsbeaton@gmail.com> writes:
> New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file
> on `FETCH ALL FROM CursorToHuge`.

I poked into this and determined that it's happening because pquery.c executes FETCH statements the same as it does any other tuple-returning utility statement, ie "run it to completion and put the results in a tuplestore, then send the tuplestore contents to the client".

I think the main reason nobody worried about that being non-optimal was that we weren't expecting people to FETCH very large amounts of data in one go --- if you want the whole query result at once, why are you bothering with a cursor?

This could probably be improved, but it would (I think) require inventing an additional PortalStrategy specifically for FETCH, and writing associated code paths in pquery.c. Don't know when/if someone might get excited enough about it to do that.

			regards, tom lane
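[Since the buffering described above happens per FETCH statement, a practical consequence is that fetching in moderate batches keeps each tuplestore small, while `FETCH ALL` materializes the entire result. A sketch of the batched pattern follows; the cursor and table names (`huge_cur`, `big_table`) are hypothetical, and the batch size is arbitrary:]

```sql
BEGIN;

-- Declare a cursor over the (hypothetical) large table.
DECLARE huge_cur CURSOR FOR SELECT * FROM big_table;

-- Instead of FETCH ALL (which runs the whole query into one huge
-- tuplestore before sending anything), fetch a bounded batch at a
-- time; each FETCH materializes only this many rows server-side.
FETCH FORWARD 10000 FROM huge_cur;
-- ...the client repeats the FETCH until it returns zero rows...

CLOSE huge_cur;
COMMIT;
```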