Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
From | Tom Lane
---|---
Subject | Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
Date |
Msg-id | 26739.1487439829@sss.pgh.pa.us
In reply to | Re: [PERFORM] Correct use of cursors for very large result sets in Postgres (Mike Beaton <mjsbeaton@gmail.com>)
Responses | Re: [PERFORM] Correct use of cursors for very large result sets in Postgres
List | pgsql-performance
Mike Beaton <mjsbeaton@gmail.com> writes:
> One outstanding question I have. Based on a lot of helpful responses given
> to the SO question I can now test and see what disk buffers are generated
> (by setting `log_temp_files` to `0` and then `tail -f log`), as well as how
> long it takes for results to start arriving.

> With a large (10,000,000 row) test table, if I do `SELECT * FROM table` on
> psql it starts to return results immediately with no disk buffer. If I do
> `FETCH ALL FROM cursortotable` on psql it takes about 7.5 seconds to start
> returning results, and generates a 14MB buffer. If I do `SELECT * FROM
> table` on a correctly coded streaming client, it also starts to return
> results immediately with no disk buffer. But if I do `FETCH ALL FROM
> cursortotable` from my streaming client, it takes about 1.5 seconds for
> results to start coming... but again with no disk buffer, as hoped.

Seems odd. Is your cursor just on "SELECT * FROM table", or is there some
processing in there you're not mentioning? Maybe it's a cursor WITH HOLD
and you're exiting the source transaction?

			regards, tom lane
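For readers following the thread, here is a minimal sketch of the two setups under discussion. The table name `big_table` is hypothetical (the quoted test just says "table"); the cursor name `cursortotable` is taken from the quoted test. A WITH HOLD cursor is materialized when its creating transaction commits, which is the behaviour Tom is asking about as a possible source of the temp file:

```sql
-- Log every temporary file the backend creates, regardless of size
-- (superuser-only setting); watch the server log to see buffers appear.
SET log_temp_files = 0;

-- Plain cursor: rows can be streamed as they are fetched, but the
-- cursor only exists inside its transaction.
BEGIN;
DECLARE cursortotable CURSOR FOR SELECT * FROM big_table;
FETCH ALL FROM cursortotable;
COMMIT;

-- WITH HOLD cursor: survives COMMIT, but the whole result set is
-- materialized at commit time and may spill to a temp file
-- (the scenario Tom is asking about).
BEGIN;
DECLARE cursortotable CURSOR WITH HOLD FOR SELECT * FROM big_table;
COMMIT;                        -- materialization happens here
FETCH ALL FROM cursortotable;
CLOSE cursortotable;
```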