Re: Practical impediment to supporting multiple SSL libraries
From | Greg Stark |
---|---|
Subject | Re: Practical impediment to supporting multiple SSL libraries |
Date | |
Msg-id | 87psjkf1wy.fsf@stark.xeocode.com |
In reply to | Re: Practical impediment to supporting multiple SSL libraries (Stephen Frost <sfrost@snowman.net>) |
Responses | Re: Practical impediment to supporting multiple SSL libraries |
List | pgsql-hackers |
Stephen Frost <sfrost@snowman.net> writes:
> Another thought along these lines: Perhaps a 'PQgettuple' which can be
> used to process one tuple at a time. This would be used in an ASYNC
> fashion and libpq just wouldn't read/accept more than a tuple's worth
> each time, which it could do into a fixed area (in general, for a
> variable-length field it could default to an initial size and then only
> grow it when necessary, and grow it larger than the current request by
> some amount to hopefully avoid more malloc/reallocs later).

I know DBD::Oracle uses an interface somewhat like this, but more sophisticated. It provides a buffer and Oracle fills it with as many records as it can. It is blocking by default; DBD::Oracle tries to adjust the size of the buffer to keep the network pipeline full, but if the application is slow at reading the data, the network buffers fill up and push back on the database, which blocks writing.

This is normally a good thing, though. One of the main problems with the current libpq interface is that if you have a very large result set, it flows in as fast as it can and the library buffers it *all*. If you're trying to avoid forcing the user to eat millions of records at once, you don't want to be buffering them anywhere all at once. You want a constant pipeline of records streaming out as fast as they can be processed and no faster.

--
greg
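For a concrete picture of the row-at-a-time style being discussed, here is a minimal C sketch using libpq's single-row mode (PQsetSingleRowMode), which later PostgreSQL releases provide. It is not the proposed PQgettuple interface, only an approximation of the same idea, and the connection string, query, and table name are placeholders.

```c
/* Minimal sketch: process a large result set one row at a time with
 * libpq's single-row mode, so the client never buffers the whole set.
 * Connection string, query, and column layout are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send the query asynchronously instead of waiting for the full result. */
    if (!PQsendQuery(conn, "SELECT id, payload FROM big_table"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Ask libpq to return one row per PGresult rather than buffering
     * the entire result set in client memory. */
    PQsetSingleRowMode(conn);

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        {
            /* Exactly one row; unfetched rows stay in the network
             * pipeline rather than in a client-side buffer. */
            printf("id=%s\n", PQgetvalue(res, 0, 0));
        }
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        }
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

Because rows are consumed as they arrive, a slow consumer causes the network buffers to fill and the server to block on writing, which is exactly the back-pressure behaviour described above for DBD::Oracle.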