Re: Incremental results from libpq
| From | Guy Rouillier |
|---|---|
| Subject | Re: Incremental results from libpq |
| Date | |
| Msg-id | CC1CF380F4D70844B01D45982E671B239E8C95@mtxexch01.add0.masergy.com |
| In reply to | Incremental results from libpq (Scott Lamb <slamb@slamb.org>) |
| List | pgsql-interfaces |
Peter Eisentraut wrote:
> I'm at LinuxWorld Frankfurt and one of the Trolltech guys came over
> to talk to me about this. He opined that it would be beneficial for
> their purpose (in certain cases) if the server would first compute
> the entire result set and keep it in the server memory (thus
> eliminating potential errors of the 1/x kind) and then ship it to the
> client in a way that the client would be able to fetch it piecewise.
> Then, the client application could build the display incrementally
> while the rest of the result set travels over the (slow) link. Does
> that make sense?

No. How would you handle a 6-million-row result set? You want the server to cache that? Remember, the server authors have no way to predict client code efficiency. What if a poorly written client retrieves just 10 of those rows, decides it doesn't want any more, but doesn't free up the server connection? The server would be stuck holding those 6 million rows in memory for a long time.

And readily available techniques exist for the client to handle this: have one thread reading rows from the DB and a second thread drawing the display.

--
Guy Rouillier
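P.S. The piecewise transfer being asked for is close to what a cursor already gives you, and the fetch loop could just as easily live in the producer thread suggested above. Below is a minimal sketch of that client side with libpq; the connection string, table name, cursor name, and batch size are placeholders, not anything taken from this thread.

/*
 * Sketch: fetch a large result in batches through a cursor instead of
 * asking the server to cache the whole result set up front.
 * "dbname=test", "big_table", "big_cur", and the batch size of 1000
 * are placeholder assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void die(PGconn *conn, const char *msg)
{
    fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
    PQfinish(conn);
    exit(1);
}

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
        die(conn, "connect failed");

    /* Cursors live inside a transaction. */
    res = PQexec(conn, "BEGIN");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "BEGIN failed");
    PQclear(res);

    res = PQexec(conn, "DECLARE big_cur CURSOR FOR SELECT * FROM big_table");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "DECLARE CURSOR failed");
    PQclear(res);

    for (;;)
    {
        int ntuples, i;

        /* Pull 1000 rows at a time; a display thread could consume
         * this batch while the next one is being requested. */
        res = PQexec(conn, "FETCH 1000 FROM big_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            die(conn, "FETCH failed");

        ntuples = PQntuples(res);
        if (ntuples == 0)
        {
            PQclear(res);
            break;                      /* cursor exhausted */
        }

        for (i = 0; i < ntuples; i++)
            printf("%s\n", PQgetvalue(res, i, 0));   /* "draw" the row */

        PQclear(res);
    }

    res = PQexec(conn, "CLOSE big_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}

For most plans the server keeps only the cursor's execution state between FETCHes, not a materialized copy of all six million rows, which is exactly the objection above.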