Re: libpq custom row processing
From | Magnus Hagander
---|---
Subject | Re: libpq custom row processing
Date |
Msg-id | CABUevEzCd8bEzisjhw2TRrYp+7zC5PHz_mXQ+AL=eFqJ_qhUAA@mail.gmail.com
In reply to | Re: libpq custom row processing (Federico Di Gregorio <fog@dndg.it>)
List | psycopg
On Tue, Aug 7, 2012 at 3:25 PM, Federico Di Gregorio <fog@dndg.it> wrote:
> On 07/08/12 15:14, Marko Kreen wrote:
>> My point is that the behavior is not something completely new,
>> that no-one has seen before.
>>
>> But it's different indeed from the libpq default, so it's not something
>> psycopg can convert to using unconditionally. But as an optional feature
>> it should be quite useful.
>
> I agree. As an opt-in feature it would be quite useful for large datasets,
> but then, named cursors already cover that ground. Not that I am against
> it; I'd just like to see why:
>
> curs = conn.cursor(row_by_row=True)
>
> would be better than:
>
> curs = conn.cursor("row_by_row")
>
> Is row by row faster than fetching from a named cursor? Does it add less
> overhead? If that's the case, then it would be nice to have it as a feature
> for optimizing queries returning large datasets.

A big win would be that you don't need to keep the whole dataset in
memory, wouldn't it? As you're looping through it, you can throw away
the old results...

--
Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
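[Editorial note: the sketch below contrasts the two approaches discussed in the thread. The named-cursor part uses psycopg2's existing server-side cursor API; the `row_by_row=True` keyword at the end is the opt-in proposed here and is shown only as a hypothetical, not a released API. The connection string, query, and batch size are assumptions for illustration.]

```python
import psycopg2

# Assumed connection parameters, for illustration only.
conn = psycopg2.connect("dbname=test")

# Existing approach: a *named* (server-side) cursor. psycopg2 fetches
# rows from the backend in batches of `itersize`, so the client never
# holds the whole result set in memory at once.
with conn.cursor(name="big_query") as curs:
    curs.itersize = 2000  # rows fetched per round trip (assumed value)
    curs.execute("SELECT generate_series(1, 100000)")
    for row in curs:
        pass  # per-row processing would go here; old rows can be dropped

conn.close()

# Proposed opt-in from this thread (hypothetical API, not in psycopg2):
# libpq's row-by-row mode would stream each row as it arrives on the
# wire, giving the same memory benefit without a server-side cursor.
# curs = conn.cursor(row_by_row=True)
```

The memory argument is the same in both cases: rows are consumed and discarded as the loop advances, instead of materializing the full dataset client-side before iteration starts.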