Re: Large selects handled inefficiently?
From | Jules Bean |
---|---|
Subject | Re: Large selects handled inefficiently? |
Date | |
Msg-id | 20000906093307.A31824@grommit.office.vi.net |
In reply to | Re: Large selects handled inefficiently? (Zlatko Calusic <zlatko@iskon.hr>) |
Responses | Re: Large selects handled inefficiently? |
List | pgsql-general |
On Thu, Aug 31, 2000 at 12:55:35AM +0200, Zlatko Calusic wrote:
> "Andrew Snow" <als@fl.net.au> writes:
>
> > > I believe I can work around this problem using cursors (although I
> > > don't know how well DBD::Pg copes with cursors). However, that
> > > doesn't seem right -- cursors shouldn't be needed to fetch a large query
> > > without having it all in memory at once...
>
> Yes, I have noticed that particular bad behaviour, too.
> With DBD::Pg and DBD::mysql.
>
> At the same time, DBD::Oracle, DBD::InterBase and DBD::Sybase work as
> expected. Rows are fetched with fetchrow...() functions instead of all
> being sucked up into memory at the time execute() is called.
>
> Anybody know why is that happening?

Yes. It's a defect in 'libpq', the underlying PostgreSQL client library.
This library is not capable of reading partial results - it always reads
whole ones.

> > Actually, I think that's why cursors were invented in the first place ;-) A
> > cursor is what you are using if you're not fetching all the results of a
> > query.
>
> What bothers me is different behaviour of different DBD drivers. But,
> yes, I have just subscribed to dbi-users list which is the right place
> to ask that question.

No, it's not really the DBD driver's fault. There is no (easy) way around
it, since the flaw lies in the C library it uses. If the DBD driver wished
to change behaviour, it could 'secretly' use cursors, but that would
involve parsing queries to detect selects, which might be fragile.

Jules
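The explicit-cursor workaround discussed above might look roughly like the
following with DBI and DBD::Pg. This is a minimal sketch, not code from the
original thread: the connection string, the table name bigtable, and the
batch size of 1000 are illustrative assumptions.

#!/usr/bin/perl
# Sketch: stream a large SELECT in batches through a cursor, so the
# client never has to hold the whole result set in memory at once.
use strict;
use warnings;
use DBI;

# Cursors only live inside a transaction, so turn AutoCommit off.
my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                       { AutoCommit => 0, RaiseError => 1 });

# Declare the cursor for the big query (bigtable is a placeholder).
$dbh->do('DECLARE big_cur CURSOR FOR SELECT * FROM bigtable');

# Pull rows 1000 at a time until the cursor is exhausted.
my $fetch = $dbh->prepare('FETCH 1000 FROM big_cur');
while (1) {
    $fetch->execute;
    my $got = 0;
    while (my $row = $fetch->fetchrow_arrayref) {
        $got++;
        # process @$row here; at most 1000 rows are buffered client-side
    }
    last unless $got;    # FETCH returned nothing: we are done
}

$dbh->do('CLOSE big_cur');
$dbh->commit;
$dbh->disconnect;

Note that DECLARE CURSOR only works inside a transaction block, which is
why AutoCommit is disabled above. That is part of why a driver doing this
'secretly', as Jules suggests, would be awkward: it would have to manage
transactions and detect SELECT statements on its own.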