Re: large table problem
| From | Tom Lane |
|---|---|
| Subject | Re: large table problem |
| Date | |
| Msg-id | 22832.1177105923@sss.pgh.pa.us |
| Response to | large table problem ("Jason Nerothin" <jasonnerothin@gmail.com>) |
| List | pgsql-general |
"Jason Nerothin" <jasonnerothin@gmail.com> writes: > Attempt number 2, now underway, is to pass > LIMIT and OFFSET values to the query which Postgres handles quite > effectively as long as the OFFSET value is less than the total number of > rows in the table. When the value is greater than <num_rows>, the query > hangs for minutes. I don't actually believe the above; using successively larger offsets should get slower and slower in a smooth manner, because the only thing OFFSET does is throw away scanned rows just before they would have been returned to the client. I think you've confused yourself somehow. > the documentation suggests that cursor behavior is a little buggy for the > current postgres driver. How old a driver are you using? Because a cursor is definitely what you want to use for retrieving millions of rows. It strikes me that pgsql-jdbc might be a more suitable group of people to ask about this than the -general list ... regards, tom lane