Re: Problem with 11 M records table
| From | Scott Marlowe |
|---|---|
| Subject | Re: Problem with 11 M records table |
| Date | |
| Msg-id | dcc563d10805131102y6a840088hd96c32a507b9bfb3@mail.gmail.com |
| In reply to | Problem with 11 M records table (idc danny <idcdanny@yahoo.com>) |
| List | pgsql-performance |
On Tue, May 13, 2008 at 10:57 AM, idc danny <idcdanny@yahoo.com> wrote:
> Hi everybody,
>
> I'm fairly new to PostgreSQL and I have a problem with a query:
>
> SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET 10990000
>
> The table LockerEvents has 11 million records in it, and this query
> takes about 60 seconds to complete. Moreover, even after creating an
> index on each column in the table, EXPLAIN still shows a sequential
> scan instead of an index scan.

Yep. The way OFFSET/LIMIT works is that the server first materializes the data for OFFSET+LIMIT rows, then throws away the first OFFSET rows' worth of data. So it has to do a lot of retrieving. You're better off using something like:

select * from table where indexfield between 10000000 and 10001000 order by indexfield;

which can use an index on indexfield, as long as the amount of data is small enough, etc...
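To make the suggestion concrete, here is a minimal sketch of the keyset-style rewrite against the poster's table. The column name "EventID" is an assumption (the thread never names the key column); any indexed, monotonically increasing key would work the same way:

```sql
-- Slow: the server must generate and discard 10,990,000 rows
-- before returning the 10,000 requested ones.
SELECT * FROM "LockerEvents" LIMIT 10000 OFFSET 10990000;

-- Faster sketch, assuming a hypothetical indexed key "EventID":
-- replace the offset with an indexed range predicate, remembering
-- the last key value seen on the previous page.
SELECT *
FROM "LockerEvents"
WHERE "EventID" > 10990000      -- last key from the previous page
ORDER BY "EventID"
LIMIT 10000;
```

The second form lets the planner do an index range scan that starts directly at the requested position, so the cost is proportional to the page size rather than to the offset.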