browsing table with 2 million records

From: aurora
Subject: browsing table with 2 million records
Date:
Msg-id: cbd177510510261341l4ed7a214lda9d67af12f2ec21@mail.gmail.com
Responses: Re: browsing table with 2 million records  (Mark Lewis <mark.lewis@mir3.com>)
Re: browsing table with 2 million records  (Scott Marlowe <smarlowe@g2switchworks.com>)
Re: browsing table with 2 million records  ("Joshua D. Drake" <jd@commandprompt.com>)
Re: browsing table with 2 million records  (PFC <lists@boutiquenumerique.com>)
Re: browsing table with 2 million records  (Christopher Kings-Lynne <chriskl@familyhealth.com.au>)
List: pgsql-performance
I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million records (we would like to handle at least 10 million or more). It is mainly a FIFO structure, with maybe 200,000 new records coming in each day that displace the older records.

We have a GUI that lets users browse through the records page by page, about 25 records at a time. (Don't ask me why, but we have to have this GUI.) This translates to something like

  select count(*) from table   <-- to give feedback about the DB size
  select * from table order by date limit 25 offset 0

The tables seem properly indexed, with vacuum and analyze run regularly. Still, these very basic SQL queries take up to a minute to run.

I read in some recent messages that select count(*) needs a full table scan in PostgreSQL. That's disappointing, but I can accept an approximation if there is some way to get one. And how can I optimize select * from table order by date limit x offset y? A one-minute response time is not acceptable.
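For the approximate count, one common workaround (a sketch, not from the original thread) is to read the planner's row estimate from pg_class.reltuples, which VACUUM and ANALYZE keep roughly current; mytable stands in for the actual table name:

  -- Approximate row count from planner statistics; only as fresh as the
  -- last VACUUM/ANALYZE, but returned instantly with no table scan.
  select reltuples::bigint as approx_count
  from pg_class
  where relname = 'mytable';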
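For the paging query itself, a sketch of keyset (seek) pagination: OFFSET forces PostgreSQL to read and discard every row before the requested page, while remembering the last date shown lets an index on date start the scan at the right place. Again mytable is a stand-in, and the timestamp literal is a hypothetical placeholder:

  -- First page: a plain index scan on date, cheap.
  select * from mytable order by date limit 25;

  -- Subsequent pages: seek past the last date already shown, so page N
  -- costs about the same as page 1. The literal is a placeholder.
  select * from mytable
  where date > '2005-10-26 13:41:00'
  order by date
  limit 25;

If date is not unique, rows that tie at a page boundary can be skipped or repeated; adding a unique tiebreaker column (the primary key, say) to both the ORDER BY and the comparison avoids that.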

Any help would be appreciated.

Wy

