Re: Slow "Select count(*) ..." query on table with 60 Mio. rows
From | Matthew Wakeling
---|---
Subject | Re: Slow "Select count(*) ..." query on table with 60 Mio. rows
Date |
Msg-id | alpine.DEB.2.00.1001141505300.6195@aragorn.flymine.org
Reply to | Slow "Select count(*) ..." query on table with 60 Mio. rows (tom <toabctl@googlemail.com>)
List | pgsql-performance
On Thu, 14 Jan 2010, tom wrote:
> i have a db-table "data_measurand" with about 60000000 (60 Millions)
> rows and the following query takes about 20-30 seconds (with psql):
>
> mydb=# select count(*) from data_measurand;
>   count
> ----------
>  60846187
> (1 row)

Sounds pretty reasonable to me. Looking at your table, the rows are maybe
200 bytes wide? That's 12GB of data for Postgres to munch through. 30
seconds is really rather quick for that (400MB/s). What sort of RAID array
is managing to give you that much?

> I use a software raid and LVM for Logical Volume Management. Filesystem
> is ext3

Ditch lvm.

This is an FAQ. Counting the rows in a table is an expensive operation in
Postgres. It can't be answered directly from an index. If you want, you
can keep track of the number of rows yourself with triggers, but beware
that this will slow down write access to the table.

Matthew

--
Nog: Look! They've made me into an ensign!
O'Brien: I didn't know things were going so badly.
Nog: Frightening, isn't it?
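[Editor's note: for reference, a minimal sketch of the trigger-based row
counting Matthew mentions, assuming a hypothetical single-row counter table
named data_measurand_count; this is an illustration of the general
technique, not code from the original thread.]

    -- Counter table (hypothetical name) holding one row with the current total.
    CREATE TABLE data_measurand_count (n bigint NOT NULL);
    INSERT INTO data_measurand_count SELECT count(*) FROM data_measurand;

    -- Trigger function that keeps the counter in sync on insert/delete.
    CREATE OR REPLACE FUNCTION data_measurand_count_sync() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE data_measurand_count SET n = n + 1;
            RETURN NEW;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE data_measurand_count SET n = n - 1;
            RETURN OLD;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER data_measurand_count_trig
        AFTER INSERT OR DELETE ON data_measurand
        FOR EACH ROW EXECUTE PROCEDURE data_measurand_count_sync();

    -- The count is then a cheap single-row read:
    -- SELECT n FROM data_measurand_count;

As Matthew warns, every insert and delete now also pays for an update of
the counter row, so this trades write throughput for a cheap count.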