Re: moving data between tables causes the db to overwhelm the system
From | Kevin Kempter
---|---
Subject | Re: moving data between tables causes the db to overwhelm the system
Date |
Msg-id | 200909010332.32431.kevink@consistentstate.com
In response to | Re: moving data between tables causes the db to overwhelm the system (Pierre Frédéric Caillaud <lists@peufeu.com>)
Responses | Re: moving data between tables causes the db to overwhelm the system
List | pgsql-performance
On Tuesday 01 September 2009 03:26:08 Pierre Frédéric Caillaud wrote:

> > We have a table that's > 2 billion rows big and growing fast. We've set up
> > monthly partitions for it. Upon running the first of many select * from
> > bigTable insert into partition statements (330 million rows per month) the
> > entire box eventually goes out to lunch.
> >
> > Any thoughts/suggestions?
> >
> > Thanks in advance
>
> Did you create the indexes on the partition before or after inserting the
> 330M rows into it?
> What is your hardware config, where is xlog?

Indexes are on the partitions, my bad.

Also, the HW is a Dell server with 2 quad cores and 32G of RAM. We have a Dell MD3000 disk array with an MD1000 expansion bay, 2 controllers, 2 HBAs/mount points running RAID 10.

The explain plan looks like this:

explain SELECT * from bigTable
where
  "time" >= extract ('epoch' from timestamp '2009-08-31 00:00:00')::int4
  and "time" <= extract ('epoch' from timestamp '2009-08-31 23:59:59')::int
;
                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Index Scan using bigTable_time_index on bigTable  (cost=0.00..184.04 rows=1 width=129)
   Index Cond: (("time" >= 1251676800) AND ("time" <= 1251763199))
(2 rows)
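For reference, the pattern Pierre's question points at could be sketched roughly as below: load the monthly partition with no indexes on it, then build the index once at the end, so each of the 330M inserted rows does not also pay for an index update. The partition and index names here are illustrative, not from the thread:

```sql
-- Hypothetical sketch, assuming a partition named bigTable_200908.
-- Create the monthly partition without any indexes.
CREATE TABLE bigTable_200908 (LIKE bigTable INCLUDING DEFAULTS);

-- Bulk-copy one month of rows, using the same epoch-range style
-- as the explain plan above.
INSERT INTO bigTable_200908
SELECT * FROM bigTable
WHERE "time" >= extract('epoch' FROM timestamp '2009-08-01 00:00:00')::int4
  AND "time" <  extract('epoch' FROM timestamp '2009-09-01 00:00:00')::int4;

-- Build the index only after the data is in place, then refresh stats.
CREATE INDEX bigTable_200908_time_index ON bigTable_200908 ("time");
ANALYZE bigTable_200908;
```

Building the index in one pass after the load is generally much cheaper than maintaining it row by row during a 330M-row insert.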