Re: optimizing advice
From | Scott Marlowe |
---|---|
Subject | Re: optimizing advice |
Date | |
Msg-id | dcc563d10912011409w47a251ao6b8fcdfb635c6c89@mail.gmail.com |
In response to | optimizing advice (Rüdiger Sörensen <r.soerensen@mpic.de>) |
Responses | Re: optimizing advice |
List | pgsql-general |
2009/12/1 Rüdiger Sörensen <r.soerensen@mpic.de>:
> dear all,
>
> I am building a database that will be really huge and grow rapidly. It holds
> data from satellite observations. Data is imported via a java application.
> The import is organized via files that are parsed by the application; each
> file holds the data of one orbit of the satellite.
> One of the tables will grow by about 40,000 rows per orbit, and there are
> roughly 13 orbits a day. The import of one day (13 orbits) into the database
> takes 10 minutes at the moment. I will have to import data back to the year
> 2000 or even older.
> I think that there will be a performance issue when the table in question
> grows, so I partitioned it using a timestamp column and one child table per
> quarter. Unfortunately, the import of 13 orbits now takes 1 hour instead of
> 10 minutes as before. I can live with that, if the import time will not
> grow significantly as the table grows further.

I'm gonna guess you're using rules instead of triggers for partitioning? Switching to triggers is a big help if you've got a large amount of data to import / store. If you need some help on writing the triggers, shout back; I had to do this to our stats db this summer and it's been much faster with triggers.
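In case it helps, here's a minimal sketch of what a trigger-based router for quarterly children could look like. The table and column names (observations, obs_time) are made up since the original post doesn't show the schema, and the EXECUTE ... USING form assumes 8.4 or later:

```sql
-- Route each row to its quarterly child table, e.g. observations_2009q4.
-- (Hypothetical names; adjust to the real parent table and timestamp column.)
CREATE OR REPLACE FUNCTION observations_partition_insert()
RETURNS trigger AS $$
BEGIN
    EXECUTE 'INSERT INTO observations_'
            || to_char(NEW.obs_time, 'YYYY') || 'q'
            || to_char(NEW.obs_time, 'Q')
            || ' SELECT ($1).*'
    USING NEW;
    RETURN NULL;  -- keep the row out of the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER observations_insert_trigger
    BEFORE INSERT ON observations
    FOR EACH ROW EXECUTE PROCEDURE observations_partition_insert();
```

With conditional rules, every INSERT against the parent gets rewritten into one statement per child, so the overhead grows with the number of partitions; a trigger just fires once per row, which is usually why switching speeds up bulk loads like yours.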