Re: Large number of tables slow insert
From        | Matthew Wakeling
Subject     | Re: Large number of tables slow insert
Date        |
Msg-id      | alpine.DEB.1.10.0808261348190.4454@aragorn.flymine.org
In reply to | Large number of tables slow insert ("Loic Petit" <tls.wydd@free.fr>)
Responses   | Re: Large number of tables slow insert
List        | pgsql-performance
On Sat, 23 Aug 2008, Loic Petit wrote:
> I use Postgresql 8.3.1-1 to store a lot of data coming from a large amount of sensors. In order to have good
> performances on querying by timestamp on each sensor, I partitionned my measures table for each sensor. Thus I create
> a lot of tables.

As far as I can see, you are having performance problems as a direct result of this design decision, so it may be wise to reconsider. If you have an index on both the sensor identifier and the timestamp, it should perform reasonably well. It would scale a lot better with thousands of sensors too.

Matthew

--
And why do I do it that way? Because I wish to remain sane. Um, actually, maybe I should just say I don't want to be any worse than I already am.
 - Computer Science Lecturer
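A minimal sketch of the single-table design Matthew suggests, assuming a hypothetical "measures" table; the column names (sensor_id, ts, value) are illustrative and not taken from the original thread:

    -- Hypothetical schema: one table for all sensors instead of one table per sensor
    CREATE TABLE measures (
        sensor_id integer        NOT NULL,
        ts        timestamptz    NOT NULL,
        value     double precision
    );

    -- Composite index on sensor identifier plus timestamp,
    -- serving "all readings for one sensor in a time range"
    CREATE INDEX measures_sensor_ts_idx ON measures (sensor_id, ts);

    -- Typical query answered by the index above
    SELECT ts, value
    FROM measures
    WHERE sensor_id = 42
      AND ts >= '2008-08-01' AND ts < '2008-08-02'
    ORDER BY ts;

With this layout every insert targets a single table, and per-sensor time-range queries walk one composite index rather than requiring the planner to manage thousands of per-sensor partitions.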