Re: Using Postgres to store high volume streams of sensor readings
From: Diego Schulz
Subject: Re: Using Postgres to store high volume streams of sensor readings
Msg-id: 47dcfe400811211226u625e96e7hd9694ef73d5aa612@mail.gmail.com
In reply to: Using Postgres to store high volume streams of sensor readings ("Ciprian Dorin Craciun" <ciprian.craciun@gmail.com>)
Responses: Re: Using Postgres to store high volume streams of sensor readings
List: pgsql-general
On Fri, Nov 21, 2008 at 9:50 AM, Ciprian Dorin Craciun <ciprian.craciun@gmail.com> wrote:
> Currently I'm benchmarking the following storage solutions for this:
>
> * Hypertable (http://www.hypertable.org/) -- which has a good insert
> rate (about 250k inserts/s), but a slow read rate (about 150k reads/s);
> (the aggregates are computed manually, as Hypertable supports no
> queries other than scanning; in fact min and max are easy, being the
> first/last key in the ordered set, but avg must be done by sequential
> scan;)
> * BerkeleyDB -- quite OK insert rate (about 50k inserts/s), but
> fabulous read rate (about 2M reads/s); (the same issue with
> aggregates;)
> * Postgres -- which behaves quite poorly (see below)...
> * MySQL -- next to be tested;
I think it'll also be interesting to see how SQLite 3 performs in this scenario. Any plans?
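For what it's worth, a minimal sketch of such a SQLite benchmark using Python's stdlib sqlite3 module -- the table schema and row counts here are illustrative assumptions, not from the original benchmark:

```python
import random
import sqlite3
import time

# Hypothetical schema mirroring the sensor-reading scenario:
# (sensor id, timestamp, value).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor INTEGER, ts INTEGER, value REAL)")

n = 100_000
rows = [(i % 100, i, random.random()) for i in range(n)]

start = time.perf_counter()
with conn:  # one transaction: per-row commits would dominate the cost
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} inserts/s")

# The aggregates the other stores had to compute by hand:
mn, mx, avg = conn.execute(
    "SELECT min(value), max(value), avg(value) FROM readings"
).fetchone()
print(mn, mx, avg)
```

Batching all inserts into a single transaction matters a lot here; SQLite's default autocommit would fsync on every row and make the numbers meaningless.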
regards
diego