Re: Streaming large data into postgres [WORM like applications]
From | Ron Johnson |
---|---|
Subject | Re: Streaming large data into postgres [WORM like applications] |
Date | |
Msg-id | 46459761.7070508@cox.net |
In reply to | Re: Streaming large data into postgres [WORM like applications] ("Dhaval Shah" <dhaval.shah.m@gmail.com>) |
List | pgsql-general |
On 05/11/07 21:35, Dhaval Shah wrote:
> I do care about the following:
>
> 1. Basic type checking
> 2. Knowing failed inserts.
> 3. Non-corruption
> 4. Macro transactions. That is, a minimal read consistency.
>
> The following is not necessary:
>
> 1. Referential integrity
>
> In this particular scenario:
>
> 1. There is a sustained load and peak loads. As long as we can handle
>    the peak loads, the sustained load can be half of the quoted figure.
> 2. The row size has limited columns. That is, it spans at most a
>    dozen or so columns, mostly integer or varchar.
>
> It is more data-I/O heavy than CPU heavy.

Have you tested PG (and MySQL, for that matter) to determine what kind of
load they can handle on existing h/w?

Back to the original post: 100K inserts/second is 360 *million* inserts per
hour. That's a *lot*. Even if the steady state is 50K inserts/sec, that's
still 180M inserts/hr.

If each record is 120 bytes, that's 43 gigabytes per hour, which is
12 MB/second. No problem from a h/w standpoint. However, it will fill a
300 GB HDD in 7 hours.

--
Ron Johnson, Jr.
Jefferson LA USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!
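For anyone who wants to re-derive the numbers above, here is a minimal back-of-envelope sketch. The 100K/50K insert rates, the 120-byte row size, and the 300 GB disk are the figures quoted in the thread; the rest is plain arithmetic.

```python
# Back-of-envelope throughput arithmetic using the figures quoted in the post.

SECONDS_PER_HOUR = 3600

peak_rate = 100_000          # inserts per second at peak (from the original post)
steady_rate = 50_000         # sustained inserts per second (half of peak)
row_bytes = 120              # assumed average row size from the post
disk_bytes = 300 * 10**9     # 300 GB HDD

peak_per_hour = peak_rate * SECONDS_PER_HOUR      # 360,000,000 inserts/hr
steady_per_hour = steady_rate * SECONDS_PER_HOUR  # 180,000,000 inserts/hr

bytes_per_hour = peak_per_hour * row_bytes        # ~43.2 GB/hr
mb_per_second = peak_rate * row_bytes / 10**6     # ~12 MB/s

hours_to_fill = disk_bytes / bytes_per_hour       # ~6.9 hours

print(f"peak inserts/hour:    {peak_per_hour:,}")
print(f"steady inserts/hour:  {steady_per_hour:,}")
print(f"data rate:            {bytes_per_hour / 10**9:.1f} GB/hr ({mb_per_second:.0f} MB/s)")
print(f"300 GB disk fills in: {hours_to_fill:.1f} hours")
```

Running it reproduces the figures in the post: roughly 43 GB/hr at about 12 MB/s, filling a 300 GB disk in about 7 hours at the peak rate.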