Re: multi billion row tables: possible or insane?
| From | John Arbash Meinel |
|---|---|
| Subject | Re: multi billion row tables: possible or insane? |
| Date | |
| Msg-id | 42249C0A.1090300@arbash-meinel.com |
| In response to | Re: multi billion row tables: possible or insane? (Markus Schaber <schabios@logi-track.com>) |
| List | pgsql-performance |
Markus Schaber wrote:

>Hi, John,
>
>John Arbash Meinel wrote:
>
>>>I am doing research for a project of mine where I need to store
>>>several billion values for a monitoring and historical tracking system
>>>for a big computer system. My current estimate is that I have to store
>>>(somehow) around 1 billion values each month (possibly more).
>>>
>>If you have that 1 billion perfectly distributed over all hours of the
>>day, then you need 1e9/30/24/3600 = 385 transactions per second.
>>
>I hope that he does not use one transaction per inserted row.
>
>In our in-house tests, we got a speedup factor of up to some hundred
>when bundling rows on insertions. The fastest speed was with bunches
>of some thousand rows per transaction, running about 5 processes
>in parallel.
>
You're right. I guess it just depends on how the data comes in, and what
you can do at the client end. That is kind of what I was saying: put a
machine in front which gathers up the information and then does a batch
update. If your client can do this directly, then you have the same
advantage.

John
=:->
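For readers who want to try the "bundle rows per transaction" approach discussed above, here is a minimal sketch using psycopg2. The table name, column names, batch size, and DSN are illustrative assumptions, not anything specified in the thread; the point is simply to commit once per batch of a few thousand rows instead of once per row.

```python
# Minimal sketch: batched inserts with one commit per batch.
# Table/column names and BATCH_SIZE are hypothetical examples.
import psycopg2
from psycopg2.extras import execute_values

BATCH_SIZE = 5000  # "bunches of some thousand rows per transaction"

def insert_batched(dsn, rows):
    """Insert an iterable of (ts, metric_id, value) tuples in large
    batches, committing once per batch instead of once per row."""
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            batch = []
            for row in rows:
                batch.append(row)
                if len(batch) >= BATCH_SIZE:
                    execute_values(
                        cur,
                        "INSERT INTO measurements (ts, metric_id, value) VALUES %s",
                        batch,
                    )
                    conn.commit()  # one commit per batch, not per row
                    batch = []
            if batch:  # flush the final partial batch
                execute_values(
                    cur,
                    "INSERT INTO measurements (ts, metric_id, value) VALUES %s",
                    batch,
                )
                conn.commit()
    finally:
        conn.close()
```

To mirror the parallelism Markus mentions, several such loader processes could run concurrently, each feeding its own connection; COPY would be faster still if the data can be staged into files first.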