Re: Tetra-bytes database / large indexes needs
From | Hannu Krosing |
---|---|
Subject | Re: Tetra-bytes database / large indexes needs |
Date | |
Msg-id | 1053633025.1788.30.camel@fuji.krosing.net |
In response to | Tetra-bytes database / large indexes needs (Jean-Michel POURE <jm.poure@freesurf.fr>) |
Responses | Re: Tetra-bytes database / large indexes needs |
List | pgsql-hackers |
Jean-Michel POURE wrote on Thu, 22.05.2003 at 11:38:
> Dear all,
>
> A friend of mine needs to import and query a very large amount of data, coming
> from real-time acquisition systems.

What kind of querying does he need? You could check out Telegraph for continuous queries: http://telegraph.cs.berkeley.edu/

> The database is growing fast, several
> Tetra-bytes a day.

If you mean terabytes (TB), then 1 TB/day ~= 12.7 MB/sec, just about as fast as you can write to an average IDE drive if you do nothing else, or what can come in over a 100BaseT Ethernet link. Even with IDE disks (the biggest disks available apiece), you would have to add 5 disks a day just to store the incoming 1 TB/day.

> What is the advancement of the community in the field of very large databases?
> Could you point out to me some useful information, techdocs, etc...?

no ;)

> Are there working groups, private fundings in this precise field?

dunno..

You are possibly on the verge of the impossible? If you can barely write the data down due to physical constraints (memory and bus speeds), it is very hard to also query it, unless you deploy some terribly clever techniques which extract and compress your data before it even reaches your DB ;)

Perhaps you (or your friend) are after filtering, and not queries at all?

--------------
Hannu
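The back-of-envelope figures above can be checked with a few lines of arithmetic. This is just a sketch of the calculation, assuming a binary terabyte (2^40 bytes) and a hypothetical ~200 GB top-end IDE disk, which is what makes the "5 disks a day" estimate come out:

```python
# Sanity-check the 1 TB/day figures from the mail above.
# Assumptions (not from the original mail): binary TB = 2**40 bytes,
# and a ~200 GB disk as the largest single IDE drive of the era.

BYTES_PER_TB = 2 ** 40          # 1 TiB
SECONDS_PER_DAY = 24 * 60 * 60  # 86400
DISK_BYTES = 200 * 10 ** 9      # hypothetical 200 GB drive

# Sustained ingest rate needed to absorb 1 TB/day, in decimal MB/s.
rate_mb_per_sec = BYTES_PER_TB / SECONDS_PER_DAY / 1e6

# Whole disks needed per day just to store the raw incoming data.
disks_per_day = -(-BYTES_PER_TB // DISK_BYTES)  # ceiling division

print(f"{rate_mb_per_sec:.1f} MB/s")  # ~12.7 MB/s sustained
print(f"{disks_per_day} disks/day")   # ~5-6 disks per incoming TB
```

A decimal terabyte (10^12 bytes) would give about 11.6 MB/s instead; either way the point stands that the write rate alone saturates a single commodity disk or a 100BaseT link.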