Re: multi billion row tables: possible or insane?
| From | Markus Schaber |
|---|---|
| Subject | Re: multi billion row tables: possible or insane? |
| Date | |
| Msg-id | 422475CE.9030507@logi-track.com |
| In reply to | multi billion row tables: possible or insane? (Ramon Bastiaans <bastiaans@sara.nl>) |
| List | pgsql-performance |
Hi, Ramon,

Ramon Bastiaans wrote:
> The database's performance is important. There would be no use in
> storing the data if a query will take ages. Queries should be quite
> fast if possible.

Which kind of query do you want to run? Queries that involve only a few rows should stay quite fast when you set up the right indices.

However, queries that involve sequential scans over your table (like average computation) will take ages. Get fast I/O for this. Or, better, use a multidimensional data warehouse engine. Those can precalculate the needed aggregate functions and reports, but they need loads of storage (because of very redundant data storage), and I don't know of any open source or cheap software.

Markus

--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios@logi-track.com | www.logi-track.com
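[Editor's illustration] The distinction Markus draws (indexed point lookups stay fast, whole-table aggregates must scan every row, and precomputed summary tables sidestep the scan) can be sketched in miniature. This is a hedged example using Python's built-in sqlite3 rather than PostgreSQL, and the table and column names (`readings`, `sensor_id`, `ts`, `value`) are invented for illustration, not taken from the original thread:

```python
import sqlite3

# Hypothetical measurement table standing in for the poster's
# multi-billion-row table (all names are made up for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (sensor_id INTEGER, ts INTEGER, value REAL);
CREATE INDEX readings_sensor_ts ON readings (sensor_id, ts);
""")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [(i % 10, i, float(i)) for i in range(1000)])

# A selective lookup can use the index ...
point_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT value FROM readings "
    "WHERE sensor_id = 3 AND ts = 123").fetchall()[0][3]

# ... while a whole-table aggregate must read every row.
agg_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT avg(value) FROM readings").fetchall()[0][3]

# Precomputing aggregates into a summary table (the data-warehouse idea)
# trades redundant storage for turning the big scan into a small lookup.
conn.execute("""
CREATE TABLE readings_summary AS
SELECT sensor_id, ts / 100 AS bucket, avg(value) AS avg_value
FROM readings GROUP BY sensor_id, bucket
""")
row = conn.execute(
    "SELECT avg_value FROM readings_summary "
    "WHERE sensor_id = 3 AND bucket = 1").fetchone()
```

In PostgreSQL the same tension shows up in `EXPLAIN` output as an Index Scan versus a Seq Scan; the summary-table step corresponds to the precalculated aggregates Markus attributes to warehouse engines.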