Large # of Tables, Getting ready for the enterprise
| From | carl garland |
|---|---|
| Subject | Large # of Tables, Getting ready for the enterprise |
| Date | |
| Msg-id | F138sxpTljBOh8s4KWi00002b1a@hotmail.com |
| Responses | Re: Large # of Tables, Getting ready for the enterprise<br>Re: Large # of Tables, Getting ready for the enterprise |
| List | pgsql-hackers |
As Postgres becomes better and more in the spotlight, there are a couple of issues the hackers group might want to address to better prepare it for the enterprise and for high-end production systems. Postgres currently supports an incredible number of tables, whereas InterBase only supports 64K, but the efficiency and performance of the pg backend degrade quickly past about 1000 tables. I know most people will assume the filesystem is the bottleneck, but as XFS nears completion the problem will shift back to pg. It is my understanding that lookups in the system tables are always done sequentially, without any more optimized (btree etc.) access path. I suspect the same applies to TOASTable objects when a large # of objects exists. (A rough way to observe the degradation is sketched after this message.)

I want to start looking at the code to maybe help out, but have a few questions:

1) When a table is referenced, is it looked up only once and then cached, or does a scan of the system tables occur once per session?
2) Which files in the tree should I look at?
3) Any tips, suggestions, or pitfalls I should keep in mind?

Thanx for the pointers,
Carl Garland
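[Editor's note: a minimal reproduction sketch for the many-tables claim above, not from the original post. It assumes a local PostgreSQL server reachable as `dbname=test` and the psycopg2 driver; the `bench_t<i>` table names and the counts are illustrative. It creates an increasing number of tables and times first-time references to a sample of them, which is roughly where per-table catalog lookup cost would show up.]

```python
# Sketch: measure how first-time table references scale with catalog size.
# Assumptions (hypothetical, not from the original post): a local server
# with dbname=test, and permission to create throwaway bench_t<i> tables.
import time
import psycopg2

conn = psycopg2.connect("dbname=test")  # assumed DSN
conn.autocommit = True
cur = conn.cursor()

for n in (100, 1000, 5000):
    # Create tables until the catalog holds n bench_t* entries.
    cur.execute("SELECT count(*) FROM pg_class WHERE relname LIKE 'bench_t%'")
    existing = cur.fetchone()[0]
    for i in range(existing, n):
        cur.execute(f"CREATE TABLE bench_t{i} (id int)")

    # Time first references to ~100 distinct tables; each first reference
    # in a session requires a fresh system-catalog lookup.
    start = time.perf_counter()
    for i in range(0, n, max(1, n // 100)):
        cur.execute(f"SELECT * FROM bench_t{i} LIMIT 0")
    elapsed = time.perf_counter() - start
    print(f"{n} tables: {elapsed:.3f}s for sampled first references")

cur.close()
conn.close()
```

Note that repeating the timed loop within the same session should be much faster if lookups are cached per backend, which bears directly on question 1.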