Re: perf problem with huge table
From | Greg Smith |
---|---|
Subject | Re: perf problem with huge table |
Date | |
Msg-id | 4B736075.1080704@2ndquadrant.com |
In reply to | perf problem with huge table (rama <rama.rama@tiscali.it>) |
List | pgsql-performance |
rama wrote:
> in that way, when i need to do a query for long ranges (ie: 1 year) i just take the rows that are contained in contab_y
> if i need to do a query for a couple of days, i can go on ymd; if i need to get some data for another timeframe, i can do some cool intersections between
> the different tables using some huge (but fast) queries.
>
> Now, the matter is that this design is hard to maintain, and the tables are difficult to check

You sound like you're trying to implement something like materialized views one at a time; have you considered adopting the more general techniques used to maintain those, so that you're not doing custom development each time for the design?

http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views
http://www.pgcon.org/2008/schedule/events/69.en.html

I think that sort of approach is more practical than it would have been for you in MySQL, so maybe this wasn't on your list of possibilities before.

--
Greg Smith  2ndQuadrant  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com  www.2ndQuadrant.com
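For anyone finding this in the archives: the trigger-based maintenance that the first link describes can be sketched roughly as below. Table and column names here are made up for illustration (the original poster's schema isn't shown), and the `ON CONFLICT` form needs PostgreSQL 9.5 or later; newer versions also offer native `CREATE MATERIALIZED VIEW ... REFRESH` for the non-incremental case.

```sql
-- Hypothetical raw-event table standing in for the poster's base data
CREATE TABLE contab (
    ts     timestamptz NOT NULL,
    amount numeric     NOT NULL
);

-- Daily rollup kept current by a trigger, instead of by hand-built jobs
CREATE TABLE contab_daily (
    day   date PRIMARY KEY,
    total numeric NOT NULL
);

CREATE FUNCTION contab_daily_maintain() RETURNS trigger AS $$
BEGIN
    -- Fold each new row into its day's bucket; create the bucket if missing
    INSERT INTO contab_daily (day, total)
    VALUES (date_trunc('day', NEW.ts)::date, NEW.amount)
    ON CONFLICT (day) DO UPDATE
        SET total = contab_daily.total + EXCLUDED.total;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER contab_daily_trg
    AFTER INSERT ON contab
    FOR EACH ROW EXECUTE PROCEDURE contab_daily_maintain();
```

A year-range query then reads `contab_daily` (or a similarly maintained monthly/yearly rollup) instead of scanning the base table, and the summaries stay consistent without manual checking. A production version would also handle UPDATE and DELETE on the base table.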