Re: Speeding up aggregates
From: Joe Conway
Subject: Re: Speeding up aggregates
Date:
Msg-id: 3DF0E825.7020400@joeconway.com
In reply to: Speeding up aggregates ("Josh Berkus" <josh@agliodbs.com>)
List: pgsql-performance
Josh Berkus wrote:
> While effective, this approach is costly in terms of update/insert
> processing. It is also limited to whatever aggregate requests you have
> anticipated ... it does no good for aggregates over a user-defined
> range. I think this is where Oracle's materialized views come into play.
>
> What have other Postgres users done to speed up aggregates on large
> tables?

I've found that in most real-life applications, expensive aggregate
queries tend to be needed for management reporting, which does not need
to be based on up-to-the-second fresh data. Normally, for these kinds of
reports, a summary through, say, last night at midnight is perfectly
adequate.

The simplest solution in these cases is to build a table that holds your
partially or completely summarized data, and then report off of that.
Use cron to refresh those summary tables at convenient times (daily,
every 2 hours, or whatever). A rough sketch of the pattern follows below.

Joe
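As an illustration, here is roughly what I mean. The table and column
names are made up for the example: assume an "orders" table with an
order_date timestamp and an amount column.

  -- one-time setup: a table holding per-day summaries
  CREATE TABLE daily_order_summary (
      sale_day   date PRIMARY KEY,
      order_cnt  bigint,
      total_amt  numeric
  );

  -- refresh_summary.sql, run from cron: rebuilds the summary
  -- for everything up through midnight of the current day
  BEGIN;
  DELETE FROM daily_order_summary;
  INSERT INTO daily_order_summary
    SELECT date_trunc('day', order_date)::date AS sale_day,
           count(*)                            AS order_cnt,
           sum(amount)                         AS total_amt
      FROM orders
     WHERE order_date < current_date
     GROUP BY 1;
  COMMIT;

  # crontab entry: refresh the summary nightly at 00:15
  15 0 * * * psql -d mydb -f /path/to/refresh_summary.sql

Your reports then select from daily_order_summary instead of scanning
and re-aggregating the big orders table on every request.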