Re: [GENERAL] Aggregate query on large tables
| From | Adrian Klaver |
|---|---|
| Subject | Re: [GENERAL] Aggregate query on large tables |
| Date | |
| Msg-id | 0f1cdb19-1f42-a108-35fd-22adb2fc82d1@aklaver.com |
| In reply to | [GENERAL] Aggregate query on large tables (Job <Job@colliniconsulting.it>) |
| List | pgsql-general |
On 04/09/2017 08:05 AM, Job wrote:
> Hi,
>
> I have a table with about 400 million rows and I need to build some aggregate functions for reporting.
> I noticed that query performance is slowing down, even though indexes are present.
>
> The query is simple (I'll give an example; my table is in Italian):
>
> select a, sum(b) from table where a=x and c=y group by a
>
> a is a varchar
> b is an integer
>
> x and y are the two values I use to filter results.
>
> I tried to create different indexes to try to speed up performance:
>
> index1 (a)
> index2 (c)
> index3 (a,c)
>
> I noticed, with the query planner, that the multi-field index is not used.
> PostgreSQL 9.6.1 still uses a scan without indexes.

At some point doing a scan over a table is better than using an index, as index use incurs a lookup cost. Unfortunately there is no way to determine what is actually happening in your case without more information. So:

1) The table schema (language does not matter, the names are just tags).

2) The actual query run through EXPLAIN ANALYZE.

> I obtain significant improvements only if I create a materialized view with aggregated data.
>
> But I would like to avoid - if possible - creating (and maintaining) the materialized view.
>
> Thank you!
> /F

--
Adrian Klaver
adrian.klaver@aklaver.com
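A minimal sketch of the diagnostics being asked for, using hypothetical names (mytable, a, b, c) since the real schema was not posted in the thread:

    -- Table definition: in psql, \d mytable shows columns and existing indexes.

    -- Composite index matching the WHERE clause, as described in the original post:
    CREATE INDEX index3 ON mytable (a, c);

    -- The aggregate query run through EXPLAIN ANALYZE, so the chosen plan
    -- (sequential scan vs. index scan), row estimates, and buffer usage are visible:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT a, sum(b)
    FROM   mytable
    WHERE  a = 'x'
      AND  c = 'y'
    GROUP  BY a;

And a sketch of the materialized-view workaround the poster mentions, again with assumed names:

    -- Precomputed aggregates; must be refreshed when the base table changes:
    CREATE MATERIALIZED VIEW mytable_agg AS
    SELECT a, c, sum(b) AS total_b
    FROM   mytable
    GROUP  BY a, c;

    REFRESH MATERIALIZED VIEW mytable_agg;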