Re: potential performance gain by query planner optimization
From | Tom Lane
---|---
Subject | Re: potential performance gain by query planner optimization
Date |
Msg-id | 10770.1280255125@sss.pgh.pa.us
In reply to | potential performance gain by query planner optimization ("Kneringer, Armin" <Armin.Kneringer@fabasoft.com>)
List | pgsql-performance
"Kneringer, Armin" <Armin.Kneringer@fabasoft.com> writes: > I think I found a potential performance gain if the query planner would be optimized. All Tests has been performed with8.4.1 (and earlier versions) on CentOS 5.3 (x64) > The following query will run on my database (~250 GB) for ca. 1600 seconds and the sort will result in a disk merge deployingca. 200 GB of data to the local disk (ca. 180.000 tmp-files) What have you got work_mem set to? It looks like you must be using an unreasonably large value, else the planner wouldn't have tried to use a hash join here: > -> Hash (cost=11917516.57..11917516.57 rows=55006045159 width=16) > -> Nested Loop (cost=0.00..11917516.57 rows=55006045159 width=16) > -> Seq Scan on atdateval t5 (cost=0.00...294152.40 rows=1859934 width=12) > Filter: (attrid = 281479288456447::bigint) > -> Index Scan using ind_ataggval on ataggval q1_1 (cost=0.00..6.20 rows=4 width=12) > Index Cond: ((q1_1.attrid = 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid)) > Filter: (q1_1.aggrid = 0) Also, please try something newer than 8.4.1 --- this might be some already-fixed bug. regards, tom lane