Re: Query with large number of joins
From | Tom Lane
---|---
Subject | Re: Query with large number of joins
Date |
Msg-id | 24746.1413849597@sss.pgh.pa.us
In reply to | Query with large number of joins (Marco Di Cesare <Marco.DiCesare@pointclickcare.com>)
Replies | Re: Query with large number of joins
List | pgsql-performance
Marco Di Cesare <Marco.DiCesare@pointclickcare.com> writes:
> We are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins the Postgres query planner can't possibly use an exhaustive search, so it drops into a heuristics algorithm. Unfortunately, the query runs quite slow (~35 seconds) and seems to ignore using primary keys and indexes where available.
> Query plan here (sorry, had to anonymize):
> http://explain.depesz.com/s/Uml

It's difficult to make any detailed comments when you've shown us only an allegedly-bad query plan, and not either the query itself or the table definitions.

However, it appears to me that the query plan is aggregating over a rather large number of join rows, and there are very few constraints that would allow eliminating rows. So I'm not at all sure there is a significantly better plan available. Are you claiming this query was instantaneous on SQL Server?

The only thing that jumps out at me as possibly improvable is that with a further increase in work_mem, you could probably get it to change the last aggregation step from Sort+GroupAggregate into HashAggregate, which'd likely run faster ... assuming you can spare some more memory.

			regards, tom lane
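[Editor's note: the planner behavior and the work_mem suggestion discussed above can be experimented with per-session. The settings below are real PostgreSQL GUCs; the specific values are illustrative guesses, not recommendations from the thread.]

```sql
-- Raise the threshold below which the planner still does an exhaustive
-- (dynamic-programming) join search instead of falling back to GEQO
-- heuristics. Defaults are geqo_threshold = 12, join_collapse_limit = 8.
SET geqo_threshold = 20;
SET join_collapse_limit = 20;
SET from_collapse_limit = 20;

-- Give the final aggregation step enough memory that the planner can
-- choose HashAggregate instead of Sort + GroupAggregate.
SET work_mem = '256MB';

-- Re-check the plan for the BI-generated query (query text elided).
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;
```

Note that SET only affects the current session, so this is safe for testing before committing anything to postgresql.conf; also, work_mem is allocated per sort/hash node, so large values multiply across a many-join plan.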