Re: Slow query - lots of temporary files.
From    | Tomas Vondra
Subject | Re: Slow query - lots of temporary files.
Date    |
Msg-id  | 55784ECA.6000406@2ndquadrant.com
In response to | Re: Slow query - lots of temporary files.  (Johann Spies <johann.spies@gmail.com>)
Responses | Re: Slow query - lots of temporary files.
List    | pgsql-performance
On 06/10/15 15:42, Johann Spies wrote:
> On 10 June 2015 at 15:02, Claudio Freire <klaussfreire@gmail.com
> <mailto:klaussfreire@gmail.com>> wrote:
>
>     The joins are different on both versions, and the most likely culprit
>     is the join against D. It's probably wrong, and the first query is
>     building a cartesian product.
>
>     Without more information about the schema it's difficult to be sure
>     though.
>
> Thanks for your reply. I will experiment further with different joins.

I don't know what you mean by "experimenting with joins" - that should be
determined by the schema. The problematic piece of the explain plan is this:

   ->  Merge Join  (cost=4384310.92..21202716.78 rows=6664163593 width=390)
         Output: a.ut, c.gt, b.go, b.gn, d.au
         Merge Cond: ((c.ut)::text = (d.rart_id)::text)

That is, the planner expects ~6.7 billion rows, each ~390B wide. That's
~2.5TB of data that needs to be stored to disk (so that the sort can
process it).

The way the schema is designed might be one of the issues - ISTM the 'ut'
column is somehow universal, mixing values referencing different columns in
multiple tables. Not only is that utterly misleading for the planner (and
may easily cause issues with huge intermediate results), but it also makes
formulating the queries very difficult. And of course, the casting between
text and int is not very good either.

Fix the schema to follow relational best practices - separate the values
into multiple columns, and most of this will go away.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
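[Editor's note: the "~2.5TB" figure in the message above follows directly from the Merge Join node's estimates. A minimal sketch of that arithmetic, using the `rows` and `width` values from the quoted plan:]

```python
# Planner estimates taken from the Merge Join node in the quoted plan:
#   rows=6664163593 width=390
rows = 6_664_163_593   # estimated number of rows produced by the join
width = 390            # estimated average row width, in bytes

total_bytes = rows * width
print(total_bytes)                          # 2599023801270 bytes
print(f"{total_bytes / 1024**4:.2f} TiB")   # roughly 2.36 TiB, i.e. ~2.5 TB

# Anything near this size vastly exceeds work_mem, so the sort feeding
# the merge join spills to temporary files on disk - which matches the
# "lots of temporary files" symptom in the thread's subject line.
```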