Re: Odd sorting behaviour

From:        Tom Lane
Subject:     Re: Odd sorting behaviour
Date:
Msg-id:      16637.1089867158@sss.pgh.pa.us
In reply to: Re: Odd sorting behaviour (Josh Berkus <josh@agliodbs.com>)
Responses:   Re: Odd sorting behaviour
List:        pgsql-performance
Josh Berkus <josh@agliodbs.com> writes:
>> - The "subquery scan o12" phase outputs 1186 rows, yet 83792 are sorted. Where
>> do the other ~82000 rows come from?

> I'm puzzled by the "83792" rows as well.  I've a feeling that Explain
> Analyze is failing to output a step.

No, it's not missing anything.  The number being reported here is the
number of rows pulled from the plan node --- but this plan node is on the
inside of a merge join, and one of the properties of merge join is that
it will do partial rescans of its inner input in the presence of equal
keys in the outer input.  If you have, say, 10 occurrences of "42" in the
outer input, then any "42" rows in the inner input have to be rescanned
10 times.  EXPLAIN ANALYZE will count each of them as 10 rows returned by
the input node.

The large multiple here (80-to-one overscan) says that you've got a lot
of duplicate values in the outer input.  This is generally a good
situation to *not* use a mergejoin in ;-).  We do have some logic in the
planner that attempts to estimate the extra cost involved in such
rescanning, but I'm not sure how accurate the cost model is.

> Most of your time is spent in that merge join.  Why don't you try
> doubling sort_mem temporarily to see how it does?  Or even raising
> shared_buffers?

Raising shared_buffers seems unlikely to help.  I do agree with raising
sort_mem --- not so much to make the merge faster as to encourage the
thing to try a hash join instead.

			regards, tom lane
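The rescan effect Tom describes can be illustrated with a minimal Python sketch (this is not PostgreSQL source code; the function name and the simplified mark/restore logic are illustrative assumptions). It counts how many rows a merge join pulls from its sorted inner input, re-reading the matching inner run once per duplicate outer key, which is what inflates the row count EXPLAIN ANALYZE reports for the inner node:

```python
def merge_join_rows_pulled(outer, inner):
    """Count rows pulled from the sorted inner input of a merge join.

    `outer` and `inner` are sorted lists of join-key values.
    Hypothetical model: on each new outer key we skip smaller inner
    rows, mark the start of the equal-key run, and rescan that run
    from the mark for every duplicate outer key (mark/restore).
    """
    pulled = 0
    i = 0                   # current read position in inner
    prev_key = object()     # sentinel: no key seen yet
    mark = 0
    for key in outer:
        if key != prev_key:
            # Advance past inner rows with smaller keys; each is one pull.
            while i < len(inner) and inner[i] < key:
                pulled += 1
                i += 1
            mark = i        # mark the start of the equal-key run
            prev_key = key
        # Restore to the mark and rescan the whole run of equal rows.
        j = mark
        while j < len(inner) and inner[j] == key:
            pulled += 1
            j += 1
        i = j
    return pulled

# Tom's example: 10 occurrences of "42" in the outer input force the
# 3 matching inner rows to be rescanned 10 times.
print(merge_join_rows_pulled([42] * 10, [42, 42, 42]))  # 30
```

With unique outer keys the pull count equals the inner row count, so a large multiple between the two (the 80-to-one overscan in the thread) is a direct signature of heavily duplicated outer keys.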