Re: Re: [COMMITTERS] pgsql: Rewrite GEQO's gimme_tree function so that it always finds a
From | marcin mank
---|---
Subject | Re: Re: [COMMITTERS] pgsql: Rewrite GEQO's gimme_tree function so that it always finds a
Date |
Msg-id | b1b9fac60911271633s17fef172o145dbd157bdacab2@mail.gmail.com
In reply to | Re: Re: [COMMITTERS] pgsql: Rewrite GEQO's gimme_tree function so that it always finds a (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: Re: [COMMITTERS] pgsql: Rewrite GEQO's gimme_tree function so that it always finds a
List | pgsql-hackers
On Sat, Nov 28, 2009 at 12:04 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> It's not so much so-many-paths as so-many-join-relations that's killing
> it. I put some instrumentation into join_search_one_level to count the
> number of joinrels it was creating, and got this before getting bored:

This is pretty off-topic, but if we had some upper bound on the cost of the complete plan, we could discard pieces of the plan that already cost more. One way to get that upper bound is to generate the plan depth-first instead of the current breadth-first: instead of bottom-up dynamic programming, use memoization.

The doubt I have is that this could turn out not to be a win, because to discard a sub-plan we would have to consider the startup cost rather than the total cost, and therefore we might not discard enough sub-plans to make it worthwhile. But I thought I'd mention it anyway, in case someone has a better idea :)

Greetings
Marcin Mańk
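To make the suggestion concrete, here is a minimal, hypothetical sketch of the idea in Python: top-down (depth-first) join-order search with memoization over relation subsets, plus a branch-and-bound style prune that skips a candidate split as soon as the left subtree's cost alone exceeds the best complete plan found so far for that subset. The cost model (join cost = product of input cardinalities, fixed selectivity) is a toy stand-in, not PostgreSQL's cost model, and none of the names below correspond to actual planner functions.

```python
import itertools

def best_join_order(rel_sizes, selectivity=0.1):
    """Find the cheapest binary join tree over the given base relations.

    rel_sizes maps relation name -> row count.  Toy cost model: joining
    two inputs costs lrows * rrows; the join's output cardinality is
    lrows * rrows * selectivity.  Returns (total_cost, plan_tree).
    """
    memo = {}  # frozenset of relation names -> (cost, rows, plan)

    def search(rels):
        if len(rels) == 1:
            (r,) = rels
            return 0.0, float(rel_sizes[r]), r
        if rels in memo:
            return memo[rels]
        rel_list = sorted(rels)
        # Fixing the smallest relation on the left enumerates each
        # unordered split of the set exactly once.
        anchor, others = rel_list[0], rel_list[1:]
        best_cost, best_rows, best_plan = float('inf'), None, None
        for k in range(len(others) + 1):
            for extra in itertools.combinations(others, k):
                left = frozenset((anchor,) + extra)
                right = rels - left
                if not right:
                    continue
                lc, lrows, lplan = search(left)
                if lc >= best_cost:
                    continue  # prune: left subtree alone already too expensive
                rc, rrows, rplan = search(right)
                cost = lc + rc + lrows * rrows
                if cost < best_cost:
                    rows = max(1.0, lrows * rrows * selectivity)
                    best_cost, best_rows, best_plan = cost, rows, (lplan, rplan)
        memo[rels] = (best_cost, best_rows, best_plan)
        return memo[rels]

    cost, _rows, plan = search(frozenset(rel_sizes))
    return cost, plan

cost, plan = best_join_order({"a": 10, "b": 100, "c": 1000})
print(cost, plan)  # cheapest tree joins the two small relations first
```

The point Marcin raises shows up even in this sketch: the prune compares total costs of completed sub-plans, so it is safe here, but pruning against a bound derived from a partially built plan would have to reason about startup versus total cost, which is exactly where the win becomes doubtful.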