Re: Fuzzy cost comparison to eliminate redundant planning
From: Tom Lane
Subject: Re: Fuzzy cost comparison to eliminate redundant planning
Date:
Msg-id: 22511.1080543292@sss.pgh.pa.us
In reply to: Re: Fuzzy cost comparison to eliminate redundant planning (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses: Re: Fuzzy cost comparison to eliminate redundant planning
List: pgsql-hackers
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Tom Lane wrote:
>> Right.  There are potentially some ranges of LIMIT for which it could
>> win, I believe.

> What if we take the total cost and divide it by the number of rows
> returned --- then we have a per-row cost for each plan.  Then we
> subtract the two, and that difference compared to the difference in
> startup costs tells us how many rows could potentially use this plan.

You're missing the point.  We are comparing plans where one has a higher
start cost and a lower total cost than the other.  For example (pardon
the crummy ASCII art):

	A
	A
	 A
	  A      B
	   A    B
	    A  B
	     AB
	      BA
	     B  A
	    B    A
	   B      A
	  B        A

where the X-axis is the percentage of tuples we expect to fetch, and the
Y-axis is the estimated cost.  We have to keep both plans since upper
levels might want to fetch different percentages depending on what plan
is being considered up there; so either A or B might be the thing to
use.

Now if we consider *three* plans:

	A
	A
	 A
	  A      B
	   A    B
	    A  B
	     AB    C
	      BA  C
	     B  CA
	    B  C  A
	   B  C    A
	  B  C      A

Here, plan B loses everywhere: to A at lower percentages and to C at
higher ones.  But I don't see how to eliminate B on the basis of
one-at-a-time comparisons.  It seems like getting rid of B would require
turning add_path into an O(N^3) instead of O(N^2) algorithm ... which
would almost certainly eat more cycles than it'd save.

			regards, tom lane