Re: Shouldn't we have a way to avoid "risky" plans?
From: Claudio Freire
Subject: Re: Shouldn't we have a way to avoid "risky" plans?
Date:
Msg-id: AANLkTi=GAz7gFBCoXQeRDN3PUA0fXxRyP_=DxS4Y1tJU@mail.gmail.com
In reply to: Shouldn't we have a way to avoid "risky" plans? (Josh Berkus <josh@agliodbs.com>)
Responses: Re: Shouldn't we have a way to avoid "risky" plans?
List: pgsql-performance
On Wed, Mar 23, 2011 at 2:12 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Folks,
>
> ...
> It really seems like we should be able to detect an obvious high-risk
> situation like this one. Or maybe we're just being too optimistic about
> discarding subplans?

Why not let the GEQO learn from past mistakes? If a post-mortem analysis of queries could somehow be done and accounted for, these kinds of mistakes would be a one-time occurrence.

Ideas:

* estimate cost only when there's no past experience to draw on
* if rowcount estimates miss by much, populate a correction cache with extra (volatile, i.e. in shared memory) statistics
* or, if rowcount estimates miss by much, schedule an autoanalyze
* consider plan bailout: execute a tempting plan, and if it takes too long or its effective cost rises well above the expected cost, bail to a safer plan
* account for worst-case performance when evaluating plans (rough sketch below)
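To make that last idea a bit more concrete, here is a minimal sketch (plain standalone C, not PostgreSQL planner code; the plan names, costs and the risk_aversion knob are all made up for illustration): rank candidate plans by expected cost plus a penalty proportional to how far the worst case can drift above the estimate, so a "tempting" nested loop loses to a boring hash join when its downside is catastrophic.

/*
 * Hypothetical illustration of risk-adjusted plan choice, NOT real
 * PostgreSQL planner code.  Each candidate plan carries an expected
 * cost and a pessimistic worst-case cost; we pick the plan that
 * minimizes  expected + risk_aversion * (worst_case - expected).
 */
#include <stdio.h>

typedef struct
{
    const char *name;
    double      expected_cost;   /* the optimizer's usual estimate */
    double      worst_case_cost; /* e.g. cost if the rowcount estimate is badly off */
} CandidatePlan;

static double
risk_adjusted_cost(const CandidatePlan *p, double risk_aversion)
{
    return p->expected_cost +
           risk_aversion * (p->worst_case_cost - p->expected_cost);
}

int
main(void)
{
    CandidatePlan plans[] = {
        /* tempting plan: great if the estimates hold, terrible otherwise */
        {"nestloop", 100.0, 100000.0},
        /* safe plan: slower on paper, but insensitive to misestimates */
        {"hashjoin", 300.0, 600.0},
    };
    double      risk_aversion = 0.01;   /* tunable knob; 0 = today's behaviour */
    const CandidatePlan *best = NULL;

    for (int i = 0; i < 2; i++)
    {
        double      cost = risk_adjusted_cost(&plans[i], risk_aversion);

        printf("%s: expected=%.0f worst=%.0f adjusted=%.0f\n",
               plans[i].name, plans[i].expected_cost,
               plans[i].worst_case_cost, cost);
        if (best == NULL || cost < risk_adjusted_cost(best, risk_aversion))
            best = &plans[i];
    }
    printf("chosen plan: %s\n", best->name);
    return 0;
}

With risk_aversion at zero this degenerates to the current pick-the-cheapest-estimate behaviour; turning it up makes the planner increasingly prefer plans whose cost doesn't blow up when the estimates are wrong.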