Re: Introducing floating point cast into filter drastically changes row estimate
From | Merlin Moncure |
---|---|
Subject | Re: Introducing floating point cast into filter drastically changes row estimate |
Date | |
Msg-id | CAHyXU0xpEWSDYiJzoaUqcHEZDE7jxS3sN3mkYY1fJ-+5P4g+ig@mail.gmail.com |
Reply to | Re: Introducing floating point cast into filter drastically changes row estimate (Merlin Moncure <mmoncure@gmail.com>) |
List | pgsql-bugs |
On Wed, Oct 24, 2012 at 5:40 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> On Wed, Oct 24, 2012 at 3:51 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> On Wed, Oct 24, 2012 at 3:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Merlin Moncure <mmoncure@gmail.com> writes:
>>>> Yeah -- I have a case where a large number of joins are happening that
>>>> have a lot of filtering based on expressions and things like that.
>>>
>>> Might be worth your while to install some indexes on those expressions,
>>> if only to trigger collection of stats about them.
>>
>> Not practical -- these expressions are all about 'outlier culling'.
>> It's just wasteful to materialize indexes for statistical purposes only.
>> Anyways, in this case, I just refactored the query into a CTE.

Apologies for blabbing, but I was wondering whether a solution to this
problem might be to have the planner identify low-cost/high-impact
scenarios that would qualify for simply running some of the stored
statistical values through qualifying stable expressions, particularly
when the input variables are constant or sourced from a single table.
Over the years the planner has become very precise in its algorithm
choice, and that makes the cost of a statistics miss increasingly
dangerous -- a trend I think is reflected in the regression reports on
-performance.

merlin
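[Archive note: a minimal sketch of the expression-index suggestion quoted above. The table and column names (measurements, raw_value) are hypothetical, not taken from the thread.]

-- A filter on a computed expression, e.g.
--   WHERE raw_value::float8 / 100.0 > 3.5
-- gets only a default selectivity estimate, because ANALYZE keeps no
-- statistics for the expression itself.
CREATE TABLE measurements (id int, raw_value int);

-- An index on the expression causes ANALYZE to gather statistics on it,
-- even if the index is never used for scans:
CREATE INDEX measurements_scaled_idx
    ON measurements ((raw_value::float8 / 100.0));

ANALYZE measurements;

-- The planner can now draw on those statistics for the row estimate here:
EXPLAIN SELECT * FROM measurements WHERE raw_value::float8 / 100.0 > 3.5;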