Re: [PATCHES] Minimum selectivity estimate for LIKE 'prefix%'
From | Tom Lane
---|---
Subject | Re: [PATCHES] Minimum selectivity estimate for LIKE 'prefix%'
Date |
Msg-id | 14836.1206983761@sss.pgh.pa.us
In reply to | Re: [PATCHES] Minimum selectivity estimate for LIKE 'prefix%' (Peter Eisentraut <peter_e@gmx.net>)
List | pgsql-hackers
Peter Eisentraut <peter_e@gmx.net> writes:
> On Monday, 31 March 2008, Tom Lane wrote:
>> I've forgotten the context ... what's the whole query and plan again?
>> And which PG version exactly?

> Please see http://archives.postgresql.org/pgsql-hackers/2008-01/msg00048.php

Hm. Now that I think about it, the index scan cost estimate is made using a separate estimate of rows fetched (since this will depend on the specific index qual clauses in use, whereas the overall row estimate for the relation doesn't vary with the index). For the case at hand, the index quals that it's looking at are the >= and < clauses with close-together comparison values, and so it comes out with a rock-bottom rowcount estimate. The clamping occurring over in prefix_selectivity isn't relevant here.

Your original complaint was that the bad overall rowcount estimate was leading to a bad join plan, and that should be fixed even though the cost estimate for the indexscan itself is unrealistically small.

Changing the indexscan cost estimate would require patching the main range-constraint-estimation code in clausesel.c. I don't see any very good fix for that, since it has to deal with much more general cases than this. In particular, it doesn't really know whether it's dealing with >= or >.

			regards, tom lane
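For reference, a minimal sketch of the kind of query being discussed, using a made-up table, column, and index name (the actual query and plan are in the message linked above): a LIKE 'prefix%' qual on a btree-indexed text column is expanded by the planner into a pair of range quals with close-together bounds, and the indexscan rowcount estimate is derived from those bounds.

    -- Hypothetical schema for illustration only; the real case is in the
    -- linked thread.  text_pattern_ops (or a C-locale index) is what lets
    -- the planner turn LIKE 'foo%' into btree range quals.
    CREATE TABLE t (name text);
    CREATE INDEX t_name_idx ON t (name text_pattern_ops);

    -- The LIKE is expanded into index quals roughly equivalent to
    --   name >= 'foo' AND name < 'fop'
    -- and the estimated row count for the indexscan comes from that narrow
    -- range, independently of the clamped estimate in prefix_selectivity.
    EXPLAIN SELECT * FROM t WHERE name LIKE 'foo%';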