Re: Index Scans become Seq Scans after VACUUM ANALYSE
From: Tom Lane
Subject: Re: Index Scans become Seq Scans after VACUUM ANALYSE
Msg-id: 6991.1019059828@sss.pgh.pa.us
In reply to: Re: Index Scans become Seq Scans after VACUUM ANALYSE (mlw <markw@mohawksoft.com>)
List: pgsql-hackers
mlw <markw@mohawksoft.com> writes:
> On borderline conditions, wrongly using an index does not result in as bad
> performance as wrongly not using an index,

You're arguing from a completely false premise. It might be true on the particular cases you've looked at, but in general an indexscan-based plan can be many times worse than a seqscan plan. In particular this is likely to hold when the plan has to access most or all of the table.

I still remember the first time I got my nose rubbed in this unfortunate fact. I had spent a lot of work improving the planner's handling of sort ordering, to the point where it could use an indexscan in place of seqscan-and-sort to handle ORDER BY queries. I proudly committed it, and immediately got complaints that ORDER BY was slower than before on large tables. Considering how slow a large sort operation is, that should give you pause.

As for "borderline conditions", how is the planner supposed to know what is borderline? I cannot see any rational justification for putting a thumb on the scales on the side of indexscan (or any other specific plan type) as you've proposed. Thomas correctly points out that you'll just move the planner failures from one area to another.

If we can identify a reason why the planner tends to overestimate the costs of indexscan vs seqscan, by all means let's fix that. But let's not derive cost estimates that are the best we know how to make and then ignore them.

			regards, tom lane
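[Editor's note: the back-of-the-envelope arithmetic behind Tom's point can be sketched as follows. This is a deliberately crude approximation, not PostgreSQL's actual costing code; the constant names mirror the real planner GUCs (seq_page_cost, random_page_cost, cpu_tuple_cost) at their historical defaults, and the hypothetical table sizes are made up for illustration.]

```python
# Rough sketch of why an index scan can lose badly once a query
# touches most of the table: a seqscan pays sequential-page cost
# once per page, while a naive indexscan may pay a random-page
# fetch per matching row.

SEQ_PAGE_COST = 1.0      # cost to read one page sequentially
RANDOM_PAGE_COST = 4.0   # cost to read one page at random
CPU_TUPLE_COST = 0.01    # cost to process one row

def seqscan_cost(pages, rows):
    # Read every page in order, examine every row.
    return pages * SEQ_PAGE_COST + rows * CPU_TUPLE_COST

def indexscan_cost(rows, selectivity):
    # Crude worst-case model: each matching row triggers
    # one random heap-page fetch.
    matching = rows * selectivity
    return matching * RANDOM_PAGE_COST + matching * CPU_TUPLE_COST

# Hypothetical table: 10,000 pages, 1,000,000 rows.
pages, rows = 10_000, 1_000_000
print(f"seqscan: {seqscan_cost(pages, rows):.0f}")
for sel in (0.001, 0.1, 0.9):
    print(f"indexscan @ {sel:5.3f}: {indexscan_cost(rows, sel):.0f}")
```

At 0.1% selectivity the indexscan wins easily, but at 90% selectivity it is over a hundred times more expensive than the seqscan under this model, which is exactly the "many times worse" regime described above.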