Re: Adding additional index causes 20,000x slowdown for certain select queries - postgres 9.0.3
From | Kevin Grittner |
---|---|
Subject | Re: Adding additional index causes 20,000x slowdown for certain select queries - postgres 9.0.3 |
Date | |
Msg-id | 4D80A1C2020000250003B99D@gw.wicourts.gov |
In response to | Re: Adding additional index causes 20,000x slowdown for certain select queries - postgres 9.0.3 (Timothy Garnett <tgarnett@panjiva.com>) |
Responses | Re: Re: Adding additional index causes 20,000x slowdown for certain select queries - postgres 9.0.3 |
List | pgsql-performance |
Timothy Garnett <tgarnett@panjiva.com> wrote:

>>> ->  Index Scan Backward using
>>> index_customs_records_on_month_and_bl_number on customs_records
>>> (cost=0.00..78426750.74 rows=48623 width=908) (actual
>>> time=171344.182..3858893.588 rows=100 loops=1)

We've seen a lot of those lately -- Index Scan Backward performing far worse than alternatives. One part of it is that disk sectors are arranged for optimal performance on forward scans; but I don't think we've properly accounted for the higher cost of moving backward through our btree indexes, either. To quote from the README for the btree AM:

| A backwards scan has one additional bit of complexity: after
| following the left-link we must account for the possibility that
| the left sibling page got split before we could read it. So, we
| have to move right until we find a page whose right-link matches
| the page we came from. (Actually, it's even harder than that; see
| deletion discussion below.)

I'm wondering whether the planner should have some multiplier or other adjustment to attempt to approximate the known higher cost of backward scans.

-Kevin
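[A common way to sidestep the backward-scan penalty in a case like the one above (a sketch, not something proposed in this thread) is to create an explicitly descending index, so a DESC ordering can be satisfied by a forward index scan. The column names `month` and `bl_number` are inferred from the index name in the plan; the query shape is hypothetical:]

```sql
-- Hedged sketch: a descending index lets ORDER BY ... DESC be satisfied
-- by a forward index scan instead of an Index Scan Backward.
-- Columns (month, bl_number) are inferred from
-- index_customs_records_on_month_and_bl_number; the query is hypothetical.
CREATE INDEX index_customs_records_on_month_and_bl_number_desc
    ON customs_records (month DESC, bl_number DESC);

EXPLAIN ANALYZE
SELECT *
FROM customs_records
ORDER BY month DESC, bl_number DESC
LIMIT 100;
```

[Whether this helps depends on the actual query; PostgreSQL can also use an ascending index forward for ASC ordering, so a DESC index only pays off when descending order is the common access pattern.]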