Re: [PERFORM] Slow query: bitmap scan troubles
From:        Tom Lane
Subject:     Re: [PERFORM] Slow query: bitmap scan troubles
Date:
Msg-id:      23869.1358184197@sss.pgh.pa.us
In reply to: Re: [PERFORM] Slow query: bitmap scan troubles  (Robert Haas <robertmhaas@gmail.com>)
Responses:   Re: [PERFORM] Slow query: bitmap scan troubles
List:        pgsql-hackers
Robert Haas <robertmhaas@gmail.com> writes:
> I'm not sure I have anything intelligent to add to this conversation -
> does that make me the wisest of all the Greeks? - but I do think it
> worth mentioning that I have heard occasional reports within EDB of
> the query planner refusing to use extremely large indexes no matter
> how large a hammer was applied.  I have never been able to obtain
> enough details to understand the parameters of the problem, let alone
> reproduce it, but I thought it might be worth mentioning anyway in
> case it's both real and related to the case at hand.  Basically I
> guess that boils down to: it would be good to consider whether the
> costing model is correct for an index of, say, 1TB.

Well, see the cost curves at
http://www.postgresql.org/message-id/13967.1357866454@sss.pgh.pa.us

The old code definitely had an unreasonably large charge for indexes
exceeding 1e8 or so tuples.  This wouldn't matter that much for simple
single-table lookup queries, but I could easily see it putting the
kibosh on uses of an index on the inside of a nestloop.

It's possible that the new code goes too far in the other direction:
we're now effectively assuming that all inner btree pages stay in cache
no matter how large the index is.  At some point it'd likely be
appropriate to start throwing in some random_page_cost charges for
inner pages beyond the third/fourth/fifth(?) level, as Simon speculated
about upthread.  But I thought we could let that go until we start
seeing complaints traceable to it.

			regards, tom lane
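To make the costing idea above concrete, here is a rough, self-contained C
sketch of a descent-cost model along the lines Tom describes: binary-search
CPU charges for every level of the descent, plus random_page_cost charges
only for inner levels beyond the first few that are assumed to stay cached.
This is not the PostgreSQL source; the fanout of 256, the cutoff of four
cached levels, the cost constants, and the name descent_cost_sketch are all
assumptions chosen purely for illustration.

#include <math.h>
#include <stdio.h>

static double
descent_cost_sketch(double index_tuples,
                    double random_page_cost,
                    double cpu_operator_cost)
{
    /* Assume roughly 256 keys per inner page, so depth grows with log256(N). */
    double  fanout = 256.0;
    double  levels = ceil(log(index_tuples) / log(fanout));

    /* CPU cost of binary-searching each page on the descent path. */
    double  cost = levels * ceil(log2(fanout)) * cpu_operator_cost;

    /*
     * Assume the first few inner levels stay in cache; only deeper levels
     * (if any) pay a random_page_cost charge.  The cutoff of 4 is an
     * arbitrary stand-in for the "third/fourth/fifth(?)" level mentioned
     * above.
     */
    double  cached_levels = 4.0;

    if (levels > cached_levels)
        cost += (levels - cached_levels) * random_page_cost;

    return cost;
}

int
main(void)
{
    /* Compare a modest index with a very large one. */
    printf("1e6 tuples:  %.2f\n", descent_cost_sketch(1e6, 4.0, 0.0025));
    printf("1e10 tuples: %.2f\n", descent_cost_sketch(1e10, 4.0, 0.0025));
    return 0;
}

With these made-up constants, a 1e6-tuple index fits within the cached
levels and incurs only the small CPU charge, while a 1e10-tuple index
picks up an extra random_page_cost charge for the one level beyond the
assumed-cached ones.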