Re: Looks like merge join planning time is too big, 55 seconds
From: Jeff Janes
Subject: Re: Looks like merge join planning time is too big, 55 seconds
Date:
Msg-id: CAMkU=1x51iVmUcLewMUBLB3fKW9tkpfsL0iYQuXp33aTAiQVPA@mail.gmail.com
In reply to: Re: Looks like merge join planning time is too big, 55 seconds (Sergey Burladyan <eshkinkot@gmail.com>)
Responses: Re: Looks like merge join planning time is too big, 55 seconds
List: pgsql-performance
On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <eshkinkot@gmail.com> wrote:

> I also found this trace for another query:
> explain select * from xview.user_items_v v where ( v.item_id = 132358330 );
>
> If I am not mistaken, there may be two code paths like this here:
> (1) mergejoinscansel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext
> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext

Yeah, I think you are correct.

> And maybe the get_actual_variable_range() function is too expensive to
> call against my bloated table items, with its bloated index items_user_id_idx?

But why is it bloated in this way? It must be visiting many thousands of dead/invisible rows before finding the first visible one. B-tree indexes have a mechanism to remove dead tuples from the index so that they are not followed over and over again (see "kill_prior_tuple"). So is that mechanism not working, or are the tuples not dead but merely invisible (i.e. inserted by a still-open transaction)?

Cheers,

Jeff
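P.S. If you want to check both halves of that question directly, something like the queries below might help. This is only a sketch: it assumes the pgstattuple contrib extension can be installed, reuses the items / items_user_id_idx names from your earlier message, and uses the pg_stat_activity column names from 9.2 or later (older releases call them procpid / current_query).

    -- Contrib extension that reports physical dead-tuple statistics
    -- (needs superuser; 9.1+ syntax).
    CREATE EXTENSION IF NOT EXISTS pgstattuple;

    -- How many dead tuples are physically present in the table?
    SELECT dead_tuple_count, dead_tuple_percent
    FROM pgstattuple('items');

    -- How dense/fragmented are the leaf pages of the suspect index?
    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('items_user_id_idx');

    -- Any long-open transactions that could keep tuples "invisible"
    -- (and prevent kill_prior_tuple from marking the entries dead)?
    SELECT pid, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 5;

If pgstattuple shows few dead tuples but pg_stat_activity shows a transaction open for hours, that would point at the "invisible, not dead" explanation.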