Re: Help Me Understand Why I'm Getting a Bad Query Plan
From | Bryan Murphy
---|---
Subject | Re: Help Me Understand Why I'm Getting a Bad Query Plan
Date |
Msg-id | 7fd310d10903242043x28d24adenb729c0a77a21ce9@mail.gmail.com
In reply to | Re: Help Me Understand Why I'm Getting a Bad Query Plan (Josh Berkus <josh@agliodbs.com>)
Responses | Re: Help Me Understand Why I'm Getting a Bad Query Plan
List | pgsql-performance
On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus <josh@agliodbs.com> wrote:

> For some reason, your first post didn't make it to the list, which is why
> nobody responded.

Weird... I've been having problems with gmail and google reader all week.

>>> I've got a query on our production system that isn't choosing a good
>>> plan. I can't see why it's choosing to do a sequential scan on the
>>> ItemExperienceLog table. That table is about 800mb and has about 2.5
>>> million records. This example query only returns 4 records. I've
>>> tried upping the statistics for ItemExperienceLog.VistorId and
>>> ItemExperienceLog.ItemId to 1000 (from our default of 100) with no
>>> success.
>
> Yes, that is kind of inexplicable. For some reason, it's assigning a very
> high cost to the nestloops, which is why it wants to avoid them with a seq
> scan. Can you try lowering cpu_index_cost to 0.001 and see how that affects
> the plan?

I'm assuming you meant cpu_index_tuple_cost. I changed that to 0.001 as you
suggested, forced postgres to reload its configuration, and I'm still getting
the same execution plan.

Looking through our configuration one more time, I see that at some point I
set random_page_cost to 2.0, but I don't see any other changes to query
planner settings from their default values.

Bryan
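For reference, the adjustments discussed in this exchange can be sketched in SQL. This is a hedged sketch, not the poster's exact commands: the column names (VistorId, ItemId) are copied as they appear in the thread, and the final SELECT is a placeholder for the actual problem query, which is not shown here.

```sql
-- Raise the per-column statistics target from the default of 100 to 1000,
-- then re-gather statistics so the planner sees the new targets.
ALTER TABLE ItemExperienceLog ALTER COLUMN VistorId SET STATISTICS 1000;
ALTER TABLE ItemExperienceLog ALTER COLUMN ItemId SET STATISTICS 1000;
ANALYZE ItemExperienceLog;

-- Try the planner cost settings mentioned in the thread, for this session only,
-- so the global configuration is left untouched while experimenting.
SET cpu_index_tuple_cost = 0.001;
SET random_page_cost = 2.0;

-- Re-check the chosen plan (substitute the real query from the thread).
EXPLAIN ANALYZE SELECT * FROM ItemExperienceLog WHERE ...;
```

Using session-level SET for cost constants avoids a configuration reload while testing whether a setting changes the plan; only once a value helps is it worth persisting in postgresql.conf.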