Re: An "obvious" index not being used
From: Tom Lane
Subject: Re: An "obvious" index not being used
Date:
Msg-id: 10923.1213839814@sss.pgh.pa.us
In reply to: An "obvious" index not being used (Daniele Varrazzo <piro@develer.com>)
Responses: Re: An "obvious" index not being used
List: pgsql-performance
Daniele Varrazzo <piro@develer.com> writes:
> There is an index in the field "foo.account_id" but is not used. The resulting
> query plan is:

>    Aggregate  (cost=300940.70..300940.71 rows=1 width=0) (actual
> time=13412.088..13412.089 rows=1 loops=1)
>      ->  Hash IN Join  (cost=11.97..299858.32 rows=432953 width=0) (actual
> time=0.678..13307.074 rows=92790 loops=1)
>            Hash Cond: (foo.account_id = accounts.id)
>            ->  Seq Scan on foo  (cost=0.00..275591.14 rows=5313514 width=4)
> (actual time=0.014..7163.538 rows=5313514 loops=1)

Well, if the estimate of 432953 rows selected were correct, it'd be right
not to use the index.  Fetching one row in ten is not a chore for an
indexscan.  (I'm not sure it'd prefer an indexscan even with an accurate
92K-row estimate, but at least you'd be in the realm where tweaking
random_page_cost would make a difference.)

I'm not sure why that estimate is so bad, given that you said you increased
the stats target on the table.  Is there anything particularly skewed about
the distribution of the account IDs?

			regards, tom lane
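For readers landing on this thread later, a minimal sketch of the checks being
discussed. The table and column names are taken from the quoted plan; the
statistics target of 1000 and random_page_cost of 2 are example values only,
and the final query is just a guess at the shape of the original one:

    -- Current per-column statistics target (-1 means "use the default")
    SELECT attstattarget
      FROM pg_attribute
     WHERE attrelid = 'foo'::regclass AND attname = 'account_id';

    -- Raise it and re-analyze so the planner gets a finer MCV list / histogram
    ALTER TABLE foo ALTER COLUMN account_id SET STATISTICS 1000;
    ANALYZE foo;

    -- Look for skew: a handful of account IDs owning most of the rows
    -- will show up in the most-common-values list
    SELECT n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'foo' AND attname = 'account_id';

    -- If the row estimate becomes accurate and a seqscan is still chosen,
    -- experiment with random_page_cost in the session before changing it globally
    SET random_page_cost = 2;
    EXPLAIN ANALYZE
    SELECT count(*) FROM foo WHERE account_id IN (SELECT id FROM accounts);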