Re: Optimizer use of index slows down query by factor
From | Tom Lane
---|---
Subject | Re: Optimizer use of index slows down query by factor
Date |
Msg-id | 381.1261669585@sss.pgh.pa.us
In reply to | Optimizer use of index slows down query by factor (Michael Ruf <mrf@inxmail.de>)
Responses | Re: Optimizer use of index slows down query by factor
List | pgsql-performance
Michael Ruf <mrf@inxmail.de> writes:
> we experience some strange performance problems, we've already found a
> workaround for us, but are curious if it's a known problem of the
> optimizer.

I think you need to see about getting this rowcount estimate to be more
accurate:

>    ->  Index Scan using idx_link_1 on link
>        (cost=0.00..680.51 rows=13477 width=26) (actual time=5.707..12.043
>        rows=126 loops=1)
>          Index Cond: (task_id = 1556)
>          Filter: (((deletable IS NULL) OR (NOT deletable)) AND
>                  ((link_type = 8) OR (link_type = 9)))

If it realized there'd be only 126 rows out of that scan, it'd probably
have gone for a nestloop join against the big table, which I think would
be noticeably faster than either of the plans you show here.

You already did crank up default_statistics_target, so I'm not sure if
raising it further would help any.  What I'd suggest is trying to avoid
using non-independent AND/OR conditions.  For instance, recasting the
first OR as just "deletable is not true" would probably result in a
better estimate.  The size of the error seems to be more than that would
account for, though, so I suspect that the deletable and link_type
conditions are interdependent.  Is it practical to recast your data
representation to avoid that?

			regards, tom lane
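
As a rough illustration of the suggested recast, a minimal sketch assuming a standalone query against the link table with the same filter as in the plan above (the join to the big table and the real select list are omitted):

    -- Original filter, which the planner estimates as independent clauses:
    --   ((deletable IS NULL) OR (NOT deletable)) AND ((link_type = 8) OR (link_type = 9))
    -- Recasting the first OR as a single predicate:
    SELECT *
    FROM link
    WHERE task_id = 1556
      AND deletable IS NOT TRUE      -- matches both NULL and false in one clause
      AND link_type IN (8, 9);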