Re: strange estimate for number of rows

From:        Tom Lane
Subject:     Re: strange estimate for number of rows
Date:
Msg-id:      11160.1068754748@sss.pgh.pa.us
In reply to: Re: strange estimate for number of rows (Daniel Manley <dmanley@libertyrms.info>)
Responses:   Re: strange estimate for number of rows
List:        pgsql-performance
Daniel Manley <dmanley@libertyrms.info> writes:
> The product_id alone gives a difference of a million rows from estimate
> to actual, vs. the factor of 2 from the transaction_date.

You should be thinking in terms of ratios, not absolute difference. The
rows estimate for product_id doesn't look too bad to me; the one for
transaction_date is much further off (a factor of 2). Which is odd,
because the system can usually do all right on range estimates if you've
let it run an ANALYZE with enough histogram bins. Could we see the
pg_stats row for transaction_date?

> We tried a couple of scenarios with effective_cache_size=60000,
> cpu_index_tuple_cost=0.0001 and random_page_cost=2 with no change in
> the plan.

Since you need about a factor of 3 change in the cost estimate to get it
to switch plans, changing random_page_cost by a factor of 2 ain't gonna
do it (the other two numbers are second-order adjustments unlikely to
have much effect, I think). Try 1.5, or even less ... of course, you
have to keep an eye on your other queries and make sure they don't go
nuts, but from what I've heard about your hardware setup a low
random_page_cost isn't out of line with the physical realities.

			regards, tom lane
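[Editor's note: the diagnostics Tom asks for can be sketched as a psql session. The table name "transactions" and the statistics target of 100 are illustrative assumptions; the thread doesn't name the table, and the right target depends on the data.]

```sql
-- Inspect the histogram the planner currently has for transaction_date
-- (table name "transactions" is assumed, not taken from the thread).
SELECT attname, n_distinct, histogram_bounds
FROM pg_stats
WHERE tablename = 'transactions'
  AND attname = 'transaction_date';

-- Give the column more histogram bins, then gather fresh statistics.
ALTER TABLE transactions
    ALTER COLUMN transaction_date SET STATISTICS 100;
ANALYZE transactions;

-- Try a lower random_page_cost for this session only, before
-- committing to it in postgresql.conf.
SET random_page_cost = 1.5;
EXPLAIN ANALYZE
SELECT ... ;  -- re-run the problem query and see whether the plan switches
```

Setting the parameter with SET scopes the change to the current session, so other queries are unaffected while experimenting.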