Re: SeqScan costs
From: Jeff Davis
Subject: Re: SeqScan costs
Date:
Msg-id: 1218590084.550.21.camel@dell.linuxdev.us.dell.com
In reply to: Re: SeqScan costs (Gregory Stark <stark@enterprisedb.com>)
List: pgsql-hackers
On Tue, 2008-08-12 at 23:58 +0100, Gregory Stark wrote:
> People lower random_page_cost because we're not doing a good job
> estimating how much of a table is in cache.

Is it because of a bad estimate about how much of a table is in cache, or a bad assumption about the distribution of access to a table?

If the planner were to know, for example, that 10-20% of the table is likely to be in cache, will that really make a difference in the plan? I suspect that it would mostly only matter when the entire table is cached, the correlation is low, and the query is somewhat selective (which is a possible use case, but fairly narrow).

I suspect that this has more to do with the fact that some data is naturally going to be accessed much more frequently than other data in a large table. But how do you determine, at plan time, whether the query will be mostly accessing hot data, or cold data?

Regards,
	Jeff Davis
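[A back-of-the-envelope sketch of the point above, not PostgreSQL's actual cost model: if the planner blended an assumed cached fraction into the per-page cost, a 10-20% cached fraction barely moves the effective random-page cost, so the plan would rarely change unless the table is (nearly) fully cached. The `cached_page_cost` value is an illustrative assumption.]

```python
# Hypothetical blend of cached and uncached page costs (not the real
# PostgreSQL costing code): effective cost is a weighted average of a
# cheap in-cache fetch and the full random_page_cost.

seq_page_cost = 1.0      # PostgreSQL default GUC value
random_page_cost = 4.0   # PostgreSQL default GUC value

def effective_random_cost(cached_fraction, cached_page_cost=0.01):
    """Blended per-page cost if cached_fraction of the table is in cache.

    cached_page_cost is an assumed cost for a buffer-cache hit.
    """
    return (cached_fraction * cached_page_cost
            + (1.0 - cached_fraction) * random_page_cost)

for frac in (0.0, 0.1, 0.2, 0.5, 1.0):
    print(f"cached={frac:.0%}: effective random page cost = "
          f"{effective_random_cost(frac):.2f}")
```

Under these assumptions, 20% cached only lowers the effective random page cost from 4.0 to about 3.2, still far above `seq_page_cost`, which is consistent with the suspicion that a partial-cache estimate mostly matters only near the fully-cached extreme.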