Re: Risk Estimation WAS: Planner hints in Postgresql
From | Robert Haas
---|---
Subject | Re: Risk Estimation WAS: Planner hints in Postgresql
Date |
Msg-id | CA+TgmobayUB_7yu95fQVuMqZEP_5cOKkc5KJo1O9kY36K5Nbig@mail.gmail.com
In reply to | Re: Risk Estimation WAS: Planner hints in Postgresql (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
On Thu, Mar 20, 2014 at 10:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> So you might think that the problem here is that we're assuming
>> uniform density.  Let's say there are a million rows in the table, and
>> there are 100 that match our criteria, so the first one is going to
>> happen 1/10,000'th of the way through the table.  Thus we set SC =
>> 0.0001 * TC, and that turns out to be an underestimate if the
>> distribution isn't as favorable as we're hoping.  However, that is NOT
>> what we are doing.  What we are doing is setting SC = 0.  I mean, not
>> quite 0, but yeah, effectively 0.  Essentially we're assuming that no
>> matter how selective the filter condition may be, we assume that it
>> will match *the very first row*.
>
> I think this is wrong.  Yeah, the SC may be 0 or near it, but the time to
> fetch the first tuple is estimated as SC + (TC-SC)/N.

Hmm, you're right, and experimentation confirms that the total cost of
the limit comes out to about TC/selectivity.  So scratch that theory.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
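[The interpolation Tom describes can be sketched numerically. This is an illustrative model of the formula quoted above, not PostgreSQL's actual costing code; the cost figures below are made up.]

```python
# Sketch of the fractional-cost interpolation Tom cites:
# the cost to fetch the first k of N output rows is taken as
#   SC + (TC - SC) * k / N
# where SC = startup cost, TC = total cost, N = estimated output rows.

def fetch_cost(sc: float, tc: float, n_rows: float, k: float) -> float:
    """Estimated cost to retrieve the first k of n_rows output rows."""
    return sc + (tc - sc) * (k / n_rows)

# Hypothetical scan: 1,000,000 rows, 100 surviving the filter.
sc, tc = 0.0, 17500.0   # made-up startup and total costs
n_out = 100             # estimated rows out of the filter

# Even with SC effectively 0, LIMIT 1 is not costed at ~0:
print(fetch_cost(sc, tc, n_out, 1))    # TC/100 = 175.0
print(fetch_cost(sc, tc, n_out, 100))  # full output = TC = 17500.0
```

So a near-zero SC does not mean the planner assumes the first matching row is free; the first-row cost still scales with TC divided by the estimated output row count, which is the point being conceded in the reply.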