Re: [SQL] Stats on new tables
From | Tom Lane
Subject | Re: [SQL] Stats on new tables
Date |
Msg-id | 4801.940795770@sss.pgh.pa.us
In reply to | Stats on new tables (Michael Richards <miker@scifair.acadiau.ca>)
List | pgsql-sql
Michael Richards <miker@scifair.acadiau.ca> writes:
> I think I've found something of interest.
> When I create a new table, the plan for it believes that the table
> contains 2 rows, not 0.

No, actually the initial default assumption under 6.5 & up is that the
table contains 1000 rows.  (It *used* to be that relntuples started off
zero, but that led the optimizer to pick plans that were only suitable
for very small tables, which led to horrible performance if you loaded
up a table without doing a vacuum.)

The particular example you're looking at involves a default assumption
about the selectivity of an '=' WHERE condition as well as a default
assumption about the total table size.  The 'rows' field of a plan node
shows the estimated number of *output* tuples, not the number of tuples
that need to be scanned.

> Also, since the engine knows when we run an
> insert, and also knows how many rows a delete nukes, wouldn't it be better
> to update the stats after every select and delete?

A lot easier said than done (consider concurrent transactions some of
which may abort).  We've discussed making it happen, but personally I
don't believe that the bookkeeping costs that would be incurred could
possibly be justified.  Certainly the optimizer's results wouldn't be
materially better with exact tuple counts than with approximate ones.

			regards, tom lane
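[Editor's note: the behaviour described can be observed with a short SQL session along the following lines. This is a sketch, not output from the original thread: the table name `demo` is hypothetical, and the exact estimated row counts in the EXPLAIN output vary by PostgreSQL version.]

```sql
-- A freshly created table has never been vacuumed, so the planner has
-- no real statistics and falls back on built-in defaults.
CREATE TABLE demo (id int4, val text);

-- Before any VACUUM, the 'rows' estimate comes from the default
-- table-size assumption combined with the default selectivity of the
-- '=' condition -- not from an actual count of the (empty) table.
EXPLAIN SELECT * FROM demo WHERE id = 42;

-- After loading data, VACUUM updates relntuples in pg_class, and the
-- estimates then reflect the real table size.  (On modern PostgreSQL
-- you would run ANALYZE; in the 6.5 era VACUUM gathered statistics.)
VACUUM demo;
EXPLAIN SELECT * FROM demo WHERE id = 42;
```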