Cool!!
Can you guess where the limit is?
Ten thousand is not enough, obviously, and putting 10000
more in there takes 10 minutes... even on a clean database.
That is another problem, however, because on a slower
machine it takes 13 seconds.
> Mike Mascari <mascarm@mascari.com> writes:
> > I *think* the result of assuming 0 rows in a newly created
> > table, until the next vacuum, would yield a significant
> > increase in mailing-list traffic complaints to the tune of:
> > "Why isn't PostgreSQL using my index?"
> > followed by the usual
> > "Did you run VACUUM ANALYZE?"
> > So an assumption of 1000 rows was made, with 10 rows
> > matching your WHERE clause.
>
> Yup, exactly. The initial default statistics are set (with
> malice aforethought) to provoke an indexscan. After you
> VACUUM, the optimizer knows how large the table really is
> (ie, tiny), and so it decides that looking at the index is a
> waste of time, it might as well just scan the table. Load up
> some more data, VACUUM again, and you'll probably see an
> indexscan used.
>
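
To see the effect Tom describes, something along these lines can
be tried in psql (the table, column, and index names here are
made up for illustration, not taken from the original schema):

```sql
-- Freshly created table: with no statistics yet, the planner
-- assumes 1000 rows, 10 of them matching the WHERE clause,
-- so the index looks like a win.
CREATE TABLE acc (id integer, name varchar(100));
CREATE INDEX acc_id_idx ON acc (id);

EXPLAIN SELECT * FROM acc WHERE id = 42;
-- should show: Index Scan using acc_id_idx on acc ...

-- After VACUUM ANALYZE the planner knows the table is tiny,
-- so a sequential scan is cheaper than touching the index.
VACUUM ANALYZE acc;

EXPLAIN SELECT * FROM acc WHERE id = 42;
-- should show: Seq Scan on acc ...
```

Loading a large batch of rows and running VACUUM ANALYZE again
should flip the plan back to an indexscan, as Tom notes below.
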
> > after a refresh of the database the explain yields:
> > Index Scan using xXxX (cost=0.00..8.14 rows=10 width=147)
> > after a vacuum + vacuum analyze the explain yields:
> > Seq Scan on acc xXxX (cost=0.00..1.23 rows=1 width=147)
>
> BTW, comparing those two cost numbers is pretty pointless
> since they are based on different information about the size
> of the table.
>
> regards, tom lane
>