Re: index vs. seq scan choice?
From | John D. Burger
Subject | Re: index vs. seq scan choice?
Date |
Msg-id | CB2207B5-1E40-4C9B-9ABF-49AB813BFD51@mitre.org
In reply to | Re: index vs. seq scan choice? (Steve Atkins <steve@blighty.com>)
List | pgsql-general
Steve Atkins wrote:

> Would it be possible to look at a much larger number of samples during
> analyze, then look at the variation in those to generate a reasonable
> number of pg_statistic "samples" to represent our estimate of the actual
> distribution? More datapoints for tables where the planner might benefit
> from it, fewer where it wouldn't.

You could definitely try to measure the variance of the statistics (using, say, bootstrap resampling), and change the target 'til you got a "good" tradeoff between small sample size and adequate representation of the distribution. Unfortunately, I think the definition of "good" depends strongly on the kinds of queries that get run. Basically, you want the statistics target to be just big enough that more stats wouldn't change the plans for common queries.

Remember, too, that this is not just one number; it'd be different for each column (perhaps zero for most). I could imagine hillclimbing the stats targets by storing common queries and then replaying them, while varying the sample size.

There was a discussion last year related to all of this, see:

http://archives.postgresql.org/pgsql-general/2006-10/msg00526.php

- John D. Burger
  MITRE
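For what it's worth, the bootstrap idea above can be sketched in a few lines of Python. This is a toy illustration on made-up data, not PostgreSQL's actual ANALYZE machinery: it resamples a synthetic "column" at different sample sizes and measures how much one planner-relevant statistic (a 90th-percentile histogram boundary) varies from sample to sample.

```python
import random
import statistics

def bootstrap_variance(data, stat, sample_size, n_boot=200):
    """Estimate the variance of a statistic computed on random
    samples of the given size, via bootstrap resampling."""
    estimates = [
        stat(random.choices(data, k=sample_size))
        for _ in range(n_boot)
    ]
    return statistics.pvariance(estimates)

# Toy column: a skewed (Pareto-ish) distribution, like many
# real-world columns the planner has trouble with.
random.seed(42)
column = [int(random.paretovariate(1.5)) for _ in range(100_000)]

# One statistic ANALYZE-style sampling would store: the value at
# the 90th percentile of the sample (a histogram boundary).
def p90(xs):
    return sorted(xs)[int(0.9 * len(xs))]

# Variance shrinks as the per-column sample grows; the idea would
# be to pick the smallest target whose estimates are stable enough.
for size in (300, 3_000, 30_000):
    print(size, bootstrap_variance(column, p90, size))
```

The same loop could, in principle, drive the hillclimbing mentioned above: raise a column's target until the bootstrap variance of its stored statistics stops mattering for the plans of your common queries.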