Re: Overhauling GUCS
From:        Hakan Kocaman
Subject:     Re: Overhauling GUCS
Date:
Msg-id:      48ca23600806091417u546f3b6dm6314bc55aaa147f6@mail.gmail.com
In reply to: Re: Overhauling GUCS (Gregory Stark <stark@enterprisedb.com>)
Responses:   Re: Overhauling GUCS
List:        pgsql-hackers
On 6/9/08, Gregory Stark <stark@enterprisedb.com> wrote:
"Josh Berkus" <josh@agliodbs.com> writes:
> Where analyze does systematically fall down is with databases over 500GB in
> size, but that's not a function of d_s_t but rather of our tiny sample size.
n_distinct. For that Josh is right, we *would* need a sample size proportional
to the whole data set which would practically require us to scan the whole
table (and have a technique for summarizing the results in a nearly constant
sized data structure).
Hi,

is this (summarizing results in a constant sized data structure) something
which could be achieved by Bloom filters?
http://archives.postgresql.org/pgsql-general/2008-06/msg00076.php
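
For illustration, a minimal self-contained C sketch of that idea, assuming
nothing about the PostgreSQL sources (all names here, NBITS, bloom_add,
bloom_estimate, are made up for the example): fill a fixed-size Bloom filter
during a full scan of the column, then estimate n_distinct afterwards from
the fraction of bits still zero. This is essentially linear counting
generalized to k hash functions, so the summary stays constant-sized no
matter how large the table is:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define NBITS   (1 << 20)       /* filter size in bits: fixed at ~128 kB */
#define NHASH   2               /* number of hash functions */

static unsigned char filter[NBITS / 8];

/* seeded FNV-1a, to derive NHASH pseudo-independent hash functions */
static uint64_t
hash64(const char *s, uint64_t seed)
{
    uint64_t    h = 14695981039346656037ULL ^ seed;

    while (*s)
    {
        h ^= (unsigned char) *s++;
        h *= 1099511628211ULL;
    }
    return h;
}

static void
bloom_add(const char *val)
{
    int         i;

    for (i = 0; i < NHASH; i++)
    {
        uint64_t    bit = hash64(val, i * 0x9e3779b97f4a7c15ULL) % NBITS;

        filter[bit / 8] |= 1 << (bit % 8);
    }
}

/*
 * Estimate n_distinct from the fill ratio: with n distinct inserts the
 * expected number of zero bits is m * exp(-k*n/m), hence
 * n ~= -(m/k) * ln(zero_bits / m).
 */
static double
bloom_estimate(void)
{
    long        zeros = 0;
    long        i;

    for (i = 0; i < NBITS; i++)
        if (!(filter[i / 8] & (1 << (i % 8))))
            zeros++;
    if (zeros == 0)
        return (double) NBITS;  /* filter saturated, estimate unusable */
    return -((double) NBITS / NHASH) * log((double) zeros / NBITS);
}

int
main(void)
{
    char        buf[32];
    int         i;

    /* simulate a full-table scan: 100000 rows, 25000 distinct values */
    for (i = 0; i < 100000; i++)
    {
        snprintf(buf, sizeof(buf), "value-%d", i % 25000);
        bloom_add(buf);
    }
    printf("estimated n_distinct: %.0f (true: 25000)\n", bloom_estimate());
    return 0;
}

The catch is that the expected zero fraction is exp(-k*n/m), so the estimate
degrades quickly once the filter saturates, and m has to be sized in advance
for the largest n_distinct you want to resolve. Purpose-built distinct-count
sketches (linear counting, the LogLog family) get the same answer in less
space with known error bounds, but the Bloom filter shows the principle.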
Kind regards
Hakan Kocaman