Re: A better way than tweaking NTUP_PER_BUCKET
From | Heikki Linnakangas
---|---
Subject | Re: A better way than tweaking NTUP_PER_BUCKET
Date |
Msg-id | 51C71013.2080802@vmware.com
In reply to | Re: A better way than tweaking NTUP_PER_BUCKET (Simon Riggs <simon@2ndQuadrant.com>)
Responses | Re: A better way than tweaking NTUP_PER_BUCKET
List | pgsql-hackers
On 23.06.2013 01:48, Simon Riggs wrote:
> On 22 June 2013 21:40, Stephen Frost <sfrost@snowman.net> wrote:
>
>> I'm actually not a huge fan of this as it's certainly not cheap to do. If it
>> can be shown to be better than an improved heuristic then perhaps it would
>> work but I'm not convinced.
>
> We need two heuristics, it would seem:
>
> * an initial heuristic to overestimate the number of buckets when we
>   have sufficient memory to do so
>
> * a heuristic to determine whether it is cheaper to rebuild a dense
>   hash table into a better one.
>
> Although I like Heikki's rebuild approach we can't do this every x2
> overstretch. Given large underestimates exist we'll end up rehashing
> 5-12 times, which seems bad.

It's not very expensive. The hash values of all tuples have already been
calculated, so rebuilding just means moving the tuples to the right bins.

> Better to let the hash table build and
> then re-hash once, if we can see it will be useful.

That sounds even less expensive, though.

- Heikki