Re: A better way than tweaking NTUP_PER_BUCKET
From | Simon Riggs
---|---
Subject | Re: A better way than tweaking NTUP_PER_BUCKET
Date |
Msg-id | CA+U5nM+aTcGSYc=fcNFUUDiYi7Gp3EGBXHJWkq6xm-mMhQXdrQ@mail.gmail.com
In reply to | Re: A better way than tweaking NTUP_PER_BUCKET (Stephen Frost <sfrost@snowman.net>)
Responses | Re: A better way than tweaking NTUP_PER_BUCKET, Re: A better way than tweaking NTUP_PER_BUCKET
List | pgsql-hackers
On 22 June 2013 21:40, Stephen Frost <sfrost@snowman.net> wrote:
> I'm actually not a huge fan of this as it's certainly not cheap to do. If it
> can be shown to be better than an improved heuristic then perhaps it would
> work but I'm not convinced.

We need two heuristics, it would seem:

* an initial heuristic to overestimate the number of buckets when we
  have sufficient memory to do so

* a heuristic to determine whether it is cheaper to rebuild a dense
  hash table into a better one.

Although I like Heikki's rebuild approach, we can't do this on every x2
overstretch. Given that large underestimates exist, we'll end up
rehashing 5-12 times, which seems bad. Better to let the hash table
build and then re-hash once, if we can see it will be useful. OK?

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services