Re: tweaking NTUP_PER_BUCKET
From:        Simon Riggs
Subject:     Re: tweaking NTUP_PER_BUCKET
Date:
Msg-id:      CA+U5nMLaNa2vWcUPeTjDKQxqLCmJTasdAtyW0mZFwybDq-GtCg@mail.gmail.com
In reply to: Re: tweaking NTUP_PER_BUCKET (Tomas Vondra <tv@fuzzy.cz>)
Responses:   Re: tweaking NTUP_PER_BUCKET
List:        pgsql-hackers
On 9 July 2014 18:54, Tomas Vondra <tv@fuzzy.cz> wrote:

> (1) size the buckets for NTUP_PER_BUCKET=1 (and use whatever number
> of batches this requires)

If we start off by assuming NTUP_PER_BUCKET = 1, how much memory does
it save to recalculate the hash buckets at NTUP_PER_BUCKET = 10
instead? Resizing sounds like it will only be useful if we only just
overflow our limit.

If we release the next version with this as a hardcoded change, my
understanding is that memory usage for hash joins will leap upwards,
even if the run time of queries is reduced. It sounds like we need
some kind of parameter to control this: "we made it faster" might not
be true if we run this on servers that are already experiencing high
memory pressure.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
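To put rough numbers on that "leap upwards", here is a minimal
back-of-envelope sketch in plain C. It is not the actual nodeHash.c
logic: the one-head-pointer-per-bucket layout and the power-of-two
rounding of the bucket count are assumptions about the hash table, and
it counts only the bucket-pointer array, since the tuple storage
itself does not change with NTUP_PER_BUCKET.

/*
 * Back-of-envelope estimate (assumed layout, not the real planner code)
 * of the bucket-pointer array footprint for a given NTUP_PER_BUCKET.
 */
#include <stdio.h>
#include <inttypes.h>

/* Round n up to the next power of two (assumed bucket-count rounding). */
static uint64_t
next_pow2(uint64_t n)
{
    uint64_t p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/* Bytes for the bucket heads: one pointer per bucket, assumed layout. */
static uint64_t
bucket_array_bytes(uint64_t ntuples, uint64_t ntup_per_bucket)
{
    /* nbuckets = ceil(ntuples / ntup_per_bucket), rounded up to 2^k */
    uint64_t nbuckets =
        next_pow2((ntuples + ntup_per_bucket - 1) / ntup_per_bucket);

    return nbuckets * sizeof(void *);
}

int
main(void)
{
    uint64_t ntuples = 10000000;    /* e.g. a 10M-row inner side */

    printf("NTUP_PER_BUCKET = 10: %" PRIu64 " MB\n",
           bucket_array_bytes(ntuples, 10) >> 20);
    printf("NTUP_PER_BUCKET = 1:  %" PRIu64 " MB\n",
           bucket_array_bytes(ntuples, 1) >> 20);
    return 0;
}

Under those assumptions, a 10M-row inner side needs about 8 MB of
bucket heads at NTUP_PER_BUCKET = 10 but about 128 MB at
NTUP_PER_BUCKET = 1, which is the kind of jump that matters on a
server already under memory pressure.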