Re: DBT-3 with SF=20 got failed
From | Kohei KaiGai |
---|---|
Subject | Re: DBT-3 with SF=20 got failed |
Date | |
Msg-id | CADyhKSX2dg4EGUVGiZ1nyJx0-0hPHQKEv3y_WsrpuCWJnK-LSw@mail.gmail.com |
In reply to | Re: DBT-3 with SF=20 got failed (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
List | pgsql-hackers |
2015-06-11 23:33 GMT+09:00 Tomas Vondra <tomas.vondra@2ndquadrant.com>:
> Hi,
>
> On 06/11/15 16:20, Jan Wieck wrote:
>>
>> On 06/11/2015 09:53 AM, Kouhei Kaigai wrote:
>>>>
>>>> curious: what was work_mem set to?
>>>>
>>> work_mem=48GB
>>>
>>> My machine has 256GB of physical RAM.
>>
>>
>> work_mem can be allocated several times per backend. Nodes like sort
>> and hash_aggregate may each allocate that much. You should set
>> work_mem to a fraction of physical-RAM / concurrent-connections
>> depending on the complexity of your queries. 48GB does not sound
>> reasonable.
>
>
> That's true, but there are cases where values like this may be useful (e.g.
> for a particular query). We do allow such work_mem values, so I consider
> this failure to be a bug.
>
> It probably existed in the past, but was amplified by the hash join
> improvements I did for 9.5, because that uses NTUP_PER_BUCKET=1 instead of
> NTUP_PER_BUCKET=10. So the arrays of buckets are much larger, and we also
> use much more memory than we did in the past.
>
> Interestingly, the hash code checks for INT_MAX overflows in a number of
> places, but does not check for this...
>
Which number should be changed in this case?

Indeed, nbuckets is declared as int, so INT_MAX is the hard limit on the
number of hash slots. However, some extreme usage can easily create a
situation where we hit this restriction.

Should we change nbuckets to a long int?
--
KaiGai Kohei <kaigai@kaigai.gr.jp>
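For a rough sense of the numbers involved, here is a minimal standalone sketch. It is not the actual PostgreSQL hash-join code; the 16-byte per-tuple footprint and the arithmetic below are illustrative assumptions. It shows how a 48GB work_mem combined with NTUP_PER_BUCKET=1 can push the desired bucket count past INT_MAX, which is why an int-typed nbuckets becomes the limiting factor being discussed.

```c
/*
 * Illustrative sketch only (not PostgreSQL source): estimate the desired
 * hash bucket count for a large work_mem and compare it against INT_MAX.
 */
#include <stdio.h>
#include <stdint.h>
#include <limits.h>

#define NTUP_PER_BUCKET 1   /* the 9.5 hash-join value mentioned in the thread */

int
main(void)
{
    int64_t work_mem_bytes = 48LL * 1024 * 1024 * 1024; /* work_mem = 48GB */
    int64_t tuple_width    = 16;  /* assumed per-tuple footprint, in bytes */

    /* How many tuples the in-memory hash table is expected to hold. */
    int64_t ntuples = work_mem_bytes / tuple_width;

    /*
     * With one tuple per bucket the desired bucket count equals ntuples.
     * Doing the math in 64-bit arithmetic keeps the true value visible
     * instead of overflowing an int.
     */
    int64_t nbuckets_wide = ntuples / NTUP_PER_BUCKET;

    printf("desired nbuckets = %lld\n", (long long) nbuckets_wide);

    if (nbuckets_wide > INT_MAX)
        printf("exceeds INT_MAX (%d): an int nbuckets cannot represent it\n",
               INT_MAX);

    return 0;
}
```

Under these assumptions the desired bucket count comes out to about 3.2 billion, well past INT_MAX (2,147,483,647), so either the bucket count must be clamped or the variable widened, which is the choice KaiGai is asking about.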