Re: DBT-3 with SF=20 got failed
From:        Tomas Vondra
Subject:     Re: DBT-3 with SF=20 got failed
Date:
Msg-id:      55799C2D.3040803@2ndquadrant.com
In reply to: Re: DBT-3 with SF=20 got failed  (Jan Wieck <jan@wi3ck.info>)
Responses:   Re: DBT-3 with SF=20 got failed
             Re: DBT-3 with SF=20 got failed
List:        pgsql-hackers
Hi,

On 06/11/15 16:20, Jan Wieck wrote:
> On 06/11/2015 09:53 AM, Kouhei Kaigai wrote:
>>> curious: what was work_mem set to?
>>>
>> work_mem=48GB
>>
>> My machine mounts 256GB physical RAM.
>
> work_mem can be allocated several times per backend. Nodes like sort
> and hash_aggregate may each allocate that much. You should set
> work_mem to a fraction of physical-RAM / concurrent-connections
> depending on the complexity of your queries. 48GB does not sound
> reasonable.

That's true, but there are cases where values like this may be useful, e.g. for a particular query. We do allow such work_mem values, so I consider this failure to be a bug.

The problem probably existed in the past as well, but it was amplified by the hash join improvements I did for 9.5, which use NTUP_PER_BUCKET=1 instead of NTUP_PER_BUCKET=10. The bucket arrays are therefore much larger, and we also use much more memory than we did before.

Interestingly, the hash code checks for INT_MAX overflows in a number of places, but does not check for this ...

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
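(On the "useful for a particular query" point above: a minimal sketch of scoping a large work_mem to a single transaction rather than the whole server; the 48GB figure is the one from this thread, not a recommendation.)

    BEGIN;
    SET LOCAL work_mem = '48GB';  -- reverts automatically at COMMIT/ROLLBACK
    -- ... run the memory-hungry DBT-3 query here ...
    COMMIT;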
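(Back-of-the-envelope arithmetic for the bucket-array growth, using my own assumed numbers rather than figures from the thread: roughly 120M inner tuples at SF=20, 8-byte bucket pointers, and nbuckets rounded up to a power of two.)

    SELECT pg_size_pretty((2^27)::bigint * 8) AS ntup_per_bucket_1,   -- 2^27 >= 120M buckets -> 1 GB
           pg_size_pretty((2^24)::bigint * 8) AS ntup_per_bucket_10;  -- 2^24 >= 12M buckets  -> 128 MB

(A 1GB request is right at palloc's MaxAllocSize cap, which, if my reading of the thread is right, is the limit that the INT_MAX checks mentioned above do not cover.)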