Re: Bug? ExecChooseHashTableSize() got assertion failed with crazy number of rows
From | David Rowley
---|---
Subject | Re: Bug? ExecChooseHashTableSize() got assertion failed with crazy number of rows
Date |
Msg-id | CAKJS1f87w-aQX+ba3TkYPY8-LOvQKtvH2vSjQDMjR2Jsi_9BGg@mail.gmail.com
In reply to | Re: Bug? ExecChooseHashTableSize() got assertion failed with crazy number of rows (Kouhei Kaigai <kaigai@ak.jp.nec.com>)
List | pgsql-hackers
On 19 August 2015 at 12:23, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
Hmm. Why could I set work_mem = '96GB' without an error?
> -----Original Message-----
> From: David Rowley [mailto:david.rowley@2ndquadrant.com]
> Sent: Wednesday, August 19, 2015 9:00 AM
> The size of your hash table is 101017630802 bytes, which is:
>
> david=# select pg_size_pretty(101017630802);
>
> pg_size_pretty
> ----------------
> 94 GB
> (1 row)
>
> david=# set work_mem = '94GB';
> ERROR: 98566144 is outside the valid range for parameter "work_mem" (64 ..
> 2097151)
>
It was set in postgresql.conf.
postgres=# SHOW work_mem;
work_mem
----------
96GB
(1 row)
> So I think the only way the following could cause an error, is if bucket_size
> was 1, which it can't be.
>
> lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
>
>
> I think one day soon we'll need to allow larger work_mem sizes, but I think there's
> lots more to do than this change.
>
I overlooked this limitation, but why can I bypass the GUC range check?
I'm unable to get the server to start if I set work_mem that big in postgresql.conf. I also tried starting the server with work_mem = '1GB' and then doing a pg_ctl reload. In each case I get the same error message that I would have gotten from SET work_mem = '96GB';
Which version are you running?
Are you sure there are no changes in guc.c to work_mem's range?
Regards
David Rowley
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services