Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker

From	Thomas Munro
Subject	Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker
Date
Msg-id	CAEepm=1BYtYQSBJ7c=LrePT52Y5D+BeFjjHWOx3ACHDJJHBAig@mail.gmail.com
In reply to	BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker  (PG Bug reporting form <noreply@postgresql.org>)
Responses	Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker  (Frits Jalvingh <jal@etc.to>)
List	pgsql-bugs
On Sun, Jun 3, 2018 at 10:13 PM, PG Bug reporting form
<noreply@postgresql.org> wrote:
> The following bug has been logged on the website:
>
> Bug reference:      15225
> Logged by:          Frits Jalvingh
> Email address:      jal@etc.to
> PostgreSQL version: 11beta1
> Operating system:   Ubuntu 18.04 64bit
> Description:
> ...
>
> Running the following:
> causes an abort after just a few seconds:
> [XX000] ERROR: invalid DSA memory alloc request size 1073741824
> Where: parallel worker

> work_mem = 2GB

> max_parallel_workers_per_gather = 2     # taken from max_parallel_workers

> psql (PostgreSQL) 11beta1 (Ubuntu 11~beta1-2.pgdg18.04+1)

>                       "Node Type": "Hash Join",
>                       "Parallel Aware": true,

Thanks for the report.  I think it is probably trying to allocate 1GB
worth of hash table buckets and failing, here:

    batch->buckets =
        dsa_allocate(hashtable->area, sizeof(dsa_pointer_atomic) * nbuckets);

sizeof(dsa_pointer_atomic) is 8 on your system, so it must be 128
million buckets.  We could fix that by changing this to use
dsa_allocate_extended(..., DSA_ALLOC_HUGE), and that may be a change
we want to consider to support super-sized hash joins on large memory
machines.  But the first question is whether it's actually reasonable
to be trying to do 128 million buckets in your case, and if not, what
has gone wrong.  That's a couple of orders of magnitude higher than
the number of rows it estimated would be inserted into the hash table,
so presumably it was in the process of increasing the number of
buckets.

Could you please try running with SET enable_parallel_hash = off, so
we can see the estimated and actual row counts?
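For illustration, the DSA_ALLOC_HUGE change floated above might look something like this (a sketch of PostgreSQL-internal code, not compilable standalone and not a committed fix; DSA_ALLOC_HUGE is the existing flag that lifts the 1 GB per-request cap):

```c
/* Sketch only: replace the plain dsa_allocate() quoted above with the
 * extended variant so bucket arrays larger than MaxAllocSize are allowed. */
batch->buckets =
    dsa_allocate_extended(hashtable->area,
                          sizeof(dsa_pointer_atomic) * nbuckets,
                          DSA_ALLOC_HUGE);
```

Whether that is the right fix depends on the answer to the question above: if 128 million buckets is itself a symptom of a bad estimate, allowing the huge allocation would only mask the underlying problem.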

-- 
Thomas Munro
http://www.enterprisedb.com

