Re: Death postgres
From        | Marc Millas
Subject     | Re: Death postgres
Date        |
Msg-id      | CADX_1aZk7TJMrLUjtLC=Bsb9XmM3W8eyU2AyOrXTj1=ZnMzTHQ@mail.gmail.com
In reply to | Re: Death postgres ("Peter J. Holzer" <hjp-pgsql@hjp.at>)
Responses   | Re: Death postgres
List        | pgsql-general
On Wed, May 10, 2023 at 7:24 PM Peter J. Holzer <hjp-pgsql@hjp.at> wrote:
On 2023-05-10 16:35:04 +0200, Marc Millas wrote:
> Unique (cost=72377463163.02..201012533981.80 rows=1021522829864 width=97)
> -> Gather Merge (cost=72377463163.02..195904919832.48 rows=1021522829864 width=97)
...
> -> Parallel Hash Left Join (cost=604502.76..1276224253.51 rows=204304565973 width=97)
> Hash Cond: ((t1.col_ano)::text = (t2.col_ano)::text)
...
>
> //so.. the planner guesses that those 2 joins will generate 1000 billion rows...
Are some of the col_ano values very frequent? If say the value 42 occurs
1 million times in both table_a and table_b, the join will create 1
trillion rows for that value alone. That doesn't explain the crash or the
disk usage, but it would explain the crazy cost (and would probably be a
hint that this query is unlikely to finish in any reasonable time).
hp
Good guess, even if a bit surprising: there is one (and only one) "value" which fits your supposition: NULL,
with 750,000 occurrences in each table, which perfectly fits the planner's row estimate.
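A quick way to double-check that kind of skew from the statistics side (just a sketch; table_a / table_b stand in for the real tables behind the t1 / t2 aliases in the plan):

SELECT tablename, null_frac, n_distinct,
       most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename IN ('table_a', 'table_b')   -- placeholder names
  AND attname = 'col_ano';
-- a null_frac of roughly 750000 / total row count on both sides
-- would match the 750,000 NULLs counted above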
One question: what does postgres do when it plans to hash 1000 billion rows?
Did postgres create an appropriate "space" to handle those 1000 billion hash values?
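For reference, a sketch of how I plan to correlate that with the disk usage (as far as I understand, hash batches that no longer fit in work_mem get spilled to temporary files, and the cumulative spill per database is visible in pg_stat_database):

SELECT temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
WHERE datname = current_database();
-- cumulative number and total size of temporary files written in this database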
thanks,
MM
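PS: a scaled-down illustration of the hot-key effect Peter described, as a throwaway sketch with temp tables (1,000 copies of a single value on each side already give 1,000,000 join rows for that value alone):

CREATE TEMP TABLE demo_a AS SELECT 42 AS col_ano FROM generate_series(1, 1000);
CREATE TEMP TABLE demo_b AS SELECT 42 AS col_ano FROM generate_series(1, 1000);
SELECT count(*) FROM demo_a a JOIN demo_b b USING (col_ano);  -- 1000000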
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"