Re: Hash Aggregate plan picked for very large table == out of memory

From: Tom Lane
Subject: Re: Hash Aggregate plan picked for very large table == out of memory
Date:
Msg-id: 20937.1181857220@sss.pgh.pa.us
In response to: Re: Hash Aggregate plan picked for very large table == out of memory ("Mason Hale" <masonhale@gmail.com>)
List: pgsql-general
"Mason Hale" <masonhale@gmail.com> writes:
>> What's the actual number of groups
>> (target_page_id values)?

> Approximately 40 million (I'll have a more precise number when the query
> finishes running).

Ouch.  The discrepancy between that and the 550K estimate is what's
killing you --- the hash table will be 80 times bigger than the planner
thinks.  You sure that increasing the stats target doesn't improve
matters?   (I suppose the distribution is one of these "long tail"
affairs...)
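One way to check whether a higher statistics target helps (a hypothetical sketch -- the table name "page_links" is a placeholder, though target_page_id is the column discussed above) is to raise the per-column target, re-analyze, and compare the planner's estimated group count against the true number:

```sql
-- Raise the statistics target for the grouping column and refresh
-- statistics so the planner sees a larger sample of the distribution.
-- "page_links" is a placeholder table name.
ALTER TABLE page_links ALTER COLUMN target_page_id SET STATISTICS 1000;
ANALYZE page_links;

-- The planner's distinct-group estimate appears as the row count
-- on the aggregate node in the plan:
EXPLAIN SELECT target_page_id, count(*)
FROM page_links
GROUP BY target_page_id;
```

With a long-tailed distribution, even the larger sample may still underestimate the number of distinct values, which is why this may not fully fix the plan choice.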

The best solution might be to reduce work_mem --- 40MB is fairly high
for a general-purpose setting anyway IMO, since you need to worry about
complex queries eating multiples of work_mem apiece.
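The workaround above can be applied per-session rather than globally (a sketch; the 8MB value and "page_links" table name are illustrative). With a smaller work_mem, the planner costs the estimated hash table as exceeding memory and falls back to a sort-based GroupAggregate:

```sql
-- Lower work_mem for this session only, so the planner rejects
-- HashAggregate for the large GROUP BY. "page_links" is a placeholder.
SET work_mem = '8MB';

EXPLAIN SELECT target_page_id, count(*)
FROM page_links
GROUP BY target_page_id;

-- Restore the server default afterwards.
RESET work_mem;
```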

            regards, tom lane
