Re: Large objects and out-of-memory

From: Tom Lane
Subject: Re: Large objects and out-of-memory
Date:
Msg-id: 543675.1608575245@sss.pgh.pa.us
In response to: Large objects and out-of-memory  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
Responses: Re: Large objects and out-of-memory  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
List: pgsql-bugs
Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:
> The following sequence of commands causes the backend's memory to exceed 10GB:

> INSERT INTO image1 SELECT lo_creat(-1) FROM generate_series(1,10000000);
> REASSIGN OWNED BY alice TO testlo;
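
[For reference, a self-contained version of the reported scenario might look
like the sketch below. The setup statements are assumptions -- the report only
shows the two commands quoted above; the role and table names are taken from
them.]

-- Assumed setup (not shown in the report)
CREATE ROLE alice;
CREATE ROLE testlo;
GRANT CREATE ON SCHEMA public TO alice;

SET ROLE alice;
CREATE TABLE image1 (id oid);
-- Each lo_creat(-1) creates a new large object owned by alice and adds a
-- row to pg_largeobject_metadata recording that ownership.
INSERT INTO image1 SELECT lo_creat(-1) FROM generate_series(1,10000000);
RESET ROLE;

-- Run as superuser: transfers everything alice owns, including all
-- 10000000 large objects, to testlo in one command.
REASSIGN OWNED BY alice TO testlo;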

[ shrug... ]  You're asking to change the ownership of 10000000 objects.
This is not going to be a cheap operation.  AFAIK it's not going to be
any more expensive than changing the ownership of 10000000 tables, or
any other kind of object.
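
[The per-object cost is visible in the catalogs: every large object has its
own row in pg_largeobject_metadata, and REASSIGN OWNED has to update each of
those rows, queuing a cache-invalidation entry for each along the way. An
illustrative query, reusing the role name from the report, to count how many
entries a REASSIGN away from alice would have to touch:]

SELECT count(*)
FROM pg_largeobject_metadata
WHERE lomowner = 'alice'::regrole;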

The argument for allowing large objects to have per-object ownership and
permissions in the first place was that useful scenarios wouldn't have a
huge number of them (else you'd run out of disk space, if they're actually
"large"), so we needn't worry too much about the overhead.

We could possibly bound the amount of space used in the inval queue by
switching to an "invalidate all" approach once we got to an unreasonable
amount of space.  But this will do nothing for the other costs involved,
and I'm not really sure it's worth adding complexity for.

            regards, tom lane


