Re: pg_upgrade failing for 200+ million Large Objects

From: Jan Wieck
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: 5bdcb010-ecdd-c69a-b441-68002fc38483@wi3ck.info
In reply to: Re: pg_upgrade failing for 200+ million Large Objects  (Andrew Dunstan <andrew@dunslane.net>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects  (Andrew Dunstan <andrew@dunslane.net>)
List: pgsql-hackers
On 3/21/21 7:47 AM, Andrew Dunstan wrote:
> One possible (probable?) source is the JDBC driver, which currently
> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>

You mean the user is using OID columns pointing to large objects and the 
JDBC driver is mapping those for streaming operations?

Yeah, that would explain a lot.
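[For readers following the thread: the pattern under discussion looks roughly like the sketch below. The table and column names are illustrative, not taken from the thread; the point is that an OID column holds a reference to a server-side large object, which pgjdbc's Blob support reads and writes through the large-object API.]

```sql
-- Hypothetical schema illustrating the pattern: an OID column whose
-- values reference server-side large objects.
CREATE TABLE documents (
    id      bigint PRIMARY KEY,
    content oid     -- JDBC getBlob()/setBlob() map this via lo_* calls
);

-- Each inserted row creates its own large object, so a table with
-- 200+ million rows yields 200+ million entries in pg_largeobject,
-- and pg_upgrade has to carry every one of them across.
INSERT INTO documents (id, content)
VALUES (1, lo_from_bytea(0, '\xdeadbeef'::bytea));
```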


Thanks, Jan

-- 
Jan Wieck
Principal Database Engineer
Amazon Web Services


