Re: pg_upgrade failing for 200+ million Large Objects

From: Andrew Dunstan
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Msg-id: ee7d96b8-7b0e-bb76-9724-900606efe69a@dunslane.net
In reply to: Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects  (Zhihong Yu <zyu@yugabyte.com>)
List: pgsql-hackers
On 3/21/21 12:56 PM, Jan Wieck wrote:
> On 3/21/21 7:47 AM, Andrew Dunstan wrote:
>> One possible (probable?) source is the JDBC driver, which currently
>> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
>> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>
>
> You mean the user is using OID columns pointing to large objects and
> the JDBC driver is mapping those for streaming operations?
>
> Yeah, that would explain a lot.
>
>
>
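Exactly that. To spell it out (just a sketch, not code from the PR --
the table, column and connection details are invented): the oid column
holds a reference to a large object, and when the application asks for
a Blob, pgjdbc backs it with the large object API and streams the bytes
from pg_largeobject:

    import java.sql.Blob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BlobExample {
        public static void main(String[] args) throws Exception {
            // Invented schema for illustration:
            //   CREATE TABLE docs (id integer PRIMARY KEY, content oid);
            // where "content" holds large object OIDs.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "app", "secret")) {
                con.setAutoCommit(false);   // LO access needs a transaction
                try (PreparedStatement ps = con.prepareStatement(
                         "SELECT content FROM docs WHERE id = ?")) {
                    ps.setInt(1, 42);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            // pgjdbc backs this Blob with the large object
                            // facility, so the bytes come from pg_largeobject.
                            Blob blob = rs.getBlob("content");
                            byte[] head = blob.getBytes(1, 8192);
                            System.out.println("read " + head.length + " bytes");
                            blob.free();
                        }
                    }
                }
                con.commit();
            }
        }
    }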


Probably in most cases the database is designed by Hibernate, and the
front-end programmers know nothing at all about OIDs or LOs; they just
ask for and get a Blob.
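
For anyone who hasn't seen that pattern, the mapping typically looks
something like this (a made-up entity, not taken from any real
application). On the PostgreSQL dialect a @Lob property like this has
traditionally ended up as an oid column, i.e. one large object per row,
and the application code never mentions OIDs at all:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Lob;
    import javax.persistence.Table;

    @Entity
    @Table(name = "documents")
    public class Document {

        @Id
        @GeneratedValue
        private Long id;

        // Hibernate's PostgreSQL dialect has historically mapped a @Lob
        // byte[] to an "oid" column backed by a large object, not to bytea.
        @Lob
        private byte[] content;

        public Long getId() { return id; }

        public byte[] getContent() { return content; }

        public void setContent(byte[] content) { this.content = content; }
    }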


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com



