Re: pg_upgrade failing for 200+ million Large Objects
| From | Andrew Dunstan |
|---|---|
| Subject | Re: pg_upgrade failing for 200+ million Large Objects |
| Date | |
| Msg-id | c2a43a97-e551-ea6d-7a4f-a4709b4e0cbd@dunslane.net |
| In reply to | Re: pg_upgrade failing for 200+ million Large Objects (Jan Wieck <jan@wi3ck.info>) |
| Replies | Re: pg_upgrade failing for 200+ million Large Objects |
| List | pgsql-hackers |
On 3/20/21 12:55 PM, Jan Wieck wrote:
> On 3/20/21 11:23 AM, Tom Lane wrote:
>> Jan Wieck <jan@wi3ck.info> writes:
>>> All that aside, the entire approach doesn't scale.
>>
>> Yeah, agreed. When we gave large objects individual ownership and ACL
>> info, it was argued that pg_dump could afford to treat each one as a
>> separate TOC entry because "you wouldn't have that many of them, if
>> they're large". The limits of that approach were obvious even at the
>> time, and I think now we're starting to see people for whom it really
>> doesn't work.
>
> It actually looks more like some users have millions of "small
> objects". I am still wondering where that is coming from and why they
> are abusing LOs in that way, but that is more out of curiosity. Fact
> is that they are out there and that they cannot upgrade from their 9.5
> databases, which are now past EOL.

One possible (probable?) source is the JDBC driver, which currently
treats all Blobs (and Clobs, for that matter) as LOs. I'm working on
improving that some:

https://github.com/pgjdbc/pgjdbc/pull/2093

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
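[Editor's note: a minimal sketch of the pattern being described, showing how an application using java.sql.Blob through pgjdbc ends up with one server-side large object per row. The table name "docs" and the connection details are hypothetical; the assumption is a column declared with the PostgreSQL "oid" type, which is how pgjdbc currently backs Blob values.]

```java
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobAsLargeObject {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            // Large-object access must happen inside a transaction.
            conn.setAutoCommit(false);

            // Assumed schema: CREATE TABLE docs (id int, content oid);
            // With an "oid" column, pgjdbc stores the Blob via the
            // large-object facility, so each INSERT like this creates
            // one entry in pg_largeobject_metadata -- even for a
            // payload of a few bytes.
            byte[] payload = "tiny".getBytes();
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO docs (id, content) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setBlob(2, new ByteArrayInputStream(payload), payload.length);
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}
```

[Run against a 9.5-era application that stores every attachment this way, and `SELECT count(*) FROM pg_largeobject_metadata` grows by one per Blob, which is consistent with the "millions of small objects" reports above.]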