Re: Upgrade from PG12 to PG
From | Scott Ribe
---|---
Subject | Re: Upgrade from PG12 to PG
Date |
Msg-id | B6D3FD80-5794-47B8-9074-E10C4945951B@elevated-dev.com
In reply to | Re: Upgrade from PG12 to PG (Jef Mortelle <jefmortelle@gmail.com>)
Responses | Re: Upgrade from PG12 to PG; Re: Upgrade from PG12 to PG
List | pgsql-admin
> On Jul 20, 2023, at 7:46 AM, Jef Mortelle <jefmortelle@gmail.com> wrote:
>
> So: not possible to have very little downtime if you have a database with a lot of rows containing text as datatype, as pg_upgrade needs 12 hours for 24 million rows in pg_largeobject.

We need to get terminology straight, as at the moment your posts are very confusing. In PostgreSQL, large objects and text are not the same thing. Text is basically varchar without a specified length limit. A large object is a blob (but not what SQL calls a BLOB): it is kind of like a file stored outside the normal table mechanism, and it provides facilities for partial reads, etc: https://www.postgresql.org/docs/15/largeobjects.html. There are a number of ways to wind up with the references to large objects all deleted, but the orphaned large objects still in the database.

First thing you should do: run vacuumlo -n to find out if you have orphaned large objects. If so, start cleaning those up, then see how long pg_upgrade takes.

Second, what's your hardware? I really don't see dump & restore of a 1 TB database taking 6 hours.

> Already tried to use --link and --jobs, but you cannot omit the "select lo_unlink ...." for every row containing datatype text in your database that the pg_* program creates in the export/dump file.

Terminology again, or are you conflating two different issues? pg_upgrade --link does not create a dump file.
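If it helps, the check and the upgrade invocation would look roughly like this. This is only a sketch: it assumes the contrib vacuumlo binary is installed, a database named "mydb", and placeholder bin/data directories for 12 and 15 that you would need to adjust to your own layout.

    # count the large objects currently stored in the database
    psql -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata;"

    # dry run: report orphaned large objects without deleting anything
    vacuumlo -n -v mydb

    # once the report looks sane, actually remove the orphans
    vacuumlo -v mydb

    # hardlink-based in-place upgrade, run in parallel;
    # data files are linked into the new cluster rather than dumped and reloaded
    pg_upgrade --link --jobs 4 \
        -b /usr/lib/postgresql/12/bin  -B /usr/lib/postgresql/15/bin \
        -d /var/lib/postgresql/12/data -D /var/lib/postgresql/15/data

The point of the dry run is that vacuumlo -n only reports what it would delete, so you can gauge how many orphans you are carrying before deciding whether the cleanup is worth doing ahead of the upgrade.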