Re: Upgrade from PG12 to PG
From | Ron |
---|---|
Subject | Re: Upgrade from PG12 to PG |
Date | |
Msg-id | 9502f877-c699-4f28-4bb3-4cd3753c14da@gmail.com |
In reply to | Re: Upgrade from PG12 to PG (Jef Mortelle <jefmortelle@gmail.com>) |
List | pgsql-admin |
Don't use pg_dumpall. Use this instead:

pg_dump --format=directory --jobs=X --verbose

On 7/20/23 08:46, Jef Mortelle wrote:
> Hi,
>
> Many thanks for your answer.
>
> So: not possible to have very little downtime if you have a database with
> a lot of rows containing text as a datatype, as pg_upgrade needs 12 hours
> for 24 million rows in pg_largeobject.
>
> Testing now with pg_dumpall and pg_restore ...
>
> I think PostgreSQL should treat resolving this problem as a high priority.
>
> I have to make a choice in the near future: Postgres or Oracle, and that
> database would have a lot of the text datatype.
> The database would be 1 TB.
> It seems a little bit tricky/dangerous to me to use Postgres, just for
> being able to upgrade to a newer version.
>
> Kind regards.
>
> On 20/07/2023 13:43, Ilya Kosmodemiansky wrote:
>> Hi Jef,
>>
>> On Thu, Jul 20, 2023 at 1:23 PM Jef Mortelle <jefmortelle@gmail.com> wrote:
>>> Looking at the dump file: many lines like
>>> SELECT pg_catalog.lo_unlink('100000');
>>>
>>> I have the same issue with /usr/lib/postgresql15/bin/pg_upgrade -v -p
>>> 5431 -P 5432 -k
>>>
>>> What's going on?
>> pg_upgrade is known to be problematic with large objects.
>> Please take a look here to start with:
>> https://www.postgresql.org/message-id/20210309200819.GO2021%40telsasoft.com
>>>
>>> Kind regards

--
Born in Arizona, moved to Babylonia.
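[For reference, a minimal sketch of the parallel dump/restore sequence Ron is suggesting. The database name "mydb", the job count of 8, the archive directory /backup/mydb.dir, and the target port 5432 are placeholders, not values from the thread:]

    # Dump the source database into a directory-format archive, using 8 parallel workers
    pg_dump --format=directory --jobs=8 --file=/backup/mydb.dir --verbose mydb

    # Create the database on the new cluster, then restore the archive in parallel
    createdb -p 5432 mydb
    pg_restore --jobs=8 --dbname=mydb --port=5432 --verbose /backup/mydb.dir

[Parallel jobs only apply to the directory format; pg_dumpall produces a single plain-SQL stream and cannot be parallelized, which is why it is discouraged here.]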