Re: pg_upgrade failing for 200+ million Large Objects
From: Jacob Champion
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: CAAWbhmgUb8p7ff_ZX5jCvqM=ipPxbbDJTXMNVzH-Ho_CXVkRHA@mail.gmail.com
In reply to: Re: pg_upgrade failing for 200+ million Large Objects (Nathan Bossart <nathandbossart@gmail.com>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects; Re: pg_upgrade failing for 200+ million Large Objects
List: pgsql-hackers
On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
> IIUC the main benefit of this approach is that it isn't dependent on
> binary-upgrade mode, which seems to be a goal based on the discussion
> upthread [0].

To clarify, I agree that pg_dump should contain the core fix. What I'm
questioning is the addition of --dump-options to make use of that fix from
pg_upgrade, since it also lets the user do "exciting" new things like
--exclude-schema and --include-foreign-data and so on. I don't think we
should let them do that without a good reason.

Thanks,
--Jacob
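For illustration, the kind of pass-through the message is wary of might look
something like this. This is a hedged sketch only: --dump-options is the flag
proposed in the patch under discussion, not a released pg_upgrade option;
--exclude-schema is an existing pg_dump option; and all paths and the schema
name are placeholders.

    # Hypothetical invocation under the proposed patch: arbitrary pg_dump
    # options (here, skipping a schema) become reachable from pg_upgrade.
    pg_upgrade \
        -b /usr/lib/postgresql/14/bin -B /usr/lib/postgresql/15/bin \
        -d /var/lib/postgresql/14/data -D /var/lib/postgresql/15/data \
        --dump-options='--exclude-schema=audit'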