Re: pg_upgrade failing for 200+ million Large Objects
From: Alexander Korotkov
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id: CAPpHfduszAcQk9YAQd8PyUhtzY=SFXhBHjrhRzBn0vLaVJny2g@mail.gmail.com
In reply to: Re: pg_upgrade failing for 200+ million Large Objects (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Mon, Jul 29, 2024 at 12:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> So I'm forced to the conclusion that we'd better make the transaction
> size adaptive as per Alexander's suggestion.
>
> In addition to the patches attached, I experimented with making
> dumpTableSchema fold all the ALTER TABLE commands for a single table
> into one command. That's do-able without too much effort, but I'm now
> convinced that we shouldn't. It would break the semicolon-counting
> hack for detecting that tables like these involve extra work.
> I'm also not very confident that the backend won't have trouble with
> ALTER TABLE commands containing hundreds of subcommands. That's
> something we ought to work on probably, but it's not a project that
> I want to condition v17 pg_upgrade's stability on.
>
> Anyway, proposed patches attached. 0001 is some trivial cleanup
> that I noticed while working on the failed single-ALTER-TABLE idea.
> 0002 merges the catalog-UPDATE commands that dumpTableSchema issues,
> and 0003 is Alexander's suggestion.

Nice to see you picked up my idea. I took a look over the patchset.
Looks good to me.

------
Regards,
Alexander Korotkov
Supabase
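[Editor's illustration] A minimal sketch of the adaptive-transaction-size
idea discussed above: rather than committing after a fixed number of
restore commands, the restorer tracks an estimated weight per TOC entry
and commits whenever the running total crosses a budget, so restores with
hundreds of millions of large objects don't exhaust server resources in
one giant transaction. All names here (TocEntry fields, MAX_TX_WEIGHT,
send_sql, the weights) are invented for the example and do not reflect
the actual patch.

#include <stdbool.h>
#include <stdio.h>

#define MAX_TX_WEIGHT 1000      /* hypothetical per-transaction budget */

typedef struct TocEntry
{
    const char *sql;            /* restore command to send */
    int         weight;         /* estimated cost; large objects weigh more */
} TocEntry;

static void
send_sql(const char *sql)
{
    /* stand-in for shipping a command to the server connection */
    printf("%s\n", sql);
}

static void
restore_entries(const TocEntry *entries, int n)
{
    int         tx_weight = 0;
    bool        in_tx = false;

    for (int i = 0; i < n; i++)
    {
        if (!in_tx)
        {
            send_sql("BEGIN");
            in_tx = true;
            tx_weight = 0;
        }

        send_sql(entries[i].sql);
        tx_weight += entries[i].weight;

        /*
         * Commit early once the batch grows too heavy, instead of using
         * a fixed number of commands per transaction.
         */
        if (tx_weight >= MAX_TX_WEIGHT)
        {
            send_sql("COMMIT");
            in_tx = false;
        }
    }

    if (in_tx)
        send_sql("COMMIT");
}

int
main(void)
{
    TocEntry    entries[] = {
        {"CREATE TABLE t (a int)", 10},
        {"ALTER LARGE OBJECT 16384 OWNER TO app", 100},
        {"ALTER LARGE OBJECT 16385 OWNER TO app", 100},
    };

    restore_entries(entries, 3);
    return 0;
}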