Re: pg_upgrade failing for 200+ million Large Objects
| From | Alexander Korotkov |
|---|---|
| Subject | Re: pg_upgrade failing for 200+ million Large Objects |
| Date | |
| Msg-id | CAPpHfdu63MK5k4qBNc4YV14MFdNq8rCsvz0x7+Z=ijbFm1y6wQ@mail.gmail.com |
| In reply to | Re: pg_upgrade failing for 200+ million Large Objects (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: pg_upgrade failing for 200+ million Large Objects |
| List | pgsql-hackers |
On Sat, Jul 27, 2024 at 2:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alexander Korotkov <aekorotkov@gmail.com> writes:
> > On Sat, Jul 27, 2024 at 1:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> >> It's fairly easy to fix things so that this example doesn't cause
> >> that to happen: we just need to issue these updates as one command
> >> not N commands per table.
>
> > I was thinking about counting actual number of queries, not TOC
> > entries for transaction number as a more universal solution.  But that
> > would require usage of psql_scan() or writing simpler alternative for
> > this particular purpose.  That looks quite annoying.  What do you
> > think?
>
> The assumption underlying what we're doing now is that the number
> of SQL commands per TOC entry is limited.  I'd prefer to fix the
> code so that that assumption is correct, at least in normal cases.
> I confess I'd not looked closely enough at the binary-upgrade support
> code to realize it wasn't correct already :-(.  If we go that way,
> we can fix this while also making pg_upgrade faster rather than
> slower.  I also expect that it'll be a lot simpler than putting
> a full SQL parser in pg_restore.

I'm good with that, as long as we don't expect to hit many cases with a
high number of SQL commands per TOC entry.  J4F, I have an idea: count
the number of ';' signs and use that for the transaction size counter,
since it is an upper-bound estimate of the number of SQL commands :-)

------
Regards,
Alexander Korotkov
Supabase
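[Editor's illustration] As a rough sketch of the ';'-counting idea above: the function below is hypothetical, not part of any patch in this thread, and simply treats every semicolon in a TOC entry's definition text as a potential statement terminator. Semicolons inside string literals or comments are counted too, so the result can only over-estimate the real command count, which is exactly the upper-bound property the heuristic would rely on.

```c
/*
 * Hypothetical sketch (not from the thread's patch): estimate an upper
 * bound on the number of SQL commands in a TOC entry's definition by
 * counting ';' characters.  Semicolons embedded in string literals or
 * comments inflate the count, so this never under-estimates the number
 * of commands actually issued.
 */
#include <stdio.h>

static int
count_semicolons(const char *defn)
{
	int			n = 0;

	for (; *defn != '\0'; defn++)
	{
		if (*defn == ';')
			n++;
	}
	return n;
}

int
main(void)
{
	/* Two real commands, but three ';' characters: still a valid upper bound. */
	const char *toc_defn =
		"UPDATE pg_largeobject_metadata SET lomowner = 10 WHERE oid = 1; "
		"COMMENT ON LARGE OBJECT 1 IS 'contains a ; character';";

	printf("upper-bound command count: %d\n", count_semicolons(toc_defn));
	return 0;
}
```

In an actual restore path, such a count would presumably be compared against the transaction-size budget before deciding when to commit, but that wiring is beyond what the message itself describes.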