Re: pg_upgrade failing for 200+ million Large Objects
From | Jan Wieck
Subject | Re: pg_upgrade failing for 200+ million Large Objects
Date |
Msg-id | 872315a8-99fc-da4e-463d-784cfb5a025d@wi3ck.info
In reply to | Re: pg_upgrade failing for 200+ million Large Objects (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: pg_upgrade failing for 200+ million Large Objects
List | pgsql-hackers
On 3/23/21 3:35 PM, Tom Lane wrote:
> Jan Wieck <jan@wi3ck.info> writes:
>> The problem here is that pg_upgrade itself is invoking a shell again. It
>> is not assembling an array of arguments to pass into exec*(). I'd be a
>> happy camper if it did the latter. But as things are we'd have to add
>> full shell escaping for arbitrary strings.
>
> Surely we need that (and have it already) anyway?

There are functions to shell escape a single string, like
appendShellString(), but that is hardly enough when a single optarg for
--restore-option could look like any of

    --jobs 8
    --jobs=8
    --jobs='8'
    --jobs '8'
    --jobs "8"
    --jobs="8"
    --dont-bother-about-jobs

When placed into a shell string, those things have very different effects
on your args[].

I also want to say that we are overengineering this whole thing. Yes,
there is the problem of shell quoting possibly going wrong as it passes
from one shell to another. But for now this is all about passing a few
numbers down from pg_upgrade to pg_restore (and eventually pg_dump).

Have we even reached a consensus yet on whether the way my patch proposes
is the right way to go? That emitting BLOB TOC entries into SECTION_DATA
when in binary upgrade mode is a good thing? Or that bunching all the SQL
statements for creating the blob, changing the ACL, and setting the
COMMENT and SECLABEL into one multi-statement query is?

Maybe we should focus on those details before getting into all the
parameter naming discussion.


Regards, Jan

--
Jan Wieck
Principal Database Engineer
Amazon Web Services
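[Editor's illustration, not part of the original mail: a minimal sketch of the word-splitting point above, using Python's shlex module as a stand-in for POSIX shell tokenization. It shows that the --jobs variants listed in the mail land in args[] as either one token or two, which is why quoting a --restore-option optarg as a single opaque string is not sufficient.]

```python
import shlex

# Each variant below is something a user could pass as the value of a
# hypothetical --restore-option. shlex.split() applies POSIX-style shell
# word splitting and quote removal, approximating what a shell invocation
# would do to the string before it reaches pg_restore's argv.
variants = [
    "--jobs 8",      # splits into two tokens: --jobs, 8
    "--jobs=8",      # stays one token: --jobs=8
    "--jobs='8'",    # quotes removed, one token: --jobs=8
    "--jobs '8'",    # two tokens: --jobs, 8
    '--jobs "8"',    # two tokens: --jobs, 8
    '--jobs="8"',    # one token: --jobs=8
]

for v in variants:
    print(f"{v!r:16} -> {shlex.split(v)}")
```

Running this shows the same optarg text producing argv arrays of different shapes, so a generic single-string escaper cannot preserve the user's intent without understanding the option syntax.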