Discussion: Non-text mode for pg_dumpall
Tom and Nathan opined recently that providing for non-text mode for pg_dumpall would be a Good Thing (TM). Not having it has been a long-standing complaint, so I've decided to give it a go. I think we would need to restrict it to directory mode, at least to begin with. I would have a toc.dat with a different magic block (say "PGGLO" instead of "PGDMP") containing the global entries (roles, tablespaces, databases). Then for each database there would be a subdirectory (named for its toc entry) with a standard directory mode dump for that database. These could be generated in parallel (possibly by pg_dumpall calling pg_dump for each database). pg_restore on detecting a global type toc.dat would restore the globals and then each of the databases (again possibly in parallel). I'm sure there are many wrinkles I haven't thought of, but I don't see any insurmountable obstacles, just a significant amount of code. Barring the unforeseen my main aim is to have a preliminary patch by the September CF. Following that I would turn my attention to using it in pg_upgrade. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote: > Tom and Nathan opined recently that providing for non-text mode for > pg_dumpall would be a Good Thing (TM). Not having it has been a > long-standing complaint, so I've decided to give it a go. Thank you! > I think we would need to restrict it to directory mode, at least to begin > with. I would have a toc.dat with a different magic block (say "PGGLO" > instead of "PGDMP") containing the global entries (roles, tablespaces, > databases). Then for each database there would be a subdirectory (named for > its toc entry) with a standard directory mode dump for that database. These > could be generated in parallel (possibly by pg_dumpall calling pg_dump for > each database). pg_restore on detecting a global type toc.data would restore > the globals and then each of the databases (again possibly in parallel). I'm curious why we couldn't also support the "custom" format. > Following that I would turn my attention to using it in pg_upgrade. +1 -- nathan
On 2024-06-10 Mo 10:14, Nathan Bossart wrote: > On Mon, Jun 10, 2024 at 08:58:49AM -0400, Andrew Dunstan wrote: >> Tom and Nathan opined recently that providing for non-text mode for >> pg_dumpall would be a Good Thing (TM). Not having it has been a >> long-standing complaint, so I've decided to give it a go. > Thank you! > >> I think we would need to restrict it to directory mode, at least to begin >> with. I would have a toc.dat with a different magic block (say "PGGLO" >> instead of "PGDMP") containing the global entries (roles, tablespaces, >> databases). Then for each database there would be a subdirectory (named for >> its toc entry) with a standard directory mode dump for that database. These >> could be generated in parallel (possibly by pg_dumpall calling pg_dump for >> each database). pg_restore on detecting a global type toc.data would restore >> the globals and then each of the databases (again possibly in parallel). > I'm curious why we couldn't also support the "custom" format. We could, but the housekeeping would be a bit harder. We'd need to keep pointers to the offsets of the per-database TOCs (I don't want to have a single per-cluster TOC). And we can't produce it in parallel, so I'd rather start with something we can produce in parallel. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On Mon, Jun 10, 2024 at 10:51:42AM -0400, Andrew Dunstan wrote: > On 2024-06-10 Mo 10:14, Nathan Bossart wrote: >> I'm curious why we couldn't also support the "custom" format. > > We could, but the housekeeping would be a bit harder. We'd need to keep > pointers to the offsets of the per-database TOCs (I don't want to have a > single per-cluster TOC). And we can't produce it in parallel, so I'd rather > start with something we can produce in parallel. Got it. -- nathan
On Mon, Jun 10, 2024 at 04:52:06PM +0200, Magnus Hagander wrote: > On Mon, Jun 10, 2024 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com> > wrote: >> I'm curious why we couldn't also support the "custom" format. > > Or maybe even a combo - a directory of custom format files? Plus that one > special file being globals? I'd say that's what most use cases I've seen > would prefer. Is there a particular advantage to that approach as opposed to just using "directory" mode for everything? I know pg_upgrade uses "custom" mode for each of the databases, so a combo approach would be a closer match to the existing behavior, but that doesn't strike me as an especially strong reason to keep doing it that way. -- nathan
On Mon, Jun 10, 2024 at 05:45:19PM +0200, Magnus Hagander wrote: > On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com> > wrote: >> Is there a particular advantage to that approach as opposed to just using >> "directory" mode for everything? I know pg_upgrade uses "custom" mode for >> each of the databases, so a combo approach would be a closer match to the >> existing behavior, but that doesn't strike me as an especially strong >> reason to keep doing it that way. > > A gazillion files to deal with? Much easier to work with individual custom > files if you're moving databases around and things like that. > Much easier to monitor eg sizes/dates if you're using it for backups. > > It's not things that are make-it-or-break-it or anything, but there are > some smaller things that definitely can be useful. Makes sense, thanks for elaborating. -- nathan
Magnus Hagander <magnus@hagander.net> writes: > On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com> > wrote: >> Is there a particular advantage to that approach as opposed to just using >> "directory" mode for everything? > A gazillion files to deal with? Much easier to work with individual custom > files if you're moving databases around and things like that. > Much easier to monitor eg sizes/dates if you're using it for backups. You can always tar up the directory tree after-the-fact if you want one file. Sure, that step's not parallelized, but I think we'd need some non-parallelized copying to create such a file anyway. regards, tom lane
On 2024-06-10 Mo 12:21, Tom Lane wrote: > Magnus Hagander <magnus@hagander.net> writes: >> On Mon, Jun 10, 2024 at 5:03 PM Nathan Bossart <nathandbossart@gmail.com> >> wrote: >>> Is there a particular advantage to that approach as opposed to just using >>> "directory" mode for everything? >> A gazillion files to deal with? Much easier to work with individual custom >> files if you're moving databases around and things like that. >> Much easier to monitor eg sizes/dates if you're using it for backups. > You can always tar up the directory tree after-the-fact if you want > one file. Sure, that step's not parallelized, but I think we'd need > some non-parallelized copying to create such a file anyway. > > Yeah. I think I can probably allow for Magnus' suggestion fairly easily, but if I have to choose I'm going to go for the format that can be produced with the maximum parallelism. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote: > Here, I am attaching an updated patch. I fixed some bugs of v01 patch and > did some code cleanup also. Thank you for picking this up! I started to review it, but the documentation changes didn't build, and a few tests in check-world are failing. Would you mind resolving those issues? Also, if you haven't already, please add an entry to the next commitfest [0] to ensure that 1) this feature is tracked and 2) the automated tests will run. + if (dbfile) + { + printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, + dbfile, create_opts); + appendPQExpBufferStr(&cmd, " -F d "); + } Have you given any thought to allowing a directory of custom format files, as discussed upthread [1]? Perhaps that is better handled as a follow-up patch, but it'd be good to understand the plan, anyway. [0] https://commitfest.postgresql.org [1] https://postgr.es/m/CABUevExoQ26jo%2BaQ9QZq%2BUMA1aD6gfpm9xBnh_t5e0DhaCeRYA%40mail.gmail.com -- nathan
On Wed, 8 Jan 2025 at 20:07, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
>
> Hi all,
>
> On Wed, 8 Jan 2025 at 00:34, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
> >
> > On Mon, 6 Jan 2025 at 23:05, Nathan Bossart <nathandbossart@gmail.com> wrote:
> > >
> > > On Thu, Jan 02, 2025 at 02:05:13AM +0530, Mahendra Singh Thalor wrote:
> > > > Here, I am attaching an updated patch. I fixed some bugs of v01 patch and
> > > > did some code cleanup also.
> > >
> > > Thank you for picking this up! I started to review it, but the
> > > documentation changes didn't build, and a few tests in check-world are
> > > failing. Would you mind resolving those issues? Also, if you haven't
> > > already, please add an entry to the next commitfest [0] to ensure that 1)
> > > this feature is tracked and 2) the automated tests will run.
> >
> > Thanks Nathan for the quick response.
> >
> > I fixed bugs of documentation changes and check-world in the latest patch. Now docs are building and check-world is passing.
> >
> > I added entry into commitfest for this patch.[0]
> >
> > >
> > > + if (dbfile)
> > > + {
> > > + printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin,
> > > + dbfile, create_opts);
> > > + appendPQExpBufferStr(&cmd, " -F d ");
> > > + }
> > >
> > > Have you given any thought to allowing a directory of custom format files,
> > > as discussed upthread [1]? Perhaps that is better handled as a follow-up
> > > patch, but it'd be good to understand the plan, anyway.
> >
> > I will make these changes and will test. I will update my findings after doing some testing.
>
> In the latest patch, I added dumping and restoring for directory/custom/tar/plain formats. Please consider this patch for review and testing.
>
> Design:
> When we give --format=d|c|t, we dump all global SQL commands into global.dat in plain SQL format and we create a map.dat file with dbname and dboid. For each database, we create a separate subdirectory, named by its dboid, under the databases directory and dump it as per the archive format (d|c|t).
> While restoring, we first restore all global SQL commands from global.dat and then restore all the databases one by one. As we support --exclude-database with pg_dumpall, we support the same with pg_restore also, to skip restoring the databases that match the specified patterns.
> If we want to restore a single database, then we can specify the particular subdirectory from the databases folder. To get the file name, we look up the dbname in the map.dat file.
>
> TODO: Now I will work on test cases for these newly added options to pg_dumpall and pg_restore.
>
> Here, I am attaching the v04 patch for testing and review.
Sorry. My mistake.
v04 was the delta patch on top of v03.
Here, I am attaching the v05 patch for testing and review.
On Sat, 11 Jan 2025 at 11:19, jian he <jian.universality@gmail.com> wrote: > Thanks Jian for the review and testing. > in src/bin/pg_dump/pg_dumpall.c main > i think you need do > > archDumpFormat = parseDumpFormat(formatName); > /* > * Open the output file if required, otherwise use stdout. If required, > * then create new files with global.dat and map.dat names. > */ > if (archDumpFormat != archNull) > { > char toc_path[MAXPGPATH]; > /* > * If directory/tar/custom format is specified then we must provide the > * file name to create one main directory. > */ > if (!filename || strcmp(filename, "") == 0) > pg_fatal("no output directory specified"); > /* TODO: accept the empty existing directory. */ > if (mkdir(filename, 0700) < 0) > pg_fatal("could not create directory \"%s\": %m", > filename); > snprintf(toc_path, MAXPGPATH, "%s/global.dat", filename); > OPF = fopen(toc_path, "w"); > if (!OPF) > pg_fatal("could not open global.dat file: %s", strerror(errno)); > } > else if (filename) > { > OPF = fopen(filename, PG_BINARY_W); > if (!OPF) > pg_fatal("could not open output file \"%s\": %m", > filename); > } > else > OPF = stdout; > > before connectDatabase call. Okay. I will add an error check before connectDatabase call in the next version. > > otherwise if the cluster is not setting up. > ``pg_dumpall --format=d`` > error would be about connection error, not > "pg_dumpall: error: no output directory specified" > > we want ``pg_dumpall --format`` invalid options > to error out even if the cluster is not setting up. > > attached are two invalid option test cases. Thanks. I am also working on test cases. I will add all error test cases in the next version and will include these two also. > > you also need change > <varlistentry> > <term><option>-f <replaceable > class="parameter">filename</replaceable></option></term> > <term><option>--file=<replaceable > class="parameter">filename</replaceable></option></term> > <listitem> > <para> > Send output to the specified file.
If this is omitted, the > standard output is used. > </para> > </listitem> > </varlistentry> > ? > > since if --format=d, > <option>--file=<replaceable class="parameter">filename</replaceable></option> > can not be omitted. Okay. I will fix it. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Hmm, this patch adds a function connectDatabase() to pg_restore, but a function that's almost identical already exists in pg_dumpall. I suggest they should be unified. Maybe create a new file for connection management routines? (since this clearly doesn't fit common.c nor dumputils.c). -- Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/ "In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth. That's because in Europe they call me by name, and in the US by value!"
hi. the following two tests, you can add to src/bin/pg_dump/t/001_basic.pl command_fails_like( [ 'pg_restore', '--globals-only', '-f', 'xxx' ], qr/\Qpg_restore: error: option -g\/--globals-only requires option -d\/--dbname\E/, 'pg_restore: error: option -g/--globals-only requires option -d/--dbname' ); command_fails_like( [ 'pg_restore', '--globals-only', '--file=xxx', '--exclude-database=x',], qr/\Qpg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/, 'pg_restore: error: option --exclude-database cannot be used together with -g/--globals-only' ); in pg_restore.sgml. <varlistentry> <term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term> <listitem> the position should right after <varlistentry> <term><option>-d <replaceable class="parameter">dbname</replaceable></option></term> <term><option>--dbname=<replaceable class="parameter">dbname</replaceable></option></term> should pg_restore --globals-only pg_restore --exclude-database=pattern be in a separate patch? i am also wondering what will happen: pg_restore --exclude-database=pattern --dbname=pattern
On Sun, Jan 12, 2025 at 5:31 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote: > > > > > you also need change > > <varlistentry> > > <term><option>-f <replaceable > > class="parameter">filename</replaceable></option></term> > > <term><option>--file=<replaceable > > class="parameter">filename</replaceable></option></term> > > <listitem> > > <para> > > Send output to the specified file. If this is omitted, the > > standard output is used. > > </para> > > </listitem> > > </varlistentry> > > ? > > > > since if --format=d, > > <option>--file=<replaceable class="parameter">filename</replaceable></option> > > can not be omitted. > > No, we don't need this change. With --format=d, we can omit the --file option. > I think this is not correct. since the following three will fail. $BIN6/pg_dumpall --format=custom --exclude-database=*template* --schema-only $BIN6/pg_dumpall --format=directory --exclude-database=*template* --schema-only $BIN6/pg_dumpall --format=tar --exclude-database=*template* --schema-only that means, pg_dumpall, when format is {custom|directory|tar} --file option cannot be omitted. you introduced a format p(plain) for pg_restore? since $BIN6/pg_restore --dbname=src6 --format=p will not error out. but doc/src/sgml/ref/pg_restore.sgml didn't mention this format. + if (archDumpFormat == archDirectory) + appendPQExpBufferStr(&cmd, " -F d "); + else if (archDumpFormat == archCustom) + appendPQExpBufferStr(&cmd, " -F c "); + else if (archDumpFormat == archTar) + appendPQExpBufferStr(&cmd, " -F t "); can we use long format, i think that would improve the readability. like changing from appendPQExpBufferStr(&cmd, " -F d "); to appendPQExpBufferStr(&cmd, " --format=directory"); ------------------------<<>>>------ I have tested {pg_dump && pg_restore --list}. pg_restore --list works fine with format {directory|custom|tar} but it seems there may be some problems with {pg_dumpall && pg_restore --list} where format is not plain.
with your v08 patch, in my local environment. $BIN6/pg_dumpall --format=custom --exclude-database=*template* --schema-only --file=dumpall_src6.custom $BIN6/pg_restore --dbname=src6 --verbose --schema-only --list $SRC6/dumpall_src6.custom error: pg_restore: error: option -C/--create should be specified when using dump of pg_dumpall $BIN6/pg_restore --dbname=src6 --create --verbose --schema-only --list $SRC6/dumpall_src6.custom following is some of the output: pg_restore: found dbname as : "`s3or" and db_oid:1 in map.dat file while restoring pg_restore: found dbname as : "`s3or" and db_oid:5 in map.dat file while restoring pg_restore: found total 2 database names in map.dat file pg_restore: needs to restore 2 databases out of 2 databases pg_restore: restoring database "`s3or" pg_restore: error: could not open input file "/home/jian/Desktop/pg_src/src6/postgres/dumpall_src6.custom/databases/1": No such file or directory
hi. in master src/bin/pg_dump/pg_restore.c: main function if (opts->tocSummary) PrintTOCSummary(AH); else { ProcessArchiveRestoreOptions(AH); RestoreArchive(AH); } opts->tocSummary is true (pg_restore --list), no query will be executed. but your patch (pg_restore --list) may call execute_global_sql_commands, which executes a query. sscanf(line, "%u" , &db_oid); sscanf(line, "%s" , db_oid_str); i think it would be better sscanf(line, "%u %s" , &db_oid, db_oid_str); in doc/src/sgml/ref/pg_dumpall.sgml Note: This option can be omitted only when --format=p|plain. maybe change to Note: This option can be omitted only when <option>--format</option> is plain. --format=format section: "" Under this databases subdirectory, there will be subdirectory with dboid name for each database. "" this sentence is not correct? because drwxr-xr-x databases .rw-rw-r-- global.dat .rw-rw-r-- map.dat "databases" is a directory, and under the "database" directory, it's a list of files. each file filename is corresponding to a unique database name so there is no subdirectory under subdirectory? in src/bin/pg_dump/meson.build you need add 'common_dumpall_restore.c', to the pg_dump_common_sources section. otherwise meson build cannot compile. $BIN6/pg_restore --dbname=src6 --verbose --list $SRC6/dumpall.custom6 pg_restore: error: option -C/--create should be specified when using dump of pg_dumpall this command should not fail? in doc/src/sgml/ref/pg_restore.sgml <varlistentry> ... <term><option>--format=<replaceable class="parameter">format</replaceable></option></term> also need <term><literal>plain</literal></term> ?
hi. some minor issues come to my mind when I look at it again. looking at set_null_conf, i think "if (archDumpFormat != archNull)" can be: if (archDumpFormat != archNull) { OPF = fopen(toc_path, "w"); if (!OPF) pg_fatal("could not open global.dat file: \"%s\" for writing: %m", toc_path); } some places we use ``fopen(filename, PG_BINARY_W)``, some places we use ``fopen(filename, "w");`` kind of inconsistent... + printf(_(" -F, --format=c|d|t|p output file format (custom, directory, tar,\n" + " plain text (default))\n")); this indentation level is not right? if we look closely at the surrounding output of `pg_dumpall --help`. pg_dump.sgml --create option description: This option is ignored when emitting an archive (non-text) output file. For the archive formats, you can specify the option when you call pg_restore. in runPgDump, we have: /* * If this is non-plain format dump, then append file name and dump * format to the pg_dump command to get archive dump. */ if (archDumpFormat != archNull) { printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin, dbfile, create_opts); ... } so in here, create_opts is not necessary per pg_dump.sgml above description. we can simplify it as: if (archDumpFormat != archNull) { printfPQExpBuffer(&cmd, "\"%s\" --file=%s", pg_dump_bin, dbfile); } ?
hi. $BIN10/pg_restore --globals-only --verbose --file=test.sql x.dump it will create a "test.sql" file, but it should create file test.sql (no double quotes). ------<>>>>------ if (archDumpFormat != archNull && (!filename || strcmp(filename, "") == 0)) { pg_log_error("options -F/--format=d|c|t requires option -f/--filename with non-empty string"); ... } here, it should be pg_log_error("options -F/--format=d|c|t requires option -f/--file with non-empty string"); ------<>>>>------ the following pg_dumpall, pg_restore not working. $BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only $BIN10/pg_restore --file=3.sql x1.dump ERROR: pg_restore: error: directory "x1.dump" does not appear to be a valid archive ("toc.dat" does not exist) these two also not working: $BIN10/pg_dumpall --format=custom --file=x1.dump --verbose --globals-only $BIN10/pg_restore --file=3.sql --format=custom x1.dump error message: pg_restore: error: could not read from input file: Is a directory ------<>>>>------ IsFileExistsInDirectory function is the same as _fileExistsInDirectory. Can we make _fileExistsInDirectory extern function? + /* If global.dat and map.dat are exist, then proces them. */ + if (IsFileExistsInDirectory(pg_strdup(inputFileSpec), "global.dat") + && IsFileExistsInDirectory(pg_strdup(inputFileSpec), "map.dat")) + { comment typo, "proces" should "process". here, we don't need pg_strdup? ------<>>>>------ # pg_restore tests +command_fails_like( + [ + 'pg_restore', '-p', $port, '-f', $plainfile, + "--exclude-database=grabadge", + '--globals-only' + ], + qr/\Qg_restore: error: option --exclude-database cannot be used together with -g\/--globals-only\E/, + 'pg_restore: option --exclude-database cannot be used together with -g/--globals-only'); We can put the above test on src/bin/pg_dump/t/001_basic.pl, since validating these conflict options don't need a cluster to be set up.
typedef struct SimpleDatabaseOidListCell and typedef struct SimpleDatabaseOidList also need to be added to src/tools/pgindent/typedefs.list
hi. + printfPQExpBuffer(query, + "SELECT substring ( " + " '%s' , " + " '%s' ) ", str, ptrn); + result = executeQuery(conn, query->data); + if (PQresultStatus(result) == PGRES_TUPLES_OK) + { + if (PQntuples(result) == 1) + { + const char *outstr; + + outstr = PQgetvalue(result, 0, 0); i think here you should use PQgetisnull(result, 0, 0) ? example: pg_dumpall and pg_restore: $BIN10/pg_dumpall --verbose --format=custom --file=x12.dump $BIN10/pg_restore --verbose --dbname=src10 x12.dump some log message for the above command: pg_restore: found dbname as : "template1" and db_oid:1 in map.dat file while restoring pg_restore: found dbname as : "s1" and db_oid:17960 in map.dat file while restoring pg_restore: found dbname as : "src10" and db_oid:5 in map.dat file while restoring pg_restore: found total 3 database names in map.dat file pg_restore: needs to restore 3 databases out of 3 databases pg_restore: restoring dump of pg_dumpall without -C option, there might be multiple databases in directory. pg_restore: restoring database "template1" pg_restore: connecting to database for restore pg_restore: implied data-only restore pg_restore: restoring database "s1" pg_restore: connecting to database for restore pg_restore: processing data for table "public.t" pg_restore: while PROCESSING TOC: pg_restore: from TOC entry 3376; 0 17961 TABLE DATA t jian pg_restore: error: could not execute query: ERROR: relation "public.t" does not exist Command was: COPY public.t (a) FROM stdin; 1. message: "pg_restore: implied data-only restore" Normally pg_dump and pg_restore will dump the schema and the data, then when we are connecting to the same database with pg_restore, there will be lots of "schema element already exists" errors. but in the above command case, pg_restore only restores the content/data, not the schema, which is why very few errors happen. so here pg_restore not restoring the schema seems not ok? 2.
pg_dumpall with non-text mode, we don't have a \connect command in file global.dat or map.dat. I have database "s1" with table "public.t". If I create a table src10.public.t (database.schema.table) with column a, then pg_restore will restore the content of s1.public.t (database s1) to src10.public.t (database src10). In ConnectDatabase(Archive *AHX, const ConnParams *cparams, bool isReconnect) I added if (cparams->dbname) fprintf(stderr, "pg_backup_db.c:%d %s called connecting to %s now\n", __LINE__, __func__, cparams->dbname); to confirm that we are connecting to the same database "src10", while dumping all the contents in x12.dump.
On Thu, 23 Jan 2025 at 14:59, jian he <jian.universality@gmail.com> wrote: > > hi. > The four patches attached are to solve the > TODO1: We need to think for --exclude-database=PATTERN for pg_restore. > it is based on your v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch > > > 0001. pg_dumpall --exclude-database=PATTERN already works, > main function resolve pattern matching is expand_dbname_patterns. > make it an extern function, so pg_restore --exclude-database can also use it. Hi Jian, We can't use the same expand_dbname_patterns function in pg_restore. In the 1st patch, by mistake I also used this function but then I realised that we should not use this function due to some limitation for pg_restore. While doing pg_dumpall, we have all the existing database names in the pg_database catalog but while restoring, we don't have all databases in the catalog. Actually, we will read dbnames from map.dat file to skip matching patterns for restore. Ex: let say we have a fresh server with postgres and template1 databases. Now we want to restore one backup and inside the map.dat file, we have dbname=db_123 and dbname=db_234. If we want to use --exclude-database=db_123, then your patch will not work as this db hasn't been created. Please cross verify again and let me know your feedback. I think, as of now, my v11 patch is working as per expectation. > > 0002 cosmetic code changes not in pg_restore.c > 0003 cosmetic code changes in pg_restore.c > > > 0004 fully implement pg_restore --exclude-database=PATTERN > similar to pg_dumpall.c > declare two file static variables: > static SimpleStringList database_exclude_names = {NULL, NULL}; > static SimpleStringList db_exclude_patterns = {NULL, NULL}; > I also deleted the function is_full_pattern. > > > I use > $BIN10/pg_restore --exclude-database=*x* --exclude-database=*s* > --exclude-database=*t* --verbose --file=test.sql x1.dump > the verbose message to verify my changes.
-- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
> hi. > After some tests and thinking about your reply, I admit that using > expand_dbname_patterns > in pg_restore will not work. > We need to do pattern matching against the map.dat file. > Please check the attached v12 series based on your > v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch > > v12-0001 cosmetic change. > v12-0002 implement pg_restore --exclude-database=PATTERN. > main gist of implementation: > for each database name in map.dat file, > check if this database name pattern matches with PATTERN or not. > pattern matching is using processSQLNamePattern. > > your substring will not work. > some of the test cases. > $BIN10/pg_restore --exclude-database=* -Cd template1 --verbose dir10 > > dir_format 2>&1 Hi, As per discussion with Robert Haas and Dilip Kumar, we thought that we can't assume that there will be a db connection every time while doing pg_restore, but in the attached patch we are assuming that we have a db connection. In my previous updates, I already mentioned this problem. I think we should not use a connection for --exclude-database; rather we should use direct functions to validate patterns, or we should restrict it to NAME only. On Sun, 26 Jan 2025 at 20:17, jian he <jian.universality@gmail.com> wrote: > > hi. > attached patch trying to refactor ReadOneStatement > for properly handling the single and double quotes. > the commit message also has some tests on it. > > it is based on your > v11_pg_dumpall-with-directory-tar-custom-format-21-jan.patch. I think, instead of char, if we read line by line, then we don't need that much code and need not to worry about double quotes. In the next version, I will merge some patches and will change it to read line by line. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Hi Mahendra,
I have reviewed the code in the v11 patch and it looks good to me.
But in common_dumpall_restore.c there's parseDumpFormat, which is common between pg_dumpall and pg_restore. As per the discussion in thread [1], I don't think we should create a common API: as discussed in that thread, there is a chance that in the future we might decide that some format is obsolete and desupport it in pg_dumpall, while keeping support in pg_restore for compatibility reasons.
[1] https://www.postgresql.org/message-id/flat/CAFC%2Bb6pfK-BGcWW1kQmtxVrCh-JGjB2X02rLPQs_ZFaDGjZDsQ%40mail.gmail.com
Regards,
Srinath Reddy Sadipiralla,
EDB: http://www.enterprisedb.com
On Tue, 28 Jan 2025 at 10:19, Srinath Reddy <srinath2133@gmail.com> wrote:
>
> Hi Mahendra,
> I have reviewed the code in the v11 patch and it looks good to me.
> But in common_dumpall_restore.c there's parseDumpFormat, which is common between pg_dumpall and pg_restore. As per the discussion in thread [1], I don't think we should create a common API; as discussed in the thread, there is a chance that in the future we might decide that some format is obsolete and desupport it in pg_dumpall, while keeping support in pg_restore for compatibility reasons.

Okay. Thanks for the review. I will make changes as per the discussion in the other thread.

On Tue, 28 Jan 2025 at 11:52, Srinath Reddy <srinath2133@gmail.com> wrote:
>
> make check-world fails; I think we don't need $port and $filename, instead we can use something like 'xxx', so fixed it in the below patch.

In an offline discussion, Andrew already reported this test case. I will fix this in the next version.

>
> Regards,
> Srinath Reddy Sadipiralla,
> EDB: http://www.enterprisedb.com

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
hi. more small issues.

+ count_db++; /* Increment db couter. */
+ dboidprecell = dboid_cell;
+ }
+
typo, "couter" should be "counter".

+
+/*
+ * get_dbname_oid_list_from_mfile
+ *
+ * Open map.dat file and read line by line and then prepare a list of database
+ * names and correspoding db_oid.
+ *
typo, "correspoding" should be "corresponding".

The comments on execute_global_sql_commands don't mention the ``if (outfile)`` branch. We can add a comment saying "if opts->filename is specified, then copy the content of global.dat to opts->filename", or split it into two functions.

+ while((fgets(line, MAXPGPATH, pfile)) != NULL)
+ {
+     Oid db_oid;
+     char db_oid_str[MAXPGPATH + 1];
+     char dbname[MAXPGPATH + 1];
+
+     /* Extract dboid. */
+     sscanf(line, "%u" , &db_oid);
+     sscanf(line, "%s" , db_oid_str);
+
+     /* Now copy dbname. */
+     strcpy(dbname, line + strlen(db_oid_str) + 1);
+
+     /* Remove \n from dbanme. */
+     dbname[strlen(dbname) - 1] = '\0';
+
+     pg_log_info("found dbname as : \"%s\" and db_oid:%u in map.dat file while restoring", dbname, db_oid);
+
+     /* Report error if file has any corrupted data. */
+     if (!OidIsValid(db_oid) || strlen(dbname) == 0)
+         pg_fatal("invalid entry in map.dat file at line : %d", count + 1);
+
+     /*
+      * XXX : before adding dbname into list, we can verify that this db
+      * needs to skipped for restore or not but as of now, we are making
+      * a list of all the databases.
+      */
+     simple_db_oid_list_append(dbname_oid_list, db_oid, dbname);
+     count++;
+ }

db_oid should first be set to 0, and dbname's first character should be set to '\0' (dbname[0] = '\0') before the sscanf calls, so that if sscanf fails, the db_oid and dbname values are not left undefined.
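To illustrate the point about initializing the outputs before sscanf, here is a minimal standalone sketch; parse_map_line is a hypothetical helper written for illustration, not code from the patch. With both outputs zeroed up front, a failed parse can never leave them holding garbage:

```c
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch: parse one "db_oid dbname" line from map.dat.
 * Both outputs are zeroed first, so a failed sscanf cannot leave them
 * undefined, as the review suggests.  Returns 1 on success.
 */
static int
parse_map_line(const char *line, unsigned int *db_oid,
			   char *dbname, size_t dbname_sz)
{
	char		oid_str[64];

	*db_oid = 0;
	dbname[0] = '\0';

	if (sscanf(line, "%u", db_oid) != 1 ||
		sscanf(line, "%63s", oid_str) != 1)
		return 0;				/* not a valid oid prefix */

	if (strlen(line) <= strlen(oid_str) + 1)
		return 0;				/* no database name present */

	/* Everything after "<oid> " is the database name. */
	snprintf(dbname, dbname_sz, "%s", line + strlen(oid_str) + 1);

	/* Strip the trailing newline, if present. */
	dbname[strcspn(dbname, "\n")] = '\0';

	return *db_oid != 0 && dbname[0] != '\0';
}
```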
I think we have to change the pg_dumpall "--help" message, similar to pg_dump's, specifying that pg_dumpall now dumps the cluster to other non-text formats as well.
A similar "--help" message change is needed in pg_restore to specify that pg_restore now supports restoring a whole cluster from an archive created by pg_dumpall.
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 3e022ecdeb..728abe841c 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -667,7 +667,7 @@ main(int argc, char *argv[])
static void
help(void)
{
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script file or to other formats.\n\n"), progname);
index fc248a441e..c4e58c1f3b 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -582,6 +582,8 @@ static void
usage(const char *progname)
{
printf(_("%s restores a PostgreSQL database from an archive created by pg_dump.\n\n"), progname);
+ printf(_("[or]\n"));
+ printf(_("%s restores an entire PostgreSQL cluster from an archive created by pg_dumpall.\n\n"), progname);

Regards,
Srinath Reddy Sadipiralla,
EDB: https://www.enterprisedb.com
I found a bug: while using "./pg_restore pdd -f -", it should copy everything (global SQL commands + remaining dump) to stdout, as per "-f, --file=FILENAME output file name (- for stdout)", but it is copying the global SQL commands to a file literally named "-", and the remaining dump is written to stdout without those global SQL commands. "-" is not an output file; it signifies stdout in terminal commands, so we have to handle this case.
For the same reason, "./pg_restore pdd -g -f -" also creates a file named "-" and writes the globals to that file instead of stdout.
This is the delta patch to handle this case. Please have a look and give some feedback.
@@ -84,7 +84,7 @@ static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
const char *outfile);
-static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static void copy_global_file(const char *outfile, FILE *pfile);
static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list,
- ofile = fopen(out_file_path, PG_BINARY_W);
+ if (strcmp(outfile, "-") == 0){
+ int fn = fileno(stdout);
+ ofile = fdopen(dup(fn), PG_BINARY_W);
+ }
+ else{
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ ofile = fopen(out_file_path, PG_BINARY_W);
+ }
+
if (ofile == NULL)
{
Regards,
Srinath Reddy Sadipiralla,
EDB: https://www.enterprisedb.com
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 42c4fe3ce2..90e6b71a50 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -84,7 +84,7 @@ static int restoreAllDatabases(PGconn *conn, const char *dumpdirpath,
SimpleStringList db_exclude_patterns, RestoreOptions *opts, int numWorkers);
static void execute_global_sql_commands(PGconn *conn, const char *dumpdirpath,
const char *outfile);
-static void copy_global_file_to_out_file(const char *outfile, FILE *pfile);
+static void copy_global_file(const char *outfile, FILE *pfile);
static int filter_dbnames_for_restore(PGconn *conn,
SimpleDatabaseOidList *dbname_oid_list,
SimpleStringList db_exclude_patterns);
@@ -1178,7 +1178,7 @@ execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
*/
if (outfile)
{
- copy_global_file_to_out_file(outfile, pfile);
+ copy_global_file(outfile, pfile);
return;
}
@@ -1207,24 +1207,35 @@ execute_global_sql_commands(PGconn *conn, const char *dumpdirpath, const char *o
}
/*
- * copy_global_file_to_out_file
+ * copy_global_file
*
- * This will copy global.dat file into out file.
+ * This will copy global.dat file into out file, if file is given
+ * else copies to stdout.
+ *
*/
static void
-copy_global_file_to_out_file(const char *outfile, FILE *pfile)
+copy_global_file(const char *outfile, FILE *pfile)
{
char out_file_path[MAXPGPATH];
FILE *ofile;
int c;
- snprintf(out_file_path, MAXPGPATH, "%s", outfile);
- ofile = fopen(out_file_path, PG_BINARY_W);
+ if (strcmp(outfile, "-") == 0)
+ {
+ int fn = fileno(stdout);
+ ofile = fdopen(dup(fn), PG_BINARY_W);
+ }
+ else
+ {
+ snprintf(out_file_path, MAXPGPATH, "%s", outfile);
+ ofile = fopen(out_file_path, PG_BINARY_W);
+ }
+
if (ofile == NULL)
{
fclose(pfile);
- pg_fatal("could not open file: \"%s\"", out_file_path);
+ pg_fatal("could not open file: \"%s\"", outfile);
}
/* Now append global.dat into out file. */
hi. just a quick response for v15.

the pg_restore man page describes option --list as "List the table of contents of the archive", but

$BIN10/pg_restore --format=directory --list --file=1.sql dir10

also outputs the contents of "global.dat"; we should not output it. In restoreAllDatabases, we can do the following change:

```
/* Open global.dat file and execute/append all the global sql commands. */
if (!opts->tocSummary)
    process_global_sql_commands(conn, dumpdirpath, opts->filename);
```

what should happen with

$BIN10/pg_restore --format=directory --globals-only --verbose dir10 --list

Should we error out saying "--globals-only" and "--list" are conflicting options? If so, then in the main function we can do the following change:

```
if (globals_only)
{
    process_global_sql_commands(conn, inputFileSpec, opts->filename);
    if (conn)
        PQfinish(conn);
    pg_log_info("databases restoring is skipped as -g/--globals-only option is specified");
}
```

in restoreAllDatabases, if num_db_restore == 0, we will still call process_global_sql_commands. I am not sure this is what we expected.
>
> hi.
> review based on v16.
>
> because of
> https://postgr.es/m/CAFC+b6pWQiSL+3rvLxN9vhC8aONp4OV9c6u+BVD6kmWmDbd1WQ@mail.gmail.com
>
> in copy_global_file_to_out_file, now it is:
> if (strcmp(outfile, "-") == 0)
> OPF = stdout;
> I am confused, why "-" means stdout.
> ``touch ./- `` command works fine.
> i think dash is not special character, you may see
> https://stackoverflow.com/a/40650391/15603477
-f filename
--file=filename

Specify output file for generated script, or for the listing when used with -l. Use - for stdout.
>
>
> + /* Create a subdirectory with 'databases' name under main directory. */
> + if (mkdir(db_subdir, 0755) != 0)
> + pg_log_error("could not create subdirectory \"%s\": %m", db_subdir);
> here we should use pg_fatal?
>
>
> pg_log_info("executing %s", sqlstatement.data);
> change to
> pg_log_info("executing query: %s", sqlstatement.data);
> message would be more similar to the next pg_log_error(...) message.
>
>
> + /*
> + * User is suggested to use single database dump for --list option.
> + */
> + if (opts->tocSummary)
> + pg_fatal("option -l/--list cannot be used when using dump of pg_dumpall");
> maybe change to
> + pg_fatal("option -l/--list cannot be used when restoring multiple databases");
>
> $BIN10/pg_restore --format=directory --list dir10_x
> if the directory only has one database, then we can actually print out
> the tocSummary.
> if the directory has more than one database then pg_fatal.
> To tolerate this corner case (only one database) means that pg_restore
> --list requires a DB connection,
> but I am not sure that is fine.
> anyway, the attached patch allows this corner case.
>
>
> PrintTOCSummary can only print out summary for a single database.
> so we don't need to change PrintTOCSummary.
>
>
> + /*
> + * To restore multiple databases, -C (create database) option should
> be specified
> + * or all databases should be created before pg_restore.
> + */
> + if (opts->createDB != 1)
> + pg_log_info("restoring dump of pg_dumpall without -C option, there
> might be multiple databases in directory.");
>
> we can change it to
> + if (opts->createDB != 1 && num_db_restore > 0)
> + pg_log_info("restoring multiple databases without -C option.");
>
>
> Bug.
> when pg_restore --globals-only can be applied when we are restoring a
> single database (can be an output of pg_dump).
>
> There are some tests per https://commitfest.postgresql.org/52/5495, I
> will check it later.
> The attached patch is the change for the above reviews.
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
I think during restore we should not force the user to use -C in cases like
./pg_restore pdd -g -f -
./pg_restore pdd -a -f -
./pg_restore pdd -s -f -
because it's not good to use -C to create the database every time when we are using these options individually.
The latest patch throws the following error for all the above cases:
pg_restore: hint: Try "pg_restore --help" for more information.
pg_restore: hint: If db is already created and dump has single db dump, then use particular dump file.
Thanks and Regards
hi.

<refnamediv>
<refname>pg_restore</refname>
<refpurpose>
restore a <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application>
or restore multiple <productname>PostgreSQL</productname> database from an
archive directory created by <application>pg_dumpall</application>
</refpurpose>
</refnamediv>

I think it's way too verbose. We can change it to:

<refpurpose>
restore <productname>PostgreSQL</productname> database from an
archive file created by <application>pg_dump</application> or
<application>pg_dumpall</application>
</refpurpose>

<para>
<application>pg_restore</application> is a utility for restoring a
<productname>PostgreSQL</productname> database from an archive
created by <xref linkend="app-pgdump"/> in one of the non-plain-text
formats.

we can change it to

<para>
<application>pg_restore</application> is a utility for restoring
<productname>PostgreSQL</productname> databases from an archive
created by <xref linkend="app-pgdump"/> or <xref linkend="app-pgdumpall"/>
in one of the non-plain-text formats.

Similarly, the first 3 sentences of pg_dumpall's description section need to change.

In the pg_restore.sgml <option>--create</option> section, maybe we can explicitly mention that when restoring multiple databases, <option>--create</option> is required, like: "This option is required when restoring multiple databases."

restoreAllDatabases
+ if (!conn)
+ pg_log_info("there is no database connection so consider pattern as simple name for --exclude-database");
filter_dbnames_for_restore
+ if (!conn)
+ pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

These two log messages send out the same information. Maybe we can remove the first one, and change the second to

if (!conn && db_exclude_patterns.head != NULL)
    pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

As mentioned in the previous thread, there is no need to change PrintTOCSummary.

Another minor issue about comments; I guess we can tolerate it.

$BIN10/pg_restore --format=tar --create --file=1.sql --exclude-database=src10 --verbose tar10 > dir_format 2>&1

1.sql will copy tar10/global.dat as is, but we already excluded src10, so 1.sql will still have comments like

--
-- Database "src10" dump
--

$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_dumpall --format=custom --file=x2.dump

Currently x1.dump/global.dat differs from x2.dump/global.dat. If we dump multiple databases using pg_dumpall we have
"
--
-- Databases
--
--
-- Database "template1" dump
--
--
-- Database "src10" dump
--
--
-- Database "x" dump
--
"
maybe these are not needed, since we already have the map.dat file.

I am not sure if the following is as expected or not.

$BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only
$BIN10/pg_restore --create --file=3.sql --globals-only x1.dump --verbose
$BIN10/pg_restore --create --file=3.sql x1.dump --verbose

The first pg_restore command will copy x1.dump/global.dat as is to 3.sql; the second pg_restore will not copy anything to 3.sql. But the second command implies copying the global dumps to 3.sql?
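The NAME-only fallback discussed in the thread (treating --exclude-database PATTERN as a literal name when there is no server connection) could look roughly like this. database_is_excluded is a hypothetical helper for illustration, not code from the patch:

```c
#include <string.h>

/*
 * Hypothetical sketch: without a server connection, pg_restore cannot
 * expand PATTERN via processSQLNamePattern, so fall back to comparing
 * each --exclude-database argument as a literal database name.
 */
static int
database_is_excluded(const char *dbname, const char **patterns,
					 int npatterns, int have_conn)
{
	int			i;

	for (i = 0; i < npatterns; i++)
	{
		if (have_conn)
		{
			/*
			 * With a connection, the real code would expand the pattern
			 * server-side (omitted in this sketch).
			 */
		}
		else if (strcmp(dbname, patterns[i]) == 0)
			return 1;			/* literal match: exclude this database */
	}
	return 0;
}
```

The design trade-off is the one raised above: without a connection, a wildcard such as `db_*` simply never matches, which is why restricting the option to a plain NAME (or documenting the fallback) was proposed.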
On Tue, Feb 18, 2025 at 2:10 PM jian he <jian.universality@gmail.com> wrote: > > hi. hi. more cosmetic minor issues. +static int +get_dbname_oid_list_from_mfile(const char *dumpdirpath, SimpleDatabaseOidList *dbname_oid_list) ... + /* + * XXX : before adding dbname into list, we can verify that this db + * needs to skipped for restore or not but as of now, we are making + * a list of all the databases. + */ i think the above comment in get_dbname_oid_list_from_mfile is not necessary. we already have comments in filter_dbnames_for_restore. in get_dbname_oid_list_from_mfile: ``` pfile = fopen(map_file_path, PG_BINARY_R); if (pfile == NULL) pg_fatal("could not open map.dat file: \"%s\"", map_file_path); ``` file does not exist, we use pg_fatal, so if the directory does not exist, we should also use pg_fatal. so if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat")) { pg_log_info("databases restoring is skipped as map.dat file is not present in \"%s\"", dumpdirpath); return 0; } can be if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat")) pg_fatal("map.dat file: \"%s\"/map.dat does not exists", dumpdirpath); + /* Report error if file has any corrupted data. */ + if (!OidIsValid(db_oid) || strlen(dbname) == 0) + pg_fatal("invalid entry in map.dat file at line : %d", count + 1); i think the comments should be + /* Report error and exit if the file has any corrupted data. */ +/* + * filter_dbnames_for_restore + * + * This will remove names from all dblist those can + * be constructed from database_exclude_pattern list. + * + * returns number of dbnames those will be restored. + */ +static int +filter_dbnames_for_restore(PGconn *conn, + SimpleDatabaseOidList *dbname_oid_list, there is no "database_exclude_pattern" list, so the above comments are slightly wrong. 
+/*
+ * ReadOneStatement
+ *
+ * This will start reading from passed file pointer using fgetc and read till
+ * semicolon(sql statement terminator for global.sql file)
+ *
+ * EOF is returned if end-of-file input is seen; time to shut down.
+ */
here, "global.sql file" should change to "global.dat".

/* sync the resulting file, errors are not fatal */
- if (dosync)
+ if (dosync && (archDumpFormat == archNull))
(void) fsync_fname(filename, false);

Does this mean the pg_dumpall --no-sync option only works for plain format? If so, we need to update the pg_dumpall --no-sync documentation.
On Tue, 18 Feb 2025 at 10:00, Srinath Reddy <srinath2133@gmail.com> wrote:
>
> Hi,
> i think during restore we should not force user to use -C during cases like
> ./pg_restore pdd -g -f -
> ./pg_restore pdd -a -f -
> ./pg_restore pdd -s -f -
> because its not good to use -C to create database every time when we are using these options individually.
> latest patch throws following error for all the above cases

-g => we can allow this case without the -C option.
-a and -s => the user should use these options with a single database (I mean the user should use a particular dump file to restore, not the full dump directory of all the databases).

As pg_dumpall dumps all the databases in create mode, we should either use the --create option in our code or we should give an error. I think an error is a good option if the user is using a dump of pg_dumpall. If the user wants to use all the options, then the user should use a single database dump path. If we allow users without the --create option, then pg_restore will create all the tables under a single database even if those tables are in different databases.

I will fix the -g option (1st test case) in the next patch.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
hi.

Currently, pg_restore says:

--exit-on-error
Exit if an error is encountered while sending SQL commands to the database. The default is to continue and to display a count of errors at the end of the restoration.

Do we need to apply this when executing the global commands (create role, create tablespace) during restore? If not, then we need to put some words in the pg_restore --exit-on-error option saying that the option is ignored while restoring global objects.

IMHO, in pg_restore.sgml, we need words explicitly saying that when restoring multiple databases, all the specified options will apply to each individual database.

I tested the following options for restoring multiple databases. The results look good to me.
--index=index
--table=table
--schema-only
--transaction-size
--no-comments
some part of (--filter=filename)
--exclude-schema=schema

Attached is a minor cosmetic change.
On Tue, 18 Feb 2025 at 10:00, Srinath Reddy <srinath2133@gmail.com> wrote: > > Hi, > i think during restore we should not force user to use -C during cases like > ./pg_restore pdd -g -f - > ./pg_restore pdd -a -f - > ./pg_restore pdd -s -f - > because its not good to use -C to create database every time when we are using these options individually. > latest patch throws following error for all the above cases Fixed. (./pg_restore pdd -g -f -) Thanks Jian and Srinath for the review and testing. On Tue, 18 Feb 2025 at 11:41, jian he <jian.universality@gmail.com> wrote: > > hi. > > <refnamediv> > <refname>pg_restore</refname> > <refpurpose> > restore a <productname>PostgreSQL</productname> database from an > archive file created by <application>pg_dump</application> > or restore multiple <productname>PostgreSQL</productname> database from an > archive directory created by <application>pg_dumpall</application> > </refpurpose> > </refnamediv> > > i think it's way too verbose. we can change it to: > <refpurpose> > restore <productname>PostgreSQL</productname> database from an > archive file created by <application>pg_dump</application> or > <application>pg_dumpall</application> > </refpurpose> Fixed. > > > <para> > <application>pg_restore</application> is a utility for restoring a > <productname>PostgreSQL</productname> database from an archive > created by <xref linkend="app-pgdump"/> in one of the non-plain-text > formats. > we can change it to > <para> > <application>pg_restore</application> is a utility for restoring > <productname>PostgreSQL</productname> databases from an archive > created by <xref linkend="app-pgdump"/> or <xref > linkend="app-pgdumpall"/> in one of the non-plain-text > formats. Fixed. > > > similarly, pg_dumpall first 3 sentences in the description section > needs to change. > I think we can keep them for pg_dumpall. 
> > in pg_restore.sgml <option>--create</option section, > maybe we can explicitly mention that restoring multiple databases, > <option>--create</option> is required. > like: "This option is required when restoring multiple databases." Fixed. > > > restoreAllDatabases > + if (!conn) > + pg_log_info("there is no database connection so consider pattern as > simple name for --exclude-database"); > filter_dbnames_for_restore > + if (!conn) > + pg_log_info("considering PATTERN as NAME for --exclude-database > option as no db connection while doing pg_restore."); > > these two log messages sent out the same information. > maybe we can remove the first one, and change the second to > if (!conn && db_exclude_patterns.head != NULL) > pg_log_info("considering PATTERN as NAME for > --exclude-database option as no db connection while doing > pg_restore."); Fixed. > > > as mentioned in the previous thread, there is no need to change PrintTOCSummary. Yes, I removed it. > > > another minor issue about comments. > I guess we can tolerate this minor issue. > $BIN10/pg_restore --format=tar --create --file=1.sql > --exclude-database=src10 --verbose tar10 > dir_format 2>&1 > 1.sql file will copy tar10/global.dat as is. but we already excluded > src10. but 1.sql will still have comments as > -- > -- Database "src10" dump > -- Fixed. > > $BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only > $BIN10/pg_dumpall --format=custom --file=x2.dump > > Currently x1.dump/global.dat is differ from x2.dump/global.dat > if we dump multiple databases using pg_dumpall we have > " > -- > -- Databases > -- > -- > -- Database "template1" dump > -- > -- > -- Database "src10" dump > -- > -- > -- Database "x" dump > -- > " > maybe there are not need, since we already have map.dat file Okay. Fixed. > > > I am not sure if the following is as expected or not. 
> $BIN10/pg_dumpall --format=custom --file=x1.dump --globals-only > $BIN10/pg_restore --create --file=3.sql --globals-only x1.dump --verbose > $BIN10/pg_restore --create --file=3.sql x1.dump --verbose > > the first pg_restore command will copy x1.dump/global.dat as is to 3.sql, > the second pg_restore will not copy anything to 3.sql. > but the second command implies copying global dumps to 3.sql? We should copy global.dat. I fixed this in the v18 patch. On Tue, 18 Feb 2025 at 14:02, jian he <jian.universality@gmail.com> wrote: > > On Tue, Feb 18, 2025 at 2:10 PM jian he <jian.universality@gmail.com> wrote: > > > > hi. > > hi. more cosmetic minor issues. > > +static int > +get_dbname_oid_list_from_mfile(const char *dumpdirpath, > SimpleDatabaseOidList *dbname_oid_list) > ... > + /* > + * XXX : before adding dbname into list, we can verify that this db > + * needs to skipped for restore or not but as of now, we are making > + * a list of all the databases. > + */ > i think the above comment in get_dbname_oid_list_from_mfile is not necessary. > we already have comments in filter_dbnames_for_restore. As of now, I am keeping this comment as this will be helpful while implementing parallel pg_restore. > > in get_dbname_oid_list_from_mfile: > ``` > pfile = fopen(map_file_path, PG_BINARY_R); > if (pfile == NULL) > pg_fatal("could not open map.dat file: \"%s\"", map_file_path); > ``` > file does not exist, we use pg_fatal, so if the directory does not > exist, we should also use pg_fatal. > so > if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat")) > { > pg_log_info("databases restoring is skipped as map.dat file is > not present in \"%s\"", dumpdirpath); > return 0; > } > can be > if (!IsFileExistsInDirectory(pg_strdup(dumpdirpath), "map.dat")) > pg_fatal("map.dat file: \"%s\"/map.dat does not exists", dumpdirpath); No, we can't add FATAL here as in case of global-only dump, we will not have a map.dat file. > > > + /* Report error if file has any corrupted data. 
*/ > + if (!OidIsValid(db_oid) || strlen(dbname) == 0) > + pg_fatal("invalid entry in map.dat file at line : %d", count + 1); > i think the comments should be > + /* Report error and exit if the file has any corrupted data. */ Fixed. > > > +/* > + * filter_dbnames_for_restore > + * > + * This will remove names from all dblist those can > + * be constructed from database_exclude_pattern list. > + * > + * returns number of dbnames those will be restored. > + */ > +static int > +filter_dbnames_for_restore(PGconn *conn, > + SimpleDatabaseOidList *dbname_oid_list, > there is no "database_exclude_pattern" list, so the above comments are > slightly wrong. Fixed. > > > +/* > + * ReadOneStatement > + * > + * This will start reading from passed file pointer using fgetc and read till > + * semicolon(sql statement terminator for global.sql file) > + * > + * EOF is returned if end-of-file input is seen; time to shut down. > + */ > here, "global sql" should change to "gloal.dat". Fixed. > > > /* sync the resulting file, errors are not fatal */ > - if (dosync) > + if (dosync && (archDumpFormat == archNull)) > (void) fsync_fname(filename, false); > does this mean pg_dumpall --no-sync option only works for plain format. > if so, we need to update the pg_dumpall --no-sync section. As of now, we are using this option with plain format as we dump server commands in different db file. We can test this more. On Wed, 19 Feb 2025 at 17:08, jian he <jian.universality@gmail.com> wrote: > > hi. > > Currently, pg_retore says > --exit-on-error > Exit if an error is encountered while sending SQL commands to the > database. The default is to continue and to display a count of errors > at the end of the restoration. > Do we need to apply this to restore executing global commands (create > role, create tablespace)? > If not then we need to put some words in pg_restoe --exit-on-error > option saying that while restoring global objects --exit-on-error > option is ignored. 
I think this is the same for all pg_restore commands. Still if we want to add some docs, we can put. > > > > IMHO, in pg_restore.sgml, we need words explicitly saying that > when restoring multiple databases, all the specified options will > apply to each individual database. We can skip this extra info. I will try in the next version if we can add something in doc. > > I tested the following options for restoring multiple databases. The > results look good to me. > --index=index > --table=table > --schema-only > --transaction-size > --no-comments > some part of (--filter=filename) > --exclude-schema=schema Thank you for detailed testing. > > attach is a minor cosmetic change. Okay. Here, I am attaching an updated patch for review and testing. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Hello,

I think the business with an ever-growing on_exit list needs a different solution than a gigantic array of entries. Maybe it would make sense to restructure that code so that there's a single on_exit item, but there exists a list of per-database entries to clean up which are all done in one call of the function. Then you don't need to change the hardcoded MAX_ON_EXIT_NICELY array size there.

I think it would be better to have a preparatory 0001 patch that just moves the code to the new files, without touching anything else, and then the new feature is introduced as a separate 0002 commit.

You still have a bunch of XXX items here and there which look to me like they need to be handled before this patch can be considered final, plus the TODOs in the commit message. Please pgindent.

Thanks

--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"Porque francamente, si para saber manejarse a uno mismo hubiera que rendir examen... ¿Quién es el machito que tendría carnet?" (Mafalda)
Thanks Álvaro for feedback. On Thu, 20 Feb 2025 at 02:39, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: > > Hello, > > I think the business with an evergrowing on_exit list needs a different > solution than a gigantic array of entries. Maybe it would make sense to > restructure that code so that there's a single on_exit item, but there > exists a list of per-database entries to clean up which are all done in > one call of the function. Then you don't need to change the hardcoded > MAX_ON_EXIT_NICELY array size there. > In the latest patch, I added one new function to clean index(on_exit_nicely_index) with each database restore. > I think it would be better to have a preparatory 0001 patch that just > moves the code to the new files, without touching anything else, and > then the new feature is introduced as a separate 0002 commit. Fixed. > > You still have a bunch of XXX items here and there which look to me like > they need to be handled before this patch can be considered final, plus Fixed. > the TODOs in the commit message. Please pgindent. I am facing some errors in pgindent. I will run pgindent in the next version. Here, I am attaching updated patches for review and testing. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
hi. about 0001

/*
 * connectDatabase
 *
 * Make a database connection with the given parameters. An
 * interactive password prompt is automatically issued if required.
 *
 * If fail_on_error is false, we return NULL without printing any message
 * on failure, but preserve any prompted password for the next try.
 *
 * On success, the global variable 'connstr' is set to a connection string
 * containing the options used.
 */
PGconn *
connectDatabase(const char *dbname, const char *connection_string,
                const char *pghost, const char *pgport, const char *pguser,
                trivalue prompt_password, bool fail_on_error,
                const char *progname, const char **connstr, int *server_version)

do the comments need to change? since there is no global variable 'connstr' in common_dumpall_restore.c. maybe we need some words to explain server_version (i don't have a huge opinion though).

/*-------------------------------------------------------------------------
 *
 * common_dumpall_restore.c
 *
 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * This is a common file for pg_dumpall and pg_restore.
 * src/bin/pg_dump/common_dumpall_restore.c
 *
 *-------------------------------------------------------------------------
 */

may change to

/*-------------------------------------------------------------------------
 *
 * common_dumpall_restore.c
 *      This is a common file for pg_dumpall and pg_restore.
 *
 * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *      src/bin/pg_dump/common_dumpall_restore.c
 *
 *-------------------------------------------------------------------------
 */

so the style aligns with most other files. (we can apply the same logic to src/bin/pg_dump/common_dumpall_restore.h)

In src/bin/pg_dump/pg_dumpall.c, #include "common_dumpall_restore.h" implies including "pg_backup.h", so in src/bin/pg_dump/pg_dumpall.c we don't need to include "pg_backup.h".

Attached are minor cosmetic changes for v19.
Attachments
>
> hi.
> about 0001
>
> /*
> * connectDatabase
> *
> * Make a database connection with the given parameters. An
> * interactive password prompt is automatically issued if required.
> *
> * If fail_on_error is false, we return NULL without printing any message
> * on failure, but preserve any prompted password for the next try.
> *
> * On success, the global variable 'connstr' is set to a connection string
> * containing the options used.
> */
> PGconn *
> connectDatabase(const char *dbname, const char *connection_string,
> const char *pghost, const char *pgport, const char *pguser,
> trivalue prompt_password, bool fail_on_error, const
> char *progname,
> const char **connstr, int *server_version)
> do the comments need to change? since no
> global variable 'connstr' in common_dumpall_restore.c
> maybe we need some words to explain server_version, (i don't have a
> huge opinion though).
>
>
> /*-------------------------------------------------------------------------
> *
> * common_dumpall_restore.c
> *
> * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
> * Portions Copyright (c) 1994, Regents of the University of California
> *
> * This is a common file for pg_dumpall and pg_restore.
> * src/bin/pg_dump/common_dumpall_restore.c
> *
> *-------------------------------------------------------------------------
> */
>
> may change to
>
> /*-------------------------------------------------------------------------
> *
> * common_dumpall_restore.c
> * This is a common file for pg_dumpall and pg_restore.
> *
> * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
> * Portions Copyright (c) 1994, Regents of the University of California
> *
> * IDENTIFICATION
> * src/bin/pg_dump/common_dumpall_restore.c
> *
> *-------------------------------------------------------------------------
> */
> so the style aligns with most other files.
> (we can apply the same logic to src/bin/pg_dump/common_dumpall_restore.h)
>
>
> in src/bin/pg_dump/pg_dumpall.c
> #include "common_dumpall_restore.h"
> imply include "pg_backup.h".
> so in src/bin/pg_dump/pg_dumpall.c, we don't need include "pg_backup.h"
>
>
> attached are minor cosmetic changes for v19.
As per this comment, we can't return AH->n_errors, as this might already
be freed, so we should copy it before CloseArchive:

-	/* return number of errors */
- if (AH->n_errors)
- n_errors = AH->n_errors;
-
/* AH may be freed in CloseArchive? */
CloseArchive(AH);
Attachments
hi.

v20-0001: in src/bin/pg_dump/pg_dumpall.c, we have:

static const char *connstr = "";

	case 'd':
		connstr = pg_strdup(optarg);
		break;

i am not sure you can declare connstr as "const", since its value can
be changed.

``#include "pg_backup.h"`` can be removed from src/bin/pg_dump/pg_dumpall.c.

Other than that, v20-0001 looks good to me.

v20-0002:

	const char *formatName = "p";

formatName should not be declared as "const", since its value can be changed.

+	/* Create a subdirectory with 'databases' name under main directory. */
+	if (mkdir(db_subdir, 0755) != 0)
+		pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);

can change to

	if (mkdir(db_subdir, pg_dir_create_mode) != 0)
		pg_fatal("could not create subdirectory \"%s\": %m", db_subdir);

then src/bin/pg_dump/pg_dumpall.c needs to add ``#include "common/file_perm.h"``.

similarly

+	else if (mkdir(dirname, 0700) < 0)
+		pg_fatal("could not create directory \"%s\": %m", dirname);

can change to

	else if (mkdir(dirname, pg_dir_create_mode) != 0)
		pg_fatal("could not create directory \"%s\": %m", dirname);

+	if (!conn)
+		pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");

"db connection" should maybe be "database connection" or just "connection".

+	/*
+	 * We need to reset on_exit_nicely_index with each database so that we can
+	 * restore multiple databases by archive. See EXIT_NICELY macro for more
+	 * details.
+	 */
+	if (dboid_cell != NULL)
+		reset_exit_nicely_list(n_errors ? 1 : 0);

i don't fully understand this part. anyway, by EXIT_NICELY, do you mean
MAX_ON_EXIT_NICELY?

just found out: parseArchiveFormat looks similar to parseDumpFormat.

for the options in pg_restore: the --list option is not applicable to
multiple databases, therefore the --use-list=list-file option is also
not applicable; we should mention that in the docs.

global.dat comments should not mention "cluster"; "global objects"
would be more appropriate.
global.dat comments should not mention "--\n-- Database \"%s\" dump\n--\n\n".
the attached minor patch fixes this issue.
Attachments
hi.

some documentation issues:

doc/src/sgml/ref/pg_dumpall.sgml

    <variablelist>
     <varlistentry>
      <term><literal>d</literal></term>
      <term><literal>directory</literal></term>
      <listitem>
       <para>
        Output a directory-format archive suitable for input into pg_restore.
        Under dboid subdirectory, this will create a directory with one file
        for each table and large object being dumped, plus a so-called Table
        of Contents file describing the dumped objects in a machine-readable
        format that pg_restore can read. A directory format archive can be
        manipulated with standard Unix tools; for example, files in an
        uncompressed archive can be compressed with the gzip, lz4, or zstd
        tools. This format is compressed by default using gzip and also
        supports parallel dumps.
       </para>
      </listitem>
     </varlistentry>

with the v20 implementation, is this part wrong?

"""
for example, files in an uncompressed archive can be compressed with the
gzip, lz4, or zstd tools. This format is compressed by default using gzip
and also supports parallel dumps.
"""

I think, currently, by default the pg_dumpall directory format will use
gzip compression level -1 to do the compression, and pg_dumpall with
format=directory does not support parallel dumps.

-------------------
by default, this is plain format. If non-plain mode is passed, then
global.dat (global sql commands) and map.dat (dboid and dbnames list of
all the databases) files will be created. Apart from these files, one
subdirectory with databases name will be created. Under this databases
subdirectory, there will be files with dboid name for each database and
if --format is directory, then toc.dat and other dump files will be
under dboid subdirectory.
-------------------

I think the above message could be changed to the below; the message is
clearer, IMHO:

By default, this uses plain format. If a non-plain mode is specified,
two files will be created: **global.dat** (containing SQL commands for
global objects) and **map.dat** (listing database OIDs and names for all
databases). Additionally, a subdirectory named after each database OID
will be created. If the --format option is set to **directory**, then
**toc.dat** and other dump files will be stored within the corresponding
database OID subdirectory.

---------------------
doc/src/sgml/ref/pg_restore.sgml

<term><option>--exclude-database=<replaceable class="parameter">pattern</replaceable></option></term>

we can add:

When emitting a script, this option does not support wild-card matching;
the excluded database must exactly match the literal <replaceable
class="parameter">pattern</replaceable> string.
A database name containing a newline breaks things for this patch:

CREATE DATABASE "foo
bar";

$ pg_dumpall -Fc --file test
shell command argument contains a newline or carriage return: " dbname='foo
bar'"

--
Álvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
>
> A database name containing a newline breaks things for this patch:
>
> CREATE DATABASE "foo
> bar";
>
>
> $ pg_dumpall -Fc --file test
> shell command argument contains a newline or carriage return: " dbname='foo
> bar'"
>
> --
> Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
Disclaimer: I didn't review these patches fully.

On 2025-Mar-05, Mahendra Singh Thalor wrote:

> On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> > A database name containing a newline breaks things for this patch:
> >
> > CREATE DATABASE "foo
> > bar";
>
> I also reported this issue on 29-01-2025. This breaks even without this
> patch also.

Okay, we should probably fix that, but I think the new map.dat file your
patch adds is going to make the problem worse, because it doesn't look
like you handled that case in any particular way that would make it not
fail.  I think it would be good to avoid digging ourselves even deeper
into that hole.  More generally, the pg_upgrade tests contain some code
to try database names with almost all possible ascii characters (see
generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
ensure that this new functionality also works correctly for that --
perhaps add an equivalent test to the pg_dumpall test suite.

Looking at 0001:

I'm not sure that the whole common_dumpall_restore.c thing is properly
structured.  First, the file name shouldn't presume which programs
exactly are going to use the functionality there.  Second, it looks like
there's another PQconnectdbParams() in pg_backup_db.c and I don't
understand what the reason is for that one to be separate.  In my mind,
there should be a file maybe called connection.c or connectdb.c or
whatever that's in charge of establishing connections for all the
src/bin/pg_dump programs, for cleanliness' sake.  (This is probably also
the place where to put an on_exit callback that cleans up any leftover
connections.)

Looking at 0002, I see it mentions looking at the EXIT_NICELY macro for
documentation.  No such macro exists.  But also I think the addition
(and use) of reset_exit_nicely_list() is not a good idea.  It seems to
assume that the only entries in that list are ones that can be cleared
and reinstated whenever.  This makes too much of an assumption about how
the program works.  It may work today, but it'll get in the way of any
other patch that wants to set up some different on-exit clean up.  In
other words, we shouldn't reset the on_exit list at all.  Also, this is
just a weird addition:

#define exit_nicely(code) exit(code)

You added "A" as an option to the getopt_long() call in pg_restore, but
no handling for it is added.

I think the --globals-only option to pg_restore should be a separate
commit.

--
Álvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
Thanks Alvaro for feedback and review.

On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> Disclaimer: I didn't review these patches fully.
>
> On 2025-Mar-05, Mahendra Singh Thalor wrote:
>
> > On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> >
> > > A database name containing a newline breaks things for this patch:
> > >
> > > CREATE DATABASE "foo
> > > bar";
>
> > I also reported this issue on 29-01-2025. This breaks even without this
> > patch also.
>
> Okay, we should probably fix that, but I think the new map.dat file your
> patch adds is going to make the problem worse, because it doesn't look
> like you handled that case in any particular way that would make it not
> fail.  I think it would be good to avoid digging us up even deeper in
> that hole.  More generally, the pg_upgrade tests contain some code to
> try database names with almost all possible ascii characters (see
> generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
> ensure that this new functionality also works correctly for that --
> perhaps add an equivalent test to the pg_dumpall test suite.

In the attached patch, I tried to solve the problem of the map.dat
file. I will do more analysis based on dbnames in the 002_pg_upgrade.pl
file.

> Looking at 0001:
>
> I'm not sure that the whole common_dumpall_restore.c thing is properly
> structured.  First, the file name shouldn't presume which programs
> exactly are going to use the funcionality there.  Second, it looks like
> there's another PQconnectdbParams() in pg_backup_db.c and I don't
> understand what the reason is for that one to be separate.  In my mind,
> there should be a file maybe called connection.c or connectdb.c or
> whatever that's in charge of establishing connection for all the
> src/bin/pg_dump programs, for cleanliness sake.  (This is probably also
> the place where to put an on_exit callback that cleans up any leftover
> connections.)

Okay. I will do these changes.

> Looking at 0002 I see it mentions looking at the EXIT_NICELY macro for
> documentation.  No such macro exists.  But also I think the addition
> (and use) of reset_exit_nicely_list() is not a good idea.  It seems to
> assume that the only entries in that list are ones that can be cleared
> and reinstated whenever.  This makes too much of an assumption about how
> the program works.  It may work today, but it'll get in the way of any
> other patch that wants to set up some different on-exit clean up.  In
> other words, we shouldn't reset the on_exit list at all.  Also, this is
> just a weird addition:

I will do more study for this case and will update here.

> #define exit_nicely(code) exit(code)

Okay. I will fix this.

> You added "A" as an option to the getopt_long() call in pg_restore, but
> no handling for it is added.

Fixed.

> I think the --globals-only option to pg_restore should be a separate
> commit.

Okay.

> --
> Álvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/

Here, I am attaching updated patches for review and testing.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments
On Thu, Mar 6, 2025 at 12:49 AM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
>
> Thanks Alvaro for feedback and review.
>
> On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> >
> > Disclaimer: I didn't review these patches fully.
> >
> > On 2025-Mar-05, Mahendra Singh Thalor wrote:
> >
> > > On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> > >
> > > > A database name containing a newline breaks things for this patch:
> > > >
> > > > CREATE DATABASE "foo
> > > > bar";
> >
> > > I also reported this issue on 29-01-2025. This breaks even without this
> > > patch also.
> >
> > Okay, we should probably fix that, but I think the new map.dat file your
> > patch adds is going to make the problem worse, because it doesn't look
> > like you handled that case in any particular way that would make it not
> > fail.  I think it would be good to avoid digging us up even deeper in
> > that hole.  More generally, the pg_upgrade tests contain some code to
> > try database names with almost all possible ascii characters (see
> > generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
> > ensure that this new functionality also works correctly for that --
> > perhaps add an equivalent test to the pg_dumpall test suite.
>
> In the attached patch, I tried to solve the problem of the map.dat
> file. I will do more analysis based on dbnames in 002_pg_upgrade.pl
> file.
>

hi.

/*
 * Append the given string to the shell command being built in the buffer,
 * with shell-style quoting as needed to create exactly one argument.
 *
 * Forbid LF or CR characters, which have scant practical use beyond designing
 * security breaches.  The Windows command shell is unusable as a conduit for
 * arguments containing LF or CR characters.  A future major release should
 * reject those characters in CREATE ROLE and CREATE DATABASE, because use
 * there eventually leads to errors here.
 *
 * appendShellString() simply prints an error and dies if LF or CR appears.
 * appendShellStringNoError() omits those characters from the result, and
 * returns false if there were any.
 */
void
appendShellString(PQExpBuffer buf, const char *str)

per the above comments, we need to disallow LF/CR in database names and
role names when issuing shell commands.

the role name LF/CR issue is already handled in
src/bin/pg_dump/pg_dumpall.c, in the while (getopt_long) code:

	case 3:
		use_role = pg_strdup(optarg);
		appendPQExpBufferStr(pgdumpopts, " --role ");
		appendShellString(pgdumpopts, use_role);

we can also fail earlier for database names in dumpDatabases, right
after executeQuery. Please check the attached, which is based on *v20*.

in v21, src/bin/pg_dump/pg_dumpall.c:
+#include "common_dumpall_restore.h"
appears in both v21-0001 and v21-0002, so it is being included twice.
Attachments
Thanks Alvaro and Jian for the review and feedback.

On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> Disclaimer: I didn't review these patches fully.
>
> On 2025-Mar-05, Mahendra Singh Thalor wrote:
>
> > On Wed, 5 Mar 2025 at 01:02, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> >
> > > A database name containing a newline breaks things for this patch:
> > >
> > > CREATE DATABASE "foo
> > > bar";
>
> > I also reported this issue on 29-01-2025. This breaks even without this
> > patch also.
>
> Okay, we should probably fix that, but I think the new map.dat file your
> patch adds is going to make the problem worse, because it doesn't look
> like you handled that case in any particular way that would make it not
> fail.  I think it would be good to avoid digging us up even deeper in
> that hole.  More generally, the pg_upgrade tests contain some code to
> try database names with almost all possible ascii characters (see
> generate_db in pg_upgrade/t/002_pg_upgrade.pl); it would be good to
> ensure that this new functionality also works correctly for that --
> perhaps add an equivalent test to the pg_dumpall test suite.

As Jian also pointed out, we should not allow \n\r in dbnames. I am
keeping dbnames as single-line names only. I am doing testing using the
pg_upgrade/t/002_pg_upgrade.pl file to check various dbnames.

> Looking at 0001:
>
> I'm not sure that the whole common_dumpall_restore.c thing is properly
> structured.  First, the file name shouldn't presume which programs
> exactly are going to use the funcionality there.  Second, it looks like
> there's another PQconnectdbParams() in pg_backup_db.c and I don't
> understand what the reason is for that one to be separate.  In my mind,
> there should be a file maybe called connection.c or connectdb.c or
> whatever that's in charge of establishing connection for all the
> src/bin/pg_dump programs, for cleanliness sake.  (This is probably also
> the place where to put an on_exit callback that cleans up any leftover
> connections.)

I did some more refactoring and made a connectdb.c file.

> Looking at 0002 I see it mentions looking at the EXIT_NICELY macro for
> documentation.  No such macro exists.  But also I think the addition
> (and use) of reset_exit_nicely_list() is not a good idea.  It seems to
> assume that the only entries in that list are ones that can be cleared
> and reinstated whenever.  This makes too much of an assumption about how
> the program works.  It may work today, but it'll get in the way of any
> other patch that wants to set up some different on-exit clean up.  In
> other words, we shouldn't reset the on_exit list at all.  Also, this is
> just a weird addition:

Based on some discussions, I added handling for cleanup: for the 1st
database, I save the index of the array, and then I use the same index
for the rest of the databases, as we close the archive file in
CloseArchive, so the same index can be reused for the next database.

> #define exit_nicely(code) exit(code)

Fixed.

> You added "A" as an option to the getopt_long() call in pg_restore, but
> no handling for it is added.

Fixed.

> I think the --globals-only option to pg_restore should be a separate
> commit.

I will make this in the next version.

Here, I am attaching updated patches for review and testing.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-Mar-11, Mahendra Singh Thalor wrote:

> On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> > Okay, we should probably fix that, but I think the new map.dat file your
> > patch adds is going to make the problem worse, because it doesn't look
> > like you handled that case in any particular way that would make it not
> > fail.
>
> As Jian also pointed out, we should not allow \n\r in dbnames. I am
> keeping dbnames as single-line names only.

Ehm, did you get consensus on adding such a restriction?

--
Álvaro Herrera        PostgreSQL Developer  —  https://www.EnterpriseDB.com/
>
> On 2025-Mar-11, Mahendra Singh Thalor wrote:
>
> > On Wed, 5 Mar 2025 at 20:42, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> > > Okay, we should probably fix that, but I think the new map.dat file your
> > > patch adds is going to make the problem worse, because it doesn't look
> > > like you handled that case in any particular way that would make it not
> > > fail.
> >
> > As Jian also pointed out, we should not allow \n\r in dbnames. I am
> > keeping dbnames as single line names only.
>
> Ehm, did you get consensus on adding such a restriction?
>
Hi Alvaro,
/*
* Append the given string to the shell command being built in the buffer,
* with shell-style quoting as needed to create exactly one argument.
*
* Forbid LF or CR characters, which have scant practical use beyond designing
* security breaches. The Windows command shell is unusable as a conduit for
* arguments containing LF or CR characters. A future major release should
* reject those characters in CREATE ROLE and CREATE DATABASE, because use
* there eventually leads to errors here.
*
* appendShellString() simply prints an error and dies if LF or CR appears.
* appendShellStringNoError() omits those characters from the result, and
* returns false if there were any.
*/
void
appendShellString(PQExpBuffer buf, const char *str)
Sorry, in the v22 patches, I forgot to run "git add" for the connectdb.c file. (Thanks Andrew for reporting this offline.)
Attachments
Hello,

On 2025-Mar-11, Mahendra Singh Thalor wrote:

> In map.dat file, I tried to fix this issue by adding number of characters
> in dbname but as per code comments, as of now, we are not supporting \n\r
> in dbnames so i removed handling.
> I will do some more study to fix this issue.

Yeah, I think this is saying that you should not consider the contents
of map.dat as a shell string.  After all, you're not going to _execute_
that file via the shell.

Maybe for map.dat you need to escape such characters somehow, so that
they don't appear as literal newlines/carriage returns.

--
Álvaro Herrera        Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
Álvaro Herrera <alvherre@alvh.no-ip.org> writes:

> Hello,
>
> On 2025-Mar-11, Mahendra Singh Thalor wrote:
>
>> In map.dat file, I tried to fix this issue by adding number of characters
>> in dbname but as per code comments, as of now, we are not supporting \n\r
>> in dbnames so i removed handling.
>> I will do some more study to fix this issue.
>
> Yeah, I think this is saying that you should not consider the contents
> of map.dat as a shell string.  After all, you're not going to _execute_
> that file via the shell.
>
> Maybe for map.dat you need to escape such characters somehow, so that
> they don't appear as literal newlines/carriage returns.

I haven't looked at the code for this, but why are we inventing an
ad-hoc file format?  Why not use JSON, like we do for backup manifests?
Then storing arbitrary database names won't be a problem.

- ilmari
On 2025-03-11 Tu 1:52 PM, Dagfinn Ilmari Mannsåker wrote:
> Álvaro Herrera <alvherre@alvh.no-ip.org> writes:
>
>> Hello,
>>
>> On 2025-Mar-11, Mahendra Singh Thalor wrote:
>>
>>> In map.dat file, I tried to fix this issue by adding number of characters
>>> in dbname but as per code comments, as of now, we are not supporting \n\r
>>> in dbnames so i removed handling.
>>> I will do some more study to fix this issue.
>> Yeah, I think this is saying that you should not consider the contents
>> of map.dat as a shell string.  After all, you're not going to _execute_
>> that file via the shell.
>>
>> Maybe for map.dat you need to escape such characters somehow, so that
>> they don't appear as literal newlines/carriage returns.
> I haven't looked at the code for this, but why are we inventing an
> ad-hoc file format?  Why not use JSON, like we do for backup manifests?
> Then storing arbitrary database names won't be a problem.

I'm not sure everyone thinks that was a good idea for backup manifests
(in fact I know some don't), and it seems somewhat like overkill for a
simple map of oids to database names.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2025-Mar-11, Andrew Dunstan wrote:

> I'm not sure everyone thinks that was a good idea for backup manifests (in
> fact I know some don't), and it seems somewhat like overkill for a simple
> map of oids to database names.

If such a simple system can be made to work for all possible valid
database names, then I agree with you.  But if it forces us to restrict
database names to not contain newlines or other funny chars that are so
far unrestricted, then I would take the other position.

--
Álvaro Herrera        PostgreSQL Developer  —  https://www.EnterpriseDB.com/
"La victoria es para quien se atreve a estar solo"
On 2025-03-11 Tu 5:03 PM, Álvaro Herrera wrote:
> On 2025-Mar-11, Andrew Dunstan wrote:
>
>> I'm not sure everyone thinks that was a good idea for backup manifests (in
>> fact I know some don't), and it seems somewhat like overkill for a simple
>> map of oids to database names.
> If such a simple system can be made to work for all possible valid
> database names, then I agree with you.  But if it forces us to restrict
> database names to not contain newlines or other funny chars that are so
> far unrestricted, then I would take the other position.

Well, JSON is supposed to be UTF8. What should we do about database
names that are not UTF8?

It's kinda tempting to say we should have the file consist of lines like:

oid base64_encoded_name escaped_human_readable_name

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> Hello,
>
> On 2025-Mar-11, Mahendra Singh Thalor wrote:
>
> > In map.dat file, I tried to fix this issue by adding number of characters
> > in dbname but as per code comments, as of now, we are not supporting \n\r
> > in dbnames so i removed handling.
> > I will do some more study to fix this issue.
>
> Yeah, I think this is saying that you should not consider the contents
> of map.dat as a shell string.  After all, you're not going to _execute_
> that file via the shell.
>
> Maybe for map.dat you need to escape such characters somehow, so that
> they don't appear as literal newlines/carriage returns.
>

I am confused.
currently pg_dumpall plain format will abort when encountering a dbname
containing a newline. the resulting dumped plain file does not contain
all the cluster's database data.

if pg_dumpall non-text format aborts earlier, isn't it aligned with
pg_dumpall plain format? it's also an improvement, since it aborts
earlier and nothing will be dumped?

am i missing something?
On Tue, 2025-03-11 at 19:14 -0400, Isaac Morland wrote:
> On Tue, 11 Mar 2025 at 18:37, Andrew Dunstan <andrew@dunslane.net> wrote:
>
> > Well, JSON is supposed to be UTF8. What should we do about database
> > names that are not UTF8?
>
> How can you have a database name that isn't encodeable in UTF-8? At this
> point I'm pretty sure Unicode has subsumed essentially every character
> ever mentioned in a standards document.

There is a difference between "encodable" and "encoded".
You'd have to figure out the actual encoding of the database name and
convert that to UTF-8.

Yours,
Laurenz Albe
On 2025-03-12 We 3:03 AM, jian he wrote:
> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>> Hello,
>>
>> On 2025-Mar-11, Mahendra Singh Thalor wrote:
>>
>>> In map.dat file, I tried to fix this issue by adding number of characters
>>> in dbname but as per code comments, as of now, we are not supporting \n\r
>>> in dbnames so i removed handling.
>>> I will do some more study to fix this issue.
>> Yeah, I think this is saying that you should not consider the contents
>> of map.dat as a shell string.  After all, you're not going to _execute_
>> that file via the shell.
>>
>> Maybe for map.dat you need to escape such characters somehow, so that
>> they don't appear as literal newlines/carriage returns.
>>
> I am confused.
> currently pg_dumpall plain format will abort when encountering dbname
> containing newline.
> the left dumped plain file does not contain all the cluster databases data.
>
> if pg_dumpall non-text format aborts earlier,
> it's aligned with pg_dumpall plain format?
> it's also an improvement since aborts earlier, nothing will be dumped?
>
> am i missing something?

I think we should fix that.

But for the current proposal, Álvaro and I were talking this morning,
and we thought the simplest thing here would be to have the one-line
format and escape NL/CRs in the database name.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> wrote:
>
> On 2025-03-12 We 3:03 AM, jian he wrote:
> > On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> >> Hello,
> >>
> >> On 2025-Mar-11, Mahendra Singh Thalor wrote:
> >>
> >>> In map.dat file, I tried to fix this issue by adding number of characters
> >>> in dbname but as per code comments, as of now, we are not supporting \n\r
> >>> in dbnames so i removed handling.
> >>> I will do some more study to fix this issue.
> >> Yeah, I think this is saying that you should not consider the contents
> >> of map.dat as a shell string.  After all, you're not going to _execute_
> >> that file via the shell.
> >>
> >> Maybe for map.dat you need to escape such characters somehow, so that
> >> they don't appear as literal newlines/carriage returns.
> >>
> > I am confused.
> > currently pg_dumpall plain format will abort when encountering dbname
> > containing newline.
> > the left dumped plain file does not contain all the cluster databases data.
> >
> > if pg_dumpall non-text format aborts earlier,
> > it's aligned with pg_dumpall plain format?
> > it's also an improvement since aborts earlier, nothing will be dumped?
> >
> > am i missing something?
>
> I think we should fix that.
>
> But for the current proposal, Álvaro and I were talking this morning,
> and we thought the simplest thing here would be to have the one line
> format and escape NL/CRs in the database name.
>
> cheers
>

Okay. As per the discussions, we will keep a one-line entry for each
database in the map file.

Thanks all for the feedback and review.

Here, I am attaching updated patches for review and testing. These
patches can be applied on commit a6524105d20b.

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote: > On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> wrote: >> >> On 2025-03-12 We 3:03 AM, jian he wrote: >>> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: >>>> Hello, >>>> >>>> On 2025-Mar-11, Mahendra Singh Thalor wrote: >>>> >>>>> In map.dat file, I tried to fix this issue by adding number of characters >>>>> in dbname but as per code comments, as of now, we are not supporting \n\r >>>>> in dbnames so i removed handling. >>>>> I will do some more study to fix this issue. >>>> Yeah, I think this is saying that you should not consider the contents >>>> of map.dat as a shell string. After all, you're not going to _execute_ >>>> that file via the shell. >>>> >>>> Maybe for map.dat you need to escape such characters somehow, so that >>>> they don't appear as literal newlines/carriage returns. >>>> >>> I am confused. >>> currently pg_dumpall plain format will abort when encountering dbname >>> containing newline. >>> the left dumped plain file does not contain all the cluster databases data. >>> >>> >>> if pg_dumpall non-text format aborts earlier, >>> it's aligned with pg_dumpall plain format? >>> it's also an improvement since aborts earlier, nothing will be dumped? >>> >>> >>> am i missing something? >>> >>> >> I think we should fix that. >> >> But for the current proposal, Álvaro and I were talking this morning, >> and we thought the simplest thing here would be to have the one line >> format and escape NL/CRs in the database name. >> >> >> cheers >> > Okay. As per discussions, we will keep one line entry for each > database into map.file. > > Thanks all for feedback and review. > > Here, I am attaching updated patches for review and testing. These > patches can be applied on commit a6524105d20b. I'm working through this patch set with a view to committing it. Attached is some cleanup which is where I got to today, although there is more to do. 
One thing I am wondering is why not put the SimpleDatabaseOidList stuff in fe_utils/simple_list.{c,h}? That's where all the similar stuff belongs, and it feels strange to have this inline in pg_restore.c. (I also don't like the name much - SimpleOidStringList or maybe SimpleOidPlusStringList might be better). cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Attachments
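The list type Andrew is asking for would presumably mirror the existing entries in fe_utils/simple_list.h. A hypothetical sketch, with a stand-in Oid typedef so it compiles on its own (the committed names and layout may differ):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int Oid;		/* stand-in for the server typedef */

/*
 * Sketch of an OID-plus-string list, modeled on the SimpleOidList and
 * SimpleStringList types that already live in fe_utils/simple_list.h.
 * Each cell carries a database OID and its name.
 */
typedef struct SimpleOidStringListCell
{
	struct SimpleOidStringListCell *next;
	Oid			oid;
	char		str[];			/* database name, flexible array member */
} SimpleOidStringListCell;

typedef struct SimpleOidStringList
{
	SimpleOidStringListCell *head;
	SimpleOidStringListCell *tail;
} SimpleOidStringList;

static void
simple_oid_string_list_append(SimpleOidStringList *list, Oid oid,
							  const char *str)
{
	SimpleOidStringListCell *cell;

	cell = malloc(offsetof(SimpleOidStringListCell, str) + strlen(str) + 1);
	cell->next = NULL;
	cell->oid = oid;
	strcpy(cell->str, str);

	if (list->tail)
		list->tail->next = cell;
	else
		list->head = cell;
	list->tail = cell;
}
```

Appending keeps a tail pointer so building the list from map.dat stays O(n), matching the style of the other simple_list appenders.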
On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote: > > On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote: >> On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> >> wrote: >>> >>> On 2025-03-12 We 3:03 AM, jian he wrote: >>>> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera >>>> <alvherre@alvh.no-ip.org> wrote: >>>>> Hello, >>>>> >>>>> On 2025-Mar-11, Mahendra Singh Thalor wrote: >>>>> >>>>>> In map.dat file, I tried to fix this issue by adding number of >>>>>> characters >>>>>> in dbname but as per code comments, as of now, we are not >>>>>> supporting \n\r >>>>>> in dbnames so i removed handling. >>>>>> I will do some more study to fix this issue. >>>>> Yeah, I think this is saying that you should not consider the >>>>> contents >>>>> of map.dat as a shell string. After all, you're not going to >>>>> _execute_ >>>>> that file via the shell. >>>>> >>>>> Maybe for map.dat you need to escape such characters somehow, so that >>>>> they don't appear as literal newlines/carriage returns. >>>>> >>>> I am confused. >>>> currently pg_dumpall plain format will abort when encountering dbname >>>> containing newline. >>>> the left dumped plain file does not contain all the cluster >>>> databases data. >>>> >>>> >>>> if pg_dumpall non-text format aborts earlier, >>>> it's aligned with pg_dumpall plain format? >>>> it's also an improvement since aborts earlier, nothing will be dumped? >>>> >>>> >>>> am i missing something? >>>> >>>> >>> I think we should fix that. >>> >>> But for the current proposal, Álvaro and I were talking this morning, >>> and we thought the simplest thing here would be to have the one line >>> format and escape NL/CRs in the database name. >>> >>> >>> cheers >>> >> Okay. As per discussions, we will keep one line entry for each >> database into map.file. >> >> Thanks all for feedback and review. >> >> Here, I am attaching updated patches for review and testing. These >> patches can be applied on commit a6524105d20b. 
> > > > I'm working through this patch set with a view to committing it. > Attached is some cleanup which is where I got to today, although there > is more to do. One thing I am wondering is why not put the > SimpleDatabaseOidList stuff in fe_utils/simle_list.{c,h} ? That's > where all the similar stuff belongs, and it feels strange to have this > inline in pg_restore.c. (I also don't like the name much - > SimpleOidStringList or maybe SimpleOidPlusStringList might be better). > > > OK, I have done that, so here is the result. The first two are you original patches. patch 3 adds the new list type to fe-utils, and patch 4 contains my cleanups and use of the new list type. Apart from some relatively minor cleanup, the one thing I would like to change is how dumps are named. If we are producing tar or custom format dumps, I think the file names should reflect that (oid.dmp and oid.tar rather than a bare oid as the filename), and pg_restore should look for those. I'm going to work on that tomorrow - I don't think it will be terribly difficult. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Attachments
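The naming scheme Andrew proposes (oid.dmp for custom format, oid.tar for tar, a bare oid for directory format) might be centralized in a helper along these lines. The format letters follow pg_dump's -F option; the helper itself is illustrative, not the patch's actual code:

```c
#include <stdio.h>

/*
 * Build the path of a per-database dump inside the pg_dumpall archive
 * directory: <oid>.dmp for custom format, <oid>.tar for tar format,
 * and a bare <oid> (a subdirectory) for directory format.  pg_restore
 * would probe for these same names.
 */
static void
build_dump_path(char *buf, size_t buflen, const char *dir,
				unsigned int db_oid, char format)
{
	const char *suffix;

	switch (format)
	{
		case 'c':				/* custom format */
			suffix = ".dmp";
			break;
		case 't':				/* tar format */
			suffix = ".tar";
			break;
		default:				/* directory format: bare oid subdirectory */
			suffix = "";
			break;
	}
	snprintf(buf, buflen, "%s/%u%s", dir, db_oid, suffix);
}
```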
On Sat, 29 Mar 2025 at 03:50, Andrew Dunstan <andrew@dunslane.net> wrote: > > > On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote: > > > > On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote: > >> On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> > >> wrote: > >>> > >>> On 2025-03-12 We 3:03 AM, jian he wrote: > >>>> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera > >>>> <alvherre@alvh.no-ip.org> wrote: > >>>>> Hello, > >>>>> > >>>>> On 2025-Mar-11, Mahendra Singh Thalor wrote: > >>>>> > >>>>>> In map.dat file, I tried to fix this issue by adding number of > >>>>>> characters > >>>>>> in dbname but as per code comments, as of now, we are not > >>>>>> supporting \n\r > >>>>>> in dbnames so i removed handling. > >>>>>> I will do some more study to fix this issue. > >>>>> Yeah, I think this is saying that you should not consider the > >>>>> contents > >>>>> of map.dat as a shell string. After all, you're not going to > >>>>> _execute_ > >>>>> that file via the shell. > >>>>> > >>>>> Maybe for map.dat you need to escape such characters somehow, so that > >>>>> they don't appear as literal newlines/carriage returns. > >>>>> > >>>> I am confused. > >>>> currently pg_dumpall plain format will abort when encountering dbname > >>>> containing newline. > >>>> the left dumped plain file does not contain all the cluster > >>>> databases data. > >>>> > >>>> > >>>> if pg_dumpall non-text format aborts earlier, > >>>> it's aligned with pg_dumpall plain format? > >>>> it's also an improvement since aborts earlier, nothing will be dumped? > >>>> > >>>> > >>>> am i missing something? > >>>> > >>>> > >>> I think we should fix that. > >>> > >>> But for the current proposal, Álvaro and I were talking this morning, > >>> and we thought the simplest thing here would be to have the one line > >>> format and escape NL/CRs in the database name. > >>> > >>> > >>> cheers > >>> > >> Okay. As per discussions, we will keep one line entry for each > >> database into map.file. 
> >> > >> Thanks all for feedback and review. > >> > >> Here, I am attaching updated patches for review and testing. These > >> patches can be applied on commit a6524105d20b. > > > > > > > > I'm working through this patch set with a view to committing it. > > Attached is some cleanup which is where I got to today, although there > > is more to do. One thing I am wondering is why not put the > > SimpleDatabaseOidList stuff in fe_utils/simle_list.{c,h} ? That's > > where all the similar stuff belongs, and it feels strange to have this > > inline in pg_restore.c. (I also don't like the name much - > > SimpleOidStringList or maybe SimpleOidPlusStringList might be better). > > > > > > > > > OK, I have done that, so here is the result. The first two are you > original patches. patch 3 adds the new list type to fe-utils, and patch > 4 contains my cleanups and use of the new list type. Apart from some > relatively minor cleanup, the one thing I would like to change is how > dumps are named. If we are producing tar or custom format dumps, I think > the file names should reflect that (oid.dmp and oid.tar rather than a > bare oid as the filename), and pg_restore should look for those. I'm > going to work on that tomorrow - I don't think it will be terribly > difficult. > Thanks Andrew. Here, I am attaching a delta patch for oid.tar and oid.dmp format. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-03-29 Sa 1:17 AM, Mahendra Singh Thalor wrote: > On Sat, 29 Mar 2025 at 03:50, Andrew Dunstan <andrew@dunslane.net> wrote: >> >> On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote: >>> On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote: >>>> On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> >>>> wrote: >>>>> On 2025-03-12 We 3:03 AM, jian he wrote: >>>>>> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera >>>>>> <alvherre@alvh.no-ip.org> wrote: >>>>>>> Hello, >>>>>>> >>>>>>> On 2025-Mar-11, Mahendra Singh Thalor wrote: >>>>>>> >>>>>>>> In map.dat file, I tried to fix this issue by adding number of >>>>>>>> characters >>>>>>>> in dbname but as per code comments, as of now, we are not >>>>>>>> supporting \n\r >>>>>>>> in dbnames so i removed handling. >>>>>>>> I will do some more study to fix this issue. >>>>>>> Yeah, I think this is saying that you should not consider the >>>>>>> contents >>>>>>> of map.dat as a shell string. After all, you're not going to >>>>>>> _execute_ >>>>>>> that file via the shell. >>>>>>> >>>>>>> Maybe for map.dat you need to escape such characters somehow, so that >>>>>>> they don't appear as literal newlines/carriage returns. >>>>>>> >>>>>> I am confused. >>>>>> currently pg_dumpall plain format will abort when encountering dbname >>>>>> containing newline. >>>>>> the left dumped plain file does not contain all the cluster >>>>>> databases data. >>>>>> >>>>>> >>>>>> if pg_dumpall non-text format aborts earlier, >>>>>> it's aligned with pg_dumpall plain format? >>>>>> it's also an improvement since aborts earlier, nothing will be dumped? >>>>>> >>>>>> >>>>>> am i missing something? >>>>>> >>>>>> >>>>> I think we should fix that. >>>>> >>>>> But for the current proposal, Álvaro and I were talking this morning, >>>>> and we thought the simplest thing here would be to have the one line >>>>> format and escape NL/CRs in the database name. >>>>> >>>>> >>>>> cheers >>>>> >>>> Okay. 
As per discussions, we will keep one line entry for each >>>> database into map.file. >>>> >>>> Thanks all for feedback and review. >>>> >>>> Here, I am attaching updated patches for review and testing. These >>>> patches can be applied on commit a6524105d20b. >>> >>> >>> I'm working through this patch set with a view to committing it. >>> Attached is some cleanup which is where I got to today, although there >>> is more to do. One thing I am wondering is why not put the >>> SimpleDatabaseOidList stuff in fe_utils/simle_list.{c,h} ? That's >>> where all the similar stuff belongs, and it feels strange to have this >>> inline in pg_restore.c. (I also don't like the name much - >>> SimpleOidStringList or maybe SimpleOidPlusStringList might be better). >>> >>> >>> >> >> OK, I have done that, so here is the result. The first two are you >> original patches. patch 3 adds the new list type to fe-utils, and patch >> 4 contains my cleanups and use of the new list type. Apart from some >> relatively minor cleanup, the one thing I would like to change is how >> dumps are named. If we are producing tar or custom format dumps, I think >> the file names should reflect that (oid.dmp and oid.tar rather than a >> bare oid as the filename), and pg_restore should look for those. I'm >> going to work on that tomorrow - I don't think it will be terribly >> difficult. >> > Thanks Andrew. > > Here, I am attaching a delta patch for oid.tar and oid.dmp format. > OK, looks good, I have incorporated that. There are a couple of rough edges, though. 
First, I see this: andrew@ub22arm:inst $ bin/pg_restore -C -d postgres --exclude-database=regression_dummy_seclabel --exclude-database=regression_test_extensions --exclude-database=regression_test_pg_dump dest pg_restore: error: could not execute query: "ERROR: role "andrew" already exists " Command was: " -- -- Roles -- CREATE ROLE andrew;" pg_restore: warning: errors ignored on global.dat file restore: 1 pg_restore: error: could not execute query: ERROR: database "template1" already exists Command was: CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; pg_restore: warning: errors ignored on database "template1" restore: 1 pg_restore: error: could not execute query: ERROR: database "postgres" already exists Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; pg_restore: warning: errors ignored on database "postgres" restore: 1 pg_restore: warning: errors ignored on restore: 3 It seems pointless to be trying to create the rolw that we are connected as, and we also expect template1 and postgres to exist. In a similar vein, I don't see why we are setting the --create flag in pg_dumpall for those databases. I'm attaching a patch that is designed to stop that, but it doesn't solve the above issues. I also notice a bunch of these in globals.dat: -- -- Databases -- -- -- Database "template1" dump -- -- -- Database "andrew" dump -- -- -- Database "isolation_regression_brin" dump -- -- -- Database "isolation_regression_delay_execution" dump -- ... The patch also tries to fix this. Lastly, this badly needs some TAP tests written. I'm going to work on reviewing the documentation next. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Attachments
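Since postgres and template1 are expected to exist in any target cluster, the fix Andrew describes amounts to suppressing the CREATE DATABASE step for exactly those two. A rough sketch of that rule (the intent of the fix, not its exact code):

```c
#include <string.h>

/*
 * Decide whether pg_dumpall should skip emitting CREATE DATABASE for a
 * given database.  postgres and template1 are expected to exist in the
 * target cluster, so the dump can simply connect to them and restore
 * their contents rather than trying to create them.
 */
static int
skip_create_database(const char *dbname)
{
	return strcmp(dbname, "postgres") == 0 ||
		strcmp(dbname, "template1") == 0;
}
```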
On 2025-03-30 Su 12:50 PM, Andrew Dunstan wrote: > > On 2025-03-29 Sa 1:17 AM, Mahendra Singh Thalor wrote: >> On Sat, 29 Mar 2025 at 03:50, Andrew Dunstan <andrew@dunslane.net> >> wrote: >>> >>> On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote: >>>> On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote: >>>>> On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net> >>>>> wrote: >>>>>> On 2025-03-12 We 3:03 AM, jian he wrote: >>>>>>> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera >>>>>>> <alvherre@alvh.no-ip.org> wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> On 2025-Mar-11, Mahendra Singh Thalor wrote: >>>>>>>> >>>>>>>>> In map.dat file, I tried to fix this issue by adding number of >>>>>>>>> characters >>>>>>>>> in dbname but as per code comments, as of now, we are not >>>>>>>>> supporting \n\r >>>>>>>>> in dbnames so i removed handling. >>>>>>>>> I will do some more study to fix this issue. >>>>>>>> Yeah, I think this is saying that you should not consider the >>>>>>>> contents >>>>>>>> of map.dat as a shell string. After all, you're not going to >>>>>>>> _execute_ >>>>>>>> that file via the shell. >>>>>>>> >>>>>>>> Maybe for map.dat you need to escape such characters somehow, >>>>>>>> so that >>>>>>>> they don't appear as literal newlines/carriage returns. >>>>>>>> >>>>>>> I am confused. >>>>>>> currently pg_dumpall plain format will abort when encountering >>>>>>> dbname >>>>>>> containing newline. >>>>>>> the left dumped plain file does not contain all the cluster >>>>>>> databases data. >>>>>>> >>>>>>> >>>>>>> if pg_dumpall non-text format aborts earlier, >>>>>>> it's aligned with pg_dumpall plain format? >>>>>>> it's also an improvement since aborts earlier, nothing will be >>>>>>> dumped? >>>>>>> >>>>>>> >>>>>>> am i missing something? >>>>>>> >>>>>>> >>>>>> I think we should fix that. 
>>>>>> >>>>>> But for the current proposal, Álvaro and I were talking this >>>>>> morning, >>>>>> and we thought the simplest thing here would be to have the one line >>>>>> format and escape NL/CRs in the database name. >>>>>> >>>>>> >>>>>> cheers >>>>>> >>>>> Okay. As per discussions, we will keep one line entry for each >>>>> database into map.file. >>>>> >>>>> Thanks all for feedback and review. >>>>> >>>>> Here, I am attaching updated patches for review and testing. These >>>>> patches can be applied on commit a6524105d20b. >>>> >>>> >>>> I'm working through this patch set with a view to committing it. >>>> Attached is some cleanup which is where I got to today, although there >>>> is more to do. One thing I am wondering is why not put the >>>> SimpleDatabaseOidList stuff in fe_utils/simle_list.{c,h} ? That's >>>> where all the similar stuff belongs, and it feels strange to have this >>>> inline in pg_restore.c. (I also don't like the name much - >>>> SimpleOidStringList or maybe SimpleOidPlusStringList might be better). >>>> >>>> >>>> >>> >>> OK, I have done that, so here is the result. The first two are you >>> original patches. patch 3 adds the new list type to fe-utils, and patch >>> 4 contains my cleanups and use of the new list type. Apart from some >>> relatively minor cleanup, the one thing I would like to change is how >>> dumps are named. If we are producing tar or custom format dumps, I >>> think >>> the file names should reflect that (oid.dmp and oid.tar rather than a >>> bare oid as the filename), and pg_restore should look for those. I'm >>> going to work on that tomorrow - I don't think it will be terribly >>> difficult. >>> >> Thanks Andrew. >> >> Here, I am attaching a delta patch for oid.tar and oid.dmp format. >> > > > OK, looks good, I have incorporated that. > > There are a couple of rough edges, though. 
> > First, I see this: > > > andrew@ub22arm:inst $ bin/pg_restore -C -d postgres > --exclude-database=regression_dummy_seclabel > --exclude-database=regression_test_extensions > --exclude-database=regression_test_pg_dump dest > pg_restore: error: could not execute query: "ERROR: role "andrew" > already exists > " > Command was: " > > -- > -- Roles > -- > > CREATE ROLE andrew;" > pg_restore: warning: errors ignored on global.dat file restore: 1 > pg_restore: error: could not execute query: ERROR: database > "template1" already exists > Command was: CREATE DATABASE template1 WITH TEMPLATE = template0 > ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; > > > pg_restore: warning: errors ignored on database "template1" restore: 1 > pg_restore: error: could not execute query: ERROR: database > "postgres" already exists > Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 > ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; > > > pg_restore: warning: errors ignored on database "postgres" restore: 1 > pg_restore: warning: errors ignored on restore: 3 > > > > It seems pointless to be trying to create the rolw that we are > connected as, and we also expect template1 and postgres to exist. > > In a similar vein, I don't see why we are setting the --create flag in > pg_dumpall for those databases. I'm attaching a patch that is designed > to stop that, but it doesn't solve the above issues. > > I also notice a bunch of these in globals.dat: > > > -- > -- Databases > -- > > -- > -- Database "template1" dump > -- > > -- > -- Database "andrew" dump > -- > > -- > -- Database "isolation_regression_brin" dump > -- > > -- > -- Database "isolation_regression_delay_execution" dump > -- > > ... > > > The patch also tries to fix this. > > Lastly, this badly needs some TAP tests written. > > I'm going to work on reviewing the documentation next. > > > I have reworked the documentation some. See attached. 
cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Attachments
>
>
> On 2025-03-29 Sa 1:17 AM, Mahendra Singh Thalor wrote:
> > On Sat, 29 Mar 2025 at 03:50, Andrew Dunstan <andrew@dunslane.net> wrote:
> >>
> >> On 2025-03-27 Th 5:15 PM, Andrew Dunstan wrote:
> >>> On 2025-03-19 We 2:41 AM, Mahendra Singh Thalor wrote:
> >>>> On Wed, 12 Mar 2025 at 21:18, Andrew Dunstan <andrew@dunslane.net>
> >>>> wrote:
> >>>>> On 2025-03-12 We 3:03 AM, jian he wrote:
> >>>>>> On Wed, Mar 12, 2025 at 1:06 AM Álvaro Herrera
> >>>>>> <alvherre@alvh.no-ip.org> wrote:
> >>>>>>> Hello,
> >>>>>>>
> >>>>>>> On 2025-Mar-11, Mahendra Singh Thalor wrote:
> >>>>>>>
> >>>>>>>> In map.dat file, I tried to fix this issue by adding number of
> >>>>>>>> characters
> >>>>>>>> in dbname but as per code comments, as of now, we are not
> >>>>>>>> supporting \n\r
> >>>>>>>> in dbnames so i removed handling.
> >>>>>>>> I will do some more study to fix this issue.
> >>>>>>> Yeah, I think this is saying that you should not consider the
> >>>>>>> contents
> >>>>>>> of map.dat as a shell string. After all, you're not going to
> >>>>>>> _execute_
> >>>>>>> that file via the shell.
> >>>>>>>
> >>>>>>> Maybe for map.dat you need to escape such characters somehow, so that
> >>>>>>> they don't appear as literal newlines/carriage returns.
> >>>>>>>
> >>>>>> I am confused.
> >>>>>> currently pg_dumpall plain format will abort when encountering dbname
> >>>>>> containing newline.
> >>>>>> the left dumped plain file does not contain all the cluster
> >>>>>> databases data.
> >>>>>>
> >>>>>>
> >>>>>> if pg_dumpall non-text format aborts earlier,
> >>>>>> it's aligned with pg_dumpall plain format?
> >>>>>> it's also an improvement since aborts earlier, nothing will be dumped?
> >>>>>>
> >>>>>>
> >>>>>> am i missing something?
> >>>>>>
> >>>>>>
> >>>>> I think we should fix that.
> >>>>>
> >>>>> But for the current proposal, Álvaro and I were talking this morning,
> >>>>> and we thought the simplest thing here would be to have the one line
> >>>>> format and escape NL/CRs in the database name.
> >>>>>
> >>>>>
> >>>>> cheers
> >>>>>
> >>>> Okay. As per discussions, we will keep one line entry for each
> >>>> database into map.file.
> >>>>
> >>>> Thanks all for feedback and review.
> >>>>
> >>>> Here, I am attaching updated patches for review and testing. These
> >>>> patches can be applied on commit a6524105d20b.
> >>>
> >>>
> >>> I'm working through this patch set with a view to committing it.
> >>> Attached is some cleanup which is where I got to today, although there
> >>> is more to do. One thing I am wondering is why not put the
> >>> SimpleDatabaseOidList stuff in fe_utils/simple_list.{c,h}? That's
> >>> where all the similar stuff belongs, and it feels strange to have this
> >>> inline in pg_restore.c. (I also don't like the name much -
> >>> SimpleOidStringList or maybe SimpleOidPlusStringList might be better).
> >>>
> >>>
> >>>
> >>
> >> OK, I have done that, so here is the result. The first two are your
> >> original patches. patch 3 adds the new list type to fe-utils, and patch
> >> 4 contains my cleanups and use of the new list type. Apart from some
> >> relatively minor cleanup, the one thing I would like to change is how
> >> dumps are named. If we are producing tar or custom format dumps, I think
> >> the file names should reflect that (oid.dmp and oid.tar rather than a
> >> bare oid as the filename), and pg_restore should look for those. I'm
> >> going to work on that tomorrow - I don't think it will be terribly
> >> difficult.
> >>
> > Thanks Andrew.
> >
> > Here, I am attaching a delta patch for oid.tar and oid.dmp format.
> >
>
>
> OK, looks good, I have incorporated that.
>
> There are a couple of rough edges, though.
>
> First, I see this:
>
>
> andrew@ub22arm:inst $ bin/pg_restore -C -d postgres
> --exclude-database=regression_dummy_seclabel
> --exclude-database=regression_test_extensions
> --exclude-database=regression_test_pg_dump dest
> pg_restore: error: could not execute query: "ERROR: role "andrew"
> already exists
> "
> Command was: "
>
> --
> -- Roles
> --
>
> CREATE ROLE andrew;"
> pg_restore: warning: errors ignored on global.dat file restore: 1
> pg_restore: error: could not execute query: ERROR: database "template1"
> already exists
> Command was: CREATE DATABASE template1 WITH TEMPLATE = template0
> ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
>
>
> pg_restore: warning: errors ignored on database "template1" restore: 1
> pg_restore: error: could not execute query: ERROR: database "postgres"
> already exists
> Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING
> = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C';
>
>
> pg_restore: warning: errors ignored on database "postgres" restore: 1
> pg_restore: warning: errors ignored on restore: 3
>
>
>
> It seems pointless to be trying to create the role that we are connected
> as, and we also expect template1 and postgres to exist.

Thanks Andrew for the updated patches.

Here, I am attaching a delta patch which solves the errors for the already created database and we need to reset some flags also. Please have a look over this delta patch and merge it.

If we want to skip errors for connected user (CREATE ROLE username), then we need to handle it by comparing sql commands in process_global_sql_commands function or we can compare errors after executing it. delta_0002* patch is doing some handling but this is not a proper fix.

I think we can merge delta_0001* and later, we can work on delta_0002.
> In a similar vein, I don't see why we are setting the --create flag in
> pg_dumpall for those databases. I'm attaching a patch that is designed
> to stop that, but it doesn't solve the above issues.
>
> I also notice a bunch of these in globals.dat:
>
>
> --
> -- Databases
> --
>
> --
> -- Database "template1" dump
> --
>
> --
> -- Database "andrew" dump
> --
>
> --
> -- Database "isolation_regression_brin" dump
> --
>
> --
> -- Database "isolation_regression_delay_execution" dump
> --
>
> ...
>
>
> The patch also tries to fix this.
>
> Lastly, this badly needs some TAP tests written.
>
> I'm going to work on reviewing the documentation next.
>
>
> cheers
>
>
> andrew
>
> --
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-03-31 Mo 5:34 AM, Mahendra Singh Thalor wrote: > > > > > There are a couple of rough edges, though. > > > > First, I see this: > > > > > > andrew@ub22arm:inst $ bin/pg_restore -C -d postgres > > --exclude-database=regression_dummy_seclabel > > --exclude-database=regression_test_extensions > > --exclude-database=regression_test_pg_dump dest > > pg_restore: error: could not execute query: "ERROR: role "andrew" > > already exists > > " > > Command was: " > > > > -- > > -- Roles > > -- > > > > CREATE ROLE andrew;" > > pg_restore: warning: errors ignored on global.dat file restore: 1 > > pg_restore: error: could not execute query: ERROR: database "template1" > > already exists > > Command was: CREATE DATABASE template1 WITH TEMPLATE = template0 > > ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; > > > > > > pg_restore: warning: errors ignored on database "template1" restore: 1 > > pg_restore: error: could not execute query: ERROR: database "postgres" > > already exists > > Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING > > = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; > > > > > > pg_restore: warning: errors ignored on database "postgres" restore: 1 > > pg_restore: warning: errors ignored on restore: 3 > > > > > > > > It seems pointless to be trying to create the rolw that we are connected > > as, and we also expect template1 and postgres to exist. > > Thanks Andrew for the updated patches. > > Here, I am attaching a delta patch which solves the errors for the > already created database and we need to reset some flags also. Please > have a look over this delta patch and merge it. > > If we want to skip errors for connected user (CREATE ROLE username), > then we need to handle it by comparing sql commands in > process_global_sql_commands function or we can compare errors after > executing it. > delta_0002* patch is doing some handling but this is not a proper fix. 
> > I think we can merge delta_0001* and later, we can work on delta_0002. Yes, delta 1 looks OK, except that the pstrdup() calls are probably unnecessary. Delta 2 needs some significant surgery at least. I think we can use it as at least a partial fix, to avoid trying to create the role we're running as (Should use PQuser() for that rather than cparams.user). BTW, if you're sending delta patches, make sure they don't have .patch extensions. Otherwise, the CFBot gets upset. I usually just add .noci to the file names. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On Mon, 31 Mar 2025 at 19:27, Andrew Dunstan <andrew@dunslane.net> wrote: > > > On 2025-03-31 Mo 5:34 AM, Mahendra Singh Thalor wrote: > > > > > > > > There are a couple of rough edges, though. > > > > > > First, I see this: > > > > > > > > > andrew@ub22arm:inst $ bin/pg_restore -C -d postgres > > > --exclude-database=regression_dummy_seclabel > > > --exclude-database=regression_test_extensions > > > --exclude-database=regression_test_pg_dump dest > > > pg_restore: error: could not execute query: "ERROR: role "andrew" > > > already exists > > > " > > > Command was: " > > > > > > -- > > > -- Roles > > > -- > > > > > > CREATE ROLE andrew;" > > > pg_restore: warning: errors ignored on global.dat file restore: 1 > > > pg_restore: error: could not execute query: ERROR: database "template1" > > > already exists > > > Command was: CREATE DATABASE template1 WITH TEMPLATE = template0 > > > ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; > > > > > > > > > pg_restore: warning: errors ignored on database "template1" restore: 1 > > > pg_restore: error: could not execute query: ERROR: database "postgres" > > > already exists > > > Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING > > > = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; > > > > > > > > > pg_restore: warning: errors ignored on database "postgres" restore: 1 > > > pg_restore: warning: errors ignored on restore: 3 > > > > > > > > > > > > It seems pointless to be trying to create the rolw that we are connected > > > as, and we also expect template1 and postgres to exist. > > > > Thanks Andrew for the updated patches. > > > > Here, I am attaching a delta patch which solves the errors for the > > already created database and we need to reset some flags also. Please > > have a look over this delta patch and merge it. 
> > > > If we want to skip errors for connected user (CREATE ROLE username), > > then we need to handle it by comparing sql commands in > > process_global_sql_commands function or we can compare errors after > > executing it. > > delta_0002* patch is doing some handling but this is not a proper fix. > > > > I think we can merge delta_0001* and later, we can work on delta_0002. > > > Yes, delta 1 looks OK, except that the pstrdup() calls are probably > unnecessary. Delta 2 needs some significant surgery at least. I think we > can use it as at least a partial fix, to avoid trying to create the role > we're running as (Should use PQuser() for that rather than cparams.user). Thanks for the quick review. I fixed the above comments and made 2 delta patches. Please have a look over these. > BTW, if you're sending delta patches, make sure they don't have .patch > extensions. Otherwise, the CFBot gets upset. I usually just add .noci to > the file names. Sure. I will also use .noci. Thanks for feedback. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-03-31 Mo 12:16 PM, Mahendra Singh Thalor wrote: > On Mon, 31 Mar 2025 at 19:27, Andrew Dunstan <andrew@dunslane.net> wrote: >> >> On 2025-03-31 Mo 5:34 AM, Mahendra Singh Thalor wrote: >>>> There are a couple of rough edges, though. >>>> >>>> First, I see this: >>>> >>>> >>>> andrew@ub22arm:inst $ bin/pg_restore -C -d postgres >>>> --exclude-database=regression_dummy_seclabel >>>> --exclude-database=regression_test_extensions >>>> --exclude-database=regression_test_pg_dump dest >>>> pg_restore: error: could not execute query: "ERROR: role "andrew" >>>> already exists >>>> " >>>> Command was: " >>>> >>>> -- >>>> -- Roles >>>> -- >>>> >>>> CREATE ROLE andrew;" >>>> pg_restore: warning: errors ignored on global.dat file restore: 1 >>>> pg_restore: error: could not execute query: ERROR: database "template1" >>>> already exists >>>> Command was: CREATE DATABASE template1 WITH TEMPLATE = template0 >>>> ENCODING = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; >>>> >>>> >>>> pg_restore: warning: errors ignored on database "template1" restore: 1 >>>> pg_restore: error: could not execute query: ERROR: database "postgres" >>>> already exists >>>> Command was: CREATE DATABASE postgres WITH TEMPLATE = template0 ENCODING >>>> = 'SQL_ASCII' LOCALE_PROVIDER = libc LOCALE = 'C'; >>>> >>>> >>>> pg_restore: warning: errors ignored on database "postgres" restore: 1 >>>> pg_restore: warning: errors ignored on restore: 3 >>>> >>>> >>>> >>>> It seems pointless to be trying to create the rolw that we are connected >>>> as, and we also expect template1 and postgres to exist. >>> Thanks Andrew for the updated patches. >>> >>> Here, I am attaching a delta patch which solves the errors for the >>> already created database and we need to reset some flags also. Please >>> have a look over this delta patch and merge it. 
>>> >>> If we want to skip errors for connected user (CREATE ROLE username), >>> then we need to handle it by comparing sql commands in >>> process_global_sql_commands function or we can compare errors after >>> executing it. >>> delta_0002* patch is doing some handling but this is not a proper fix. >>> >>> I think we can merge delta_0001* and later, we can work on delta_0002. >> >> Yes, delta 1 looks OK, except that the pstrdup() calls are probably >> unnecessary. Delta 2 needs some significant surgery at least. I think we >> can use it as at least a partial fix, to avoid trying to create the role >> we're running as (Should use PQuser() for that rather than cparams.user). > Thanks for the quick review. > > I fixed the above comments and made 2 delta patches. Please have a > look over these. > >> BTW, if you're sending delta patches, make sure they don't have .patch >> extensions. Otherwise, the CFBot gets upset. I usually just add .noci to >> the file names. > Sure. I will also use .noci. Thanks for feedback. Thanks. Here are patches that contain (my version of) all the cleanups. With this I get a clean restore run in my test case with no error messages. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On 2025-03-31 Mo 1:16 PM, Andrew Dunstan wrote: > > > > Thanks. Here are patches that contain (my version of) all the > cleanups. With this I get a clean restore run in my test case with no > error messages. > > > This time with patches cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Attachments
Hi FWIW I don't think the on_exit_nicely business is in final shape just yet. We're doing something super strange and novel about keeping track of an array index, so that we can modify it later. Or something like that, I think? That doesn't sound all that nice to me. Elsewhere it was suggested that we need some way to keep track of the list of things that need cleanup (a list of connections IIRC?) -- perhaps in a thread-local variable or a global or something -- and we install the cleanup function once, and that reads from the variable. The program can add things to the list, or remove them, at will; and we don't need to modify the cleanup function in any way. -- Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
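The shape Álvaro suggests — install the cleanup handler exactly once and have it read a shared list that the program mutates — can be sketched roughly like this (a minimal illustration with hypothetical names, not the patch's actual API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical names for illustration only; not the patch's actual code. */
typedef struct CleanupItem
{
	void	   *resource;		/* e.g. a PGconn to close */
	struct CleanupItem *next;
} CleanupItem;

static CleanupItem *cleanup_list = NULL;	/* shared state the handler reads */
static int	cleaned_count = 0;

/*
 * Installed exactly once (e.g. via on_exit_nicely); it only walks the
 * list, so the handler itself never needs to be modified or re-registered.
 */
static void
cleanup_all(void)
{
	for (CleanupItem *it = cleanup_list; it != NULL; it = it->next)
		cleaned_count++;		/* real code would PQfinish(it->resource) */
}

/* The program adds and removes entries at will. */
static void
cleanup_register(void *resource)
{
	CleanupItem *it = malloc(sizeof(CleanupItem));

	it->resource = resource;
	it->next = cleanup_list;
	cleanup_list = it;
}

static void
cleanup_unregister(void *resource)
{
	for (CleanupItem **p = &cleanup_list; *p != NULL; p = &(*p)->next)
	{
		if ((*p)->resource == resource)
		{
			CleanupItem *dead = *p;

			*p = dead->next;
			free(dead);
			return;
		}
	}
}
```

With this pattern the per-database loop would register each connection when opened and unregister it when closed, instead of patching an array slot in on_exit_nicely_list.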
On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: > > Hi > > FWIW I don't think the on_exit_nicely business is in final shape just > yet. We're doing something super strange and novel about keeping track > of an array index, so that we can modify it later. Or something like > that, I think? That doesn't sound all that nice to me. Elsewhere it > was suggested that we need some way to keep track of the list of things > that need cleanup (a list of connections IIRC?) -- perhaps in a > thread-local variable or a global or something -- and we install the > cleanup function once, and that reads from the variable. The program > can add things to the list, or remove them, at will; and we don't need > to modify the cleanup function in any way. > > -- > Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/ Thanks Álvaro for the feedback. I removed the old handling of on_exit_nicely_list from the last patch set and added one simple function to just update the archive handle in shutdown_info. (shutdown_info.AHX = AHX;) For the first database, we will add an entry into the on_exit_nicely_list array, and for the rest of the databases, we will only update shutdown_info, as we already closed the connection for the previous database. With this fix, we will not touch the on_exit_nicely_list entry for each database. Here, I am attaching updated patches. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote: > On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: >> Hi >> >> FWIW I don't think the on_exit_nicely business is in final shape just >> yet. We're doing something super strange and novel about keeping track >> of an array index, so that we can modify it later. Or something like >> that, I think? That doesn't sound all that nice to me. Elsewhere it >> was suggested that we need some way to keep track of the list of things >> that need cleanup (a list of connections IIRC?) -- perhaps in a >> thread-local variable or a global or something -- and we install the >> cleanup function once, and that reads from the variable. The program >> can add things to the list, or remove them, at will; and we don't need >> to modify the cleanup function in any way. >> >> -- >> Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/ > Thanks Álvaro for the feedback. > > I removed the old handling of on_exit_nicely_list from the last patch > set and added one simple function to just update the archive handle in > shutdown_info. (shutdown_info.AHX = AHX;) > > For first database, we will add entry into on_exit_nicely_list array > and for rest database, we will update only shutdown_info as we already > closed connection for previous database.With this fix, we will not > touch entry of on_exit_nicely_list for each database. > > Here, I am attaching updated patches. > OK, looks good. here's my latest. I'm currently working on tidying up docco and comments. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Attachments
On Fri, 4 Apr 2025 at 01:17, Andrew Dunstan <andrew@dunslane.net> wrote: > > > On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote: > > On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: > >> Hi > >> > >> FWIW I don't think the on_exit_nicely business is in final shape just > >> yet. We're doing something super strange and novel about keeping track > >> of an array index, so that we can modify it later. Or something like > >> that, I think? That doesn't sound all that nice to me. Elsewhere it > >> was suggested that we need some way to keep track of the list of things > >> that need cleanup (a list of connections IIRC?) -- perhaps in a > >> thread-local variable or a global or something -- and we install the > >> cleanup function once, and that reads from the variable. The program > >> can add things to the list, or remove them, at will; and we don't need > >> to modify the cleanup function in any way. > >> > >> -- > >> Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/ > > Thanks Álvaro for the feedback. > > > > I removed the old handling of on_exit_nicely_list from the last patch > > set and added one simple function to just update the archive handle in > > shutdown_info. (shutdown_info.AHX = AHX;) > > > > For first database, we will add entry into on_exit_nicely_list array > > and for rest database, we will update only shutdown_info as we already > > closed connection for previous database.With this fix, we will not > > touch entry of on_exit_nicely_list for each database. > > > > Here, I am attaching updated patches. > > > > > OK, looks good. here's my latest. I'm currently working on tidying up > docco and comments. > > > cheers > > > andrew > > > > > -- > Andrew Dunstan > EDB: https://www.enterprisedb.com Thanks Andrew for the updated patches. Here, I am attaching a delta patch with some more TAP-test cases. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attachments
On Fri, 4 Apr 2025 at 13:52, Mahendra Singh Thalor <mahi6run@gmail.com> wrote: > > On Fri, 4 Apr 2025 at 01:17, Andrew Dunstan <andrew@dunslane.net> wrote: > > > > > > On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote: > > > On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: > > >> Hi > > >> > > >> FWIW I don't think the on_exit_nicely business is in final shape just > > >> yet. We're doing something super strange and novel about keeping track > > >> of an array index, so that we can modify it later. Or something like > > >> that, I think? That doesn't sound all that nice to me. Elsewhere it > > >> was suggested that we need some way to keep track of the list of things > > >> that need cleanup (a list of connections IIRC?) -- perhaps in a > > >> thread-local variable or a global or something -- and we install the > > >> cleanup function once, and that reads from the variable. The program > > >> can add things to the list, or remove them, at will; and we don't need > > >> to modify the cleanup function in any way. > > >> > > >> -- > > >> Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/ > > > Thanks Álvaro for the feedback. > > > > > > I removed the old handling of on_exit_nicely_list from the last patch > > > set and added one simple function to just update the archive handle in > > > shutdown_info. (shutdown_info.AHX = AHX;) > > > > > > For first database, we will add entry into on_exit_nicely_list array > > > and for rest database, we will update only shutdown_info as we already > > > closed connection for previous database.With this fix, we will not > > > touch entry of on_exit_nicely_list for each database. > > > > > > Here, I am attaching updated patches. > > > > > > > > > OK, looks good. here's my latest. I'm currently working on tidying up > > docco and comments. 
> > > > > > cheers > > > > > > andrew > > > > > > > > > > -- > > Andrew Dunstan > > EDB: https://www.enterprisedb.com > > Thanks Andrew for the updated patches. > > Here, I am attaching a delta patch with some more TAP-test cases. > Here, I am attaching an updated delta patch which has some more TAP tests. Please include these tests also. This patch can be applied on v20250403_0004* patch. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attachments
On Fri, 4 Apr 2025 at 13:52, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
> Here, I am attaching an updated delta patch which has some more TAP tests. Please include these tests also. This patch can be applied on v20250403_0004* patch.
Thanks. I have pushed these now with a few further small tweaks.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
On Sat, 5 Apr 2025 at 01:41, Andrew Dunstan <andrew@dunslane.net> wrote: > > > On 2025-04-04 Fr 5:12 AM, Mahendra Singh Thalor wrote: > > On Fri, 4 Apr 2025 at 13:52, Mahendra Singh Thalor <mahi6run@gmail.com> wrote: > > On Fri, 4 Apr 2025 at 01:17, Andrew Dunstan <andrew@dunslane.net> wrote: > > On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote: > > On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: > > Hi > > FWIW I don't think the on_exit_nicely business is in final shape just > yet. We're doing something super strange and novel about keeping track > of an array index, so that we can modify it later. Or something like > that, I think? That doesn't sound all that nice to me. Elsewhere it > was suggested that we need some way to keep track of the list of things > that need cleanup (a list of connections IIRC?) -- perhaps in a > thread-local variable or a global or something -- and we install the > cleanup function once, and that reads from the variable. The program > can add things to the list, or remove them, at will; and we don't need > to modify the cleanup function in any way. > > -- > Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/ > > Thanks Álvaro for the feedback. > > I removed the old handling of on_exit_nicely_list from the last patch > set and added one simple function to just update the archive handle in > shutdown_info. (shutdown_info.AHX = AHX;) > > For first database, we will add entry into on_exit_nicely_list array > and for rest database, we will update only shutdown_info as we already > closed connection for previous database.With this fix, we will not > touch entry of on_exit_nicely_list for each database. > > Here, I am attaching updated patches. > > > OK, looks good. here's my latest. I'm currently working on tidying up > docco and comments. > > > cheers > > > andrew > > > > > -- > Andrew Dunstan > EDB: https://www.enterprisedb.com > > Thanks Andrew for the updated patches. 
> > Here, I am attaching a delta patch with some more TAP-test cases. > > Here, I am attaching an updated delta patch which has some more TAP > tests. Please include these tests also. This patch can be applied on > v20250403_0004* patch. > > > > Thanks. I have pushed these now with a few further small tweaks. > > > cheers > > > andrew > > -- > Andrew Dunstan > EDB: https://www.enterprisedb.com Thanks Andrew for committing this. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Thanks. I have pushed these now with a few further small tweaks.
Sorry if it is not the right place. Coverity has another resource leak alert. Trivial patch attached.
Thanks for checking. Pushed.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
On Thu, 10 Apr 2025 at 15:58, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-04-10 Th 2:38 PM, Ranier Vilela wrote:
Thanks. I have pushed these now with a few further small tweaks.
Sorry if it is not the right place.Coverity has another resource leak alert.trivial patch attached.
Thanks for checking. Pushed.
Andrew, I think that the commit wasn't quite correct. Now the variable *q* is being destroyed inside the loop. The patch destroyed the variable *q* (stringinfo) after the while loop.
Yes, you're right. Must be blind. Fixed.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
My fault.
I'm sorry.
On Sat, 5 Apr 2025 at 01:41, Andrew Dunstan <andrew@dunslane.net> wrote: > > > On 2025-04-04 Fr 5:12 AM, Mahendra Singh Thalor wrote: > > On Fri, 4 Apr 2025 at 13:52, Mahendra Singh Thalor <mahi6run@gmail.com> wrote: > > On Fri, 4 Apr 2025 at 01:17, Andrew Dunstan <andrew@dunslane.net> wrote: > > On 2025-04-01 Tu 1:59 AM, Mahendra Singh Thalor wrote: > > On Mon, 31 Mar 2025 at 23:43, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote: > > Hi > > FWIW I don't think the on_exit_nicely business is in final shape just > yet. We're doing something super strange and novel about keeping track > of an array index, so that we can modify it later. Or something like > that, I think? That doesn't sound all that nice to me. Elsewhere it > was suggested that we need some way to keep track of the list of things > that need cleanup (a list of connections IIRC?) -- perhaps in a > thread-local variable or a global or something -- and we install the > cleanup function once, and that reads from the variable. The program > can add things to the list, or remove them, at will; and we don't need > to modify the cleanup function in any way. > > -- > Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/ > > Thanks Álvaro for the feedback. > > I removed the old handling of on_exit_nicely_list from the last patch > set and added one simple function to just update the archive handle in > shutdown_info. (shutdown_info.AHX = AHX;) > > For first database, we will add entry into on_exit_nicely_list array > and for rest database, we will update only shutdown_info as we already > closed connection for previous database.With this fix, we will not > touch entry of on_exit_nicely_list for each database. > > Here, I am attaching updated patches. > > > OK, looks good. here's my latest. I'm currently working on tidying up > docco and comments. > > > cheers > > > andrew > > > > > -- > Andrew Dunstan > EDB: https://www.enterprisedb.com > > Thanks Andrew for the updated patches. 
> > Here, I am attaching a delta patch with some more TAP-test cases. > > Here, I am attaching an updated delta patch which has some more TAP > tests. Please include these tests also. This patch can be applied on > v20250403_0004* patch. > > > > Thanks. I have pushed these now with a few further small tweaks. > > > cheers > > > andrew > > -- > Andrew Dunstan > EDB: https://www.enterprisedb.com Hi Andrew, I did some refactoring to determine the dump file extension (.dmp/.tar etc.) in pg_restore. With the attached patch, we will not try to determine the file extension for each database; rather, we will determine it once before the loop. Here, I am attaching a patch for the same. Please have a look over this. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attachments
On 2025-04-15 Tu 2:30 PM, Mahendra Singh Thalor wrote: > Hi Andrew, > I did some refactoring to find out dump file extensions(.dmp/.tar etc) > in pg_restore. With the attached patch, we will not try to find out > file extension with each database, rather we will find out before the > loop. > > Here, I am attaching a patch for the same. Please have a look over this. That doesn't look right at first glance. You shouldn't have to tell pg_restore what format to use, it should be able to intuit it from the dumps (and that's what the docs say it does). The saving here would be hardly measurable anyway - you would be in effect saving one or two stat calls per database. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
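For reference, the kind of format detection Andrew describes can be done by sniffing the first few bytes of each dump; a toy sketch (the magic strings follow this thread's "PGDMP"/"PGGLO" proposal, and the helper name is made up — pg_restore's real logic also probes for directories and toc.dat):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper for illustration; not pg_restore's actual code. */
typedef enum
{
	FORMAT_UNKNOWN,
	FORMAT_CUSTOM,				/* custom-format archive, magic "PGDMP" */
	FORMAT_GLOBAL_TOC			/* proposed global toc.dat, magic "PGGLO" */
} SniffedFormat;

static SniffedFormat
sniff_format(const unsigned char *header, size_t len)
{
	/* The magic block is the first five bytes of the file. */
	if (len >= 5 && memcmp(header, "PGDMP", 5) == 0)
		return FORMAT_CUSTOM;
	if (len >= 5 && memcmp(header, "PGGLO", 5) == 0)
		return FORMAT_GLOBAL_TOC;
	return FORMAT_UNKNOWN;
}
```

Because the format is recoverable from the files themselves, passing it explicitly to pg_restore buys nothing beyond skipping a stat or read per database, which is Andrew's point.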
On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote: > Thanks. I have pushed these now with a few further small tweaks. This drops all databases: pg_dumpall --clean -Fd -f /tmp/dump pg_restore -d template1 --globals-only /tmp/dump That didn't match my expectations given this help text: $ pg_restore --help|grep global -g, --globals-only restore only global objects, no databases This happens in dropDBs(). I found that by searching pg_dumpall.c for "OPF", which finds all the content we can write to globals.dat. commit 1495eff wrote: > --- a/src/bin/pg_dump/pg_dumpall.c > +++ b/src/bin/pg_dump/pg_dumpall.c > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > continue; > } > > + /* > + * If this is not a plain format dump, then append dboid and dbname to > + * the map.dat file. > + */ > + if (archDumpFormat != archNull) > + { > + if (archDumpFormat == archCustom) > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid); > + else if (archDumpFormat == archTar) > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid); > + else > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid); Use appendShellString() instead. Plain mode already does that for the "pg_dumpall -f" argument, which is part of db_subdir here. We don't want weird filename characters to work out differently for plain vs. non-plain mode. Also, it's easier to search for appendShellString() than to search for open-coded shell quoting. 
> @@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn) > if (filename) > fclose(OPF); > > - ret = runPgDump(dbname, create_opts); > + ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat); > if (ret != 0) > pg_fatal("pg_dump failed on database \"%s\", exiting", dbname); > > if (filename) > { > - OPF = fopen(filename, PG_BINARY_A); > + char global_path[MAXPGPATH]; > + > + if (archDumpFormat != archNull) > + snprintf(global_path, MAXPGPATH, "%s/global.dat", filename); > + else > + snprintf(global_path, MAXPGPATH, "%s", filename); > + > + OPF = fopen(global_path, PG_BINARY_A); > if (!OPF) > pg_fatal("could not re-open the output file \"%s\": %m", > - filename); > + global_path); Minor item: plain mode benefits from reopening, because pg_dump appended to the plain output file. There's no analogous need to reopen global.dat, since just this one process writes to global.dat. > @@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts) > initPQExpBuffer(&connstrbuf); > initPQExpBuffer(&cmd); > > - printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > - pgdumpopts->data, create_opts); > - > /* > - * If we have a filename, use the undocumented plain-append pg_dump > - * format. > + * If this is not a plain format dump, then append file name and dump > + * format to the pg_dump command to get archive dump. 
> */ > - if (filename) > - appendPQExpBufferStr(&cmd, " -Fa "); > + if (archDumpFormat != archNull) > + { > + printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin, > + dbfile, create_opts); > + > + if (archDumpFormat == archDirectory) > + appendPQExpBufferStr(&cmd, " --format=directory "); > + else if (archDumpFormat == archCustom) > + appendPQExpBufferStr(&cmd, " --format=custom "); > + else if (archDumpFormat == archTar) > + appendPQExpBufferStr(&cmd, " --format=tar "); > + } > else > - appendPQExpBufferStr(&cmd, " -Fp "); > + { > + printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > + pgdumpopts->data, create_opts); This uses pgdumpopts for plain mode only, so many pg_dumpall options silently have no effect in non-plain mode. Example: strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec > --- a/src/bin/pg_dump/pg_restore.c > +++ b/src/bin/pg_dump/pg_restore.c > +/* > + * read_one_statement > + * > + * This will start reading from passed file pointer using fgetc and read till > + * semicolon(sql statement terminator for global.dat file) > + * > + * EOF is returned if end-of-file input is seen; time to shut down. What makes it okay to use this particular subset of SQL lexing? > +/* > + * get_dbnames_list_to_restore > + * > + * This will mark for skipping any entries from dbname_oid_list that pattern match an > + * entry in the db_exclude_patterns list. > + * > + * Returns the number of database to be restored. > + * > + */ > +static int > +get_dbnames_list_to_restore(PGconn *conn, > + SimpleOidStringList *dbname_oid_list, > + SimpleStringList db_exclude_patterns) > +{ > + int count_db = 0; > + PQExpBuffer query; > + PGresult *res; > + > + query = createPQExpBuffer(); > + > + if (!conn) > + pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore."); When do we not have a connection here? 
We'd need to document this behavior variation if it stays, but I'd prefer if we can just rely on having a connection. > + /* If database is already created, then don't set createDB flag. */ > + if (opts->cparams.dbname) > + { > + PGconn *test_conn; > + > + test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost, > + opts->cparams.pgport, opts->cparams.username, TRI_DEFAULT, > + false, progname, NULL, NULL, NULL, NULL); > + if (test_conn) > + { > + PQfinish(test_conn); > + > + /* Use already created database for connection. */ > + opts->createDB = 0; > + opts->cparams.dbname = db_cell->str; > + } > + else > + { > + /* we'll have to create it */ > + opts->createDB = 1; > + opts->cparams.dbname = connected_db; > + } In released versions, "pg_restore --create" fails if the database exists, and pg_restore w/o --create fails unless the database exists. I think we should continue that pattern in this new feature. If not, pg_restore should document how it treats pg_dumpall-sourced dumps with the "create if not exists" semantics appearing here.
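To illustrate Noah's question about read_one_statement: reading "until semicolon" is only safe with enough lexing to know which semicolons actually terminate a statement. A toy scanner (hypothetical code, handling only single-quoted literals — real SQL also has dollar quoting, quoted identifiers, and comments, which is exactly the concern):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Return the offset of the semicolon terminating the first statement in
 * sql, or strlen(sql) if there is none.  A semicolon inside a
 * single-quoted literal must not count as a terminator.
 */
static size_t
statement_end(const char *sql)
{
	bool		in_quote = false;

	for (size_t i = 0; sql[i] != '\0'; i++)
	{
		if (sql[i] == '\'')
			in_quote = !in_quote;	/* toggle on quote boundaries */
		else if (sql[i] == ';' && !in_quote)
			return i;
	}
	return strlen(sql);
}
```

A naive "scan to the first semicolon" reader would split `COMMENT ON ROLE r IS 'a;b';` in the middle of the literal; whatever subset of lexing global.dat's reader implements has to be justified against the statements pg_dumpall can actually emit there.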
Thanks Noah for the comments. On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote: > > On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote: > > Thanks. I have pushed these now with a few further small tweaks. > > This drops all databases: > > pg_dumpall --clean -Fd -f /tmp/dump > pg_restore -d template1 --globals-only /tmp/dump > > That didn't match my expectations given this help text: > > $ pg_restore --help|grep global > -g, --globals-only restore only global objects, no databases Databases are global objects so due to --clean command, we are putting drop commands in global.dat for all the databases. While restoring, we used the "--globals-only" option so we are dropping all these databases by global.dat file. Please let us know your expectations for this specific case. > > This happens in dropDBs(). I found that by searching pg_dumpall.c for "OPF", > which finds all the content we can write to globals.dat. > > commit 1495eff wrote: > > --- a/src/bin/pg_dump/pg_dumpall.c > > +++ b/src/bin/pg_dump/pg_dumpall.c > > > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > > continue; > > } > > > > + /* > > + * If this is not a plain format dump, then append dboid and dbname to > > + * the map.dat file. > > + */ > > + if (archDumpFormat != archNull) > > + { > > + if (archDumpFormat == archCustom) > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid); > > + else if (archDumpFormat == archTar) > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid); > > + else > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid); > > Use appendShellString() instead. Plain mode already does that for the > "pg_dumpall -f" argument, which is part of db_subdir here. We don't want > weird filename characters to work out differently for plain vs. non-plain > mode. Also, it's easier to search for appendShellString() than to search for > open-coded shell quoting. Yes, we can use appendShellString also. 
We are using snprintf in the pg_dump.c file also. Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u", loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]); If we want to use appendShellString, I can write a patch for these. Please let me know your opinion. > > > @@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn) > > if (filename) > > fclose(OPF); > > > > - ret = runPgDump(dbname, create_opts); > > + ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat); > > if (ret != 0) > > pg_fatal("pg_dump failed on database \"%s\", exiting", dbname); > > > > if (filename) > > { > > - OPF = fopen(filename, PG_BINARY_A); > > + char global_path[MAXPGPATH]; > > + > > + if (archDumpFormat != archNull) > > + snprintf(global_path, MAXPGPATH, "%s/global.dat", filename); > > + else > > + snprintf(global_path, MAXPGPATH, "%s", filename); > > + > > + OPF = fopen(global_path, PG_BINARY_A); > > if (!OPF) > > pg_fatal("could not re-open the output file \"%s\": %m", > > - filename); > > + global_path); > > Minor item: plain mode benefits from reopening, because pg_dump appended to > the plain output file. There's no analogous need to reopen global.dat, since > just this one process writes to global.dat. yes, only once we need to open global.dat file but to keep simple code, we kept old code. > > > @@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts) > > initPQExpBuffer(&connstrbuf); > > initPQExpBuffer(&cmd); > > > > - printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > > - pgdumpopts->data, create_opts); > > - > > /* > > - * If we have a filename, use the undocumented plain-append pg_dump > > - * format. > > + * If this is not a plain format dump, then append file name and dump > > + * format to the pg_dump command to get archive dump. 
> > */ > > - if (filename) > > - appendPQExpBufferStr(&cmd, " -Fa "); > > + if (archDumpFormat != archNull) > > + { > > + printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin, > > + dbfile, create_opts); > > + > > + if (archDumpFormat == archDirectory) > > + appendPQExpBufferStr(&cmd, " --format=directory "); > > + else if (archDumpFormat == archCustom) > > + appendPQExpBufferStr(&cmd, " --format=custom "); > > + else if (archDumpFormat == archTar) > > + appendPQExpBufferStr(&cmd, " --format=tar "); > > + } > > else > > - appendPQExpBufferStr(&cmd, " -Fp "); > > + { > > + printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > > + pgdumpopts->data, create_opts); > > This uses pgdumpopts for plain mode only, so many pg_dumpall options silently > have no effect in non-plain mode. Example: > > strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec > strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec Agreed. We can add pgdumpopts->data to all the dump formats. > > > --- a/src/bin/pg_dump/pg_restore.c > > +++ b/src/bin/pg_dump/pg_restore.c > > > +/* > > + * read_one_statement > > + * > > + * This will start reading from passed file pointer using fgetc and read till > > + * semicolon(sql statement terminator for global.dat file) > > + * > > + * EOF is returned if end-of-file input is seen; time to shut down. > > What makes it okay to use this particular subset of SQL lexing? To support complex syntax, we used this code from another file. > > > +/* > > + * get_dbnames_list_to_restore > > + * > > + * This will mark for skipping any entries from dbname_oid_list that pattern match an > > + * entry in the db_exclude_patterns list. > > + * > > + * Returns the number of database to be restored. 
> > + * > > + */ > > +static int > > +get_dbnames_list_to_restore(PGconn *conn, > > + SimpleOidStringList *dbname_oid_list, > > + SimpleStringList db_exclude_patterns) > > +{ > > + int count_db = 0; > > + PQExpBuffer query; > > + PGresult *res; > > + > > + query = createPQExpBuffer(); > > + > > + if (!conn) > > + pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doingpg_restore."); > > When do we not have a connection here? We'd need to document this behavior > variation if it stays, but I'd prefer if we can just rely on having a > connection. Yes, we can document this behavior. > > > + /* If database is already created, then don't set createDB flag. */ > > + if (opts->cparams.dbname) > > + { > > + PGconn *test_conn; > > + > > + test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost, > > + opts->cparams.pgport, opts->cparams.username,TRI_DEFAULT, > > + false, progname, NULL, NULL, NULL, NULL); > > + if (test_conn) > > + { > > + PQfinish(test_conn); > > + > > + /* Use already created database for connection. */ > > + opts->createDB = 0; > > + opts->cparams.dbname = db_cell->str; > > + } > > + else > > + { > > + /* we'll have to create it */ > > + opts->createDB = 1; > > + opts->cparams.dbname = connected_db; > > + } > > In released versions, "pg_restore --create" fails if the database exists, and > pg_restore w/o --create fails unless the database exists. I think we should > continue that pattern in this new feature. If not, pg_restore should document > how it treats pg_dumpall-sourced dumps with the "create if not exists" > semantics appearing here. Yes, we can document this behavior also. I am working on all these review comments and I will post a patch in the coming days. Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote: > On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote: > > On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote: > > > Thanks. I have pushed these now with a few further small tweaks. > > > > This drops all databases: > > > > pg_dumpall --clean -Fd -f /tmp/dump > > pg_restore -d template1 --globals-only /tmp/dump > > > > That didn't match my expectations given this help text: > > > > $ pg_restore --help|grep global > > -g, --globals-only restore only global objects, no databases > > Databases are global objects so due to --clean command, we are putting > drop commands in global.dat for all the databases. While restoring, we > used the "--globals-only" option so we are dropping all these > databases by global.dat file. > > Please let us know your expectations for this specific case. Be consistent with "pg_dump". A quick check suggests "pg_dump --clean" affects plain format only. For non-plain formats, only the pg_restore argument governs the final commands: $ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore -f- /tmp/dump | grep DROP $ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore --clean -f- /tmp/dump | grep DROP DROP TABLE public.example; That said, you should audit code referencing the --clean flag and see if there's more to it than that quick test suggests. Note that aligning with pg_dump will require changes for object types beyond databases. "pg_restore --clean" of a global dump should emit DROP TABLESPACE and DROP ROLE as appropriate, regardless of whether the original pg_dumpall had --clean. For my earlier example (pg_dumpall --clean; pg_restore --globals-only) I expect the same outcome as plain-format "pg_dumpall --globals-only", which is no databases dropped or created. The help line says "no databases". Plain "pg_dumpall --globals-only" and even "pg_dumpall --globals-only --clean" do not drop or create databases. 
> > commit 1495eff wrote: > > > --- a/src/bin/pg_dump/pg_dumpall.c > > > +++ b/src/bin/pg_dump/pg_dumpall.c > > > > > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > > > continue; > > > } > > > > > > + /* > > > + * If this is not a plain format dump, then append dboid and dbname to > > > + * the map.dat file. > > > + */ > > > + if (archDumpFormat != archNull) > > > + { > > > + if (archDumpFormat == archCustom) > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid); > > > + else if (archDumpFormat == archTar) > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid); > > > + else > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid); > > > > Use appendShellString() instead. Plain mode already does that for the > > "pg_dumpall -f" argument, which is part of db_subdir here. We don't want > > weird filename characters to work out differently for plain vs. non-plain > > mode. Also, it's easier to search for appendShellString() than to search for > > open-coded shell quoting. > > Yes, we can use appendShellString also. We are using snprintf in the > pg_dump.c file also. > Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u", > loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]); It's true snprintf() is not banned in these programs, but don't use it to do the quoting for OS shell command lines or fragments thereof. dbfilepath is a fragment of an OS shell command line. The LARGE OBJECTS string is not one of those. Hence, the LARGE OBJECTS scenario should keep using snprintf(). > If we want to use appendShellString, I can write a patch for these. > Please let me know your opinion. Use appendShellString() for shell quoting. Don't attempt to use it for other purposes. 
> > > --- a/src/bin/pg_dump/pg_restore.c > > > +++ b/src/bin/pg_dump/pg_restore.c > > > > > +/* > > > + * read_one_statement > > > + * > > > + * This will start reading from passed file pointer using fgetc and read till > > > + * semicolon(sql statement terminator for global.dat file) > > > + * > > > + * EOF is returned if end-of-file input is seen; time to shut down. > > > > What makes it okay to use this particular subset of SQL lexing? > > To support complex syntax, we used this code from another file. I'm hearing that you copied this code from somewhere. Running "git grep 'time to shut down'" suggests you copied it from InteractiveBackend(). Is that right? I do see other similarities between read_one_statement() and InteractiveBackend(). Copying InteractiveBackend() provides negligible assurance that this is the right subset of SQL lexing. Only single-user mode uses InteractiveBackend(). Single-user mode survives mostly as a last resort for recovering from having reached xidStopLimit, is rarely used, and only superusers write queries to it. > > > +/* > > > + * get_dbnames_list_to_restore > > > + * > > > + * This will mark for skipping any entries from dbname_oid_list that pattern match an > > > + * entry in the db_exclude_patterns list. > > > + * > > > + * Returns the number of database to be restored. > > > + * > > > + */ > > > +static int > > > +get_dbnames_list_to_restore(PGconn *conn, > > > + SimpleOidStringList *dbname_oid_list, > > > + SimpleStringList db_exclude_patterns) > > > +{ > > > + int count_db = 0; > > > + PQExpBuffer query; > > > + PGresult *res; > > > + > > > + query = createPQExpBuffer(); > > > + > > > + if (!conn) > > > + pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doingpg_restore."); > > > > When do we not have a connection here? We'd need to document this behavior > > variation if it stays, but I'd prefer if we can just rely on having a > > connection. 
> > Yes, we can document this behavior. My review asked a question there. I don't see an answer to that question. Would you answer that question?
Thanks Noah for the feedback. On Wed, 16 Jul 2025 at 05:50, Noah Misch <noah@leadboat.com> wrote: > > On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote: > > On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote: > > > On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote: > > > > Thanks. I have pushed these now with a few further small tweaks. > > > > > > This drops all databases: > > > > > > pg_dumpall --clean -Fd -f /tmp/dump > > > pg_restore -d template1 --globals-only /tmp/dump > > > > > > That didn't match my expectations given this help text: > > > > > > $ pg_restore --help|grep global > > > -g, --globals-only restore only global objects, no databases > > > > Databases are global objects so due to --clean command, we are putting > > drop commands in global.dat for all the databases. While restoring, we > > used the "--globals-only" option so we are dropping all these > > databases by global.dat file. > > > > Please let us know your expectations for this specific case. > > Be consistent with "pg_dump". A quick check suggests "pg_dump --clean" > affects plain format only. For non-plain formats, only the pg_restore > argument governs the final commands: > > $ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore -f- /tmp/dump | grep DROP > $ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore --clean -f- /tmp/dump | grep DROP > DROP TABLE public.example; > > That said, you should audit code referencing the --clean flag and see if > there's more to it than that quick test suggests. Note that aligning with > pg_dump will require changes for object types beyond databases. "pg_restore > --clean" of a global dump should emit DROP TABLESPACE and DROP ROLE as > appropriate, regardless of whether the original pg_dumpall had --clean. 
> > For my earlier example (pg_dumpall --clean; pg_restore --globals-only) I > expect the same outcome as plain-format "pg_dumpall --globals-only", which is > no databases dropped or created. The help line says "no databases". Plain > "pg_dumpall --globals-only" and even "pg_dumpall --globals-only --clean" do > not drop or create databases. To pg_restore, we are giving a pg_dumpall dump that has a global.dat file, and that file contains DROP commands, so when we use 'globals-only' we drop the databases because those DROP commands are replayed. As of now, we don't have any filter for the global.dat file in restore. If a user wants to restore only globals (without dropping databases), they should use 'globals-only' in pg_dumpall. Or, if we don't want to DROP databases from the global.dat file, we should add a filter in pg_restore (hard to implement, as global.dat contains SQL commands). I think, for this case, we can make some more doc changes. Example: pg_restore --globals-only: this restores the global.dat file (including all DROP commands), so it may drop databases if DROP commands are present. @Andrew Dunstan Please add your opinion. > > > > > > commit 1495eff wrote: > > > > > --- a/src/bin/pg_dump/pg_dumpall.c > > > > > +++ b/src/bin/pg_dump/pg_dumpall.c > > > > > > > > > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > > > > > continue; > > > > > } > > > > > > > > > > + /* > > > > > + * If this is not a plain format dump, then append dboid and dbname to > > > > > + * the map.dat file.
Plain mode already does that for the > > > "pg_dumpall -f" argument, which is part of db_subdir here. We don't want > > > weird filename characters to work out differently for plain vs. non-plain > > > mode. Also, it's easier to search for appendShellString() than to search for > > > open-coded shell quoting. > > > > Yes, we can use appendShellString also. We are using snprintf in the > > pg_dump.c file also. > > Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u", > > loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]); > > It's true snprintf() is not banned in these programs, but don't use it to do > the quoting for OS shell command lines or fragments thereof. dbfilepath is a > fragment of an OS shell command line. The LARGE OBJECTS string is not one of > those. Hence, the LARGE OBJECTS scenario should keep using snprintf(). > > > If we want to use appendShellString, I can write a patch for these. > > Please let me know your opinion. > > Use appendShellString() for shell quoting. Don't attempt to use it for other > purposes. Okay. Fixed in attached patch. > > > > > --- a/src/bin/pg_dump/pg_restore.c > > > > +++ b/src/bin/pg_dump/pg_restore.c > > > > > > > +/* > > > > + * read_one_statement > > > > + * > > > > + * This will start reading from passed file pointer using fgetc and read till > > > > + * semicolon(sql statement terminator for global.dat file) > > > > + * > > > > + * EOF is returned if end-of-file input is seen; time to shut down. > > > > > > What makes it okay to use this particular subset of SQL lexing? > > > > To support complex syntax, we used this code from another file. > > I'm hearing that you copied this code from somewhere. Running > "git grep 'time to shut down'" suggests you copied it from > InteractiveBackend(). Is that right? I do see other similarities between > read_one_statement() and InteractiveBackend(). > > Copying InteractiveBackend() provides negligible assurance that this is the > right subset of SQL lexing. 
Only single-user mode uses InteractiveBackend(). > Single-user mode survives mostly as a last resort for recovering from having > reached xidStopLimit, is rarely used, and only superusers write queries to it. Yes, we copied this from InteractiveBackend to read statements from the global.dat file. > > > > > +/* > > > > + * get_dbnames_list_to_restore > > > > + * > > > > + * This will mark for skipping any entries from dbname_oid_list that pattern match an > > > > + * entry in the db_exclude_patterns list. > > > > + * > > > > + * Returns the number of databases to be restored. > > > > + * > > > > + */ > > > > +static int > > > > +get_dbnames_list_to_restore(PGconn *conn, > > > > + SimpleOidStringList *dbname_oid_list, > > > > + SimpleStringList db_exclude_patterns) > > > > +{ > > > > + int count_db = 0; > > > > + PQExpBuffer query; > > > > + PGresult *res; > > > > + > > > > + query = createPQExpBuffer(); > > > > + > > > > + if (!conn) > > > > + pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore."); > > > > > > When do we not have a connection here? We'd need to document this behavior > > > variation if it stays, but I'd prefer if we can just rely on having a > > > connection. > > > > Yes, we can document this behavior. > > My review asked a question there. I don't see an answer to that question. > Would you answer that question? Example: if there is no database we can connect to, not even postgres/template1, then we treat the PATTERN as a literal NAME. This is a rare case. In the attached patch, I also added a doc line for this case. > > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > > continue; > > } > > > > + /* > > + * If this is not a plain format dump, then append dboid and dbname to > > + * the map.dat file.
> > + */ > > + if (archDumpFormat != archNull) > > + { > > + if (archDumpFormat == archCustom) > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid); > > + else if (archDumpFormat == archTar) > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid); > > + else > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid); > > Use appendShellString() instead. Plain mode already does that for the > "pg_dumpall -f" argument, which is part of db_subdir here. We don't want > weird filename characters to work out differently for plain vs. non-plain > mode. Also, it's easier to search for appendShellString() than to search for > open-coded shell quoting. Fixed. > > > @@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn) > > if (filename) > > fclose(OPF); > > > > - ret = runPgDump(dbname, create_opts); > > + ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat); > > if (ret != 0) > > pg_fatal("pg_dump failed on database \"%s\", exiting", dbname); > > > > if (filename) > > { > > - OPF = fopen(filename, PG_BINARY_A); > > + char global_path[MAXPGPATH]; > > + > > + if (archDumpFormat != archNull) > > + snprintf(global_path, MAXPGPATH, "%s/global.dat", filename); > > + else > > + snprintf(global_path, MAXPGPATH, "%s", filename); > > + > > + OPF = fopen(global_path, PG_BINARY_A); > > if (!OPF) > > pg_fatal("could not re-open the output file \"%s\": %m", > > - filename); > > + global_path); > > Minor item: plain mode benefits from reopening, because pg_dump appended to > the plain output file. There's no analogous need to reopen global.dat, since > just this one process writes to global.dat. Fixed. 
> > > @@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts) > > initPQExpBuffer(&connstrbuf); > > initPQExpBuffer(&cmd); > > > > - printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > > - pgdumpopts->data, create_opts); > > - > > /* > > - * If we have a filename, use the undocumented plain-append pg_dump > > - * format. > > + * If this is not a plain format dump, then append file name and dump > > + * format to the pg_dump command to get archive dump. > > */ > > - if (filename) > > - appendPQExpBufferStr(&cmd, " -Fa "); > > + if (archDumpFormat != archNull) > > + { > > + printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin, > > + dbfile, create_opts); > > + > > + if (archDumpFormat == archDirectory) > > + appendPQExpBufferStr(&cmd, " --format=directory "); > > + else if (archDumpFormat == archCustom) > > + appendPQExpBufferStr(&cmd, " --format=custom "); > > + else if (archDumpFormat == archTar) > > + appendPQExpBufferStr(&cmd, " --format=tar "); > > + } > > else > > - appendPQExpBufferStr(&cmd, " -Fp "); > > + { > > + printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > > + pgdumpopts->data, create_opts); > > This uses pgdumpopts for plain mode only, so many pg_dumpall options silently > have no effect in non-plain mode. Example: > > strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec > strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec Fixed. > > + /* If database is already created, then don't set createDB flag. */ > > + if (opts->cparams.dbname) > > + { > > + PGconn *test_conn; > > + > > + test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost, > > + opts->cparams.pgport, opts->cparams.username,TRI_DEFAULT, > > + false, progname, NULL, NULL, NULL, NULL); > > + if (test_conn) > > + { > > + PQfinish(test_conn); > > + > > + /* Use already created database for connection. 
*/ > > + opts->createDB = 0; > > + opts->cparams.dbname = db_cell->str; > > + } > > + else > > + { > > + /* we'll have to create it */ > > + opts->createDB = 1; > > + opts->cparams.dbname = connected_db; > > + } > > In released versions, "pg_restore --create" fails if the database exists, and > pg_restore w/o --create fails unless the database exists. I think we should > continue that pattern in this new feature. If not, pg_restore should document > how it treats pg_dumpall-sourced dumps with the "create if not exists" > semantics appearing here. Added one more doc line for this case. Here, I am attaching a patch. Please let me know feedback. -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
Attaching the correct patch. Sorry, I attached the wrong patch in my last email. On Thu, 17 Jul 2025 at 15:46, Mahendra Singh Thalor <mahi6run@gmail.com> wrote: > > Thanks Noah for the feedback. > > On Wed, 16 Jul 2025 at 05:50, Noah Misch <noah@leadboat.com> wrote: > > > > On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote: > > > On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote: > > > > On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote: > > > > > Thanks. I have pushed these now with a few further small tweaks. > > > > > > > > This drops all databases: > > > > > > > > pg_dumpall --clean -Fd -f /tmp/dump > > > > pg_restore -d template1 --globals-only /tmp/dump > > > > > > > > That didn't match my expectations given this help text: > > > > > > > > $ pg_restore --help|grep global > > > > -g, --globals-only restore only global objects, no databases > > > > > > Databases are global objects so due to --clean command, we are putting > > > drop commands in global.dat for all the databases. While restoring, we > > > used the "--globals-only" option so we are dropping all these > > > databases by global.dat file. > > > > > > Please let us know your expectations for this specific case. > > > > Be consistent with "pg_dump". A quick check suggests "pg_dump --clean" > > affects plain format only. For non-plain formats, only the pg_restore > > argument governs the final commands: > > > > $ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore -f- /tmp/dump | grep DROP > > $ rm -r /tmp/dump; pg_dump --clean -Fd -f /tmp/dump && pg_restore --clean -f- /tmp/dump | grep DROP > > DROP TABLE public.example; > > > > That said, you should audit code referencing the --clean flag and see if > > there's more to it than that quick test suggests. Note that aligning with > > pg_dump will require changes for object types beyond databases. 
"pg_restore > > --clean" of a global dump should emit DROP TABLESPACE and DROP ROLE as > > appropriate, regardless of whether the original pg_dumpall had --clean. > > > > For my earlier example (pg_dumpall --clean; pg_restore --globals-only) I > > expect the same outcome as plain-format "pg_dumpall --globals-only", which is > > no databases dropped or created. The help line says "no databases". Plain > > "pg_dumpall --globals-only" and even "pg_dumpall --globals-only --clean" do > > not drop or create databases. > > To pg_restore, we are giving a dump of pg_dumpall which has a > global.dat file and we have drop commands in the global.dat file so > when we are using 'globals-only', we are dropping databases as we have > DROP commands. > As of now, we don't have any filter for global.dat file in restore. If > a user wants to restore only globals(without droping db), then they > should use 'globals-only' in pg_dumpall. > Or if we don't want to DROP databases by global.dat file, then we > should add a filter in pg_restore (hard to implement as we have SQL > commands in global.dat file). I think, for this case, we can do some > more doc changes. > Example: pg_restore --globals-only : this will restore the global.dat > file(including all drop commands). It might drop databases if any drop > commands. > @Andrew Dunstan Please add your opinion. > > > > > > > commit 1495eff wrote: > > > > > --- a/src/bin/pg_dump/pg_dumpall.c > > > > > +++ b/src/bin/pg_dump/pg_dumpall.c > > > > > > > > > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > > > > > continue; > > > > > } > > > > > > > > > > + /* > > > > > + * If this is not a plain format dump, then append dboid and dbname to > > > > > + * the map.dat file. 
> > > > > + */ > > > > > + if (archDumpFormat != archNull) > > > > > + { > > > > > + if (archDumpFormat == archCustom) > > > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid); > > > > > + else if (archDumpFormat == archTar) > > > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid); > > > > > + else > > > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid); > > > > > > > > Use appendShellString() instead. Plain mode already does that for the > > > > "pg_dumpall -f" argument, which is part of db_subdir here. We don't want > > > > weird filename characters to work out differently for plain vs. non-plain > > > > mode. Also, it's easier to search for appendShellString() than to search for > > > > open-coded shell quoting. > > > > > > Yes, we can use appendShellString also. We are using snprintf in the > > > pg_dump.c file also. > > > Ex: snprintf(tagbuf, sizeof(tagbuf), "LARGE OBJECTS %u..%u", > > > loinfo->looids[0], loinfo->looids[loinfo->numlos - 1]); > > > > It's true snprintf() is not banned in these programs, but don't use it to do > > the quoting for OS shell command lines or fragments thereof. dbfilepath is a > > fragment of an OS shell command line. The LARGE OBJECTS string is not one of > > those. Hence, the LARGE OBJECTS scenario should keep using snprintf(). > > > > > If we want to use appendShellString, I can write a patch for these. > > > Please let me know your opinion. > > > > Use appendShellString() for shell quoting. Don't attempt to use it for other > > purposes. > > Okay. Fixed in attached patch. 
> > > > > > > > --- a/src/bin/pg_dump/pg_restore.c > > > > > +++ b/src/bin/pg_dump/pg_restore.c > > > > > > > > > +/* > > > > > + * read_one_statement > > > > > + * > > > > > + * This will start reading from passed file pointer using fgetc and read till > > > > > + * semicolon(sql statement terminator for global.dat file) > > > > > + * > > > > > + * EOF is returned if end-of-file input is seen; time to shut down. > > > > > > > > What makes it okay to use this particular subset of SQL lexing? > > > > > > To support complex syntax, we used this code from another file. > > > > I'm hearing that you copied this code from somewhere. Running > > "git grep 'time to shut down'" suggests you copied it from > > InteractiveBackend(). Is that right? I do see other similarities between > > read_one_statement() and InteractiveBackend(). > > > > Copying InteractiveBackend() provides negligible assurance that this is the > > right subset of SQL lexing. Only single-user mode uses InteractiveBackend(). > > Single-user mode survives mostly as a last resort for recovering from having > > reached xidStopLimit, is rarely used, and only superusers write queries to it. > > Yes, we copied this from InteractiveBackend to read statements from > global.dat file. > > > > > > > > +/* > > > > > + * get_dbnames_list_to_restore > > > > > + * > > > > > + * This will mark for skipping any entries from dbname_oid_list that pattern match an > > > > > + * entry in the db_exclude_patterns list. > > > > > + * > > > > > + * Returns the number of database to be restored. 
> > > > > + * > > > > > + */ > > > > > +static int > > > > > +get_dbnames_list_to_restore(PGconn *conn, > > > > > + SimpleOidStringList *dbname_oid_list, > > > > > + SimpleStringList db_exclude_patterns) > > > > > +{ > > > > > + int count_db = 0; > > > > > + PQExpBuffer query; > > > > > + PGresult *res; > > > > > + > > > > > + query = createPQExpBuffer(); > > > > > + > > > > > + if (!conn) > > > > > + pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection whiledoing pg_restore."); > > > > > > > > When do we not have a connection here? We'd need to document this behavior > > > > variation if it stays, but I'd prefer if we can just rely on having a > > > > connection. > > > > > > Yes, we can document this behavior. > > > > My review asked a question there. I don't see an answer to that question. > > Would you answer that question? > > Example: if there is no active database, even postgres/template1, then > we will consider PATTEREN as NAME. This is the rare case. > In attached patch, I added one doc line also for this case. > > > > @@ -1612,9 +1683,27 @@ dumpDatabases(PGconn *conn) > > > continue; > > > } > > > > > > + /* > > > + * If this is not a plain format dump, then append dboid and dbname to > > > + * the map.dat file. > > > + */ > > > + if (archDumpFormat != archNull) > > > + { > > > + if (archDumpFormat == archCustom) > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".dmp", db_subdir, oid); > > > + else if (archDumpFormat == archTar) > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\".tar", db_subdir, oid); > > > + else > > > + snprintf(dbfilepath, MAXPGPATH, "\"%s\"/\"%s\"", db_subdir, oid); > > > > Use appendShellString() instead. Plain mode already does that for the > > "pg_dumpall -f" argument, which is part of db_subdir here. We don't want > > weird filename characters to work out differently for plain vs. non-plain > > mode. 
Also, it's easier to search for appendShellString() than to search for > > open-coded shell quoting. > > Fixed. > > > > > > @@ -1641,19 +1727,30 @@ dumpDatabases(PGconn *conn) > > > if (filename) > > > fclose(OPF); > > > > > > - ret = runPgDump(dbname, create_opts); > > > + ret = runPgDump(dbname, create_opts, dbfilepath, archDumpFormat); > > > if (ret != 0) > > > pg_fatal("pg_dump failed on database \"%s\", exiting", dbname); > > > > > > if (filename) > > > { > > > - OPF = fopen(filename, PG_BINARY_A); > > > + char global_path[MAXPGPATH]; > > > + > > > + if (archDumpFormat != archNull) > > > + snprintf(global_path, MAXPGPATH, "%s/global.dat", filename); > > > + else > > > + snprintf(global_path, MAXPGPATH, "%s", filename); > > > + > > > + OPF = fopen(global_path, PG_BINARY_A); > > > if (!OPF) > > > pg_fatal("could not re-open the output file \"%s\": %m", > > > - filename); > > > + global_path); > > > > Minor item: plain mode benefits from reopening, because pg_dump appended to > > the plain output file. There's no analogous need to reopen global.dat, since > > just this one process writes to global.dat. > > Fixed. > > > > > > @@ -1672,17 +1770,36 @@ runPgDump(const char *dbname, const char *create_opts) > > > initPQExpBuffer(&connstrbuf); > > > initPQExpBuffer(&cmd); > > > > > > - printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > > > - pgdumpopts->data, create_opts); > > > - > > > /* > > > - * If we have a filename, use the undocumented plain-append pg_dump > > > - * format. > > > + * If this is not a plain format dump, then append file name and dump > > > + * format to the pg_dump command to get archive dump. 
> > > */ > > > - if (filename) > > > - appendPQExpBufferStr(&cmd, " -Fa "); > > > + if (archDumpFormat != archNull) > > > + { > > > + printfPQExpBuffer(&cmd, "\"%s\" -f %s %s", pg_dump_bin, > > > + dbfile, create_opts); > > > + > > > + if (archDumpFormat == archDirectory) > > > + appendPQExpBufferStr(&cmd, " --format=directory "); > > > + else if (archDumpFormat == archCustom) > > > + appendPQExpBufferStr(&cmd, " --format=custom "); > > > + else if (archDumpFormat == archTar) > > > + appendPQExpBufferStr(&cmd, " --format=tar "); > > > + } > > > else > > > - appendPQExpBufferStr(&cmd, " -Fp "); > > > + { > > > + printfPQExpBuffer(&cmd, "\"%s\" %s %s", pg_dump_bin, > > > + pgdumpopts->data, create_opts); > > > > This uses pgdumpopts for plain mode only, so many pg_dumpall options silently > > have no effect in non-plain mode. Example: > > > > strace -f pg_dumpall --lock-wait-timeout=10 2>&1 >/dev/null | grep exec > > strace -f pg_dumpall --lock-wait-timeout=10 -Fd -f /tmp/dump3 2>&1 >/dev/null | grep exec > > Fixed. > > > > + /* If database is already created, then don't set createDB flag. */ > > > + if (opts->cparams.dbname) > > > + { > > > + PGconn *test_conn; > > > + > > > + test_conn = ConnectDatabase(db_cell->str, NULL, opts->cparams.pghost, > > > + opts->cparams.pgport, opts->cparams.username,TRI_DEFAULT, > > > + false, progname, NULL, NULL, NULL, NULL); > > > + if (test_conn) > > > + { > > > + PQfinish(test_conn); > > > + > > > + /* Use already created database for connection. */ > > > + opts->createDB = 0; > > > + opts->cparams.dbname = db_cell->str; > > > + } > > > + else > > > + { > > > + /* we'll have to create it */ > > > + opts->createDB = 1; > > > + opts->cparams.dbname = connected_db; > > > + } > > > > In released versions, "pg_restore --create" fails if the database exists, and > > pg_restore w/o --create fails unless the database exists. I think we should > > continue that pattern in this new feature. 
If not, pg_restore should document > > how it treats pg_dumpall-sourced dumps with the "create if not exists" > > semantics appearing here. > > Added one more doc line for this case. > > Here, I am attaching a patch. Please let me know feedback. > > -- > Thanks and Regards > Mahendra Singh Thalor > EnterpriseDB: http://www.enterprisedb.com -- Thanks and Regards Mahendra Singh Thalor EnterpriseDB: http://www.enterprisedb.com
On 2025-Jul-17, Mahendra Singh Thalor wrote: > To pg_restore, we are giving a dump of pg_dumpall which has a > global.dat file and we have drop commands in the global.dat file so > when we are using 'globals-only', we are dropping databases as we have > DROP commands. > As of now, we don't have any filter for global.dat file in restore. If > a user wants to restore only globals(without droping db), then they > should use 'globals-only' in pg_dumpall. > Or if we don't want to DROP databases by global.dat file, then we > should add a filter in pg_restore (hard to implement as we have SQL > commands in global.dat file). I think dropping database is dangerous and makes no practical sense; doing it renders pg_dumpall --clean completely unusable. You're arguing from the point of view of ease of implementation, but that doesn't help users. > I think, for this case, we can do some > more doc changes. > Example: pg_restore --globals-only : this will restore the global.dat > file(including all drop commands). It might drop databases if any drop > commands. I don't think doc changes are useful. -- Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/ "I love the Postgres community. It's all about doing things _properly_. :-)" (David Garamond)
On Thu, 17 Jul 2025 at 16:41, Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> On 2025-Jul-17, Mahendra Singh Thalor wrote:
>
> > To pg_restore, we are giving a dump of pg_dumpall which has a
> > global.dat file and we have drop commands in the global.dat file so
> > when we are using 'globals-only', we are dropping databases as we have
> > DROP commands.
> > As of now, we don't have any filter for global.dat file in restore. If
> > a user wants to restore only globals (without dropping db), then they
> > should use 'globals-only' in pg_dumpall.
> > Or if we don't want to DROP databases by global.dat file, then we
> > should add a filter in pg_restore (hard to implement as we have SQL
> > commands in global.dat file).
>
> I think dropping database is dangerous and makes no practical sense;
> doing it renders pg_dumpall --clean completely unusable. You're arguing
> from the point of view of ease of implementation, but that doesn't help
> users.
>
> > I think, for this case, we can do some
> > more doc changes.
> > Example: pg_restore --globals-only : this will restore the global.dat
> > file(including all drop commands). It might drop databases if any drop
> > commands.
>
> I don't think doc changes are useful.
>
> --
> Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
> "I love the Postgres community. It's all about doing things _properly_. :-)"
> (David Garamond)
--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com
> On 2025-Jul-17, Mahendra Singh Thalor wrote:
>> To pg_restore, we are giving a dump of pg_dumpall which has a global.dat
>> file and we have drop commands in the global.dat file so when we are using
>> 'globals-only', we are dropping databases as we have DROP commands. As of
>> now, we don't have any filter for global.dat file in restore. If a user
>> wants to restore only globals (without dropping db), then they should use
>> 'globals-only' in pg_dumpall. Or if we don't want to DROP databases by
>> global.dat file, then we should add a filter in pg_restore (hard to
>> implement as we have SQL commands in global.dat file).
> I think dropping database is dangerous and makes no practical sense; doing
> it renders pg_dumpall --clean completely unusable. You're arguing from the
> point of view of ease of implementation, but that doesn't help users.
Yeah. I also agree with Noah that we should be consistent with pg_dump. And we should err on the side of caution. If we impose a little inconvenience on the user by requiring them to drop a database, it's better than surprising them by dropping a database when they didn't expect it.
There are some subtleties here. pg_restore will only issue DROP DATABASE if you use the -C flag, even if you specify --clean, so we need to be very careful about issuing DROP DATABASE.
I confess that all this didn't occur to me when working on the commit.
>> I think, for this case, we can do some more doc changes. Example:
>> pg_restore --globals-only : this will restore the global.dat file
>> (including all drop commands). It might drop databases if any drop
>> commands.
> I don't think doc changes are useful.
Yeah, I don't think this is something that can be cured by documentation.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
>>>>> --- a/src/bin/pg_dump/pg_restore.c
>>>>> +++ b/src/bin/pg_dump/pg_restore.c
>>>>> +/*
>>>>> + * read_one_statement
>>>>> + *
>>>>> + * This will start reading from passed file pointer using fgetc and read till
>>>>> + * semicolon(sql statement terminator for global.dat file)
>>>>> + *
>>>>> + * EOF is returned if end-of-file input is seen; time to shut down.
>>>> What makes it okay to use this particular subset of SQL lexing?
>>> To support complex syntax, we used this code from another file.
>> I'm hearing that you copied this code from somewhere. Running
>> "git grep 'time to shut down'" suggests you copied it from
>> InteractiveBackend(). Is that right? I do see other similarities between
>> read_one_statement() and InteractiveBackend().
>>
>> Copying InteractiveBackend() provides negligible assurance that this is the
>> right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
>> Single-user mode survives mostly as a last resort for recovering from having
>> reached xidStopLimit, is rarely used, and only superusers write queries to it.
> Yes, we copied this from InteractiveBackend to read statements from
> global.dat file.
Maybe we should ensure that identifiers with CR or LF are turned into Unicode quoted identifiers, so each SQL statement would always only occupy one line. Or just reject role and tablespace names with CR or LF altogether, just as we do for database names.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
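For context on what read_one_statement() is being asked to do: finding statement boundaries in global.dat means not being fooled by quoting. Here is a minimal sketch of that scanning, in Python rather than the C of the patch and purely illustrative; it deliberately ignores dollar quoting and comments, which any real reader would also have to handle.

```python
def split_statements(sql):
    """Split SQL text on semicolons that lie outside single-quoted
    strings and double-quoted identifiers.  Illustrative only: no
    dollar quoting, no comment handling, no E'' escapes."""
    stmts, buf = [], []
    quote = None                      # None, "'" or '"'
    i = 0
    while i < len(sql):
        c = sql[i]
        if quote:
            buf.append(c)
            if c == quote:
                # a doubled quote char is an escaped quote; stay inside
                if i + 1 < len(sql) and sql[i + 1] == quote:
                    buf.append(sql[i + 1])
                    i += 1
                else:
                    quote = None
        elif c in ("'", '"'):
            quote = c
            buf.append(c)
        elif c == ';':                # statement terminator, outside quotes
            stmts.append(''.join(buf).strip())
            buf = []
        else:
            buf.append(c)
        i += 1
    tail = ''.join(buf).strip()
    if tail:
        stmts.append(tail)
    return stmts
```

Even this toy version shows why naive splitting on ';' or on newlines is wrong for global.dat content: a quoted role name or GUC value can legitimately contain either character.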
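The Unicode-quoted-identifier idea above can be made concrete: PostgreSQL accepts `U&"..."` identifiers in which a control character is written as a `\XXXX` escape, so a role name containing CR or LF can be emitted without breaking the line. A hypothetical sketch of the escaping (Python for illustration; assumes the default `UESCAPE '\'`, and is not code from any patch):

```python
def uescape_identifier(name):
    """Render an identifier in PostgreSQL U&"..." form, encoding CR/LF
    (and other control characters) as \XXXX escapes so the emitted
    statement stays on one physical line.  Illustrative sketch only."""
    out = []
    for ch in name:
        if ch == '"':
            out.append('""')            # double quotes are doubled, as usual
        elif ch == '\\':
            out.append('\\\\')          # backslash is the escape character
        elif ord(ch) < 0x20:
            out.append('\\%04x' % ord(ch))   # e.g. LF -> \000a, CR -> \000d
        else:
            out.append(ch)
    return 'U&"%s"' % ''.join(out)
```

So a role named `bad<LF>role` would be dumped as `CREATE ROLE U&"bad\000arole" ...`, which the server parses back to the original name while the dump file stays one-statement-per-line.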
On Mon, Jul 21, 2025 at 04:41:03PM -0400, Andrew Dunstan wrote:
> On 2025-07-17 Th 6:18 AM, Mahendra Singh Thalor wrote
>>>>>>> --- a/src/bin/pg_dump/pg_restore.c
>>>>>>> +++ b/src/bin/pg_dump/pg_restore.c
>>>>>>> +/*
>>>>>>> + * read_one_statement
>>>>>>> + *
>>>>>>> + * This will start reading from passed file pointer using fgetc and read till
>>>>>>> + * semicolon(sql statement terminator for global.dat file)
>>>>>>> + *
>>>>>>> + * EOF is returned if end-of-file input is seen; time to shut down.
>>>>>> What makes it okay to use this particular subset of SQL lexing?
>>>>> To support complex syntax, we used this code from another file.
>>>> I'm hearing that you copied this code from somewhere. Running
>>>> "git grep 'time to shut down'" suggests you copied it from
>>>> InteractiveBackend(). Is that right? I do see other similarities between
>>>> read_one_statement() and InteractiveBackend().
>>>>
>>>> Copying InteractiveBackend() provides negligible assurance that this is the
>>>> right subset of SQL lexing. Only single-user mode uses InteractiveBackend().
>>>> Single-user mode survives mostly as a last resort for recovering from having
>>>> reached xidStopLimit, is rarely used, and only superusers write queries to it.
>>> Yes, we copied this from InteractiveBackend to read statements from
>>> global.dat file.
> Maybe we should ensure that identifiers with CR or LF are turned into
> Unicode quoted identifiers, so each SQL statement would always only occupy
> one line.

Interesting. That might work.

> Or just reject role and tablespace names with CR or LF altogether,
> just as we do for database names.

There are other ways to get multi-line statements.
Non-exhaustive list:

- pg_db_role_setting.setconfig
- pg_shdescription.description
- pg_shseclabel.label
- pg_tablespace.spcoptions (if we add a text option in the future)

I think this decision about lexing also ties to other unfinished open item
work of aligning "pg_dumpall -Fd;pg_restore [options]" behavior with
"pg_dump -Fd;pg_restore [options]". "pg_restore --no-privileges" should not
restore pg_tablespace.spcacl, and "pg_restore --no-comments" should not
emit COMMENT statements.

I suspect this is going to end with a structured dump like we use on the
pg_dump (per-database) side. It's not an accident that v17 pg_restore
doesn't lex text files to do its job. pg_dumpall deals with a more-limited
set of statements than pg_dump deals with, but they're not _that much_ more
limited. I won't veto a lexing-based approach if it gets the behaviors
right, but I don't have high hopes for it getting the behaviors right and
staying that way.

(I almost said "pg_restore --no-owner" should not restore
pg_tablespace.spcowner, but v17 "pg_dumpall --no-owner" does restore it.
One could argue for or against aligning $SUBJECT behavior w/ v17's mistake
there.)
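To make the setconfig item on that list concrete: a per-role GUC value is arbitrary text, so the ALTER ROLE statement pg_dumpall emits for it can span lines and contain semicolons even when every identifier is clean. An illustrative sketch (the role name and the custom `my.note` GUC are made up for the example):

```python
# A value stored in pg_db_role_setting.setconfig is arbitrary text.
# Dumped, it becomes something like:
#   ALTER ROLE app_user SET my.note TO 'line one
#   line two; still one value';
# so a reader that splits global.dat on newlines or bare semicolons
# would cut this single statement in half.
value = "line one\nline two; still one value"
stmt = "ALTER ROLE app_user SET my.note TO '%s';" % value.replace("'", "''")
print(stmt)
```

This is why rejecting CR/LF in role and tablespace names alone cannot guarantee one-statement-per-line output.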
On 2025-07-21 Mo 8:53 PM, Noah Misch wrote: > On Mon, Jul 21, 2025 at 04:41:03PM -0400, Andrew Dunstan wrote: >> On 2025-07-17 Th 6:18 AM, Mahendra Singh Thalor wrote >>>>>>>> --- a/src/bin/pg_dump/pg_restore.c >>>>>>>> +++ b/src/bin/pg_dump/pg_restore.c >>>>>>>> +/* >>>>>>>> + * read_one_statement >>>>>>>> + * >>>>>>>> + * This will start reading from passed file pointer using fgetc and read till >>>>>>>> + * semicolon(sql statement terminator for global.dat file) >>>>>>>> + * >>>>>>>> + * EOF is returned if end-of-file input is seen; time to shut down. >>>>>>> What makes it okay to use this particular subset of SQL lexing? >>>>>> To support complex syntax, we used this code from another file. >>>>> I'm hearing that you copied this code from somewhere. Running >>>>> "git grep 'time to shut down'" suggests you copied it from >>>>> InteractiveBackend(). Is that right? I do see other similarities between >>>>> read_one_statement() and InteractiveBackend(). >>>>> >>>>> Copying InteractiveBackend() provides negligible assurance that this is the >>>>> right subset of SQL lexing. Only single-user mode uses InteractiveBackend(). >>>>> Single-user mode survives mostly as a last resort for recovering from having >>>>> reached xidStopLimit, is rarely used, and only superusers write queries to it. >>>> Yes, we copied this from InteractiveBackend to read statements from >>>> global.dat file. >> Maybe we should ensure that identifiers with CR or LF are turned into >> Unicode quoted identifiers, so each SQL statement would always only occupy >> one line. > Interesting. That might work. > >> Or just reject role and tablespace names with CR or LF altogether, >> just as we do for database names. > There are other ways to get multi-line statements. 
> Non-exhaustive list:
>
> - pg_db_role_setting.setconfig
> - pg_shdescription.description
> - pg_shseclabel.label
> - pg_tablespace.spcoptions (if we add a text option in the future)
>
> I think this decision about lexing also ties to other unfinished open item
> work of aligning "pg_dumpall -Fd;pg_restore [options]" behavior with "pg_dump
> -Fd;pg_restore [options]". "pg_restore --no-privileges" should not restore
> pg_tablespace.spcacl, and "pg_restore --no-comments" should not emit COMMENT
> statements.
>
> I suspect this is going to end with a structured dump like we use on the
> pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
> lex text files to do its job. pg_dumpall deals with a more-limited set of
> statements than pg_dump deals with, but they're not _that much_ more limited.
> I won't veto a lexing-based approach if it gets the behaviors right, but I
> don't have high hopes for it getting the behaviors right and staying that way.

Yeah, that was my original idea. But maybe instead of extending the archive
mechanism, we could do something more lightweight, e.g. output the
statements as a JSON array.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
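If the JSON-array route were taken, the file would likely need to carry more than bare statement text for pg_restore filters such as --no-comments to work. A purely hypothetical sketch of what such a structure could look like (none of these keys exist in any patch; the names are illustrative):

```json
[
  {"kind": "role",       "name": "app_user",
   "sql": "CREATE ROLE app_user LOGIN"},
  {"kind": "comment",    "name": "app_user",
   "sql": "COMMENT ON ROLE app_user IS 'application role'"},
  {"kind": "tablespace", "name": "ts1",
   "sql": "CREATE TABLESPACE ts1 OWNER app_user LOCATION '/srv/ts1'"}
]
```

Tagging each entry with a kind is what would let pg_restore skip comments or ACLs without lexing SQL; it is also essentially what the archiver's TOC already provides.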
On Thu, Jul 17, 2025 at 03:46:41PM +0530, Mahendra Singh Thalor wrote:
> On Wed, 16 Jul 2025 at 05:50, Noah Misch <noah@leadboat.com> wrote:
>> On Thu, Jul 10, 2025 at 12:21:03AM +0530, Mahendra Singh Thalor wrote:
>>> On Wed, 9 Jul 2025 at 02:58, Noah Misch <noah@leadboat.com> wrote:
>>>> On Fri, Apr 04, 2025 at 04:11:05PM -0400, Andrew Dunstan wrote:
>>>>> +/*
>>>>> + * get_dbnames_list_to_restore
>>>>> + *
>>>>> + * This will mark for skipping any entries from dbname_oid_list that pattern match an
>>>>> + * entry in the db_exclude_patterns list.
>>>>> + *
>>>>> + * Returns the number of database to be restored.
>>>>> + *
>>>>> + */
>>>>> +static int
>>>>> +get_dbnames_list_to_restore(PGconn *conn,
>>>>> +    SimpleOidStringList *dbname_oid_list,
>>>>> +    SimpleStringList db_exclude_patterns)
>>>>> +{
>>>>> +    int count_db = 0;
>>>>> +    PQExpBuffer query;
>>>>> +    PGresult *res;
>>>>> +
>>>>> +    query = createPQExpBuffer();
>>>>> +
>>>>> +    if (!conn)
>>>>> +        pg_log_info("considering PATTERN as NAME for --exclude-database option as no db connection while doing pg_restore.");
>>>> When do we not have a connection here? We'd need to document this behavior
>>>> variation if it stays, but I'd prefer if we can just rely on having a
>>>> connection.
>>> Yes, we can document this behavior.
>> My review asked a question there. I don't see an answer to that question.
>> Would you answer that question?
> Example: if there is no active database, even postgres/template1, then
> we will consider PATTERN as NAME. This is the rare case.
> In attached patch, I added one doc line also for this case.

If I change s/pg_log_info/pg_fatal/, check-world still passes. So no test
is reaching the !conn case. If one wanted to write a test that reaches the
!conn case, how would they do that?
On Wed, Jul 9, 2025 at 2:51 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:
>> This drops all databases:
>>
>> pg_dumpall --clean -Fd -f /tmp/dump
>> pg_restore -d template1 --globals-only /tmp/dump
>>
>> That didn't match my expectations given this help text:
>>
>> $ pg_restore --help|grep global
>>   -g, --globals-only restore only global objects, no databases
>
> Databases are global objects so due to --clean command, we are putting
> drop commands in global.dat for all the databases. While restoring, we
> used the "--globals-only" option so we are dropping all these
> databases by global.dat file.
>
> Please let us know your expectations for this specific case.

I am not sure whether pg_dumpall --clean should ever drop databases, but it
certainly shouldn't do it with --globals-only. In that case, it's not
restoring the databases, so dropping them seems catastrophically bad.

--
Robert Haas
EDB: http://www.enterprisedb.com
> I suspect this is going to end with a structured dump like we use on the
> pg_dump (per-database) side. It's not an accident that v17 pg_restore
> doesn't lex text files to do its job. pg_dumpall deals with a more-limited
> set of statements than pg_dump deals with, but they're not _that much_
> more limited. I won't veto a lexing-based approach if it gets the
> behaviors right, but I don't have high hopes for it getting the behaviors
> right and staying that way.
I have been talking offline with Mahendra about this. I agree that we would be better off with a structured object for globals. But the thing that's been striking me all afternoon as I have pondered it is that we should not be designing such an animal at this stage of the cycle. Whatever we do we're going to be stuck supporting, so I have very reluctantly come to the conclusion that it would probably be better to back the feature out and have another go for PG 19.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
On Thu, Jul 24, 2025 at 04:33:15PM -0400, Andrew Dunstan wrote:
> On 2025-07-21 Mo 8:53 PM, Noah Misch wrote:
>> I suspect this is going to end with a structured dump like we use on the
>> pg_dump (per-database) side. It's not an accident that v17 pg_restore doesn't
>> lex text files to do its job. pg_dumpall deals with a more-limited set of
>> statements than pg_dump deals with, but they're not _that much_ more limited.
>> I won't veto a lexing-based approach if it gets the behaviors right, but I
>> don't have high hopes for it getting the behaviors right and staying that way.
>
> I have been talking offline with Mahendra about this. I agree that we would
> be better off with a structured object for globals. But the thing that's
> been striking me all afternoon as I have pondered it is that we should not
> be designing such an animal at this stage of the cycle. Whatever we do we're
> going to be stuck supporting, so I have very reluctantly come to the
> conclusion that it would probably be better to back the feature out and have
> another go for PG 19.

That makes sense to me. It would be quite a sprint to get this done in
time, and that wouldn't leave much room for additional testing and feedback
before the final release. I agree with the reluctance and with the
conclusion.
> On Thu, Jul 24, 2025 at 04:33:15PM -0400, Andrew Dunstan wrote:
>> On 2025-07-21 Mo 8:53 PM, Noah Misch wrote:
>>> I suspect this is going to end with a structured dump like we use on the
>>> pg_dump (per-database) side. It's not an accident that v17 pg_restore
>>> doesn't lex text files to do its job. pg_dumpall deals with a more-limited
>>> set of statements than pg_dump deals with, but they're not _that much_
>>> more limited. I won't veto a lexing-based approach if it gets the
>>> behaviors right, but I don't have high hopes for it getting the behaviors
>>> right and staying that way.
>> I have been talking offline with Mahendra about this. I agree that we
>> would be better off with a structured object for globals. But the thing
>> that's been striking me all afternoon as I have pondered it is that we
>> should not be designing such an animal at this stage of the cycle.
>> Whatever we do we're going to be stuck supporting, so I have very
>> reluctantly come to the conclusion that it would probably be better to
>> back the feature out and have another go for PG 19.
> That makes sense to me. It would be quite a sprint to get this done in
> time, and that wouldn't leave much room for additional testing and
> feedback before the final release. I agree with the reluctance and with
> the conclusion.
Before we throw the baby out with the bathwater, how about this suggestion? pg_dumpall would continue to produce globals.dat, but it wouldn't be processed by pg_restore, which would only restore the individual databases. Or else we just don't produce globals.dat at all. Then we could introduce a structured object that pg_restore could safely use for release 19, and I think we'd still have something useful for release 18.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com
Andrew Dunstan <andrew@dunslane.net> writes:
> Before we throw the baby out with the bathwater, how about this
> suggestion? pg_dumpall would continue to produce globals.dat, but it
> wouldn't be processed by pg_restore, which would only restore the
> individual databases. Or else we just don't produce globals.dat at all.
> Then we could introduce a structured object that pg_restore could safely
> use for release 19, and I think we'd still have something useful for
> release 18.

I dunno ... that seems like a pretty weird behavior. People would
have to do a separate text-mode "pg_dumpall -g" and remember to
restore that too. Admittedly, this could be more convenient than
"pg_dumpall -g" plus separately pg_dump'ing each database, which is
what people have to do today if they want anything smarter than a flat
text dumpfile. But it still seems like a hack --- and it would not be
compatible with v19, where presumably "pg_dumpall | pg_restore"
*would* restore globals. I think that the prospect of changing
dump/restore scripts and then having to change them again in v19
isn't too appetizing.

regards, tom lane
On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote: > Andrew Dunstan <andrew@dunslane.net> writes: > > Before we throw the baby out with the bathwater, how about this > > suggestion? pg_dumpall would continue to produce globals.dat, but it > > wouldn't be processed by pg_restore, which would only restore the > > individual databases. Or else we just don't produce globals.dat at all. > > Then we could introduce a structured object that pg_restore could safely > > use for release 19, and I think we'd still have something useful for > > release 18. > > I dunno ... that seems like a pretty weird behavior. People would > have to do a separate text-mode "pg_dumpall -g" and remember to > restore that too. Admittedly, this could be more convenient than > "pg_dumpall -g" plus separately pg_dump'ing each database, which is > what people have to do today if they want anything smarter than a flat > text dumpfile. But it still seems like a hack --- and it would not be > compatible with v19, where presumably "pg_dumpall | pg_restore" > *would* restore globals. I think that the prospect of changing > dump/restore scripts and then having to change them again in v19 > isn't too appetizing. +1
On 2025-07-27 Su 7:56 PM, Noah Misch wrote: > On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote: >> Andrew Dunstan <andrew@dunslane.net> writes: >>> Before we throw the baby out with the bathwater, how about this >>> suggestion? pg_dumpall would continue to produce globals.dat, but it >>> wouldn't be processed by pg_restore, which would only restore the >>> individual databases. Or else we just don't produce globals.dat at all. >>> Then we could introduce a structured object that pg_restore could safely >>> use for release 19, and I think we'd still have something useful for >>> release 18. >> I dunno ... that seems like a pretty weird behavior. People would >> have to do a separate text-mode "pg_dumpall -g" and remember to >> restore that too. Admittedly, this could be more convenient than >> "pg_dumpall -g" plus separately pg_dump'ing each database, which is >> what people have to do today if they want anything smarter than a flat >> text dumpfile. But it still seems like a hack --- and it would not be >> compatible with v19, where presumably "pg_dumpall | pg_restore" >> *would* restore globals. I think that the prospect of changing >> dump/restore scripts and then having to change them again in v19 >> isn't too appetizing. > +1 OK, got it. Will revert. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On 2025-07-28 Mo 8:04 AM, Andrew Dunstan wrote: > > On 2025-07-27 Su 7:56 PM, Noah Misch wrote: >> On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote: >>> Andrew Dunstan <andrew@dunslane.net> writes: >>>> Before we throw the baby out with the bathwater, how about this >>>> suggestion? pg_dumpall would continue to produce globals.dat, but it >>>> wouldn't be processed by pg_restore, which would only restore the >>>> individual databases. Or else we just don't produce globals.dat at >>>> all. >>>> Then we could introduce a structured object that pg_restore could >>>> safely >>>> use for release 19, and I think we'd still have something useful for >>>> release 18. >>> I dunno ... that seems like a pretty weird behavior. People would >>> have to do a separate text-mode "pg_dumpall -g" and remember to >>> restore that too. Admittedly, this could be more convenient than >>> "pg_dumpall -g" plus separately pg_dump'ing each database, which is >>> what people have to do today if they want anything smarter than a flat >>> text dumpfile. But it still seems like a hack --- and it would not be >>> compatible with v19, where presumably "pg_dumpall | pg_restore" >>> *would* restore globals. I think that the prospect of changing >>> dump/restore scripts and then having to change them again in v19 >>> isn't too appetizing. >> +1 > > > OK, got it. Will revert. > > > here's a reversion patch for master. It applies cleanly to release 18 as well. Thanks to Mahendra Singh Thalor for helping me sanity check it (Any issues are of course my responsibility) I'll work on pulling the entry out of the release notes. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On Tue, Jul 29, 2025 at 04:09:13PM -0400, Andrew Dunstan wrote: > here's a reversion patch for master. > This reverts parts or all of the following commits: I briefly looked through this. The biggest non-reverted part is, I think, c1da728 "Move common pg_dump code related to connections to a new file". Refraining from a revert of that one is defensible. > dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology > @@ -1276,7 +1276,7 @@ main(int argc, char **argv) > static void > help(const char *progname) > { > - printf(_("%s exports a PostgreSQL database as an SQL script or to other formats.\n\n"), progname); > + printf(_("%s dumps a database as a text file or to other formats.\n\n"), progname); > printf(_("Usage:\n")); > printf(_(" %s [OPTION]... [DBNAME]\n"), progname); I think commit dec6643487b, which e.g. decided to standardize on the term "export" for these programs, was independent of $SUBJECT.
On 2025-07-29 Tu 4:34 PM, Noah Misch wrote: > On Tue, Jul 29, 2025 at 04:09:13PM -0400, Andrew Dunstan wrote: >> here's a reversion patch for master. >> This reverts parts or all of the following commits: > I briefly looked through this. The biggest non-reverted part is, I think, > c1da728 "Move common pg_dump code related to connections to a new file". > Refraining from a revert of that one is defensible. Yes, that was deliberate, since we intend to use it in the same way when we redo this. > >> dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology >> @@ -1276,7 +1276,7 @@ main(int argc, char **argv) >> static void >> help(const char *progname) >> { >> - printf(_("%s exports a PostgreSQL database as an SQL script or to other formats.\n\n"), progname); >> + printf(_("%s dumps a database as a text file or to other formats.\n\n"), progname); >> printf(_("Usage:\n")); >> printf(_(" %s [OPTION]... [DBNAME]\n"), progname); > I think commit dec6643487b, which e.g. decided to standardize on the term > "export" for these programs, was independent of $SUBJECT. OK, thanks for looking. Will fix. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On 2025-07-29 Tu 4:09 PM, Andrew Dunstan wrote:
> On 2025-07-28 Mo 8:04 AM, Andrew Dunstan wrote:
>> On 2025-07-27 Su 7:56 PM, Noah Misch wrote:
>>> On Fri, Jul 25, 2025 at 04:59:29PM -0400, Tom Lane wrote:
>>>> Andrew Dunstan <andrew@dunslane.net> writes:
>>>>> Before we throw the baby out with the bathwater, how about this
>>>>> suggestion? pg_dumpall would continue to produce globals.dat, but it
>>>>> wouldn't be processed by pg_restore, which would only restore the
>>>>> individual databases. Or else we just don't produce globals.dat at all.
>>>>> Then we could introduce a structured object that pg_restore could safely
>>>>> use for release 19, and I think we'd still have something useful for
>>>>> release 18.
>>>> I dunno ... that seems like a pretty weird behavior. People would
>>>> have to do a separate text-mode "pg_dumpall -g" and remember to
>>>> restore that too. Admittedly, this could be more convenient than
>>>> "pg_dumpall -g" plus separately pg_dump'ing each database, which is
>>>> what people have to do today if they want anything smarter than a flat
>>>> text dumpfile. But it still seems like a hack --- and it would not be
>>>> compatible with v19, where presumably "pg_dumpall | pg_restore"
>>>> *would* restore globals. I think that the prospect of changing
>>>> dump/restore scripts and then having to change them again in v19
>>>> isn't too appetizing.
>>> +1
>> OK, got it. Will revert.
> here's a reversion patch for master. It applies cleanly to release 18 as
> well. Thanks to Mahendra Singh Thalor for helping me sanity check it (Any
> issues are of course my responsibility)
>
> I'll work on pulling the entry out of the release notes.

OK, now that's reverted we should discuss how to proceed. I had two
thoughts: we could invent a JSON format for the globals, or we could just
use the existing archive format. I think the archive format is pretty
flexible, and should be able to accommodate this.
The downside is it's not human-readable. The upside is that we don't need
to do anything special either to write it or parse it.

There might also be other reasonable options. But I think we should stay
out of the business of using custom code to parse text.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
The 18 branch is broken for apt.pg.o:

00:54:18 # Failed test 'dump_globals_only: pg_dumpall runs'
00:54:18 # at t/006_pg_dumpall.pl line 329.
00:54:18 # Tests were run but no plan was declared and done_testing() was not seen.
00:54:18 # Looks like your test exited with 2 just after 1.
00:54:18 t/006_pg_dumpall.pl ...........
00:54:18 # initializing database system by copying initdb template
00:54:18 # initializing database system by copying initdb template
00:54:18 not ok 1 - dump_globals_only: pg_dumpall runs
00:54:18 Dubious, test returned 2 (wstat 512, 0x200)
00:54:18 Failed 1/1 subtests

Devel is ok.

Christoph
On 2025-07-31 Th 5:44 AM, Christoph Berg wrote: > The 18 branch is broken for apt.pg.o: > > 00:54:18 # Failed test 'dump_globals_only: pg_dumpall runs' > 00:54:18 # at t/006_pg_dumpall.pl line 329. > 00:54:18 # Tests were run but no plan was declared and done_testing() was not seen. > 00:54:18 # Looks like your test exited with 2 just after 1. > 00:54:18 t/006_pg_dumpall.pl ........... > 00:54:18 # initializing database system by copying initdb template > 00:54:18 # initializing database system by copying initdb template > 00:54:18 not ok 1 - dump_globals_only: pg_dumpall runs > 00:54:18 Dubious, test returned 2 (wstat 512, 0x200) > 00:54:18 Failed 1/1 subtests > > Devel is ok. > That file was deleted by the revert. Maybe you have a cache problem? cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
Re: Andrew Dunstan > That file was deleted by the revert. Maybe you have a cache problem? Oh right. This was caused by our snapshot builds using the latest tarball (if available) and putting a patch on top that. I've now bumped the upstream version to 18~beta3, this should avoid the problem. Sorry for the noise, and thanks for the pointer! Christoph
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote: > OK, now that's reverted... Can we close the open item for this one now? Or is there something else remaining? -- nathan
On 2025-07-31 Th 2:22 PM, Nathan Bossart wrote: > On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote: >> OK, now that's reverted... > Can we close the open item for this one now? Or is there something else > remaining? > Thanks for the reminder. I have marked the item as fixed. cheers andrew -- Andrew Dunstan EDB: https://www.enterprisedb.com
On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
> OK, now that's reverted we should discuss how to proceed. I had two thoughts
> - we could use invent a JSON format for the globals, or we could just use
> the existing archive format. I think the archive format is pretty flexible,
> and should be able to accommodate this. The downside is it's not humanly
> readable. The upside is that we don't need to do anything special either to
> write it or parse it.

I would first try to use the existing archiver API, because that makes it
harder to miss bugs. Any tension between that API and pg_dumpall is likely
to have corresponding tension on the pg_restore side. Resolving that
tension will reveal much of the project's scope that remained hidden during
the v18 attempt.

Perhaps more important than that, using the archiver API means future
pg_dump and pg_restore options are more likely to cooperate properly with
$SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
features that malfunction only for $SUBJECT archives. The strength of the
archiver architecture shows in how rarely new features need format-specific
logic and how rarely format-specific bugs get reported. We've had little or
no trouble with e.g. bugs that appear in -Fd but not in -Fc.

If pg_backup_json.c emerged as a new backend to the archiver API, I'd not
have concerns about that. But a JSON format specific to $SUBJECT sounds
like a recipe for bugs.

> There might also be other reasonable options. But I think we should stay out
> of the business of using custom code to parse text.

Agreed.
> On Wed, Jul 30, 2025 at 02:51:59PM -0400, Andrew Dunstan wrote:
>> OK, now that's reverted we should discuss how to proceed. I had two
>> thoughts - we could invent a JSON format for the globals, or we could
>> just use the existing archive format. I think the archive format is
>> pretty flexible, and should be able to accommodate this. The downside is
>> it's not humanly readable. The upside is that we don't need to do
>> anything special either to write it or parse it.
> I would first try to use the existing archiver API, because that makes it
> harder to miss bugs. Any tension between that API and pg_dumpall is likely
> to have corresponding tension on the pg_restore side. Resolving that
> tension will reveal much of the project's scope that remained hidden
> during the v18 attempt.
>
> Perhaps more important than that, using the archiver API means future
> pg_dump and pg_restore options are more likely to cooperate properly with
> $SUBJECT. In other words, I want it to be hard to add pg_dump/pg_restore
> features that malfunction only for $SUBJECT archives. The strength of the
> archiver architecture shows in how rarely new features need
> format-specific logic and how rarely format-specific bugs get reported.
> We've had little or no trouble with e.g. bugs that appear in -Fd but not
> in -Fc.
Yeah, that's what we're going to try.
cheers
andrew
-- Andrew Dunstan EDB: https://www.enterprisedb.com