Thread: pg_dump versus ancient server versions


pg_dump versus ancient server versions

From: Tom Lane

While doing some desultory testing, I realized that the commit
I just pushed (92316a458) broke pg_dump against 8.0 servers:

$ pg_dump -p5480 -s regression
pg_dump: error: schema with OID 11 does not exist

The reason turns out to be something I'd long forgotten about: except
for the few "bootstrap" catalogs, our system catalogs didn't use to
have fixed OIDs.  That changed at 7c13781ee, but 8.0 predates that.
So when pg_dump reads a catalog on 8.0, it gets some weird number for
"tableoid", and the logic I just put into common.c's findNamespaceByOid
et al fails to find the resulting DumpableObjects.
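
As a minimal illustration -- a hypothetical psql session, not taken from
pg_dump itself: on servers new enough to have fixed catalog OIDs,
pg_namespace's own OID is the well-known 2615, so a lookup keyed on
(tableoid, oid) works.

    -- Both of these return 2615 on a modern server:
    SELECT 'pg_namespace'::regclass::oid;
    SELECT DISTINCT tableoid FROM pg_namespace;

On an 8.0 server the second query instead returns whatever OID that
installation happened to assign, so a lookup expecting the fixed value
finds nothing -- hence the "schema with OID 11 does not exist" error.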

So my first thought was just to revert 92316a458 and give up on it as
a bad idea.  However ... does anyone actually still care about being
able to dump from such ancient servers?  In addition to this issue,
I'm thinking of the discussion at [1] about wanting to use unnest()
in pg_dump, and of what we would need to do instead in pre-8.4 servers
that lack that.  Maybe it'd be better to move up pg_dump's minimum
supported server version to 8.4 or 9.0, and along the way whack a
few more lines of its backward-compatibility hacks.  If there is
anyone out there still using an 8.x server, they could use its
own pg_dump whenever they get around to migration.
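
For context on that: unnest() (new in 8.4) would let pg_dump hand the
server an array and explode it into rows, and anything older would have
to spell that differently.  A rough sketch with illustrative values --
these are not pg_dump's actual queries; generate_series() has been
available since 8.0:

    -- 8.4 and later:
    SELECT x FROM unnest(ARRAY[11, 2200]) AS x;

    -- Before 8.4, index into the array instead (bounds hardcoded
    -- here purely for illustration):
    SELECT arr[i] AS x
    FROM (SELECT ARRAY[11, 2200] AS arr) s,
         generate_series(1, 2) AS i;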

Another idea would be to ignore "tableoid" and instead use the OIDs
we're expecting, but that's way too ugly for my taste, especially
given the rather thin argument for committing 92316a458 at all.

Anyway, I think the default answer is "revert 92316a458 and keep the
compatibility goalposts where they are".  But I wanted to open up a
discussion to see if anyone likes the other approach better.

            regards, tom lane

[1] https://www.postgresql.org/message-id/20211022055939.z6fihsm7hdzbjttf%40alap3.anarazel.de



Re: pg_dump versus ancient server versions

From: "David G. Johnston"

On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Anyway, I think the default answer is "revert 92316a458 and keep the
> compatibility goalposts where they are".  But I wanted to open up a
> discussion to see if anyone likes the other approach better.
>
> [1] https://www.postgresql.org/message-id/20211022055939.z6fihsm7hdzbjttf%40alap3.anarazel.de


I'd rather drop legacy support than revert.  Even if the benefit of 92316a458 is limited to refactoring, the fact that it was committed is enough for me to feel it is a worthwhile improvement.  It's still another five years before there won't be a supported release that can dump/restore this -- so 20 years for someone to have upgraded without having to go to the (not that big a) hassle of installing an out-of-support version as a stopover.

In short, IMO, the bar for this kind of situation should be 10 releases at most - 5 of which would be in support at the time the patch goes in.  We don't have to actively drop support of older stuff but anything older shouldn't be preventing new commits.

David J.

Re: pg_dump versus ancient server versions

From: Robert Haas

On Fri, Oct 22, 2021 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> So my first thought was just to revert 92316a458 and give up on it as
> a bad idea.  However ... does anyone actually still care about being
> able to dump from such ancient servers?

I think I recently heard about an 8.4 server still out there in the
wild, but AFAICR it's been a long time since I've heard about anything
older.

It seems to me that if you're upgrading by a dozen server versions in
one shot, it's not a totally crazy idea that you might want to do it
in steps, or use the pg_dump for the version you have and then hack
the dump. I kind of wonder if there's really any hope of a pain-free
upgrade across that many versions anyway. There are things that can
bite you despite all the work we've put into pg_dump, like having
objects that depend on system objects whose definition has changed
over the years, plus implicit casting differences, operator precedence
changes, => getting deprecated, lots of GUC changes, etc. You are
going to be able to upgrade in the end, but it's probably going to
take some work. So I'm not really sure that giving up pg_dump
compatibility for versions that old is losing as much as it may seem.

Another thing to think about in that regard: how likely is it that
PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same
operating system? I suspect the answer is "not very." I seem to recall
Greg Stark trying to compile really old versions of PostgreSQL for a
conference talk some years ago, and he got back to a point where it
just became impossible to make work on modern toolchains even with a
decent amount of hackery. One tends to think of C as about as static a
thing as can be, but that's kind of missing the point. On my laptop
for example, my usual configure invocation fails on 7.4 with:

checking for SSL_library_init in -lssl... no
configure: error: library 'ssl' is required for OpenSSL

In fact, I get that same failure on every branch older than 9.2. I
expect I could work around that by disabling SSL or finding an older
version of OpenSSL that works the way those branches expect, but that
might not be the only problem, either.  Now I understand you could
have PostgreSQL 15 on a new box and PostgreSQL 7.x on an ancient one
and connect via the network, and it would in all fairness be cool if
that Just Worked. But I suspect that even if that did happen in the
lab, reality wouldn't often be so kind.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From: Tom Lane

"David G. Johnston" <david.g.johnston@gmail.com> writes:
> On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Anyway, I think the default answer is "revert 92316a458 and keep the
>> compatibility goalposts where they are".  But I wanted to open up a
>> discussion to see if anyone likes the other approach better.

> ... IMO, the bar for this kind of situation should be 10 releases at
> most - 5 of which would be in support at the time the patch goes in.  We
> don't have to actively drop support of older stuff but anything older
> shouldn't be preventing new commits.

Yeah.  I checked into when it was that we dropped pre-8.0 support
from pg_dump, and the answer is just about five years ago (64f3524e2).
So moving the bar forward by five releases isn't at all out of line.
8.4 would be eight years past EOL by the time v15 comes out.

One of the arguments for the previous change was that it was getting
very hard to build old releases on modern platforms, thus making it
hard to do any compatibility testing.  I believe the same is starting
to become true of the 8.x releases, though I've not tried personally
to build any of them in some time.  (The executables I'm using for
them date from 2014 or earlier, and have not been recompiled in
subsequent platform upgrades ...)  Anyway it's definitely not free
to continue to support old source server versions.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Tom Lane

Robert Haas <robertmhaas@gmail.com> writes:
> Another thing to think about in that regard: how likely is it that
> PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same
> operating system? I suspect the answer is "not very." I seem to recall
> Greg Stark trying to compile really old versions of PostgreSQL for a
> conference talk some years ago, and he got back to a point where it
> just became impossible to make work on modern toolchains even with a
> decent amount of hackery.

Right.  The toolchains keep moving, even if the official language
definition doesn't.  For grins, I just checked out REL8_4_STABLE
on my M1 Mac, and found that it only gets this far:

checking test program... ok
checking whether long int is 64 bits... no
checking whether long long int is 64 bits... no
configure: error: Cannot find a working 64-bit integer type.

which turns out to be down to a configure-script issue we fixed
some years ago, ie using exit() without a prototype:

conftest.c:158:3: error: implicitly declaring library function 'exit' with type 'void (int) __attribute__((noreturn))' [-Werror,-Wimplicit-function-declaration]
  exit(! does_int64_work());
  ^
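
For concreteness, a sketch -- not the verbatim conftest.c -- of the probe
with the repair applied; the substantive change is just including
<stdlib.h> so that exit() is declared:

    #include <stdlib.h>     /* declares exit(); its absence broke the probe */

    static int
    does_int64_work(void)
    {
        /* crude stand-in for configure's real 64-bit arithmetic check */
        return sizeof(long int) == 8;
    }

    int
    main(void)
    {
        exit(!does_int64_work());
    }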

I notice that the configure script is also selecting some warning
switches that this compiler doesn't much like, plus it doesn't
believe 2.6.x flex is usable.  So that's *at least* three things
that'd have to be hacked even to get to a successful configure run.

Individually such issues are (usually) not very painful, but when
you have to recreate all of them at once it's a daunting project.

So if I had to rebuild 8.4 from scratch right now, I would not be
a happy camper.  That seems like a good argument for not deeming
it to be something we still have to support.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Alvaro Herrera

On 2021-Oct-22, Robert Haas wrote:

> In fact, I get that same failure on every branch older than 9.2. I
> expect I could work around that by disabling SSL or finding an older
> version of OpenSSL that works the way those branches expect, but that
> might not be the only problem, either.

I just tried to build 9.1.  My config line there doesn't have ssl, but I
do get this in the compile stage:

gram.c:69:25: error: conflicting types for ‘base_yylex’
   69 | #define yylex           base_yylex
      |                         ^~~~~~~~~~
scan.c:15241:12: note: in expansion of macro ‘yylex’
15241 | extern int yylex \
      |            ^~~~~
In file included from /pgsql/source/REL9_1_STABLE/src/backend/parser/gram.y:60:
/pgsql/source/REL9_1_STABLE/src/include/parser/gramparse.h:66:12: note: previous declaration of ‘base_yylex’ was here
   66 | extern int base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp,
      |            ^~~~~~~~~~
gram.c:69:25: error: conflicting types for ‘base_yylex’
   69 | #define yylex           base_yylex
      |                         ^~~~~~~~~~
scan.c:15244:21: note: in expansion of macro ‘yylex’
15244 | #define YY_DECL int yylex \
      |                     ^~~~~
scan.c:15265:1: note: in expansion of macro ‘YY_DECL’
15265 | YY_DECL
      | ^~~~~~~
In file included from /pgsql/source/REL9_1_STABLE/src/backend/parser/gram.y:60:
/pgsql/source/REL9_1_STABLE/src/include/parser/gramparse.h:66:12: note: previous declaration of ‘base_yylex’ was here
   66 | extern int base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp,
      |            ^~~~~~~~~~
make[3]: *** [../../../src/Makefile.global:655: gram.o] Error 1

-- 
Álvaro Herrera              Valdivia, Chile  —  https://www.EnterpriseDB.com/
"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte"
(Ijon Tichy en Viajes, Stanislaw Lem)



Re: pg_dump versus ancient server versions

From: Robert Haas

On Fri, Oct 22, 2021 at 7:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> I just tried to build 9.1.  My config line there doesn't have ssl, but I
> do get this in the compile stage:

Hmm.

You know, one thing we could think about doing is patching some of the
older branches to make them compile on modern machines. That would not
only be potentially useful for people who are upgrading from ancient
versions, but also for hackers trying to do research on the origin of
bugs or performance problems, and also for people who are trying to
maintain some kind of backward compatibility or other and want to test
against old versions.

I don't know whether that's really worth the effort and I expect Tom
will say that it's not. If he does say that, he may be right. But I
think if I were trying to extract my data from an old 7.4 database, I
think I'd find it a lot more useful if I could make 9.0 or 9.2 or
something compile and talk to it than if I had to use v15 and hope
that held together somehow. It doesn't really make sense to try to
keep compatibility of any sort with versions we can no longer test
against.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From: Tom Lane

Robert Haas <robertmhaas@gmail.com> writes:
> You know, one thing we could think about doing is patching some of the
> older branches to make them compile on modern machines. That would not
> only be potentially useful for people who are upgrading from ancient
> versions, but also for hackers trying to do research on the origin of
> bugs or performance problems, and also for people who are trying to
> maintain some kind of backward compatibility or other and want to test
> against old versions.

Yeah.  We have done that in the past; I thought more than once,
but right now the only case I can find is d13f41d21/105f3ef49.
There are some other post-EOL commits in git, but I think the
others were mistakes from over-enthusiastic back-patching, while
that one was definitely an intentional portability fix for EOL'd
versions.

> I don't know whether that's really worth the effort and I expect Tom
> will say that it's not. If he does say that, he may be right.

Hmm ... I guess the question is how much work we feel like putting
into that, and how we'd track whether old branches still work,
and on what platforms.  It could easily turn into a time sink
that's not justified by the value.  I do see your point that there's
some value in it; I'm just not sure about the cost/benefit ratio.

One thing we could do that would help circumscribe the costs is to say
"we are not going to consider issues involving new compiler warnings
or bugs caused by more-aggressive optimization".  We could mechanize
that pretty effectively by changing configure shortly after a branch's
EOL to select -O0 and no extra warning flags, so that anyone building
from branch tip would get those switch choices.

(I have no idea what this might look like on the Windows side, but
I'm concerned by the fact that we seem to need fixes every time a
new Visual Studio major version comes out.)

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Alvaro Herrera

On 2021-Oct-24, Robert Haas wrote:

> You know, one thing we could think about doing is patching some of the
> older branches to make them compile on modern machines. That would not
> only be potentially useful for people who are upgrading from ancient
> versions, but also for hackers trying to do research on the origin of
> bugs or performance problems, and also for people who are trying to
> maintain some kind of backward compatibility or other and want to test
> against old versions.

I think it is worth *some* effort, at least as far back as we want to
claim that we maintain pg_dump and/or psql compatibility, assuming it is
not too onerous.  For instance, I wouldn't want to clutter buildfarm or
CI dashboards with testing these branches, unless it is well isolated
from regular ones; we shouldn't commit anything that's too invasive; and
we shouldn't make any claims about supportability of these abandoned
branches.

As an example, I did backpatch one such fix to 8.3 (just over a year)
and 8.2 (four years) after they had closed -- see d13f41d21538 and
105f3ef492ab.

-- 
Álvaro Herrera           39°49'30"S 73°17'W  —  https://www.EnterpriseDB.com/
"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente"



Re: pg_dump versus ancient server versions

From: Laurenz Albe

On Fri, 2021-10-22 at 19:26 -0400, Robert Haas wrote:
> On Fri, Oct 22, 2021 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > So my first thought was just to revert 92316a458 and give up on it as
> > a bad idea.  However ... does anyone actually still care about being
> > able to dump from such ancient servers?
> 
> I think I recently heard about an 8.4 server still out there in the
> wild, but AFAICR it's been a long time since I've heard about anything
> older.

I had a customer with 8.3 in the not too distant past, but that need not
stop the show.  If necessary, they can dump with 8.3's own pg_dump and
restore that.

Yours,
Laurenz Albe




Re: pg_dump versus ancient server versions

From: Andrew Dunstan

On 10/22/21 19:30, Tom Lane wrote:
> "David G. Johnston" <david.g.johnston@gmail.com> writes:
>> On Fri, Oct 22, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Anyway, I think the default answer is "revert 92316a458 and keep the
>>> compatibility goalposts where they are".  But I wanted to open up a
>>> discussion to see if anyone likes the other approach better.
>> ... IMO, the bar for this kind of situation should be 10 releases at
>> most - 5 of which would be in support at the time the patch goes in.  We
>> don't have to actively drop support of older stuff but anything older
>> shouldn't be preventing new commits.
> Yeah.  I checked into when it was that we dropped pre-8.0 support
> from pg_dump, and the answer is just about five years ago (64f3524e2).
> So moving the bar forward by five releases isn't at all out of line.
> 8.4 would be eight years past EOL by the time v15 comes out.
>
> One of the arguments for the previous change was that it was getting
> very hard to build old releases on modern platforms, thus making it
> hard to do any compatibility testing.  I believe the same is starting
> to become true of the 8.x releases, though I've not tried personally
> to build any of them in some time.  (The executables I'm using for
> them date from 2014 or earlier, and have not been recompiled in
> subsequent platform upgrades ...)  Anyway it's definitely not free
> to continue to support old source server versions.


But we don't need to build them on modern platforms, just run them on
modern platforms, ISTM.

Some months ago I built binaries all the way back to 7.2 that, with a
little help, run on modern Fedora and Ubuntu systems. I just upgraded my
Fedora system from 31 to 34 and they still run. See
<https://gitlab.com/adunstan/pg-old-bin>. One of the intended use cases
was to test pg_dump against old versions.

I'm not opposed to us cutting off support for very old versions,
although I think we should only do that very occasionally (no more than
once every five years, say) unless there's a very good reason. I'm also
not opposed to us making small adjustments to allow us to build old
versions on modern platforms, but if we do that then we should probably
have some buildfarm support for it.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From: Robert Haas

On Sun, Oct 24, 2021 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Hmm ... I guess the question is how much work we feel like putting
> into that, and how we'd track whether old branches still work,
> and on what platforms.  It could easily turn into a time sink
> that's not justified by the value.  I do see your point that there's
> some value in it; I'm just not sure about the cost/benefit ratio.

Right. Well, we could leave it up to people who care to decide how
much work they want to do, perhaps. But I do find it annoying that
pg_dump is supposed to maintain compatibility with server releases
that I can't easily build. Fortunately I don't patch pg_dump very
often, but if I did, it'd be very difficult for me to verify that
things work against really old versions. I know that you (Tom) do a
lot of work of this type though. In my opinion, if you find yourself
working on a project of this type and as part of that you do some
fixes to an older branch to make it compile, maybe you ought to commit
those so that the next person doesn't have the same problem. And maybe
when we add support for newer versions of OpenSSL or Windows, we ought
to consider back-patching those even to unsupported releases if
someone's willing to do the work. If they're not, they're not, but I
think we tend to strongly discourage commits to EOL branches, and I
think maybe we should stop doing that. Not that people should
routinely back-patch bug fixes, but stuff that makes it easier to
build seems fair game.

I don't think we need to worry too much about users getting the wrong
impression. People who want to know what is supported are going to
look at our web site for that information, and they are going to look
for releases. I can't rule out the possibility that someone is going
to build an updated version of 7.4 or 8.2 with whatever patches we
might choose to commit there, but they're unlikely to think that means
those are fully supported branches. And if they somehow do think that
despite all evidence to the contrary, we can just tell them that they
are mistaken.

> One thing we could do that would help circumscribe the costs is to say
> "we are not going to consider issues involving new compiler warnings
> or bugs caused by more-aggressive optimization".  We could mechanize
> that pretty effectively by changing configure shortly after a branch's
> EOL to select -O0 and no extra warning flags, so that anyone building
> from branch tip would get those switch choices.

I don't much like the idea of including -O0 because it seems like it
could be confusing. People might not realize that the build
settings have been changed. I don't think that's really the problem
anyway: anybody who hits compiler warnings in older branches could
decide to fix them (and as long as it's a committer who will be
responsible for their own work, I think that's totally fine) or enable
-O0 locally. I routinely do that when I hit problems on older
branches, and it helps a lot, but the way I see it, that's such an
easy change that there's little reason to make it in the source code.
What's a lot more annoying is if the compile fails altogether, or you
can't even get past the configure step.
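
(For concreteness: since configure omits its default -O2 whenever CFLAGS
is supplied explicitly, the local change is just something like

    ./configure CFLAGS=-O0

plus whatever other options one's usual invocation carries.)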

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From: Robert Haas

On Mon, Oct 25, 2021 at 8:29 AM Andrew Dunstan <andrew@dunslane.net> wrote:
> But we don't need to build them on modern platforms, just run them on
> modern platforms, ISTM.

I don't really agree with this.

> Some months ago I built binaries all the way back to 7.2 that with a
> little help run on modern Fedora and Ubuntu systems. I just upgraded my
> Fedora system from 31 to 34 and they still run. See
> <https://gitlab.com/adunstan/pg-old-bin> One of the intended use cases
> was to test pg_dump against old versions.

That's cool, but I don't have a Fedora or Ubuntu VM handy, and it does
seem like if people are working on testing against old versions, they
might even want to be able to recompile with debugging statements
added or something. So I think actually compiling is a lot better than
being able to get working binaries from someplace, even though the
latter is better than nothing.

> I'm not opposed to us cutting off support for very old versions,
> although I think we should only do that very occasionally (no more than
> once every five years, say) unless there's a very good reason. I'm also
> not opposed to us making small adjustments to allow us to build old
> versions on modern platforms, but if we do that then we should probably
> have some buildfarm support for it.

Yeah, I think having a small number of buildfarm animals testing very
old versions would be nice. Perhaps we can call them tyrannosaurus,
brontosaurus, triceratops, etc. :-)

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From: Tom Lane

Robert Haas <robertmhaas@gmail.com> writes:
> Right. Well, we could leave it up to people who care to decide how
> much work they want to do, perhaps. But I do find it annoying that
> pg_dump is supposed to maintain compatibility with server releases
> that I can't easily build. Fortunately I don't patch pg_dump very
> often, but if I did, it'd be very difficult for me to verify that
> things work against really old versions. I know that you (Tom) do a
> lot of work of this type though. In my opinion, if you find yourself
> working on a project of this type and as part of that you do some
> fixes to an older branch to make it compile, maybe you ought to commit
> those so that the next person doesn't have the same problem.

Well, the answer to that so far is that I've never done such fixes.
I have the last released versions of old branches lying around,
and that's what I test against.  It's been sufficient so far, although
if I suddenly needed to do (say) SSL-enabled testing, that would be
a problem because I don't think I built with SSL for any of those
branches.

Because of that angle, I concur with your position that it'd really
be desirable to be able to build old versions on modern platforms.
Even if you've got an old executable, it might be misconfigured for
the purpose you have in mind.

> And maybe
> when we add support for newer versions of OpenSSL or Windows, we ought
> to consider back-patching those even to unsupported releases if
> someone's willing to do the work. If they're not, they're not, but I
> think we tend to strongly discourage commits to EOL branches, and I
> think maybe we should stop doing that. Not that people should
> routinely back-patch bug fixes, but stuff that makes it easier to
> build seems fair game.

What concerns me here is that we not get into a position where we're
effectively still maintaining EOL'd versions.  Looking at the git
history yesterday reminded me that we had such a situation back in
the early 7.x days.  I can see that I still occasionally made commits
into 7.1 and 7.2 years after the last releases of those branches,
which ended up being a complete waste of effort.  There was no policy
guiding what to back-patch into what branches, partly because we
didn't have a defined EOL policy then.  So I want to have a policy
(and a pretty tight one) before I'll go back to doing that.

Roughly speaking, I think the policy should be "no feature bug fixes,
not even security fixes, for EOL'd branches; only fixes that are
minimally necessary to make it build on newer platforms".  And
I want to have a sunset provision even for that.  Fixing every branch
forevermore doesn't scale.

There's also the question of how we get to a working state in the
first place -- as we found upthread, there's a fair-sized amount
of work to do just to restore buildability right now, for anything
that was EOL'd more than a year or two back.  I'm not volunteering
for that, but somebody would have to to get things off the ground.

Also, I concur with Andrew's point that we'd really have to have
buildfarm support.  However, this might not be as bad as it seems.
In principle we might just need to add resurrected branches back to
the branches_to_build list.  Given my view of what the back-patching
policy ought to be, a new build in an old branch might only be
required a couple of times a year, which would not be an undue
investment of buildfarm resources.  (Hmmm ... but disk space could
become a problem, particularly on older machines with not so much
disk.  Do we really need to maintain a separate checkout for each
branch?  It seems like a fresh checkout from the repo would be
little more expensive than the current copy-a-checkout process.)

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Robert Haas

On Mon, Oct 25, 2021 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> What concerns me here is that we not get into a position where we're
> effectively still maintaining EOL'd versions.  Looking at the git
> history yesterday reminded me that we had such a situation back in
> the early 7.x days.  I can see that I still occasionally made commits
> into 7.1 and 7.2 years after the last releases of those branches,
> which ended up being a complete waste of effort.  There was no policy
> guiding what to back-patch into what branches, partly because we
> didn't have a defined EOL policy then.  So I want to have a policy
> (and a pretty tight one) before I'll go back to doing that.
>
> Roughly speaking, I think the policy should be "no feature bug fixes,
> not even security fixes, for EOL'd branches; only fixes that are
> minimally necessary to make it build on newer platforms".  And
> I want to have a sunset provision even for that.  Fixing every branch
> forevermore doesn't scale.

Sure, but you can ameliorate that a lot by just saying it's something
people have the *option* to do, not something anybody is *expected* to
do. I agree it's best if we continue to discourage back-patching bug
fixes into out-of-support branches, but I also think we don't need to be
too stringent about this. What I think we don't want is, for example,
somebody working at company X deciding to back-patch all the bug fixes
that customers of company X care about into our back-branches, but
not the other ones. But on the other hand if somebody is trying to
benchmark or test compatibility with an old branch and it keeps crashing
because of some bug, telling them that they're not allowed to fix that
bug because it's not a sufficiently-minimal change to a dead branch is
kind of ridiculous. In other words, if you try to police every change
anyone wants to make, e.g. "well I know that would help YOU build on a
newer platform but it doesn't seem like it meets the criteria of the
minimum necessary change to make it build on a newer platform," then
you might as well just give up now. Nobody cares about the older
branches enough to put work into fixing whatever's wrong and then
having to argue about whether that work ought to be thrown away
anyway.

> There's also the question of how we get to a working state in the
> first place -- as we found upthread, there's a fair-sized amount
> of work to do just to restore buildability right now, for anything
> that was EOL'd more than a year or two back.  I'm not volunteering
> for that, but somebody would have to to get things off the ground.

Right.

> Also, I concur with Andrew's point that we'd really have to have
> buildfarm support.  However, this might not be as bad as it seems.
> In principle we might just need to add resurrected branches back to
> the branches_to_build list.  Given my view of what the back-patching
> policy ought to be, a new build in an old branch might only be
> required a couple of times a year, which would not be an undue
> investment of buildfarm resources.  (Hmmm ... but disk space could
> become a problem, particularly on older machines with not so much
> disk.  Do we really need to maintain a separate checkout for each
> branch?  It seems like a fresh checkout from the repo would be
> little more expensive than the current copy-a-checkout process.)

I suppose it would be useful if we had the ability to do new runs only
when the source code has changed...

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From: Tom Lane

Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Oct 25, 2021 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Roughly speaking, I think the policy should be "no feature bug fixes,
>> not even security fixes, for EOL'd branches; only fixes that are
>> minimally necessary to make it build on newer platforms".  And
>> I want to have a sunset provision even for that.  Fixing every branch
>> forevermore doesn't scale.

> Sure, but you can ameliorate that a lot by just saying it's something
> people have the *option* to do, not something anybody is *expected* to
> do. I agree it's best if we continue to discourage back-patching bug
> fixes into supported branches, but I also think we don't need to be
> too stringent about this.

Actually, I think we do.  If I want to test against 7.4, ISTM I want
to test against the last released 7.4 version, not something with
arbitrary later changes.  Otherwise, what exactly is the point?

>> In principle we might just need to add resurrected branches back to
>> the branches_to_build list.  Given my view of what the back-patching
>> policy ought to be, a new build in an old branch might only be
>> required a couple of times a year, which would not be an undue
>> investment of buildfarm resources.

> I suppose it would be useful if we had the ability to do new runs only
> when the source code has changed...

Uh, don't we have that already?  I know you can configure a buildfarm
animal to force a run at least every-so-often, but it's not required,
and I don't think it's even the default.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Alvaro Herrera

On 2021-Oct-25, Tom Lane wrote:

> Roughly speaking, I think the policy should be "no feature bug fixes,
> not even security fixes, for EOL'd branches; only fixes that are
> minimally necessary to make it build on newer platforms".  And
> I want to have a sunset provision even for that.  Fixing every branch
> forevermore doesn't scale.

Agreed.  I think dropping such support at the same time we drop
psql/pg_dump support is a decent answer to that.  That meets the stated
purpose of being able to test such support, and it lets the cutoff
move forward as development needs dictate.

> Also, I concur with Andrew's point that we'd really have to have
> buildfarm support.  However, this might not be as bad as it seems.
> In principle we might just need to add resurrected branches back to
> the branches_to_build list.

Well, we would add them to *some* list, but not to the one used by stock
BF members -- not only because of the diskspace issue but also because
of the time to build.  I suggest that we should have a separate
list-of-branches file that would only be used by BF members especially
configured to do so; and hopefully we won't allow more than a handful
of animals to do that, but rather a well-chosen subset, and also maybe allow
only GCC rather than try to support other compilers.  (There's no need
to ensure compilability on any Windows platform, for example.)

-- 
Álvaro Herrera              Valdivia, Chile  —  https://www.EnterpriseDB.com/
"Ed is the standard text editor."
      http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3



Re: pg_dump versus ancient server versions

From: Robert Haas

On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Actually, I think we do.  If I want to test against 7.4, ISTM I want
> to test against the last released 7.4 version, not something with
> arbitrary later changes.  Otherwise, what exactly is the point?

1. You're free to check out any commit you like.

2. Nothing I said can reasonably be confused with "let's allow
arbitrary later changes."

> Uh, don't we have that already?  I know you can configure a buildfarm
> animal to force a run at least every-so-often, but it's not required,
> and I don't think it's even the default.

Oh, OK. I wonder how that plays with the buildfarm status page's
desire to drop old results that are more than 30 days old. I guess
you'd just need to force a run at least every 28 days or something.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From: Andrew Dunstan

On 10/25/21 11:09, Robert Haas wrote:
> On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Actually, I think we do.  If I want to test against 7.4, ISTM I want
>> to test against the last released 7.4 version, not something with
>> arbitrary later changes.  Otherwise, what exactly is the point?
> 1. You're free to check out any commit you like.
>
> 2. Nothing I said can reasonably be confused with "let's allow
> arbitrary later changes."
>
>> Uh, don't we have that already?  I know you can configure a buildfarm
>> animal to force a run at least every-so-often, but it's not required,
>> and I don't think it's even the default.


Yes, in fact it's rather discouraged. The default is just to build when
there's a code change detected.


> Oh, OK. I wonder how that plays with the buildfarm status page's
> desire to drop old results that are more than 30 days old. I guess
> you'd just need to force a run at least every 28 days or something.
>

Well, we could do that, or we could modify the way the server does the
status. The table it's based on has the last 500 records for each branch
for each animal, so the data is there.


cheers


andrew




--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From: Tom Lane

Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Oct 25, 2021 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Actually, I think we do.  If I want to test against 7.4, ISTM I want
>> to test against the last released 7.4 version, not something with
>> arbitrary later changes.  Otherwise, what exactly is the point?

> 1. You're free to check out any commit you like.

Yeah, and get something that won't build.  If there's any point
to this work at all, it has to be that we'll maintain the closest
possible buildable approximation to the last released version.

> Oh, OK. I wonder how that plays with the buildfarm status page's
> desire to drop old results that are more than 30 days old. I guess
> you'd just need to force a run at least every 28 days or something.

I don't think it's a problem.  If we haven't committed anything to
branch X in a month, it's likely not interesting.  It might be worth
having a way to get the website to show results further back than
a month, but that doesn't need to be in the default view.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Andrew Dunstan

On 10/25/21 10:23, Tom Lane wrote:
>
> Also, I concur with Andrew's point that we'd really have to have
> buildfarm support.  However, this might not be as bad as it seems.
> In principle we might just need to add resurrected branches back to
> the branches_to_build list.  Given my view of what the back-patching
> policy ought to be, a new build in an old branch might only be
> required a couple of times a year, which would not be an undue
> investment of buildfarm resources.  (Hmmm ... but disk space could
> become a problem, particularly on older machines with not so much
> disk.  Do we really need to maintain a separate checkout for each
> branch?  It seems like a fresh checkout from the repo would be
> little more expensive than the current copy-a-checkout process.)


If you set it up with these settings then the disk space used is minimal:

     git_use_workdirs => 1,
     rm_worktrees => 1,

So I have this on crake:

    andrew@emma:root $ du -sh REL*/pgsql
    5.5M    REL_10_STABLE/pgsql
    5.6M    REL_11_STABLE/pgsql
    5.6M    REL_12_STABLE/pgsql
    5.6M    REL_13_STABLE/pgsql
    2.0M    REL_14_STABLE/pgsql
    2.6M    REL9_5_STABLE/pgsql
    5.5M    REL9_6_STABLE/pgsql


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From: Andrew Dunstan

On 10/25/21 11:05, Alvaro Herrera wrote:
>
>> Also, I concur with Andrew's point that we'd really have to have
>> buildfarm support.  However, this might not be as bad as it seems.
>> In principle we might just need to add resurrected branches back to
>> the branches_to_build list.
> Well, we would add them to *some* list, but not to the one used by stock
> BF members -- not only because of the diskspace issue but also because
> of the time to build.  I suggest that we should have a separate
> list-of-branches file that would only be used by BF members especially
> configured to do so; and hopefully we won't allow more than a handful
> animals to do that but rather a well-chosen subset, and also maybe allow
> only GCC rather than try to support other compilers.  (There's no need
> to ensure compilability on any Windows platform, for example.)


Well, we do build with gcc on Windows :-) But yes, maybe we should make
this a more opt-in process.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From: Tom Lane

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> On 2021-Oct-25, Tom Lane wrote:
>> Also, I concur with Andrew's point that we'd really have to have
>> buildfarm support.  However, this might not be as bad as it seems.
>> In principle we might just need to add resurrected branches back to
>> the branches_to_build list.

> Well, we would add them to *some* list, but not to the one used by stock
> BF members -- not only because of the diskspace issue but also because
> of the time to build.  I suggest that we should have a separate
> list-of-branches file that would only be used by BF members especially
> configured to do so; and hopefully we won't allow more than a handful
> animals to do that but rather a well-chosen subset, and also maybe allow
> only GCC rather than try to support other compilers.  (There's no need
> to ensure compilability on any Windows platform, for example.)

Meh.  I don't think that's a great approach, because then we're only
ensuring buildability on a rather static set of platforms.  The whole
point here is that when release N+1 of $your_favorite_platform arrives,
we want to know whether the old branches still build on it.  If the
default behavior for new buildfarm animals is to ignore the old branches,
we're much less likely to find that out.

It's also unclear to me why we'd leave Windows out of this discussion.
We keep saying we want to encourage Windows-based hackers to contribute,
so doesn't that require testing it on the same basis as other platforms?

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Tom Lane

Andrew Dunstan <andrew@dunslane.net> writes:
> On 10/25/21 10:23, Tom Lane wrote:
>> (Hmmm ... but disk space could
>> become a problem, particularly on older machines with not so much
>> disk.  Do we really need to maintain a separate checkout for each
>> branch?  It seems like a fresh checkout from the repo would be
>> little more expensive than the current copy-a-checkout process.)

> If you set it up with these settings then the disk space used is minimal:
>      git_use_workdirs => 1,
>      rm_worktrees => 1,

Maybe we should make those the defaults?  AFAICS the current
default setup uses circa 200MB per back branch, even between runs.
I'm not sure what that is buying us.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Alvaro Herrera

On 2021-Oct-25, Tom Lane wrote:

> It's also unclear to me why we'd leave Windows out of this discussion.
> We keep saying we want to encourage Windows-based hackers to contribute,
> so doesn't that require testing it on the same basis as other platforms?

Testing of in-support branches, sure -- I don't propose to break that.
But this is all about providing *some* server against which to test
client-side changes with, right?  Not to test the old servers
themselves.  Looking at Amit K's "Postgres person of the week" interview[1]
and remembering conversations with David Rowley, Windows hackers seem
perfectly familiar with getting Linux builds going, so we wouldn't need
to force MSVC fixes in order for them to have old servers available.

But anyway, I was thinking that the fixes required for MSVC buildability
were quite invasive, but on looking again they don't seem all that
bad[2], so I withdraw that comment.

I do think you have moved the goalposts: to reiterate what I said above,
I thought what we wanted was to have *some* server in order to test
client-side changes with; not to be able to get a server running on
every possible platform.  I'm not really on board with the idea that old
branches have to be buildable everywhere all the time.

[1] https://postgresql.life/post/amit_kapila/
[2] e.g., commit 2b1394fc2b52a2573d08aa626e7b49568f27464e

-- 
Álvaro Herrera         PostgreSQL Developer  —  https://www.EnterpriseDB.com/



Re: pg_dump versus ancient server versions

From: Tom Lane

Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> I do think you have moved the goalposts: to reiterate what I said above,
> I thought what we wanted was to have *some* server in order to test
> client-side changes with; not to be able to get a server running on
> every possible platform.  I'm not really on board with the idea that old
> branches have to be buildable everywhere all the time.

Agreed, that might be too much work compared to the value.  But if we're
to be selective about support for this, I'm unclear on how we decide
which platforms are supported --- and, more importantly, how we keep
that list up to date over time.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Andres Freund

Hi,

On 2021-10-22 19:30:25 -0400, Tom Lane wrote:
> Yeah.  I checked into when it was that we dropped pre-8.0 support
> from pg_dump, and the answer is just about five years ago (64f3524e2).
> So moving the bar forward by five releases isn't at all out of line.
> 8.4 would be eight years past EOL by the time v15 comes out.

I'd really like us to adopt a "default" policy on this. I think it's a waste
to spend time every few years arguing what exact versions to drop. I'd much
rather say that, unless there are concrete reasons to deviate from that, we
provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql
for 5 releases or something like that.

It's fine to not actually spend the time to excise support for old versions
every release if not useful, but we should be able to "just do it" whenever
version compat is a meaningful hindrance.

Greetings,

Andres Freund



Re: pg_dump versus ancient server versions

From: Andres Freund

Hi,

On 2021-10-25 10:23:40 -0400, Tom Lane wrote:
> Also, I concur with Andrew's point that we'd really have to have
> buildfarm support.  However, this might not be as bad as it seems.
> In principle we might just need to add resurrected branches back to
> the branches_to_build list.  Given my view of what the back-patching
> policy ought to be, a new build in an old branch might only be
> required a couple of times a year, which would not be an undue
> investment of buildfarm resources.

FWIW, if helpful I could easily specify a few additional branches to some of
my buildfarm animals. Perhaps serinus/flaviventris (snapshot gcc without/with
optimizations) so we'd see problems coming early? I could also add a
recent-clang one.

I think doing this to a few designated animals is a better idea than wasting
cycles and space on a lot of animals.


> It seems like a fresh checkout from the repo would be little more expensive
> than the current copy-a-checkout process.)

I haven't looked in detail, but from what I've seen in the logs the
is-there-anything-new check is already not cheap, and does a checkout / update
of the git directory.

Greetings,

Andres Freund



Re: pg_dump versus ancient server versions

From: Tom Lane

Andres Freund <andres@anarazel.de> writes:
> On 2021-10-22 19:30:25 -0400, Tom Lane wrote:
>> Yeah.  I checked into when it was that we dropped pre-8.0 support
>> from pg_dump, and the answer is just about five years ago (64f3524e2).
>> So moving the bar forward by five releases isn't at all out of line.
>> 8.4 would be eight years past EOL by the time v15 comes out.

> I'd really like us to adopt a "default" policy on this. I think it's a waste
> to spend time every few years arguing what exact versions to drop. I'd much
> rather say that, unless there are concrete reasons to deviate from that, we
> provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql
> for 5 releases or something like that.

I agree with considering something like that to be the minimum support
policy, but the actual changes need a bit more care.  For example, when
we last did this, the technical need was just to drop pre-7.4 versions,
but we chose to make the cutoff 8.0 on the grounds that that was more
understandable to users [1].  In the same way, I'm thinking of moving the
cutoff to 9.0 now, although 8.4 would be sufficient from a technical
standpoint.

OTOH, in the new world of one-part major versions, it's less clear that
there will be obvious division points for future cutoff changes.  Maybe
versions-divisible-by-five would work?  Or versions divisible by ten,
but experience so far suggests that we'll want to move the cutoff more
often than once every ten years.

            regards, tom lane

[1] https://www.postgresql.org/message-id/flat/2661.1475849167%40sss.pgh.pa.us



Re: pg_dump versus ancient server versions

From: Tom Lane

Andres Freund <andres@anarazel.de> writes:
> On 2021-10-25 10:23:40 -0400, Tom Lane wrote:
>> It seems like a fresh checkout from the repo would be little more expensive
>> than the current copy-a-checkout process.)

> I haven't looked in detail, but from what I've seen in the logs the
> is-there-anything-new check is already not cheap, and does a checkout / update
> of the git directory.

Yeah, you probably need a checkout to apply the rule about not rebuilding
after documentation-only changes.  But it seems like the case where the
branch tip hasn't moved at all could be optimized fairly easily.  I'm not
sure it's worth the trouble to add code for that given our current usage
of the buildfarm; but if we were to start tracking branches that only
change a couple of times a year, it would be.

            regards, tom lane



Re: pg_dump versus ancient server versions

From: Andres Freund

Hi,

On 2021-10-25 12:43:15 -0400, Tom Lane wrote:
> Agreed, that might be too much work compared to the value.  But if we're
> to be selective about support for this, I'm unclear on how we decide
> which platforms are supported --- and, more importantly, how we keep
> that list up to date over time.

I honestly think that if we just test on linux with a single distribution,
we're already covering most of the benefit. From memory there have been two
rough classes of doesn't-build-anymore:

1) New optimizations / warnings. At least between gcc and clang, within a year
   or two, most of the issues end up being visible with the other compiler
   too. These aren't particularly distribution / OS specific.

2) Library dependencies cause problems, like the ssl detection mentioned
   elsewhere in this thread. This is also not that OS dependent. It's also not
   that clear that we can do something about the issues with a reasonable
   amount of effort in all cases. It's easy enough if it's just a minor
   configure fix, but we'd not want to backpatch larger SSL changes or such.


Maybe there's also a case for building older releases with msvc, but that
seems like a pain due to the msvc project generation needing to support a
specific version of msvc.

Greetings,

Andres Freund



Re: pg_dump versus ancient server versions

From: Andres Freund

Hi,

On 2021-10-25 13:09:43 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > I'd really like us to adopt a "default" policy on this. I think it's a waste
> > to spend time every few years arguing what exact versions to drop. I'd much
> > rather say that, unless there are concrete reasons to deviate from that, we
> > provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql
> > for 5 releases or something like that.
> 
> I agree with considering something like that to be the minimum support
> policy, but the actual changes need a bit more care.  For example, when
> we last did this, the technical need was just to drop pre-7.4 versions,
> but we chose to make the cutoff 8.0 on the grounds that that was more
> understandable to users [1].  In the same way, I'm thinking of moving the
> cutoff to 9.0 now, although 8.4 would be sufficient from a technical
> standpoint.

I think that'd be less of a concern if we had a documented policy
somewhere. It'd not be hard to include a version table in that policy to make
it easier to understand. We could even add it to the table in
https://www.postgresql.org/support/versioning/ or something similar.


> OTOH, in the new world of one-part major versions, it's less clear that
> there will be obvious division points for future cutoff changes.  Maybe
> versions-divisible-by-five would work?

I think that's more confusing than helpful, because the support timeframes
then differ between releases. It's easier to just subtract a number of major
releases from a specific major version. Especially if there's a table
somewhere.


> Or versions divisible by ten, but experience so far suggests that we'll want
> to move the cutoff more often than once every ten years.

Yes, I think that'd be quite a bit too restrictive.

Greetings,

Andres Freund



Re: pg_dump versus ancient server versions

From: Andrew Dunstan

On 10/25/21 13:06, Andres Freund wrote:
> Hi,
>
> On 2021-10-25 10:23:40 -0400, Tom Lane wrote:
>> Also, I concur with Andrew's point that we'd really have to have
>> buildfarm support.  However, this might not be as bad as it seems.
>> In principle we might just need to add resurrected branches back to
>> the branches_to_build list.  Given my view of what the back-patching
>> policy ought to be, a new build in an old branch might only be
>> required a couple of times a year, which would not be an undue
>> investment of buildfarm resources.
> FWIW, if helpful I could easily specify a few additional branches to some of
> my buildfarm animals. Perhaps serinus/flaviventris (snapshot gcc wo/w
> optimizations) so we'd see problems coming early? I could also add
> recent-clang one.
>
> I think doing this to a few designated animals is a better idea than wasting
> cycles and space on a lot of animals.


Right now the server will only accept results for something in
branches_of_interest.txt. So we would need to modify that.


I tend to agree that we don't need a whole lot of cross platform testing
here.


>
>
>> It seems like a fresh checkout from the repo would be little more expensive
>> than the current copy-a-checkout process.)
> I haven't looked in detail, but from what I've seen in the logs the
> is-there-anything-new check is already not cheap, and does a checkout / update
> of the git directory.
>
>

If you have removed the work tree (with the "rm_worktrees => 1" setting)
then it restores it by doing a checkout. It then does a "git fetch", and
then as you say looks to see if there is anything new. If you know of a
better way to manage it then please let me know. On crake (which is
actually checking out four different repos) the checkout step typically
takes one or two seconds.
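
Expressed as the underlying git operations, that's roughly the following
(a sketch, with an illustrative branch name):

    git checkout REL9_6_STABLE                     # restore the removed work tree
    git fetch                                      # update from the upstream repo
    git log HEAD..origin/REL9_6_STABLE --oneline   # anything new to build?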


Copying the work tree can take a few seconds - to avoid that on
Unix/msys use vpath builds.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From: Tom Lane

Anyway, to get back to the original point ...

No one has spoken against moving up the cutoff for pg_dump support,
so I did a very quick pass to see how much code could be removed.
The answer is right about 1000 lines, counting both pg_dump and
pg_upgrade, so it seems like it's worth doing independently of the
unnest() issue.

The attached is just draft-quality, because I don't really want
to pursue the point until after committing the pg_dump changes
being discussed in the other thread.  If I push this first it'll
break a lot of those patches.  (Admittedly, pushing those first
will break this one, but this one is a lot easier to re-do.)

BTW, while looking at pg_upgrade I chanced to notice
check_for_isn_and_int8_passing_mismatch(), which seems like it's
not well thought out at all.  It's right that contrib/isn will
not upgrade nicely if the target cluster has a different
float8_pass_by_value setting from the source.  What's wrong is
the assumption that no other extension has the same issue.
We invented and publicized the "LIKE type" option for CREATE TYPE
precisely so that people could build types that act just like isn,
so it seems pretty foolish to imagine that no one has done so.

I think we should nuke check_for_isn_and_int8_passing_mismatch()
and just refuse to upgrade if float8_pass_by_value differs, full stop.
I can see little practical need to allow that case.

            regards, tom lane

diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 7682226b99..6e95148d11 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -1381,7 +1381,7 @@ CREATE DATABASE foo WITH TEMPLATE template0;
    <productname>PostgreSQL</productname> server versions newer than
    <application>pg_dump</application>'s version.  <application>pg_dump</application> can also
    dump from <productname>PostgreSQL</productname> servers older than its own version.
-   (Currently, servers back to version 8.0 are supported.)
+   (Currently, servers back to version 9.0 are supported.)
    However, <application>pg_dump</application> cannot dump from
    <productname>PostgreSQL</productname> servers newer than its own major version;
    it will refuse to even try, rather than risk making an invalid dump.
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index 20efdd771f..b15fbb44f1 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -68,7 +68,7 @@ PostgreSQL documentation
  </para>

   <para>
-   pg_upgrade supports upgrades from 8.4.X and later to the current
+   pg_upgrade supports upgrades from 9.0.X and later to the current
    major release of <productname>PostgreSQL</productname>, including snapshot and beta releases.
   </para>
  </refsect1>
diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c
index ea67e52a3f..e4b04fd491 100644
--- a/src/bin/pg_dump/dumputils.c
+++ b/src/bin/pg_dump/dumputils.c
@@ -183,10 +183,6 @@ buildACLCommands(const char *name, const char *subname, const char *nspname,
                 appendPQExpBuffer(firstsql, "%s FROM ", name);
                 if (grantee->len == 0)
                     appendPQExpBufferStr(firstsql, "PUBLIC;\n");
-                else if (strncmp(grantee->data, "group ",
-                                 strlen("group ")) == 0)
-                    appendPQExpBuffer(firstsql, "GROUP %s;\n",
-                                      fmtId(grantee->data + strlen("group ")));
                 else
                     appendPQExpBuffer(firstsql, "%s;\n",
                                       fmtId(grantee->data));
@@ -194,20 +190,6 @@ buildACLCommands(const char *name, const char *subname, const char *nspname,
         }
     }

-    /*
-     * We still need some hacking though to cover the case where new default
-     * public privileges are added in new versions: the REVOKE ALL will revoke
-     * them, leading to behavior different from what the old version had,
-     * which is generally not what's wanted.  So add back default privs if the
-     * source database is too old to have had that particular priv.
-     */
-    if (remoteVersion < 80200 && strcmp(type, "DATABASE") == 0)
-    {
-        /* database CONNECT priv didn't exist before 8.2 */
-        appendPQExpBuffer(firstsql, "%sGRANT CONNECT ON %s %s TO PUBLIC;\n",
-                          prefix, type, name);
-    }
-
     /* Scan individual ACL items */
     for (i = 0; i < naclitems; i++)
     {
@@ -300,10 +282,6 @@ buildACLCommands(const char *name, const char *subname, const char *nspname,
                     appendPQExpBuffer(secondsql, "%s TO ", name);
                     if (grantee->len == 0)
                         appendPQExpBufferStr(secondsql, "PUBLIC;\n");
-                    else if (strncmp(grantee->data, "group ",
-                                     strlen("group ")) == 0)
-                        appendPQExpBuffer(secondsql, "GROUP %s;\n",
-                                          fmtId(grantee->data + strlen("group ")));
                     else
                         appendPQExpBuffer(secondsql, "%s;\n", fmtId(grantee->data));
                 }
@@ -316,10 +294,6 @@ buildACLCommands(const char *name, const char *subname, const char *nspname,
                     appendPQExpBuffer(secondsql, "%s TO ", name);
                     if (grantee->len == 0)
                         appendPQExpBufferStr(secondsql, "PUBLIC");
-                    else if (strncmp(grantee->data, "group ",
-                                     strlen("group ")) == 0)
-                        appendPQExpBuffer(secondsql, "GROUP %s",
-                                          fmtId(grantee->data + strlen("group ")));
                     else
                         appendPQExpBufferStr(secondsql, fmtId(grantee->data));
                     appendPQExpBufferStr(secondsql, " WITH GRANT OPTION;\n");
@@ -432,15 +406,12 @@ buildDefaultACLCommands(const char *type, const char *nspname,
 /*
  * This will parse an aclitem string, having the general form
  *        username=privilegecodes/grantor
- * or
- *        group groupname=privilegecodes/grantor
- * (the "group" case occurs only with servers before 8.1).
  *
  * Returns true on success, false on parse error.  On success, the components
  * of the string are returned in the PQExpBuffer parameters.
  *
- * The returned grantee string will be the dequoted username or groupname
- * (preceded with "group " in the latter case).  Note that a grant to PUBLIC
+ * The returned grantee string will be the dequoted username.
+ * Note that a grant to PUBLIC
  * is represented by an empty grantee string.  The returned grantor is the
  * dequoted grantor name.  Privilege characters are translated to GRANT/REVOKE
  * comma-separated privileges lists.  If "privswgo" is non-NULL, the result is
@@ -534,8 +505,7 @@ do { \
             {
                 CONVERT_PRIV('d', "DELETE");
                 CONVERT_PRIV('t', "TRIGGER");
-                if (remoteVersion >= 80400)
-                    CONVERT_PRIV('D', "TRUNCATE");
+                CONVERT_PRIV('D', "TRUNCATE");
             }
         }

diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 9b0e699ce8..01ca293b27 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -903,13 +903,10 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel)
                         StartTransaction(&AH->public);

                         /*
-                         * If the server version is >= 8.4, make sure we issue
-                         * TRUNCATE with ONLY so that child tables are not
-                         * wiped.
+                         * Issue TRUNCATE with ONLY so that child tables are
+                         * not wiped.
                          */
-                        ahprintf(AH, "TRUNCATE TABLE %s%s;\n\n",
-                                 (PQserverVersion(AH->connection) >= 80400 ?
-                                  "ONLY " : ""),
+                        ahprintf(AH, "TRUNCATE TABLE ONLY %s;\n\n",
                                  fmtQualifiedId(te->namespace, te->tag));
                     }

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e9f68e8986..10b6ca8779 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -96,10 +96,6 @@ static bool dosync = true;        /* Issue fsync() to make dump durable on disk. */
 /* subquery used to convert user ID (eg, datdba) to user name */
 static const char *username_subquery;

-/*
- * For 8.0 and earlier servers, pulled from pg_database, for 8.1+ we use
- * FirstNormalObjectId - 1.
- */
 static Oid    g_last_builtin_oid; /* value of the last builtin oid */

 /* The specified names/patterns should to match at least one entity */
@@ -170,7 +166,6 @@ static void expand_table_name_patterns(Archive *fout,
 static NamespaceInfo *findNamespace(Oid nsoid);
 static void dumpTableData(Archive *fout, const TableDataInfo *tdinfo);
 static void refreshMatViewData(Archive *fout, const TableDataInfo *tdinfo);
-static void guessConstraintInheritance(TableInfo *tblinfo, int numTables);
 static void dumpCommentExtended(Archive *fout, const char *type,
                                 const char *name, const char *namespace,
                                 const char *owner, CatalogId catalogId,
@@ -260,17 +255,11 @@ static void buildMatViewRefreshDependencies(Archive *fout);
 static void getTableDataFKConstraints(void);
 static char *format_function_arguments(const FuncInfo *finfo, const char *funcargs,
                                        bool is_agg);
-static char *format_function_arguments_old(Archive *fout,
-                                           const FuncInfo *finfo, int nallargs,
-                                           char **allargtypes,
-                                           char **argmodes,
-                                           char **argnames);
 static char *format_function_signature(Archive *fout,
                                        const FuncInfo *finfo, bool honor_quotes);
 static char *convertRegProcReference(const char *proc);
 static char *getFormattedOperatorName(const char *oproid);
 static char *convertTSFunction(Archive *fout, Oid funcOid);
-static Oid    findLastBuiltinOid_V71(Archive *fout);
 static const char *getFormattedTypeName(Archive *fout, Oid oid, OidOptions opts);
 static void getBlobs(Archive *fout);
 static void dumpBlob(Archive *fout, const BlobInfo *binfo);
@@ -726,10 +715,10 @@ main(int argc, char **argv)


     /*
-     * We allow the server to be back to 8.0, and up to any minor release of
+     * We allow the server to be back to 9.0, and up to any minor release of
      * our own major version.  (See also version check in pg_dumpall.c.)
      */
-    fout->minRemoteVersion = 80000;
+    fout->minRemoteVersion = 90000;
     fout->maxRemoteVersion = (PG_VERSION_NUM / 100) * 100 + 99;

     fout->numWorkers = numWorkers;
@@ -756,10 +745,7 @@ main(int argc, char **argv)
         dopt.no_unlogged_table_data = true;

     /* Select the appropriate subquery to convert user IDs to names */
-    if (fout->remoteVersion >= 80100)
-        username_subquery = "SELECT rolname FROM pg_catalog.pg_roles WHERE oid =";
-    else
-        username_subquery = "SELECT usename FROM pg_catalog.pg_user WHERE usesysid =";
+    username_subquery = "SELECT rolname FROM pg_catalog.pg_roles WHERE oid =";

     /* check the version for the synchronized snapshots feature */
     if (numWorkers > 1 && fout->remoteVersion < 90200
@@ -777,10 +763,7 @@ main(int argc, char **argv)
      *
      * With 8.1 and above, we can just use FirstNormalObjectId - 1.
      */
-    if (fout->remoteVersion < 80100)
-        g_last_builtin_oid = findLastBuiltinOid_V71(fout);
-    else
-        g_last_builtin_oid = FirstNormalObjectId - 1;
+    g_last_builtin_oid = FirstNormalObjectId - 1;

     pg_log_info("last built-in OID is %u", g_last_builtin_oid);

@@ -848,9 +831,6 @@ main(int argc, char **argv)
      */
     tblinfo = getSchemaData(fout, &numTables);

-    if (fout->remoteVersion < 80400)
-        guessConstraintInheritance(tblinfo, numTables);
-
     if (!dopt.schemaOnly)
     {
         getTableData(&dopt, tblinfo, numTables, 0);
@@ -1117,7 +1097,7 @@ setup_connection(Archive *AH, const char *dumpencoding,
         use_role = AH->use_role;

     /* Set the role if requested */
-    if (use_role && AH->remoteVersion >= 80100)
+    if (use_role)
     {
         PQExpBuffer query = createPQExpBuffer();

@@ -1134,8 +1114,7 @@ setup_connection(Archive *AH, const char *dumpencoding,
     ExecuteSqlStatement(AH, "SET DATESTYLE = ISO");

     /* Likewise, avoid using sql_standard intervalstyle */
-    if (AH->remoteVersion >= 80400)
-        ExecuteSqlStatement(AH, "SET INTERVALSTYLE = POSTGRES");
+    ExecuteSqlStatement(AH, "SET INTERVALSTYLE = POSTGRES");

     /*
      * Use an explicitly specified extra_float_digits if it has been provided.
@@ -1151,17 +1130,14 @@ setup_connection(Archive *AH, const char *dumpencoding,
         ExecuteSqlStatement(AH, q->data);
         destroyPQExpBuffer(q);
     }
-    else if (AH->remoteVersion >= 90000)
-        ExecuteSqlStatement(AH, "SET extra_float_digits TO 3");
     else
-        ExecuteSqlStatement(AH, "SET extra_float_digits TO 2");
+        ExecuteSqlStatement(AH, "SET extra_float_digits TO 3");

     /*
-     * If synchronized scanning is supported, disable it, to prevent
-     * unpredictable changes in row ordering across a dump and reload.
+     * Disable synchronized scanning, to prevent unpredictable changes in row
+     * ordering across a dump and reload.
      */
-    if (AH->remoteVersion >= 80300)
-        ExecuteSqlStatement(AH, "SET synchronize_seqscans TO off");
+    ExecuteSqlStatement(AH, "SET synchronize_seqscans TO off");

     /*
      * Disable timeouts if supported.
@@ -1956,7 +1932,6 @@ dumpTableData_copy(Archive *fout, const void *dcontext)
      */
     if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
     {
-        /* Note: this syntax is only supported in 8.2 and up */
         appendPQExpBufferStr(q, "COPY (SELECT ");
         /* klugery to get rid of parens in column list */
         if (strlen(column_list) > 2)
@@ -2696,81 +2671,6 @@ getTableDataFKConstraints(void)
 }


-/*
- * guessConstraintInheritance:
- *    In pre-8.4 databases, we can't tell for certain which constraints
- *    are inherited.  We assume a CHECK constraint is inherited if its name
- *    matches the name of any constraint in the parent.  Originally this code
- *    tried to compare the expression texts, but that can fail for various
- *    reasons --- for example, if the parent and child tables are in different
- *    schemas, reverse-listing of function calls may produce different text
- *    (schema-qualified or not) depending on search path.
- *
- *    In 8.4 and up we can rely on the conislocal field to decide which
- *    constraints must be dumped; much safer.
- *
- *    This function assumes all conislocal flags were initialized to true.
- *    It clears the flag on anything that seems to be inherited.
- */
-static void
-guessConstraintInheritance(TableInfo *tblinfo, int numTables)
-{
-    int            i,
-                j,
-                k;
-
-    for (i = 0; i < numTables; i++)
-    {
-        TableInfo  *tbinfo = &(tblinfo[i]);
-        int            numParents;
-        TableInfo **parents;
-        TableInfo  *parent;
-
-        /* Sequences and views never have parents */
-        if (tbinfo->relkind == RELKIND_SEQUENCE ||
-            tbinfo->relkind == RELKIND_VIEW)
-            continue;
-
-        /* Don't bother computing anything for non-target tables, either */
-        if (!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
-            continue;
-
-        numParents = tbinfo->numParents;
-        parents = tbinfo->parents;
-
-        if (numParents == 0)
-            continue;            /* nothing to see here, move along */
-
-        /* scan for inherited CHECK constraints */
-        for (j = 0; j < tbinfo->ncheck; j++)
-        {
-            ConstraintInfo *constr;
-
-            constr = &(tbinfo->checkexprs[j]);
-
-            for (k = 0; k < numParents; k++)
-            {
-                int            l;
-
-                parent = parents[k];
-                for (l = 0; l < parent->ncheck; l++)
-                {
-                    ConstraintInfo *pconstr = &(parent->checkexprs[l]);
-
-                    if (strcmp(pconstr->dobj.name, constr->dobj.name) == 0)
-                    {
-                        constr->conislocal = false;
-                        break;
-                    }
-                }
-                if (!constr->conislocal)
-                    break;
-            }
-        }
-    }
-}
-
-
 /*
  * dumpDatabase:
  *    dump the database definition
@@ -2875,7 +2775,7 @@ dumpDatabase(Archive *fout)
                           "WHERE datname = current_database()",
                           username_subquery);
     }
-    else if (fout->remoteVersion >= 80400)
+    else
     {
         appendPQExpBuffer(dbQry, "SELECT tableoid, oid, datname, "
                           "(%s datdba) AS dba, "
@@ -2889,33 +2789,6 @@ dumpDatabase(Archive *fout)
                           "WHERE datname = current_database()",
                           username_subquery);
     }
-    else if (fout->remoteVersion >= 80200)
-    {
-        appendPQExpBuffer(dbQry, "SELECT tableoid, oid, datname, "
-                          "(%s datdba) AS dba, "
-                          "pg_encoding_to_char(encoding) AS encoding, "
-                          "NULL AS datcollate, NULL AS datctype, datfrozenxid, 0 AS datminmxid, "
-                          "datacl, '' as rdatacl, datistemplate, datconnlimit, "
-                          "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS tablespace, "
-                          "shobj_description(oid, 'pg_database') AS description "
-
-                          "FROM pg_database "
-                          "WHERE datname = current_database()",
-                          username_subquery);
-    }
-    else
-    {
-        appendPQExpBuffer(dbQry, "SELECT tableoid, oid, datname, "
-                          "(%s datdba) AS dba, "
-                          "pg_encoding_to_char(encoding) AS encoding, "
-                          "NULL AS datcollate, NULL AS datctype, datfrozenxid, 0 AS datminmxid, "
-                          "datacl, '' as rdatacl, datistemplate, "
-                          "-1 as datconnlimit, "
-                          "(SELECT spcname FROM pg_tablespace t WHERE t.oid = dattablespace) AS tablespace "
-                          "FROM pg_database "
-                          "WHERE datname = current_database()",
-                          username_subquery);
-    }

     res = ExecuteSqlQueryForSingleRow(fout, dbQry->data);

@@ -3016,7 +2889,6 @@ dumpDatabase(Archive *fout)
     appendPQExpBuffer(labelq, "DATABASE %s", qdatname);

     /* Dump DB comment if any */
-    if (fout->remoteVersion >= 80200)
     {
         /*
          * 8.2 and up keep comments on shared objects in a shared table, so we
@@ -3047,11 +2919,6 @@ dumpDatabase(Archive *fout)
                                       .nDeps = 1));
         }
     }
-    else
-    {
-        dumpComment(fout, "DATABASE", qdatname, NULL, dba,
-                    dbCatId, 0, dbDumpId);
-    }

     /* Dump DB security label, if enabled */
     if (!dopt->no_security_labels && fout->remoteVersion >= 90200)
@@ -3222,15 +3089,13 @@ dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf,
     /*
      * First collect database-specific options.  Pre-8.4 server versions lack
      * unnest(), so we do this the hard way by querying once per subscript.
+     * XXX this could be improved now.
      */
     for (;;)
     {
-        if (AH->remoteVersion >= 90000)
             printfPQExpBuffer(buf, "SELECT setconfig[%d] FROM pg_db_role_setting "
                               "WHERE setrole = 0 AND setdatabase = '%u'::oid",
                               count, dboid);
-        else
-            printfPQExpBuffer(buf, "SELECT datconfig[%d] FROM pg_database WHERE oid = '%u'::oid", count, dboid);

         res = ExecuteSqlQuery(AH, buf->data, PGRES_TUPLES_OK);

@@ -3251,9 +3116,6 @@ dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf,
     }

     /* Now look for role-and-database-specific options */
-    if (AH->remoteVersion >= 90000)
-    {
-        /* Here we can assume we have unnest() */
         printfPQExpBuffer(buf, "SELECT rolname, unnest(setconfig) "
                           "FROM pg_db_role_setting s, pg_roles r "
                           "WHERE setrole = r.oid AND setdatabase = '%u'::oid",
@@ -3273,7 +3135,6 @@ dumpDatabaseConfig(Archive *AH, PQExpBuffer outbuf,
         }

         PQclear(res);
-    }

     destroyPQExpBuffer(buf);
 }
@@ -3412,7 +3273,7 @@ getBlobs(Archive *fout)

     pg_log_info("reading large objects");

-    /* Fetch BLOB OIDs, and owner/ACL data if >= 9.0 */
+    /* Fetch BLOB OIDs and owner/ACL data */
     if (fout->remoteVersion >= 90600)
     {
         PQExpBuffer acl_subquery = createPQExpBuffer();
@@ -3446,20 +3307,13 @@ getBlobs(Archive *fout)
         destroyPQExpBuffer(init_acl_subquery);
         destroyPQExpBuffer(init_racl_subquery);
     }
-    else if (fout->remoteVersion >= 90000)
+    else
         appendPQExpBuffer(blobQry,
                           "SELECT oid, (%s lomowner) AS rolname, lomacl, "
                           "NULL AS rlomacl, NULL AS initlomacl, "
                           "NULL AS initrlomacl "
                           " FROM pg_largeobject_metadata",
                           username_subquery);
-    else
-        appendPQExpBufferStr(blobQry,
-                             "SELECT DISTINCT loid AS oid, "
-                             "NULL::name AS rolname, NULL::oid AS lomacl, "
-                             "NULL::oid AS rlomacl, NULL::oid AS initlomacl, "
-                             "NULL::oid AS initrlomacl "
-                             " FROM pg_largeobject");

     res = ExecuteSqlQuery(fout, blobQry->data, PGRES_TUPLES_OK);

@@ -3598,14 +3452,9 @@ dumpBlobs(Archive *fout, const void *arg)
      * Currently, we re-fetch all BLOB OIDs using a cursor.  Consider scanning
      * the already-in-memory dumpable objects instead...
      */
-    if (fout->remoteVersion >= 90000)
         blobQry =
             "DECLARE bloboid CURSOR FOR "
             "SELECT oid FROM pg_largeobject_metadata ORDER BY 1";
-    else
-        blobQry =
-            "DECLARE bloboid CURSOR FOR "
-            "SELECT DISTINCT loid FROM pg_largeobject ORDER BY 1";

     ExecuteSqlStatement(fout, blobQry);

@@ -4593,7 +4442,6 @@ binary_upgrade_set_type_oids_by_type_oid(Archive *fout,
                       "SELECT pg_catalog.binary_upgrade_set_next_pg_type_oid('%u'::pg_catalog.oid);\n\n",
                       pg_type_oid);

-    /* we only support old >= 8.3 for binary upgrades */
     appendPQExpBuffer(upgrade_query,
                       "SELECT typarray "
                       "FROM pg_catalog.pg_type "
@@ -5169,7 +5017,7 @@ getTypes(Archive *fout, int *numTypes)
                           "FROM pg_type",
                           username_subquery);
     }
-    else if (fout->remoteVersion >= 80300)
+    else
     {
         appendPQExpBuffer(query, "SELECT tableoid, oid, typname, "
                           "typnamespace, NULL AS typacl, NULL as rtypacl, "
@@ -5184,20 +5032,6 @@ getTypes(Archive *fout, int *numTypes)
                           "FROM pg_type",
                           username_subquery);
     }
-    else
-    {
-        appendPQExpBuffer(query, "SELECT tableoid, oid, typname, "
-                          "typnamespace, NULL AS typacl, NULL as rtypacl, "
-                          "NULL AS inittypacl, NULL AS initrtypacl, "
-                          "(%s typowner) AS rolname, "
-                          "typelem, typrelid, "
-                          "CASE WHEN typrelid = 0 THEN ' '::\"char\" "
-                          "ELSE (SELECT relkind FROM pg_class WHERE oid = typrelid) END AS typrelkind, "
-                          "typtype, typisdefined, "
-                          "typname[0] = '_' AND typelem != 0 AS isarray "
-                          "FROM pg_type",
-                          username_subquery);
-    }

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -5722,13 +5556,6 @@ getOpfamilies(Archive *fout, int *numOpfamilies)
     int            i_opfnamespace;
     int            i_rolname;

-    /* Before 8.3, there is no separate concept of opfamilies */
-    if (fout->remoteVersion < 80300)
-    {
-        *numOpfamilies = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     /*
@@ -5870,7 +5697,7 @@ getAggregates(Archive *fout, int *numAggs)
         destroyPQExpBuffer(initacl_subquery);
         destroyPQExpBuffer(initracl_subquery);
     }
-    else if (fout->remoteVersion >= 80200)
+    else
     {
         appendPQExpBuffer(query, "SELECT tableoid, oid, proname AS aggname, "
                           "pronamespace AS aggnamespace, "
@@ -5894,22 +5721,6 @@ getAggregates(Archive *fout, int *numAggs)
                                  "deptype = 'e')");
         appendPQExpBufferChar(query, ')');
     }
-    else
-    {
-        appendPQExpBuffer(query, "SELECT tableoid, oid, proname AS aggname, "
-                          "pronamespace AS aggnamespace, "
-                          "CASE WHEN proargtypes[0] = 'pg_catalog.\"any\"'::pg_catalog.regtype THEN 0 ELSE 1 END AS pronargs, "
-                          "proargtypes, "
-                          "(%s proowner) AS rolname, "
-                          "proacl AS aggacl, "
-                          "NULL AS raggacl, "
-                          "NULL AS initaggacl, NULL AS initraggacl "
-                          "FROM pg_proc "
-                          "WHERE proisagg "
-                          "AND pronamespace != "
-                          "(SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog')",
-                          username_subquery);
-    }

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -6286,6 +6097,7 @@ getTables(Archive *fout, int *numTables)
                       "(%s c.relowner) AS rolname, "
                       "c.relchecks, "
                       "c.relhasindex, c.relhasrules, c.relpages, "
+                      "c.relhastriggers, "
                       "d.refobjid AS owning_tab, "
                       "d.refobjsubid AS owning_col, "
                       "tsp.spcname AS reltablespace, ",
@@ -6298,13 +6110,6 @@ getTables(Archive *fout, int *numTables)
         appendPQExpBufferStr(query,
                              "c.relhasoids, ");

-    if (fout->remoteVersion >= 80400)
-        appendPQExpBufferStr(query,
-                             "c.relhastriggers, ");
-    else
-        appendPQExpBufferStr(query,
-                             "(c.reltriggers <> 0) AS relhastriggers, ");
-
     if (fout->remoteVersion >= 90100)
         appendPQExpBufferStr(query,
                              "c.relpersistence, ");
@@ -6334,14 +6139,9 @@ getTables(Archive *fout, int *numTables)
                              "false AS relrowsecurity, "
                              "false AS relforcerowsecurity, ");

-    if (fout->remoteVersion >= 80200)
-        appendPQExpBufferStr(query,
-                             "c.relfrozenxid, tc.relfrozenxid AS tfrozenxid, "
-                             "tc.oid AS toid, ");
-    else
-        appendPQExpBufferStr(query,
-                             "0 AS relfrozenxid, 0 AS tfrozenxid, "
-                             "0 AS toid, ");
+    appendPQExpBufferStr(query,
+                         "c.relfrozenxid, tc.relfrozenxid AS tfrozenxid, "
+                         "tc.oid AS toid, ");

     if (fout->remoteVersion >= 90300)
         appendPQExpBufferStr(query,
@@ -6355,26 +6155,15 @@ getTables(Archive *fout, int *numTables)
                              "array_remove(array_remove(c.reloptions,'check_option=local'),'check_option=cascaded') AS
reloptions," 
                              "CASE WHEN 'check_option=local' = ANY (c.reloptions) THEN 'LOCAL'::text "
                              "WHEN 'check_option=cascaded' = ANY (c.reloptions) THEN 'CASCADED'::text ELSE NULL END AS
checkoption,"); 
-    else if (fout->remoteVersion >= 80200)
-        appendPQExpBufferStr(query,
-                             "c.reloptions, NULL AS checkoption, ");
     else
         appendPQExpBufferStr(query,
-                             "NULL AS reloptions, NULL AS checkoption, ");
+                             "c.reloptions, NULL AS checkoption, ");

-    if (fout->remoteVersion >= 80400)
-        appendPQExpBufferStr(query,
-                             "tc.reloptions AS toast_reloptions, ");
-    else
-        appendPQExpBufferStr(query,
-                             "NULL AS toast_reloptions, ");
+    appendPQExpBufferStr(query,
+                         "tc.reloptions AS toast_reloptions, ");

-    if (fout->remoteVersion >= 90000)
-        appendPQExpBufferStr(query,
-                             "CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, ");
-    else
-        appendPQExpBufferStr(query,
-                             "NULL AS reloftype, ");
+    appendPQExpBufferStr(query,
+                         "CASE WHEN c.reloftype <> 0 THEN c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, ");

     if (fout->remoteVersion >= 90100)
         appendPQExpBufferStr(query,
@@ -6479,7 +6268,8 @@ getTables(Archive *fout, int *numTables)

     /*
      * Left join to pg_depend to pick up dependency info linking sequences to
-     * their owning column, if any (note this dependency is AUTO as of 8.2).
+     * their owning column, if any (note this dependency is AUTO except for
+     * identity sequences, where it's INTERNAL).
      * Also join to pg_tablespace to collect the spcname.
      */
     appendPQExpBufferStr(query,
@@ -6505,15 +6295,12 @@ getTables(Archive *fout, int *numTables)
                              "LEFT JOIN pg_am am ON (c.relam = am.oid)\n");

     /*
-     * We don't need any data from the TOAST table before 8.2.
-     *
      * We purposefully ignore toast OIDs for partitioned tables; the reason is
      * that versions 10 and 11 have them, but later versions do not, so
      * emitting them causes the upgrade to fail.
      */
-    if (fout->remoteVersion >= 80200)
-        appendPQExpBufferStr(query,
-                             "LEFT JOIN pg_class tc ON (c.reltoastrelid = tc.oid AND c.relkind <> " CppAsString2(RELKIND_PARTITIONED_TABLE) ")\n");
+    appendPQExpBufferStr(query,
+                         "LEFT JOIN pg_class tc ON (c.reltoastrelid = tc.oid AND c.relkind <> " CppAsString2(RELKIND_PARTITIONED_TABLE) ")\n");

     /*
      * Restrict to interesting relkinds (in particular, not indexes).  Not all
@@ -7001,7 +6788,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
                               "ORDER BY indexname",
                               tbinfo->dobj.catId.oid);
         }
-        else if (fout->remoteVersion >= 90000)
+        else
         {
             /*
              * the test on indisready is necessary in 9.2, and harmless in
@@ -7036,73 +6823,6 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
                               "ORDER BY indexname",
                               tbinfo->dobj.catId.oid);
         }
-        else if (fout->remoteVersion >= 80200)
-        {
-            appendPQExpBuffer(query,
-                              "SELECT t.tableoid, t.oid, "
-                              "t.relname AS indexname, "
-                              "0 AS parentidx, "
-                              "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, "
-                              "i.indnatts AS indnkeyatts, "
-                              "i.indnatts AS indnatts, "
-                              "i.indkey, i.indisclustered, "
-                              "false AS indisreplident, "
-                              "c.contype, c.conname, "
-                              "c.condeferrable, c.condeferred, "
-                              "c.tableoid AS contableoid, "
-                              "c.oid AS conoid, "
-                              "null AS condef, "
-                              "(SELECT spcname FROM pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) AS tablespace, "
-                              "t.reloptions AS indreloptions, "
-                              "'' AS indstatcols, "
-                              "'' AS indstatvals "
-                              "FROM pg_catalog.pg_index i "
-                              "JOIN pg_catalog.pg_class t ON (t.oid = i.indexrelid) "
-                              "LEFT JOIN pg_catalog.pg_depend d "
-                              "ON (d.classid = t.tableoid "
-                              "AND d.objid = t.oid "
-                              "AND d.deptype = 'i') "
-                              "LEFT JOIN pg_catalog.pg_constraint c "
-                              "ON (d.refclassid = c.tableoid "
-                              "AND d.refobjid = c.oid) "
-                              "WHERE i.indrelid = '%u'::pg_catalog.oid "
-                              "AND i.indisvalid "
-                              "ORDER BY indexname",
-                              tbinfo->dobj.catId.oid);
-        }
-        else
-        {
-            appendPQExpBuffer(query,
-                              "SELECT t.tableoid, t.oid, "
-                              "t.relname AS indexname, "
-                              "0 AS parentidx, "
-                              "pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef, "
-                              "t.relnatts AS indnkeyatts, "
-                              "t.relnatts AS indnatts, "
-                              "i.indkey, i.indisclustered, "
-                              "false AS indisreplident, "
-                              "c.contype, c.conname, "
-                              "c.condeferrable, c.condeferred, "
-                              "c.tableoid AS contableoid, "
-                              "c.oid AS conoid, "
-                              "null AS condef, "
-                              "(SELECT spcname FROM pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) AS tablespace, "
-                              "null AS indreloptions, "
-                              "'' AS indstatcols, "
-                              "'' AS indstatvals "
-                              "FROM pg_catalog.pg_index i "
-                              "JOIN pg_catalog.pg_class t ON (t.oid = i.indexrelid) "
-                              "LEFT JOIN pg_catalog.pg_depend d "
-                              "ON (d.classid = t.tableoid "
-                              "AND d.objid = t.oid "
-                              "AND d.deptype = 'i') "
-                              "LEFT JOIN pg_catalog.pg_constraint c "
-                              "ON (d.refclassid = c.tableoid "
-                              "AND d.refobjid = c.oid) "
-                              "WHERE i.indrelid = '%u'::pg_catalog.oid "
-                              "ORDER BY indexname",
-                              tbinfo->dobj.catId.oid);
-        }

         res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -7557,24 +7277,12 @@ getRules(Archive *fout, int *numRules)
     int            i_is_instead;
     int            i_ev_enabled;

-    if (fout->remoteVersion >= 80300)
-    {
-        appendPQExpBufferStr(query, "SELECT "
-                             "tableoid, oid, rulename, "
-                             "ev_class AS ruletable, ev_type, is_instead, "
-                             "ev_enabled "
-                             "FROM pg_rewrite "
-                             "ORDER BY oid");
-    }
-    else
-    {
-        appendPQExpBufferStr(query, "SELECT "
-                             "tableoid, oid, rulename, "
-                             "ev_class AS ruletable, ev_type, is_instead, "
-                             "'O'::char AS ev_enabled "
-                             "FROM pg_rewrite "
-                             "ORDER BY oid");
-    }
+    appendPQExpBufferStr(query, "SELECT "
+                         "tableoid, oid, rulename, "
+                         "ev_class AS ruletable, ev_type, is_instead, "
+                         "ev_enabled "
+                         "FROM pg_rewrite "
+                         "ORDER BY oid");

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -7741,7 +7449,7 @@ getTriggers(Archive *fout, TableInfo tblinfo[], int numTables)
                               tbinfo->ispartition ?
                               " OR t.tgenabled != pt.tgenabled" : "");
         }
-        else if (fout->remoteVersion >= 90000)
+        else
         {
             /* See above about pretty=true in pg_get_triggerdef */
             appendPQExpBuffer(query,
@@ -7755,48 +7463,6 @@ getTriggers(Archive *fout, TableInfo tblinfo[], int numTables)
                               "AND NOT tgisinternal",
                               tbinfo->dobj.catId.oid);
         }
-        else if (fout->remoteVersion >= 80300)
-        {
-            /*
-             * We ignore triggers that are tied to a foreign-key constraint
-             */
-            appendPQExpBuffer(query,
-                              "SELECT tgname, "
-                              "tgfoid::pg_catalog.regproc AS tgfname, "
-                              "tgtype, tgnargs, tgargs, tgenabled, "
-                              "false as tgisinternal, "
-                              "tgisconstraint, tgconstrname, tgdeferrable, "
-                              "tgconstrrelid, tginitdeferred, tableoid, oid, "
-                              "tgconstrrelid::pg_catalog.regclass AS tgconstrrelname "
-                              "FROM pg_catalog.pg_trigger t "
-                              "WHERE tgrelid = '%u'::pg_catalog.oid "
-                              "AND tgconstraint = 0",
-                              tbinfo->dobj.catId.oid);
-        }
-        else
-        {
-            /*
-             * We ignore triggers that are tied to a foreign-key constraint,
-             * but in these versions we have to grovel through pg_constraint
-             * to find out
-             */
-            appendPQExpBuffer(query,
-                              "SELECT tgname, "
-                              "tgfoid::pg_catalog.regproc AS tgfname, "
-                              "tgtype, tgnargs, tgargs, tgenabled, "
-                              "false as tgisinternal, "
-                              "tgisconstraint, tgconstrname, tgdeferrable, "
-                              "tgconstrrelid, tginitdeferred, tableoid, oid, "
-                              "tgconstrrelid::pg_catalog.regclass AS tgconstrrelname "
-                              "FROM pg_catalog.pg_trigger t "
-                              "WHERE tgrelid = '%u'::pg_catalog.oid "
-                              "AND (NOT tgisconstraint "
-                              " OR NOT EXISTS"
-                              "  (SELECT 1 FROM pg_catalog.pg_depend d "
-                              "   JOIN pg_catalog.pg_constraint c ON (d.refclassid = c.tableoid AND d.refobjid = c.oid) "
-                              "   WHERE d.classid = t.tableoid AND d.objid = t.oid AND d.deptype = 'i' AND c.contype = 'f'))",
-                              tbinfo->dobj.catId.oid);
-        }

         res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -8049,7 +7715,7 @@ getProcLangs(Archive *fout, int *numProcLangs)
         destroyPQExpBuffer(initacl_subquery);
         destroyPQExpBuffer(initracl_subquery);
     }
-    else if (fout->remoteVersion >= 90000)
+    else
     {
         /* pg_language has a laninline column */
         appendPQExpBuffer(query, "SELECT tableoid, oid, "
@@ -8062,48 +7728,6 @@ getProcLangs(Archive *fout, int *numProcLangs)
                           "ORDER BY oid",
                           username_subquery);
     }
-    else if (fout->remoteVersion >= 80300)
-    {
-        /* pg_language has a lanowner column */
-        appendPQExpBuffer(query, "SELECT tableoid, oid, "
-                          "lanname, lanpltrusted, lanplcallfoid, "
-                          "0 AS laninline, lanvalidator, lanacl, "
-                          "NULL AS rlanacl, "
-                          "NULL AS initlanacl, NULL AS initrlanacl, "
-                          "(%s lanowner) AS lanowner "
-                          "FROM pg_language "
-                          "WHERE lanispl "
-                          "ORDER BY oid",
-                          username_subquery);
-    }
-    else if (fout->remoteVersion >= 80100)
-    {
-        /* Languages are owned by the bootstrap superuser, OID 10 */
-        appendPQExpBuffer(query, "SELECT tableoid, oid, "
-                          "lanname, lanpltrusted, lanplcallfoid, "
-                          "0 AS laninline, lanvalidator, lanacl, "
-                          "NULL AS rlanacl, "
-                          "NULL AS initlanacl, NULL AS initrlanacl, "
-                          "(%s '10') AS lanowner "
-                          "FROM pg_language "
-                          "WHERE lanispl "
-                          "ORDER BY oid",
-                          username_subquery);
-    }
-    else
-    {
-        /* Languages are owned by the bootstrap superuser, sysid 1 */
-        appendPQExpBuffer(query, "SELECT tableoid, oid, "
-                          "lanname, lanpltrusted, lanplcallfoid, "
-                          "0 AS laninline, lanvalidator, lanacl, "
-                          "NULL AS rlanacl, "
-                          "NULL AS initlanacl, NULL AS initrlanacl, "
-                          "(%s '1') AS lanowner "
-                          "FROM pg_language "
-                          "WHERE lanispl "
-                          "ORDER BY oid",
-                          username_subquery);
-    }

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -8199,18 +7823,11 @@ getCasts(Archive *fout, int *numCasts)
                              ") "
                              "ORDER BY 3,4");
     }
-    else if (fout->remoteVersion >= 80400)
-    {
-        appendPQExpBufferStr(query, "SELECT tableoid, oid, "
-                             "castsource, casttarget, castfunc, castcontext, "
-                             "castmethod "
-                             "FROM pg_cast ORDER BY 3,4");
-    }
     else
     {
         appendPQExpBufferStr(query, "SELECT tableoid, oid, "
                              "castsource, casttarget, castfunc, castcontext, "
-                             "CASE WHEN castfunc = 0 THEN 'b' ELSE 'f' END AS castmethod "
+                             "castmethod "
                              "FROM pg_cast ORDER BY 3,4");
     }

@@ -8460,14 +8077,8 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
                              "a.attlen,\n"
                              "a.attalign,\n"
                              "a.attislocal,\n"
-                             "pg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,\n");
-
-        if (fout->remoteVersion >= 90000)
-            appendPQExpBufferStr(q,
-                                 "array_to_string(a.attoptions, ', ') AS attoptions,\n");
-        else
-            appendPQExpBufferStr(q,
-                                 "'' AS attoptions,\n");
+                             "pg_catalog.format_type(t.oid, a.atttypmod) AS atttypname,\n"
+                             "array_to_string(a.attoptions, ', ') AS attoptions,\n");

         if (fout->remoteVersion >= 90100)
         {
@@ -8759,7 +8370,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
                                   "ORDER BY conname",
                                   tbinfo->dobj.catId.oid);
             }
-            else if (fout->remoteVersion >= 80400)
+            else
             {
                 /* conislocal is new in 8.4 */
                 appendPQExpBuffer(q, "SELECT tableoid, oid, conname, "
@@ -8771,17 +8382,6 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
                                   "ORDER BY conname",
                                   tbinfo->dobj.catId.oid);
             }
-            else
-            {
-                appendPQExpBuffer(q, "SELECT tableoid, oid, conname, "
-                                  "pg_catalog.pg_get_constraintdef(oid) AS consrc, "
-                                  "true AS conislocal, true AS convalidated "
-                                  "FROM pg_catalog.pg_constraint "
-                                  "WHERE conrelid = '%u'::pg_catalog.oid "
-                                  "   AND contype = 'c' "
-                                  "ORDER BY conname",
-                                  tbinfo->dobj.catId.oid);
-            }

             res = ExecuteSqlQuery(fout, q->data, PGRES_TUPLES_OK);

@@ -8907,13 +8507,6 @@ getTSParsers(Archive *fout, int *numTSParsers)
     int            i_prsheadline;
     int            i_prslextype;

-    /* Before 8.3, there is no built-in text search support */
-    if (fout->remoteVersion < 80300)
-    {
-        *numTSParsers = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     /*
@@ -8995,13 +8588,6 @@ getTSDictionaries(Archive *fout, int *numTSDicts)
     int            i_dicttemplate;
     int            i_dictinitoption;

-    /* Before 8.3, there is no built-in text search support */
-    if (fout->remoteVersion < 80300)
-    {
-        *numTSDicts = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     appendPQExpBuffer(query, "SELECT tableoid, oid, dictname, "
@@ -9077,13 +8663,6 @@ getTSTemplates(Archive *fout, int *numTSTemplates)
     int            i_tmplinit;
     int            i_tmpllexize;

-    /* Before 8.3, there is no built-in text search support */
-    if (fout->remoteVersion < 80300)
-    {
-        *numTSTemplates = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     appendPQExpBufferStr(query, "SELECT tableoid, oid, tmplname, "
@@ -9152,13 +8731,6 @@ getTSConfigurations(Archive *fout, int *numTSConfigs)
     int            i_rolname;
     int            i_cfgparser;

-    /* Before 8.3, there is no built-in text search support */
-    if (fout->remoteVersion < 80300)
-    {
-        *numTSConfigs = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     appendPQExpBuffer(query, "SELECT tableoid, oid, cfgname, "
@@ -9234,13 +8806,6 @@ getForeignDataWrappers(Archive *fout, int *numForeignDataWrappers)
     int            i_initrfdwacl;
     int            i_fdwoptions;

-    /* Before 8.4, there are no foreign-data wrappers */
-    if (fout->remoteVersion < 80400)
-    {
-        *numForeignDataWrappers = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     if (fout->remoteVersion >= 90600)
@@ -9401,13 +8966,6 @@ getForeignServers(Archive *fout, int *numForeignServers)
     int            i_initrsrvacl;
     int            i_srvoptions;

-    /* Before 8.4, there are no foreign servers */
-    if (fout->remoteVersion < 80400)
-    {
-        *numForeignServers = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     if (fout->remoteVersion >= 90600)
@@ -9548,12 +9106,6 @@ getDefaultACLs(Archive *fout, int *numDefaultACLs)
     int            i,
                 ntups;

-    if (fout->remoteVersion < 90000)
-    {
-        *numDefaultACLs = 0;
-        return NULL;
-    }
-
     query = createPQExpBuffer();

     if (fout->remoteVersion >= 90600)
@@ -10269,9 +9821,7 @@ dumpNamespace(Archive *fout, const NamespaceInfo *nspinfo)
         const char *initdb_comment = NULL;

         if (!nspinfo->create && strcmp(qnspname, "public") == 0)
-            initdb_comment = (fout->remoteVersion >= 80300 ?
-                              "standard public schema" :
-                              "Standard public schema");
+            initdb_comment = "standard public schema";
         dumpCommentExtended(fout, "SCHEMA", qnspname,
                             NULL, nspinfo->rolname,
                             nspinfo->dobj.catId, 0, nspinfo->dobj.dumpId,
@@ -10845,24 +10395,11 @@ dumpBaseType(Archive *fout, const TypeInfo *tyinfo)
                          "typsend::pg_catalog.oid AS typsendoid, "
                          "typanalyze, "
                          "typanalyze::pg_catalog.oid AS typanalyzeoid, "
-                         "typdelim, typbyval, typalign, typstorage, ");
-
-    if (fout->remoteVersion >= 80300)
-        appendPQExpBufferStr(query,
-                             "typmodin, typmodout, "
-                             "typmodin::pg_catalog.oid AS typmodinoid, "
-                             "typmodout::pg_catalog.oid AS typmodoutoid, ");
-    else
-        appendPQExpBufferStr(query,
-                             "'-' AS typmodin, '-' AS typmodout, "
-                             "0 AS typmodinoid, 0 AS typmodoutoid, ");
-
-    if (fout->remoteVersion >= 80400)
-        appendPQExpBufferStr(query,
-                             "typcategory, typispreferred, ");
-    else
-        appendPQExpBufferStr(query,
-                             "'U' AS typcategory, false AS typispreferred, ");
+                         "typdelim, typbyval, typalign, typstorage, "
+                         "typmodin, typmodout, "
+                         "typmodin::pg_catalog.oid AS typmodinoid, "
+                         "typmodout::pg_catalog.oid AS typmodoutoid, "
+                         "typcategory, typispreferred, ");

     if (fout->remoteVersion >= 90100)
         appendPQExpBufferStr(query, "(typcollation <> 0) AS typcollatable, ");
@@ -10877,13 +10414,8 @@ dumpBaseType(Archive *fout, const TypeInfo *tyinfo)
         appendPQExpBufferStr(query,
                              "'-' AS typsubscript, 0 AS typsubscriptoid, ");

-    /* Before 8.4, pg_get_expr does not allow 0 for its second arg */
-    if (fout->remoteVersion >= 80400)
-        appendPQExpBufferStr(query,
-                             "pg_catalog.pg_get_expr(typdefaultbin, 0) AS typdefaultbin, typdefault ");
-    else
-        appendPQExpBufferStr(query,
-                             "pg_catalog.pg_get_expr(typdefaultbin, 'pg_catalog.pg_type'::pg_catalog.regclass) AS typdefaultbin, typdefault ");
+    appendPQExpBufferStr(query,
+                         "pg_catalog.pg_get_expr(typdefaultbin, 0) AS typdefaultbin, typdefault ");

     appendPQExpBuffer(query, "FROM pg_catalog.pg_type "
                       "WHERE oid = '%u'::pg_catalog.oid",
@@ -11758,82 +11290,11 @@ format_function_arguments(const FuncInfo *finfo, const char *funcargs, bool is_a
     return fn.data;
 }

-/*
- * format_function_arguments_old: generate function name and argument list
- *
- * The argument type names are qualified if needed.  The function name
- * is never qualified.
- *
- * This is used only with pre-8.4 servers, so we aren't expecting to see
- * VARIADIC or TABLE arguments, nor are there any defaults for arguments.
- *
- * Any or all of allargtypes, argmodes, argnames may be NULL.
- */
-static char *
-format_function_arguments_old(Archive *fout,
-                              const FuncInfo *finfo, int nallargs,
-                              char **allargtypes,
-                              char **argmodes,
-                              char **argnames)
-{
-    PQExpBufferData fn;
-    int            j;
-
-    initPQExpBuffer(&fn);
-    appendPQExpBuffer(&fn, "%s(", fmtId(finfo->dobj.name));
-    for (j = 0; j < nallargs; j++)
-    {
-        Oid            typid;
-        const char *typname;
-        const char *argmode;
-        const char *argname;
-
-        typid = allargtypes ? atooid(allargtypes[j]) : finfo->argtypes[j];
-        typname = getFormattedTypeName(fout, typid, zeroIsError);
-
-        if (argmodes)
-        {
-            switch (argmodes[j][0])
-            {
-                case PROARGMODE_IN:
-                    argmode = "";
-                    break;
-                case PROARGMODE_OUT:
-                    argmode = "OUT ";
-                    break;
-                case PROARGMODE_INOUT:
-                    argmode = "INOUT ";
-                    break;
-                default:
-                    pg_log_warning("bogus value in proargmodes array");
-                    argmode = "";
-                    break;
-            }
-        }
-        else
-            argmode = "";
-
-        argname = argnames ? argnames[j] : (char *) NULL;
-        if (argname && argname[0] == '\0')
-            argname = NULL;
-
-        appendPQExpBuffer(&fn, "%s%s%s%s%s",
-                          (j > 0) ? ", " : "",
-                          argmode,
-                          argname ? fmtId(argname) : "",
-                          argname ? " " : "",
-                          typname);
-    }
-    appendPQExpBufferChar(&fn, ')');
-    return fn.data;
-}
-
 /*
  * format_function_signature: generate function name and argument list
  *
- * This is like format_function_arguments_old except that only a minimal
- * list of input argument types is generated; this is sufficient to
- * reference the function, but not to define it.
+ * Only a minimal list of input argument types is generated; this is
+ * sufficient to reference the function, but not to define it.
  *
  * If honor_quotes is false then the function name is never quoted.
  * This is appropriate for use in TOC tags, but not in SQL commands.
@@ -11929,40 +11390,13 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
                          "provolatile,\n"
                          "proisstrict,\n"
                          "prosecdef,\n"
-                         "lanname,\n");
-
-    if (fout->remoteVersion >= 80300)
-        appendPQExpBufferStr(query,
-                             "proconfig,\n"
-                             "procost,\n"
-                             "prorows,\n");
-    else
-        appendPQExpBufferStr(query,
-                             "null AS proconfig,\n"
-                             "0 AS procost,\n"
-                             "0 AS prorows,\n");
-
-    if (fout->remoteVersion >= 80400)
-    {
-        /*
-         * In 8.4 and up we rely on pg_get_function_arguments and
-         * pg_get_function_result instead of examining proallargtypes etc.
-         */
-        appendPQExpBufferStr(query,
-                             "pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n"
-                             "pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n"
-                             "pg_catalog.pg_get_function_result(p.oid) AS funcresult,\n");
-    }
-    else if (fout->remoteVersion >= 80100)
-        appendPQExpBufferStr(query,
-                             "proallargtypes,\n"
-                             "proargmodes,\n"
-                             "proargnames,\n");
-    else
-        appendPQExpBufferStr(query,
-                             "null AS proallargtypes,\n"
-                             "null AS proargmodes,\n"
-                             "proargnames,\n");
+                         "lanname,\n"
+                         "proconfig,\n"
+                         "procost,\n"
+                         "prorows,\n"
+                         "pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n"
+                         "pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n"
+                         "pg_catalog.pg_get_function_result(p.oid) AS funcresult,\n");

     if (fout->remoteVersion >= 90200)
         appendPQExpBufferStr(query,
@@ -11985,12 +11419,9 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
     if (fout->remoteVersion >= 110000)
         appendPQExpBufferStr(query,
                              "prokind,\n");
-    else if (fout->remoteVersion >= 80400)
-        appendPQExpBufferStr(query,
-                             "CASE WHEN proiswindow THEN 'w' ELSE 'f' END AS prokind,\n");
     else
         appendPQExpBufferStr(query,
-                             "'f' AS prokind,\n");
+                             "CASE WHEN proiswindow THEN 'w' ELSE 'f' END AS prokind,\n");

     if (fout->remoteVersion >= 120000)
         appendPQExpBufferStr(query,
@@ -12027,20 +11458,10 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
         probin = NULL;
         prosqlbody = PQgetvalue(res, 0, PQfnumber(res, "prosqlbody"));
     }
-    if (fout->remoteVersion >= 80400)
-    {
-        funcargs = PQgetvalue(res, 0, PQfnumber(res, "funcargs"));
-        funciargs = PQgetvalue(res, 0, PQfnumber(res, "funciargs"));
-        funcresult = PQgetvalue(res, 0, PQfnumber(res, "funcresult"));
-        proallargtypes = proargmodes = proargnames = NULL;
-    }
-    else
-    {
-        proallargtypes = PQgetvalue(res, 0, PQfnumber(res, "proallargtypes"));
-        proargmodes = PQgetvalue(res, 0, PQfnumber(res, "proargmodes"));
-        proargnames = PQgetvalue(res, 0, PQfnumber(res, "proargnames"));
-        funcargs = funciargs = funcresult = NULL;
-    }
+    funcargs = PQgetvalue(res, 0, PQfnumber(res, "funcargs"));
+    funciargs = PQgetvalue(res, 0, PQfnumber(res, "funciargs"));
+    funcresult = PQgetvalue(res, 0, PQfnumber(res, "funcresult"));
+    proallargtypes = proargmodes = proargnames = NULL;
     if (PQfnumber(res, "protrftypes") != -1)
         protrftypes = PQgetvalue(res, 0, PQfnumber(res, "protrftypes"));
     else
@@ -12156,17 +11577,8 @@ dumpFunc(Archive *fout, const FuncInfo *finfo)
         nconfigitems = 0;
     }

-    if (funcargs)
-    {
-        /* 8.4 or later; we rely on server-side code for most of the work */
-        funcfullsig = format_function_arguments(finfo, funcargs, false);
-        funcsig = format_function_arguments(finfo, funciargs, false);
-    }
-    else
-        /* pre-8.4, do it ourselves */
-        funcsig = format_function_arguments_old(fout,
-                                                finfo, nallargs, allargtypes,
-                                                argmodes, argnames);
+    funcfullsig = format_function_arguments(finfo, funcargs, false);
+    funcsig = format_function_arguments(finfo, funciargs, false);

     funcsig_tag = format_function_signature(fout, finfo, false);

@@ -12677,8 +12089,6 @@ dumpOpr(Archive *fout, const OprInfo *oprinfo)
     oprid = createPQExpBuffer();
     details = createPQExpBuffer();

-    if (fout->remoteVersion >= 80300)
-    {
         appendPQExpBuffer(query, "SELECT oprkind, "
                           "oprcode::pg_catalog.regprocedure, "
                           "oprleft::pg_catalog.regtype, "
@@ -12691,23 +12101,6 @@ dumpOpr(Archive *fout, const OprInfo *oprinfo)
                           "FROM pg_catalog.pg_operator "
                           "WHERE oid = '%u'::pg_catalog.oid",
                           oprinfo->dobj.catId.oid);
-    }
-    else
-    {
-        appendPQExpBuffer(query, "SELECT oprkind, "
-                          "oprcode::pg_catalog.regprocedure, "
-                          "oprleft::pg_catalog.regtype, "
-                          "oprright::pg_catalog.regtype, "
-                          "oprcom, "
-                          "oprnegate, "
-                          "oprrest::pg_catalog.regprocedure, "
-                          "oprjoin::pg_catalog.regprocedure, "
-                          "(oprlsortop != 0) AS oprcanmerge, "
-                          "oprcanhash "
-                          "FROM pg_catalog.pg_operator "
-                          "WHERE oid = '%u'::pg_catalog.oid",
-                          oprinfo->dobj.catId.oid);
-    }

     res = ExecuteSqlQueryForSingleRow(fout, query->data);

@@ -13031,7 +12424,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
     int            i_opcfamilynsp;
     int            i_amname;
     int            i_amopstrategy;
-    int            i_amopreqcheck;
     int            i_amopopr;
     int            i_sortfamily;
     int            i_sortfamilynsp;
@@ -13047,7 +12439,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
     char       *opcfamilynsp;
     char       *amname;
     char       *amopstrategy;
-    char       *amopreqcheck;
     char       *amopopr;
     char       *sortfamily;
     char       *sortfamilynsp;
@@ -13068,8 +12459,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
     nameusing = createPQExpBuffer();

     /* Get additional fields from the pg_opclass row */
-    if (fout->remoteVersion >= 80300)
-    {
         appendPQExpBuffer(query, "SELECT opcintype::pg_catalog.regtype, "
                           "opckeytype::pg_catalog.regtype, "
                           "opcdefault, opcfamily, "
@@ -13081,19 +12470,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
                           "LEFT JOIN pg_catalog.pg_namespace n ON n.oid = opfnamespace "
                           "WHERE c.oid = '%u'::pg_catalog.oid",
                           opcinfo->dobj.catId.oid);
-    }
-    else
-    {
-        appendPQExpBuffer(query, "SELECT opcintype::pg_catalog.regtype, "
-                          "opckeytype::pg_catalog.regtype, "
-                          "opcdefault, NULL AS opcfamily, "
-                          "NULL AS opcfamilyname, "
-                          "NULL AS opcfamilynsp, "
-                          "(SELECT amname FROM pg_catalog.pg_am WHERE oid = opcamid) AS amname "
-                          "FROM pg_catalog.pg_opclass "
-                          "WHERE oid = '%u'::pg_catalog.oid",
-                          opcinfo->dobj.catId.oid);
-    }

     res = ExecuteSqlQueryForSingleRow(fout, query->data);

@@ -13153,18 +12529,12 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
      *
      * Print only those opfamily members that are tied to the opclass by
      * pg_depend entries.
-     *
-     * XXX RECHECK is gone as of 8.4, but we'll still print it if dumping an
-     * older server's opclass in which it is used.  This is to avoid
-     * hard-to-detect breakage if a newer pg_dump is used to dump from an
-     * older server and then reload into that old version.  This can go away
-     * once 8.3 is so old as to not be of interest to anyone.
      */
     resetPQExpBuffer(query);

     if (fout->remoteVersion >= 90100)
     {
-        appendPQExpBuffer(query, "SELECT amopstrategy, false AS amopreqcheck, "
+        appendPQExpBuffer(query, "SELECT amopstrategy, "
                           "amopopr::pg_catalog.regoperator, "
                           "opfname AS sortfamily, "
                           "nspname AS sortfamilynsp "
@@ -13179,23 +12549,9 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
                           opcinfo->dobj.catId.oid,
                           opcfamily);
     }
-    else if (fout->remoteVersion >= 80400)
-    {
-        appendPQExpBuffer(query, "SELECT amopstrategy, false AS amopreqcheck, "
-                          "amopopr::pg_catalog.regoperator, "
-                          "NULL AS sortfamily, "
-                          "NULL AS sortfamilynsp "
-                          "FROM pg_catalog.pg_amop ao, pg_catalog.pg_depend "
-                          "WHERE refclassid = 'pg_catalog.pg_opclass'::pg_catalog.regclass "
-                          "AND refobjid = '%u'::pg_catalog.oid "
-                          "AND classid = 'pg_catalog.pg_amop'::pg_catalog.regclass "
-                          "AND objid = ao.oid "
-                          "ORDER BY amopstrategy",
-                          opcinfo->dobj.catId.oid);
-    }
-    else if (fout->remoteVersion >= 80300)
+    else
     {
-        appendPQExpBuffer(query, "SELECT amopstrategy, amopreqcheck, "
+        appendPQExpBuffer(query, "SELECT amopstrategy, "
                           "amopopr::pg_catalog.regoperator, "
                           "NULL AS sortfamily, "
                           "NULL AS sortfamilynsp "
@@ -13207,28 +12563,12 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
                           "ORDER BY amopstrategy",
                           opcinfo->dobj.catId.oid);
     }
-    else
-    {
-        /*
-         * Here, we print all entries since there are no opfamilies and hence
-         * no loose operators to worry about.
-         */
-        appendPQExpBuffer(query, "SELECT amopstrategy, amopreqcheck, "
-                          "amopopr::pg_catalog.regoperator, "
-                          "NULL AS sortfamily, "
-                          "NULL AS sortfamilynsp "
-                          "FROM pg_catalog.pg_amop "
-                          "WHERE amopclaid = '%u'::pg_catalog.oid "
-                          "ORDER BY amopstrategy",
-                          opcinfo->dobj.catId.oid);
-    }

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

     ntups = PQntuples(res);

     i_amopstrategy = PQfnumber(res, "amopstrategy");
-    i_amopreqcheck = PQfnumber(res, "amopreqcheck");
     i_amopopr = PQfnumber(res, "amopopr");
     i_sortfamily = PQfnumber(res, "sortfamily");
     i_sortfamilynsp = PQfnumber(res, "sortfamilynsp");
@@ -13236,7 +12576,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
     for (i = 0; i < ntups; i++)
     {
         amopstrategy = PQgetvalue(res, i, i_amopstrategy);
-        amopreqcheck = PQgetvalue(res, i, i_amopreqcheck);
         amopopr = PQgetvalue(res, i, i_amopopr);
         sortfamily = PQgetvalue(res, i, i_sortfamily);
         sortfamilynsp = PQgetvalue(res, i, i_sortfamilynsp);
@@ -13254,9 +12593,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
             appendPQExpBufferStr(q, fmtId(sortfamily));
         }

-        if (strcmp(amopreqcheck, "t") == 0)
-            appendPQExpBufferStr(q, " RECHECK");
-
         needComma = true;
     }

@@ -13276,8 +12612,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
      */
     resetPQExpBuffer(query);

-    if (fout->remoteVersion >= 80300)
-    {
         appendPQExpBuffer(query, "SELECT amprocnum, "
                           "amproc::pg_catalog.regprocedure, "
                           "amproclefttype::pg_catalog.regtype, "
@@ -13289,18 +12623,6 @@ dumpOpclass(Archive *fout, const OpclassInfo *opcinfo)
                           "AND objid = ap.oid "
                           "ORDER BY amprocnum",
                           opcinfo->dobj.catId.oid);
-    }
-    else
-    {
-        appendPQExpBuffer(query, "SELECT amprocnum, "
-                          "amproc::pg_catalog.regprocedure, "
-                          "'' AS amproclefttype, "
-                          "'' AS amprocrighttype "
-                          "FROM pg_catalog.pg_amproc "
-                          "WHERE amopclaid = '%u'::pg_catalog.oid "
-                          "ORDER BY amprocnum",
-                          opcinfo->dobj.catId.oid);
-    }

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -13399,7 +12721,6 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
     int            ntups;
     int            i_amname;
     int            i_amopstrategy;
-    int            i_amopreqcheck;
     int            i_amopopr;
     int            i_sortfamily;
     int            i_sortfamilynsp;
@@ -13409,7 +12730,6 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
     int            i_amprocrighttype;
     char       *amname;
     char       *amopstrategy;
-    char       *amopreqcheck;
     char       *amopopr;
     char       *sortfamily;
     char       *sortfamilynsp;
@@ -13432,16 +12752,10 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
     /*
      * Fetch only those opfamily members that are tied directly to the
      * opfamily by pg_depend entries.
-     *
-     * XXX RECHECK is gone as of 8.4, but we'll still print it if dumping an
-     * older server's opclass in which it is used.  This is to avoid
-     * hard-to-detect breakage if a newer pg_dump is used to dump from an
-     * older server and then reload into that old version.  This can go away
-     * once 8.3 is so old as to not be of interest to anyone.
      */
     if (fout->remoteVersion >= 90100)
     {
-        appendPQExpBuffer(query, "SELECT amopstrategy, false AS amopreqcheck, "
+        appendPQExpBuffer(query, "SELECT amopstrategy, "
                           "amopopr::pg_catalog.regoperator, "
                           "opfname AS sortfamily, "
                           "nspname AS sortfamilynsp "
@@ -13456,23 +12770,9 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
                           opfinfo->dobj.catId.oid,
                           opfinfo->dobj.catId.oid);
     }
-    else if (fout->remoteVersion >= 80400)
-    {
-        appendPQExpBuffer(query, "SELECT amopstrategy, false AS amopreqcheck, "
-                          "amopopr::pg_catalog.regoperator, "
-                          "NULL AS sortfamily, "
-                          "NULL AS sortfamilynsp "
-                          "FROM pg_catalog.pg_amop ao, pg_catalog.pg_depend "
-                          "WHERE refclassid = 'pg_catalog.pg_opfamily'::pg_catalog.regclass "
-                          "AND refobjid = '%u'::pg_catalog.oid "
-                          "AND classid = 'pg_catalog.pg_amop'::pg_catalog.regclass "
-                          "AND objid = ao.oid "
-                          "ORDER BY amopstrategy",
-                          opfinfo->dobj.catId.oid);
-    }
     else
     {
-        appendPQExpBuffer(query, "SELECT amopstrategy, amopreqcheck, "
+        appendPQExpBuffer(query, "SELECT amopstrategy, "
                           "amopopr::pg_catalog.regoperator, "
                           "NULL AS sortfamily, "
                           "NULL AS sortfamilynsp "
@@ -13548,7 +12848,6 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
         ntups = PQntuples(res_ops);

         i_amopstrategy = PQfnumber(res_ops, "amopstrategy");
-        i_amopreqcheck = PQfnumber(res_ops, "amopreqcheck");
         i_amopopr = PQfnumber(res_ops, "amopopr");
         i_sortfamily = PQfnumber(res_ops, "sortfamily");
         i_sortfamilynsp = PQfnumber(res_ops, "sortfamilynsp");
@@ -13556,7 +12855,6 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
         for (i = 0; i < ntups; i++)
         {
             amopstrategy = PQgetvalue(res_ops, i, i_amopstrategy);
-            amopreqcheck = PQgetvalue(res_ops, i, i_amopreqcheck);
             amopopr = PQgetvalue(res_ops, i, i_amopopr);
             sortfamily = PQgetvalue(res_ops, i, i_sortfamily);
             sortfamilynsp = PQgetvalue(res_ops, i, i_sortfamilynsp);
@@ -13574,9 +12872,6 @@ dumpOpfamily(Archive *fout, const OpfamilyInfo *opfinfo)
                 appendPQExpBufferStr(q, fmtId(sortfamily));
             }

-            if (strcmp(amopreqcheck, "t") == 0)
-                appendPQExpBufferStr(q, " RECHECK");
-
             needComma = true;
         }

@@ -13980,19 +13275,10 @@ dumpAgg(Archive *fout, const AggInfo *agginfo)
                          "aggtransfn,\n"
                          "aggfinalfn,\n"
                          "aggtranstype::pg_catalog.regtype,\n"
-                         "agginitval,\n");
-
-    if (fout->remoteVersion >= 80100)
-        appendPQExpBufferStr(query,
-                             "aggsortop,\n");
-    else
-        appendPQExpBufferStr(query,
-                             "0 AS aggsortop,\n");
-
-    if (fout->remoteVersion >= 80400)
-        appendPQExpBufferStr(query,
-                             "pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n"
-                             "pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n");
+                         "agginitval,\n"
+                         "aggsortop,\n"
+                         "pg_catalog.pg_get_function_arguments(p.oid) AS funcargs,\n"
+                         "pg_catalog.pg_get_function_identity_arguments(p.oid) AS funciargs,\n");

     if (fout->remoteVersion >= 90400)
         appendPQExpBufferStr(query,
@@ -14074,9 +13360,7 @@ dumpAgg(Archive *fout, const AggInfo *agginfo)
     aggminitval = PQgetvalue(res, 0, i_aggminitval);
     proparallel = PQgetvalue(res, 0, PQfnumber(res, "proparallel"));

-    if (fout->remoteVersion >= 80400)
     {
-        /* 8.4 or later; we rely on server-side code for most of the work */
         char       *funcargs;
         char       *funciargs;

@@ -14085,9 +13369,6 @@ dumpAgg(Archive *fout, const AggInfo *agginfo)
         aggfullsig = format_function_arguments(&agginfo->aggfn, funcargs, true);
         aggsig = format_function_arguments(&agginfo->aggfn, funciargs, true);
     }
-    else
-        /* pre-8.4, do it ourselves */
-        aggsig = format_aggregate_signature(agginfo, fout, true);

     aggsig_tag = format_aggregate_signature(agginfo, fout, false);

@@ -15422,7 +14703,7 @@ dumpTable(Archive *fout, const TableInfo *tbinfo)
      * rather than trying to fetch them during getTableAttrs, so that we won't
      * miss ACLs on system columns.
      */
-    if (fout->remoteVersion >= 80400 && tbinfo->dobj.dump & DUMP_COMPONENT_ACL)
+    if (tbinfo->dobj.dump & DUMP_COMPONENT_ACL)
     {
         PQExpBuffer query = createPQExpBuffer();
         PGresult   *res;
@@ -17067,29 +16348,6 @@ dumpTableConstraintComment(Archive *fout, const ConstraintInfo *coninfo)
     free(qtabname);
 }

-/*
- * findLastBuiltinOid_V71 -
- *
- * find the last built in oid
- *
- * For 7.1 through 8.0, we do this by retrieving datlastsysoid from the
- * pg_database entry for the current database.  (Note: current_database()
- * requires 7.3; pg_dump requires 8.0 now.)
- */
-static Oid
-findLastBuiltinOid_V71(Archive *fout)
-{
-    PGresult   *res;
-    Oid            last_oid;
-
-    res = ExecuteSqlQueryForSingleRow(fout,
-                                      "SELECT datlastsysoid FROM pg_database WHERE datname = current_database()");
-    last_oid = atooid(PQgetvalue(res, 0, PQfnumber(res, "datlastsysoid")));
-    PQclear(res);
-
-    return last_oid;
-}
-
 /*
  * dumpSequence
  *      write the declaration (not data) of one user-defined sequence
@@ -17128,7 +16386,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
                           "WHERE seqrelid = '%u'::oid",
                           tbinfo->dobj.catId.oid);
     }
-    else if (fout->remoteVersion >= 80400)
+    else
     {
         /*
          * Before PostgreSQL 10, sequence metadata is in the sequence itself.
@@ -17142,14 +16400,6 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
                           "cache_value, is_cycled FROM %s",
                           fmtQualifiedDumpable(tbinfo));
     }
-    else
-    {
-        appendPQExpBuffer(query,
-                          "SELECT 'bigint' AS sequence_type, "
-                          "0 AS start_value, increment_by, max_value, min_value, "
-                          "cache_value, is_cycled FROM %s",
-                          fmtQualifiedDumpable(tbinfo));
-    }

     res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

@@ -17255,8 +16505,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
             appendPQExpBuffer(query, "    AS %s\n", seqtype);
     }

-    if (fout->remoteVersion >= 80400)
-        appendPQExpBuffer(query, "    START WITH %s\n", startv);
+    appendPQExpBuffer(query, "    START WITH %s\n", startv);

     appendPQExpBuffer(query, "    INCREMENT BY %s\n", incby);

@@ -18222,13 +17471,7 @@ getDependencies(Archive *fout)
      * entries will have dependencies on their parent opfamily, which we
      * should drop since they'd likewise become useless self-dependencies.
      * (But be sure to keep deps on *other* opfamilies; see amopsortfamily.)
-     *
-     * Skip this for pre-8.3 source servers: pg_opfamily doesn't exist there,
-     * and the (known) cases where it would matter to have these dependencies
-     * can't arise anyway.
      */
-    if (fout->remoteVersion >= 80300)
-    {
         appendPQExpBufferStr(query, "UNION ALL\n"
                              "SELECT 'pg_opfamily'::regclass AS classid, amopfamily AS objid, refclassid, refobjid,
deptype" 
                              "FROM pg_depend d, pg_amop o "
@@ -18243,7 +17486,6 @@ getDependencies(Archive *fout)
                              "WHERE deptype NOT IN ('p', 'e', 'i') AND "
                              "classid = 'pg_amproc'::regclass AND objid = p.oid "
                              "AND NOT (refclassid = 'pg_opfamily'::regclass AND amprocfamily = refobjid)\n");
-    }

     /* Sort the output for efficiency below */
     appendPQExpBufferStr(query, "ORDER BY 1,2");
diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index c29101704a..9c0673271e 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -36,7 +36,6 @@ static void help(void);
 static void dropRoles(PGconn *conn);
 static void dumpRoles(PGconn *conn);
 static void dumpRoleMembership(PGconn *conn);
-static void dumpGroups(PGconn *conn);
 static void dropTablespaces(PGconn *conn);
 static void dumpTablespaces(PGconn *conn);
 static void dropDBs(PGconn *conn);
@@ -440,8 +439,7 @@ main(int argc, char *argv[])
     /*
      * If there was a database specified on the command line, use that,
      * otherwise try to connect to database "postgres", and failing that
-     * "template1".  "postgres" is the preferred choice for 8.1 and later
-     * servers, but it usually will not exist on older ones.
+     * "template1".
      */
     if (pgdb)
     {
@@ -517,7 +515,7 @@ main(int argc, char *argv[])
         std_strings = "off";

     /* Set the role if requested */
-    if (use_role && server_version >= 80100)
+    if (use_role)
     {
         PQExpBuffer query = createPQExpBuffer();

@@ -581,11 +579,8 @@ main(int argc, char *argv[])
             /* Dump roles (users) */
             dumpRoles(conn);

-            /* Dump role memberships --- need different method for pre-8.1 */
-            if (server_version >= 80100)
-                dumpRoleMembership(conn);
-            else
-                dumpGroups(conn);
+            /* Dump role memberships */
+            dumpRoleMembership(conn);
         }

         /* Dump tablespaces */
@@ -698,19 +693,11 @@ dropRoles(PGconn *conn)
                           "FROM %s "
                           "WHERE rolname !~ '^pg_' "
                           "ORDER BY 1", role_catalog);
-    else if (server_version >= 80100)
+    else
         printfPQExpBuffer(buf,
                           "SELECT rolname "
                           "FROM %s "
                           "ORDER BY 1", role_catalog);
-    else
-        printfPQExpBuffer(buf,
-                          "SELECT usename as rolname "
-                          "FROM pg_shadow "
-                          "UNION "
-                          "SELECT groname as rolname "
-                          "FROM pg_group "
-                          "ORDER BY 1");

     res = executeQuery(conn, buf->data);

@@ -793,7 +780,7 @@ dumpRoles(PGconn *conn)
                           "rolname = current_user AS is_current_user "
                           "FROM %s "
                           "ORDER BY 2", role_catalog, role_catalog);
-    else if (server_version >= 80200)
+    else
         printfPQExpBuffer(buf,
                           "SELECT oid, rolname, rolsuper, rolinherit, "
                           "rolcreaterole, rolcreatedb, "
@@ -804,51 +791,6 @@ dumpRoles(PGconn *conn)
                           "rolname = current_user AS is_current_user "
                           "FROM %s "
                           "ORDER BY 2", role_catalog, role_catalog);
-    else if (server_version >= 80100)
-        printfPQExpBuffer(buf,
-                          "SELECT oid, rolname, rolsuper, rolinherit, "
-                          "rolcreaterole, rolcreatedb, "
-                          "rolcanlogin, rolconnlimit, rolpassword, "
-                          "rolvaliduntil, false as rolreplication, "
-                          "false as rolbypassrls, "
-                          "null as rolcomment, "
-                          "rolname = current_user AS is_current_user "
-                          "FROM %s "
-                          "ORDER BY 2", role_catalog);
-    else
-        printfPQExpBuffer(buf,
-                          "SELECT 0 as oid, usename as rolname, "
-                          "usesuper as rolsuper, "
-                          "true as rolinherit, "
-                          "usesuper as rolcreaterole, "
-                          "usecreatedb as rolcreatedb, "
-                          "true as rolcanlogin, "
-                          "-1 as rolconnlimit, "
-                          "passwd as rolpassword, "
-                          "valuntil as rolvaliduntil, "
-                          "false as rolreplication, "
-                          "false as rolbypassrls, "
-                          "null as rolcomment, "
-                          "usename = current_user AS is_current_user "
-                          "FROM pg_shadow "
-                          "UNION ALL "
-                          "SELECT 0 as oid, groname as rolname, "
-                          "false as rolsuper, "
-                          "true as rolinherit, "
-                          "false as rolcreaterole, "
-                          "false as rolcreatedb, "
-                          "false as rolcanlogin, "
-                          "-1 as rolconnlimit, "
-                          "null::text as rolpassword, "
-                          "null::timestamptz as rolvaliduntil, "
-                          "false as rolreplication, "
-                          "false as rolbypassrls, "
-                          "null as rolcomment, "
-                          "false AS is_current_user "
-                          "FROM pg_group "
-                          "WHERE NOT EXISTS (SELECT 1 FROM pg_shadow "
-                          " WHERE usename = groname) "
-                          "ORDER BY 2");

     res = executeQuery(conn, buf->data);

@@ -992,7 +934,7 @@ dumpRoles(PGconn *conn)


 /*
- * Dump role memberships.  This code is used for 8.1 and later servers.
+ * Dump role memberships.
  *
  * Note: we expect dumpRoles already created all the roles, but there is
  * no membership yet.
@@ -1049,75 +991,6 @@ dumpRoleMembership(PGconn *conn)
     fprintf(OPF, "\n\n");
 }

-/*
- * Dump group memberships from a pre-8.1 server.  It's annoying that we
- * can't share any useful amount of code with the post-8.1 case, but
- * the catalog representations are too different.
- *
- * Note: we expect dumpRoles already created all the roles, but there is
- * no membership yet.
- */
-static void
-dumpGroups(PGconn *conn)
-{
-    PQExpBuffer buf = createPQExpBuffer();
-    PGresult   *res;
-    int            i;
-
-    res = executeQuery(conn,
-                       "SELECT groname, grolist FROM pg_group ORDER BY 1");
-
-    if (PQntuples(res) > 0)
-        fprintf(OPF, "--\n-- Role memberships\n--\n\n");
-
-    for (i = 0; i < PQntuples(res); i++)
-    {
-        char       *groname = PQgetvalue(res, i, 0);
-        char       *grolist = PQgetvalue(res, i, 1);
-        PGresult   *res2;
-        int            j;
-
-        /*
-         * Array representation is {1,2,3} ... convert to (1,2,3)
-         */
-        if (strlen(grolist) < 3)
-            continue;
-
-        grolist = pg_strdup(grolist);
-        grolist[0] = '(';
-        grolist[strlen(grolist) - 1] = ')';
-        printfPQExpBuffer(buf,
-                          "SELECT usename FROM pg_shadow "
-                          "WHERE usesysid IN %s ORDER BY 1",
-                          grolist);
-        free(grolist);
-
-        res2 = executeQuery(conn, buf->data);
-
-        for (j = 0; j < PQntuples(res2); j++)
-        {
-            char       *usename = PQgetvalue(res2, j, 0);
-
-            /*
-             * Don't try to grant a role to itself; can happen if old
-             * installation has identically named user and group.
-             */
-            if (strcmp(groname, usename) == 0)
-                continue;
-
-            fprintf(OPF, "GRANT %s", fmtId(groname));
-            fprintf(OPF, " TO %s;\n", fmtId(usename));
-        }
-
-        PQclear(res2);
-    }
-
-    PQclear(res);
-    destroyPQExpBuffer(buf);
-
-    fprintf(OPF, "\n\n");
-}
-

 /*
  * Drop tablespaces.
@@ -1220,7 +1093,7 @@ dumpTablespaces(PGconn *conn)
                            "FROM pg_catalog.pg_tablespace "
                            "WHERE spcname !~ '^pg_' "
                            "ORDER BY 1");
-    else if (server_version >= 90000)
+    else
         res = executeQuery(conn, "SELECT oid, spcname, "
                            "pg_catalog.pg_get_userbyid(spcowner) AS spcowner, "
                            "spclocation, spcacl, '' as rspcacl, "
@@ -1229,22 +1102,6 @@ dumpTablespaces(PGconn *conn)
                            "FROM pg_catalog.pg_tablespace "
                            "WHERE spcname !~ '^pg_' "
                            "ORDER BY 1");
-    else if (server_version >= 80200)
-        res = executeQuery(conn, "SELECT oid, spcname, "
-                           "pg_catalog.pg_get_userbyid(spcowner) AS spcowner, "
-                           "spclocation, spcacl, '' as rspcacl, null, "
-                           "pg_catalog.shobj_description(oid, 'pg_tablespace') "
-                           "FROM pg_catalog.pg_tablespace "
-                           "WHERE spcname !~ '^pg_' "
-                           "ORDER BY 1");
-    else
-        res = executeQuery(conn, "SELECT oid, spcname, "
-                           "pg_catalog.pg_get_userbyid(spcowner) AS spcowner, "
-                           "spclocation, spcacl, '' as rspcacl, "
-                           "null, null "
-                           "FROM pg_catalog.pg_tablespace "
-                           "WHERE spcname !~ '^pg_' "
-                           "ORDER BY 1");

     if (PQntuples(res) > 0)
         fprintf(OPF, "--\n-- Tablespaces\n--\n\n");
@@ -1371,17 +1228,11 @@ dumpUserConfig(PGconn *conn, const char *username)
     {
         PGresult   *res;

-        if (server_version >= 90000)
-            printfPQExpBuffer(buf, "SELECT setconfig[%d] FROM pg_db_role_setting WHERE "
-                              "setdatabase = 0 AND setrole = "
-                              "(SELECT oid FROM %s WHERE rolname = ", count, role_catalog);
-        else if (server_version >= 80100)
-            printfPQExpBuffer(buf, "SELECT rolconfig[%d] FROM %s WHERE rolname = ", count, role_catalog);
-        else
-            printfPQExpBuffer(buf, "SELECT useconfig[%d] FROM pg_shadow WHERE usename = ", count);
+        printfPQExpBuffer(buf, "SELECT setconfig[%d] FROM pg_db_role_setting WHERE "
+                          "setdatabase = 0 AND setrole = "
+                          "(SELECT oid FROM %s WHERE rolname = ", count, role_catalog);
         appendStringLiteralConn(buf, username, conn);
-        if (server_version >= 90000)
-            appendPQExpBufferChar(buf, ')');
+        appendPQExpBufferChar(buf, ')');

         res = executeQuery(conn, buf->data);
         if (PQntuples(res) == 1 &&
@@ -1816,11 +1667,11 @@ connectDatabase(const char *dbname, const char *connection_string,
     my_version = PG_VERSION_NUM;

     /*
-     * We allow the server to be back to 8.0, and up to any minor release of
+     * We allow the server to be back to 9.0, and up to any minor release of
      * our own major version.  (See also version check in pg_dump.c.)
      */
     if (my_version != server_version
-        && (server_version < 80000 ||
+        && (server_version < 90000 ||
             (server_version / 100) > (my_version / 100)))
     {
         pg_log_error("server version: %s; %s version: %s",
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index ad5f391995..15c83c4d40 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -159,10 +159,6 @@ check_and_dump_old_cluster(bool live_check)
     if (GET_MAJOR_VERSION(old_cluster.major_version) <= 903)
         old_9_3_check_for_line_data_type_usage(&old_cluster);

-    /* Pre-PG 9.0 had no large object permissions */
-    if (GET_MAJOR_VERSION(old_cluster.major_version) <= 804)
-        new_9_0_populate_pg_largeobject_metadata(&old_cluster, true);
-
     /*
      * While not a check option, we do this now because this is the only time
      * the old server is running.
@@ -233,10 +229,6 @@ issue_warnings_and_set_wal_level(void)
      */
     start_postmaster(&new_cluster, true);

-    /* Create dummy large object permissions for old < PG 9.0? */
-    if (GET_MAJOR_VERSION(old_cluster.major_version) <= 804)
-        new_9_0_populate_pg_largeobject_metadata(&new_cluster, false);
-
     /* Reindex hash indexes for old < 10.0 */
     if (GET_MAJOR_VERSION(old_cluster.major_version) <= 906)
         old_9_6_invalidate_hash_indexes(&new_cluster, false);
@@ -295,8 +287,8 @@ check_cluster_versions(void)
      * upgrades
      */

-    if (GET_MAJOR_VERSION(old_cluster.major_version) < 804)
-        pg_fatal("This utility can only upgrade from PostgreSQL version 8.4 and later.\n");
+    if (GET_MAJOR_VERSION(old_cluster.major_version) < 900)
+        pg_fatal("This utility can only upgrade from PostgreSQL version 9.0 and later.\n");

     /* Only current PG version is supported as a target */
     if (GET_MAJOR_VERSION(new_cluster.major_version) != GET_MAJOR_VERSION(PG_VERSION_NUM))
@@ -479,11 +471,6 @@ check_databases_are_compatible(void)
  * they do, it would cause an error while restoring global objects.
  * This allows the failure to be detected at check time, rather than
  * during schema restore.
- *
- * Note, v8.4 has no tablespace_suffix, which is fine so long as the
- * version being upgraded *to* has a suffix, since it's not allowed
- * to pg_upgrade from a version to the same version if tablespaces are
- * in use.
  */
 static void
 check_for_new_tablespace_dir(ClusterInfo *new_cluster)
@@ -597,11 +584,6 @@ create_script_for_old_cluster_deletion(char **deletion_script_file_name)
             int            dbnum;

             fprintf(script, "\n");
-            /* remove PG_VERSION? */
-            if (GET_MAJOR_VERSION(old_cluster.major_version) <= 804)
-                fprintf(script, RM_CMD " %s%cPG_VERSION\n",
-                        fix_path_separator(os_info.old_tablespaces[tblnum]),
-                        PATH_SEPARATOR);

             for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
                 fprintf(script, RMDIR_CMD " %c%s%c%u%c\n", PATH_QUOTE,
diff --git a/src/bin/pg_upgrade/info.c b/src/bin/pg_upgrade/info.c
index 5d9a26cf82..fb8b1dae77 100644
--- a/src/bin/pg_upgrade/info.c
+++ b/src/bin/pg_upgrade/info.c
@@ -118,15 +118,9 @@ gen_db_file_maps(DbInfo *old_db, DbInfo *new_db,
          * Verify that rels of same OID have same name.  The namespace name
          * should always match, but the relname might not match for TOAST
          * tables (and, therefore, their indexes).
-         *
-         * TOAST table names initially match the heap pg_class oid, but
-         * pre-9.0 they can change during certain commands such as CLUSTER, so
-         * don't insist on a match if old cluster is < 9.0.
          */
         if (strcmp(old_rel->nspname, new_rel->nspname) != 0 ||
-            (strcmp(old_rel->relname, new_rel->relname) != 0 &&
-             (GET_MAJOR_VERSION(old_cluster.major_version) >= 900 ||
-              strcmp(old_rel->nspname, "pg_toast") != 0)))
+            strcmp(old_rel->relname, new_rel->relname) != 0)
         {
             pg_log(PG_WARNING, "Relation names for OID %u in database \"%s\" do not match: "
                    "old name \"%s.%s\", new name \"%s.%s\"\n",
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index ca0795f68f..5e2fc1d217 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -447,8 +447,6 @@ bool        check_for_data_types_usage(ClusterInfo *cluster,
 bool        check_for_data_type_usage(ClusterInfo *cluster,
                                       const char *type_name,
                                       const char *output_path);
-void        new_9_0_populate_pg_largeobject_metadata(ClusterInfo *cluster,
-                                                     bool check_mode);
 void        old_9_3_check_for_line_data_type_usage(ClusterInfo *cluster);
 void        old_9_6_check_for_unknown_data_type_usage(ClusterInfo *cluster);
 void        old_9_6_invalidate_hash_indexes(ClusterInfo *cluster,
diff --git a/src/bin/pg_upgrade/tablespace.c b/src/bin/pg_upgrade/tablespace.c
index bd49d300db..de26946eb4 100644
--- a/src/bin/pg_upgrade/tablespace.c
+++ b/src/bin/pg_upgrade/tablespace.c
@@ -105,15 +105,10 @@ get_tablespace_paths(void)
 static void
 set_tablespace_directory_suffix(ClusterInfo *cluster)
 {
-    if (GET_MAJOR_VERSION(cluster->major_version) <= 804)
-        cluster->tablespace_suffix = pg_strdup("");
-    else
-    {
         /* This cluster has a version-specific subdirectory */

         /* The leading slash is needed to start a new directory. */
         cluster->tablespace_suffix = psprintf("/PG_%s_%d",
                                               cluster->major_version_str,
                                               cluster->controldata.cat_ver);
-    }
 }
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 8593488907..e574f2639a 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -183,11 +183,6 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
     if [ "$newsrc" != "$oldsrc" ]; then
         fix_sql=""
         # Get rid of objects not feasible in later versions
-        case $oldpgversion in
-            804??)
-                fix_sql="DROP FUNCTION public.myfunc(integer);"
-                ;;
-        esac

         # Last appeared in v9.6
         if [ $oldpgversion -lt 100000 ]; then
@@ -244,9 +239,6 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
         # update references to old source tree's regress.so etc
         fix_sql=""
         case $oldpgversion in
-            804??)
-                fix_sql="UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%';"
-                ;;
             *)
                 fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE
'$oldsrc%';"
                 ;;
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
index ccb012657b..82244a93a7 100644
--- a/src/bin/pg_upgrade/version.c
+++ b/src/bin/pg_upgrade/version.c
@@ -13,88 +13,6 @@
 #include "fe_utils/string_utils.h"
 #include "pg_upgrade.h"

-/*
- * new_9_0_populate_pg_largeobject_metadata()
- *    new >= 9.0, old <= 8.4
- *    9.0 has a new pg_largeobject permission table
- */
-void
-new_9_0_populate_pg_largeobject_metadata(ClusterInfo *cluster, bool check_mode)
-{
-    int            dbnum;
-    FILE       *script = NULL;
-    bool        found = false;
-    char        output_path[MAXPGPATH];
-
-    prep_status("Checking for large objects");
-
-    snprintf(output_path, sizeof(output_path), "pg_largeobject.sql");
-
-    for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
-    {
-        PGresult   *res;
-        int            i_count;
-        DbInfo       *active_db = &cluster->dbarr.dbs[dbnum];
-        PGconn       *conn = connectToServer(cluster, active_db->db_name);
-
-        /* find if there are any large objects */
-        res = executeQueryOrDie(conn,
-                                "SELECT count(*) "
-                                "FROM    pg_catalog.pg_largeobject ");
-
-        i_count = PQfnumber(res, "count");
-        if (atoi(PQgetvalue(res, 0, i_count)) != 0)
-        {
-            found = true;
-            if (!check_mode)
-            {
-                PQExpBufferData connectbuf;
-
-                if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
-                    pg_fatal("could not open file \"%s\": %s\n", output_path,
-                             strerror(errno));
-
-                initPQExpBuffer(&connectbuf);
-                appendPsqlMetaConnect(&connectbuf, active_db->db_name);
-                fputs(connectbuf.data, script);
-                termPQExpBuffer(&connectbuf);
-
-                fprintf(script,
-                        "SELECT pg_catalog.lo_create(t.loid)\n"
-                        "FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) AS t;\n");
-            }
-        }
-
-        PQclear(res);
-        PQfinish(conn);
-    }
-
-    if (script)
-        fclose(script);
-
-    if (found)
-    {
-        report_status(PG_WARNING, "warning");
-        if (check_mode)
-            pg_log(PG_WARNING, "\n"
-                   "Your installation contains large objects.  The new database has an\n"
-                   "additional large object permission table.  After upgrading, you will be\n"
-                   "given a command to populate the pg_largeobject_metadata table with\n"
-                   "default permissions.\n\n");
-        else
-            pg_log(PG_WARNING, "\n"
-                   "Your installation contains large objects.  The new database has an\n"
-                   "additional large object permission table, so default permissions must be\n"
-                   "defined for all large objects.  The file\n"
-                   "    %s\n"
-                   "when executed by psql by the database superuser will set the default\n"
-                   "permissions.\n\n",
-                   output_path);
-    }
-    else
-        check_ok();
-}
-

 /*
  * check_for_data_types_usage()

Re: pg_dump versus ancient server versions

From
Andrew Dunstan
Date:
On 10/25/21 19:12, Justin Pryzby wrote:
> On Mon, Oct 25, 2021 at 11:38:51AM -0400, Tom Lane wrote:
>> Andrew Dunstan <andrew@dunslane.net> writes:
>>> On 10/25/21 10:23, Tom Lane wrote:
>>>> (Hmmm ... but disk space could
>>>> become a problem, particularly on older machines with not so much
>>>> disk.  Do we really need to maintain a separate checkout for each
>>>> branch?  It seems like a fresh checkout from the repo would be
>>>> little more expensive than the current copy-a-checkout process.)
>>> If you set it up with these settings then the disk space used is minimal:
>>>      git_use_workdirs => 1,
>>>      rm_worktrees => 1,
>> Maybe we should make those the defaults?  AFAICS the current
>> default setup uses circa 200MB per back branch, even between runs.
>> I'm not sure what that is buying us.
> Maybe git's shared/"alternates" would be helpful to minimize the size of
> .git/objects?
>
> I'm not sure - it looks like the BF client does its own stuff with symlinks.
> Is that for compatibility with old git?
> https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/SCM.pm


It's actually based on the git contrib script git-new-workdir. And using
it is the default (except on Windows, where it doesn't work due to
issues with symlinking plain files :-( )

Since what we have is not broken I'm not inclined to fix it.

The issue Tom was complaining about is different, namely the storage for
each branch's working tree. As I mentioned upthread, you can alleviate
that by setting "rm_worktrees => 1" in your config. That works
everywhere, including Windows, and will be the default in the next
buildfarm release.


cheers


andrew


-- 

Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From
Andrew Dunstan
Date:
On 10/25/21 13:09, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
>> On 2021-10-22 19:30:25 -0400, Tom Lane wrote:
>>> Yeah.  I checked into when it was that we dropped pre-8.0 support
>>> from pg_dump, and the answer is just about five years ago (64f3524e2).
>>> So moving the bar forward by five releases isn't at all out of line.
>>> 8.4 would be eight years past EOL by the time v15 comes out.
>> I'd really like us to adopt a "default" policy on this. I think it's a waste
>> to spend time every few years arguing what exact versions to drop. I'd much
>> rather say that, unless there are concrete reasons to deviate from that, we
>> provide pg_dump compatibility for 5+3 releases, pg_upgrade for 5+1, and psql
>> for 5 releases or something like that.
> I agree with considering something like that to be the minimum support
> policy, but the actual changes need a bit more care.  For example, when
> we last did this, the technical need was just to drop pre-7.4 versions,
> but we chose to make the cutoff 8.0 on the grounds that that was more
> understandable to users [1].  In the same way, I'm thinking of moving the
> cutoff to 9.0 now, although 8.4 would be sufficient from a technical
> standpoint.
>
> OTOH, in the new world of one-part major versions, it's less clear that
> there will be obvious division points for future cutoff changes.  Maybe
> versions-divisible-by-five would work?  Or versions divisible by ten,
> but experience so far suggests that we'll want to move the cutoff more
> often than once every ten years.
>
>     


pg_upgrade claims to be able to operate on 8.4, and it would be all the
better for some regular testing (which this could enable), so 8.4 seems
to me more like where the cutoff should be, at least for this round.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From
Peter Eisentraut
Date:
I was thinking a bit about formulating a policy for pg_dump backward
compatibility, based on the discussions in this thread.

Premises and preparatory thoughts:

- Users (and developers) want pg_dump to support server versions that
   are much older than non-EOL versions.

- Less critically, much-longer backward compatibility has also
   historically been provided for psql, so keeping those two the same
   would make sense.

- The policy for other client-side tools (list at [0]) is less clear
   and arguably less important.  I suggest we focus on pg_dump and psql
   first, and then we can decide for the rest whether they want to
   match a longer window, a shorter window, or a different policy
   altogether (e.g., ecpg).

- If we are going to maintain compatibility with very old server
   versions, we need to make sure the older server versions can at
   least still be built while an allegedly-compatible client tool is
   under support.

[0]: https://www.postgresql.org/docs/devel/reference-client.html

Proposal:

* pg_dump and psql will maintain compatibility with servers at least
   ten major releases back.

This assumes a yearly major release cadence.

I use the count of major releases here instead of some number of
years, as was previously discussed, for two reasons.  First, it makes
computing the cutoff easier, because you are not bothered by whether
some old release was released a few weeks before or after the
equivalent date in the current year for the new release.  Second,
there is no ambiguity about what happens during the lifetime of a
major release: If major release $NEW supports major release $OLD at
the time of $NEW's release, then that stays the same for the whole
life of $NEW; we don't start dropping support for $OLD in $NEW.5
because a year has passed.

I say "at least" because I wouldn't go around aggressively removing
support for old releases.  If $NEW is supposed to support 9.5 but
there is code that says `if (version > 9.4)`, I would not s/9.4/9.5/
that unless that code is touched for other reasons.
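
Such gates are at least easy to enumerate whenever someone does touch
the code anyway; for instance (a sketch, with the version number purely
illustrative):

$ grep -rn "remoteVersion >= 80400" src/bin/pg_dump/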

Then ...

* We keep old major release branches buildable as long as a new major
   release that has support for that old release is under support.

Buildable for this purpose means just enough that you can use it to
test pg_dump and psql.  This probably includes being able to run make
installcheck and use pg_dump and psql against the regression database.
It does not require support for any additional build-time options that
are not needed for this purpose (e.g., new OpenSSL releases).
Conversely, it should be buildable with default compiler options.  For
example, if it fails to build and test cleanly unless you use -O0,
that should be fixed.  Fixes in very-old branches should normally be
backpatches that have stabilized in under-support branches.  Changes
that silence compiler warnings in newer compilers are by themselves
not considered a backpatch-worthy fix.
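
Concretely, the smoke test I have in mind is roughly the following (a
sketch; the port and install paths are just examples):

$ cd ~/build/REL9_2_STABLE && make installcheck    # old server's regression db
$ ~/build/head/bin/pg_dump -p 5492 -s regression >/dev/null
$ ~/build/head/bin/psql -p 5492 -c 'SELECT 1' regression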

(In some cases, the support window of typical compilers should be
considered.  If adding support for a very new compiler with new
aggressive optimizations turns out to be too invasive, then that
compiler might simply be declared not supported for that release.  But
we should strive to support at least one compiler that still has some
upstream support.)

This keep-buildable effort is on an as-needed basis.  There is no
requirement to keep the buildability current at all times, and there
is no requirement to keep all platforms working at all times.
Obviously, any changes made to improve buildability should not
knowingly adversely affect other platforms.

(The above could be reconsidered if buildfarm support is available,
but I don't consider that necessary and wouldn't want to wait for it.)

There is no obligation on anyone backpatching fixes to supported
branches to also backpatch them to keep-buildable branches.  It is up
to those working on pg_dump/psql and requiring testing against old
versions to pick available fixes and apply them to keep-buildable
branches as needed.
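
So picking up such a fix might look like this (a sketch; the hash stands
for whatever already-stabilized commit is needed):

$ git checkout REL9_2_STABLE
$ git cherry-pick -x a1b2c3d    # portability fix already proven in a live branch
$ make && make check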

Finally, none of this is meant to imply that there will be any
releases, packages, security support, production support, or community
support for keep-buildable branches.  This is a Git-repo-only,
developer-focusing effort.

Example under this proposal:

PG 15 supports PG 9.2
PG 14 supports PG 9.1
PG 13 supports PG 9.0
PG 12 supports PG 8.4
PG 11 supports PG 8.3
PG 10 supports PG 8.2

In practice, the effort can focus on keeping the most recent cutoff
release buildable.  So in the above example, we really only need to
keep PG >=9.2 buildable to support ongoing development.  The chances
that someone needs to touch code pertaining to older versions in
back branches are lower, so those would need to be dealt with only
very rarely.


The parent message has proposed to remove support for PG <9.0 from
master.  But I think that was chosen mainly because it was a round
number.  I suggest we pick a cutoff based on the release-count rule I
described above, and then proceed with that patch.



Re: pg_dump versus ancient server versions

From
Robert Haas
Date:
On Thu, Dec 2, 2021 at 5:01 AM Peter Eisentraut
<peter.eisentraut@enterprisedb.com> wrote:
> * pg_dump and psql will maintain compatibility with servers at least
>    ten major releases back.
>
> * We keep old major release branches buildable as long as a new major
>    release that has support for that old release is under support.
>
> Buildable for this purpose means just enough that you can use it to
> test pg_dump and psql.  This probably includes being able to run make
> installcheck and use pg_dump and psql against the regression database.
> It does not require support for any additional build-time options that
> are not needed for this purpose (e.g., new OpenSSL releases).
> Conversely, it should be buildable with default compiler options.  For
> example, if it fails to build and test cleanly unless you use -O0,
> that should be fixed.  Fixes in very-old branches should normally be
> backpatches that have stabilized in under-support branches.  Changes
> that silence compiler warnings in newer compilers are by themselves
> not considered a backpatch-worthy fix.

Sounds reasonable. It doesn't really make sense to insist that the
tools have to be compatible with releases that most developers can't
actually build.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:
> Proposal:

> * pg_dump and psql will maintain compatibility with servers at least
>    ten major releases back.
> * We keep old major release branches buildable as long as a new major
>    release that has support for that old release is under support.

> This assumes a yearly major release cadence.

If the point is to not have to count dates carefully, why does the cadence
matter?

> I say "at least" because I wouldn't go around aggressively removing
> support for old releases.  If $NEW is supposed to support 9.5 but
> there is code that says `if (version > 9.4)`, I would not s/9.4/9.5/
> that unless that code is touched for other reasons.

I can get behind something roughly like this, but I wonder if it wouldn't
be better to formulate the policy in a reactive way, i.e. when X happens
we'll do Y.  If we don't plan to proactively remove some code every year,
then it seems like the policy really is more like "when something breaks,
then we'll make an attempt to keep it working if the release is less than
ten majors back; otherwise we'll declare that release no longer
buildable."

However, this'd imply continuing to test against releases that are out of
the ten-year window but have not yet been found to be broken.  Not sure
if that's a useful expenditure of test resources or not.

> Buildable for this purpose means just enough that you can use it to
> test pg_dump and psql.  This probably includes being able to run make
> installcheck and use pg_dump and psql against the regression database.
> It does not require support for any additional build-time options that
> are not required for this purpose (e.g., new OpenSSL releases).

I agree with the idea of being conservative about what outside
dependencies we will worry about for "buildable" old versions.
(Your nearby message about Python breakage is a good example of
why we must limit that.)  But I wonder about, say, libxml or libicu,
or even if we can afford to drop all the non-plpgsql PLs.  An
example of why that seems worrisome is that it's not clear we'd
have any meaningful coverage of transforms in pg_dump with no PLs.
I don't have any immediate proposal here, but it seems like an area
that needs some thought and specific policy.

> Example under this proposal:

> PG 15 supports PG 9.2
> PG 14 supports PG 9.1
> PG 13 supports PG 9.0
> PG 12 supports PG 8.4
> PG 11 supports PG 8.3
> PG 10 supports PG 8.2

I was going to express concern about having to resurrect branches
back to 8.2, but:

> In practice, the effort can focus on keeping the most recent cutoff
> release buildable.  So in the above example, we really only need to
> keep PG >=9.2 buildable to support ongoing development.  The chances
> that someone needs to touch code pertaining to older versions in
> back branches are lower, so those would need to be dealt with only
> very rarely.

OK.  Also, when you do need to check that, there are often other ways
than rebuilding the old branch on modern platforms --- people may
well have still-executable builds laying about, even if rebuilding
from source would be problematic.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Andrew Dunstan
Date:
On 12/2/21 12:30, Tom Lane wrote:
>
>> In practice, the effort can focus on keeping the most recent cutoff
>> release buildable.  So in the above example, we really only need to
>> keep PG >=9.2 buildable to support ongoing development.  The chances
>> that someone needs to touch code pertaining to older versions in
>> back branches are lower, so those would need to be dealt with only
>> very rarely.
> OK.  Also, when you do need to check that, there are often other ways
> than rebuilding the old branch on modern platforms --- people may
> well have still-executable builds laying about, even if rebuilding
> from source would be problematic.
>
>             



I have a very old Fedora instance where I can build every release back
to 7.2 :-) And with only slight massaging for the very old releases,
these builds run on my Fedora 34 development system. Certainly 8.2 and
up wouldn't be a problem. Currently I have only tested building without
any extra libraries/PLs, but I can look at other combinations. So, long
story short, this is fairly doable, at least in some environments. This
provides a good use case for the work I have been doing on backwards
compatibility of the TAP framework. I need to get back to that now that
the great module namespace adjustment has settled down.
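
Roughly, it's nothing more exotic than this (a sketch; the flags and
paths are just what I happen to use):

$ git checkout REL8_2_STABLE
$ ./configure --prefix=$HOME/pg/8.2 --without-readline --without-zlib
$ make -j4 && make check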


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From
Andres Freund
Date:
Hi,

On 2021-12-02 11:01:47 +0100, Peter Eisentraut wrote:
> - The policy for other client-side tools (list at [0]) is less clear
>   and arguably less important.  I suggest we focus on pg_dump and psql
>   first, and then we can decide for the rest whether they want to
>   match a longer window, a shorter window, or a different policy
>   altogether (e.g., ecpg).

I think we should at least include pg_upgrade in this as well, it's pretty
closely tied to at least pg_dump.


> * pg_dump and psql will maintain compatibility with servers at least
>   ten major releases back.

Personally I think that's too long... It boils down to keeping branches buildable
for ~15 years after they've been released. That strikes me as pretty far into
diminishing-returns, and steeply increasing costs, territory.

I realize it's more complicated for users, but a policy based on supporting a
certain number of out-of-support branches calculated from the newest major
version is more realistic. I'd personally go for something like newest-major -
7 (i.e. 2 extra releases), but I realize that others think it's worthwhile to
support a few more.  I think there's a considerable advantage in having one
cutoff date across all branches.

That's not to say we'd remove support for older versions from back
branches. Just that we don't ever consider them supported (or test them) once
below the cutoff.


> I use the count of major releases here instead of some number of
> years, as was previously discussed, for two reasons.  First, it makes
> computing the cutoff easier, because you are not bothered by whether
> some old release was released a few weeks before or after the
> equivalent date in the current year for the new release.  Second,
> there is no ambiguity about what happens during the lifetime of a
> major release: If major release $NEW supports major release $OLD at
> the time of $NEW's release, then that stays the same for the whole
> life of $NEW; we don't start dropping support for $OLD in $NEW.5
> because a year has passed.

Makes sense.


> * We keep old major release branches buildable as long as a new major
>   release that has support for that old release is under support.

> Buildable for this purpose means just enough that you can use it to
> test pg_dump and psql.  This probably includes being able to run make
> installcheck and use pg_dump and psql against the regression database.

I think we should explicitly limit the number of platforms we care about for
this purpose. I don't think we should even try to keep 8.2 compiling on AIX or
whatnot.
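
For concreteness, the "buildable" bar quoted above amounts to little more
than this (a sketch only; assume the old branch's server is already
running, here on port 5492 purely as an example):

$ PGPORT=5492 make installcheck              # the old branch's own tests
$ pg_dump -p 5492 -s regression >/dev/null && echo "dump OK"
$ psql -p 5492 -c '\d' regression >/dev/null && echo "psql OK"

where pg_dump and psql are the new binaries being tested for
compatibility.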

Greetings,

Andres Freund



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:
> On 02.12.21 23:16, Andres Freund wrote:
>> I realize it's more complicated for users, but a policy based on supporting a
>> certain number of out-of-support branches calculated from the newest major
>> version is more realistic. I'd personally go for something like newest-major -
>> 7 (i.e. 2 extra releases), but I realize that others think it's worthwhile to
>> support a few more.  I think there's a considerable advantage of having one
>> cutoff date across all branches.

> I'm not sure it will be clear what this would actually mean.  Assume
> PG11 supports back to 9.4 (14-7) now, but when PG15 comes out, we drop
> 9.4 support. But the PG11 code hasn't changed, and PG9.4 hasn't changed,
> so it will most likely still work.  Then we have messaging that is out
> of sync with reality.  I can see the advantage of this approach, but the
> communication around it might have to be refined.

I don't find this suggestion to be an improvement over Peter's original
formulation, for two reasons:

* I'm not convinced that it saves us any actual work; as you say, the
code doesn't stop working just because we declare it out-of-support.

* There's a real-world use-case underneath here.  If somewhere you've
discovered a decades-old server that you need to upgrade, and current
pg_dump won't dump from it, you would like it to be well-defined
which intermediate pg_dump versions you can use.  So if 10.19 can
dump from that hoary server, it would not be nice if 10.20 can't;
nor if the documentation lies to you about that based on which minor
version you happen to consult.

>> I think we should explicitly limit the number of platforms we care about for
>> this purpose. I don't think we should even try to keep 8.2 compiling on AIX or
>> whatnot.

> It's meant to be developer-facing, so only for platforms that developers
> use.  I think that can police itself, if we define it that way.

I agree that if you care about doing this sort of test on platform X,
it's up to you to patch for that.  I think Andres' concern is about
the amount of committer bandwidth that might be needed to handle
such patches submitted by non-committers.  However, based on the
experiment I just ran, I think it's not really likely to be a big deal:
there are not that many problems, and most of them just amount to
back-patching something that originally wasn't back-patched.

What's most likely to happen IMO is that committers will just start
back-patching essential portability fixes into out-of-support-but-
still-in-the-buildability-window branches, contemporaneously with
the original fix.  Yeah, that does mean more committer effort,
but only for a very small number of patches.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Robert Haas
Date:
On Fri, Dec 3, 2021 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> What's most likely to happen IMO is that committers will just start
> back-patching essential portability fixes into out-of-support-but-
> still-in-the-buildability-window branches, contemporaneously with
> the original fix.  Yeah, that does mean more committer effort,
> but only for a very small number of patches.

I agree. I think that's exactly what we want to have happen, and if a
given policy won't have exactly this result then the policy needs
adjusting.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
I ran a new set of experiments concerning building back branches
on modern platforms, this time trying Fedora 35 (gcc 11.2.1)
on x86_64.  I widened the scope of the testing a bit by adding
"--enable-nls --with-perl" and running check-world not just the
core tests.  Salient results:

* Everything back to 9.2 passes the test, although with more
and more compile warnings the further back you go.

* 9.1 fails with "conflicting types for 'base_yylex'", much as
I saw on macOS except it's a hard error on this compiler.

* Parallel check-world is pretty unreliable before v10 (I knew
this already, actually).  But without parallelism, it's fine.

Based on these results, I think maybe we should raise our ambitions
a bit compared to Peter's original proposal.  Specifically,
I wonder if it wouldn't be wise to try to silence compile warnings
in these branches.  The argument for this is basically that if we
don't, then every time someone builds one of these branches, they
have to tediously go through the warnings and verify that
they're not important.  It won't take long for the accumulated
time-wastage from that to exceed the cost of back-patching whatever
we did to silence the warning in later branches.

Now, I'm still not interested in trying to silence
maybe-uninitialized warnings pre-9.3, mainly because of the
ereport-ERROR-doesnt-return issue.  (I saw far fewer of those
under gcc than clang, but not zero.)  We could ignore those
figuring that 9.2 will be out of scope in a year anyway, or else
teach 9.2's configure to select -Wno-maybe-uninitialized where
possible.
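
In the meantime, anyone bothered by the noise can get the same effect by
hand at configure time, e.g.:

$ ./configure CFLAGS="-O2 -Wno-maybe-uninitialized"

That is only a sketch: the flag is gcc-specific, which is exactly why
configure would have to test for it ("where possible") rather than adding
it unconditionally.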

Likewise, getting check-world to parallelize successfully pre-v10
seems like a bridge too far.  But I would, for example, be in favor
of back-patching eb9812f27 (Make pg_upgrade's test.sh less chatty).
It's just annoying to run check-world and get that output now.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Robert Haas
Date:
On Sun, Dec 5, 2021 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Based on these results, I think maybe we should raise our ambitions
> a bit compared to Peter's original proposal.  Specifically,
> I wonder if it wouldn't be wise to try to silence compile warnings
> in these branches.  The argument for this is basically that if we
> don't, then every time someone builds one of these branches, they
> have to tediously go through the warnings and verify that
> they're not important.  It won't take long for the accumulated
> time-wastage from that to exceed the cost of back-patching whatever
> we did to silence the warning in later branches.

Yep. I have long been of the view, and have said before, that there is
very little harm in doing some maintenance of EOL branches. Making it
easy to test against them is a great way to improve our chances of
actually having the amount of backward-compatibility that we say we
want to have.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Sun, Dec 5, 2021 at 7:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Based on these results, I think maybe we should raise our ambitions
>> a bit compared to Peter's original proposal.  Specifically,
>> I wonder if it wouldn't be wise to try to silence compile warnings
>> in these branches.

> Yep. I have long been of the view, and have said before, that there is
> very little harm in doing some maintenance of EOL branches. Making it
> easy to test against them is a great way to improve our chances of
> actually having the amount of backward-compatibility that we say we
> want to have.

Right.  The question that's on the table is how much is the right
amount of maintenance.  I think that back-patching user-visible bug
fixes, for example, is taking things too far.  What we want is to
be able to replicate the behavior of the branch's last released
version, using whatever build tools we are currently using.  So
back-patching something like that is counterproductive, because
now the behavior is not what was released.

A minimal amount of maintenance would be "only back-patch fixes
for issues that cause failure-to-build".  The next step up is "fix
issues that cause failure-to-pass-regression-tests", and then above
that is "fix developer-facing annoyances, such as compiler warnings
or unwanted test output, as long as you aren't changing user-facing
behavior".  I now think that it'd be reasonable to include this
last group, although I'm pretty sure Peter didn't have that in mind
in his policy sketch.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Robert Haas
Date:
On Mon, Dec 6, 2021 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Right.  The question that's on the table is how much is the right
> amount of maintenance.  I think that back-patching user-visible bug
> fixes, for example, is taking things too far.  What we want is to
> be able to replicate the behavior of the branch's last released
> version, using whatever build tools we are currently using.  So
> back-patching something like that is counterproductive, because
> now the behavior is not what was released.
>
> A minimal amount of maintenance would be "only back-patch fixes
> for issues that cause failure-to-build".  The next step up is "fix
> issues that cause failure-to-pass-regression-tests", and then above
> that is "fix developer-facing annoyances, such as compiler warnings
> or unwanted test output, as long as you aren't changing user-facing
> behavior".  I now think that it'd be reasonable to include this
> last group, although I'm pretty sure Peter didn't have that in mind
> in his policy sketch.

Yep, that seems reasonable to me.

I guess the point about user-visible bug fixes is that, as soon as we
start doing that, we don't really want it to be hit-or-miss. We could
make a decision to back-patch all bug fixes or those of a certain
severity or whatever we like back to older branches, and then those
branches would be supported or semi-supported depending on what rule
we adopted, and we could even continue to do releases for them if we
so chose. However, it wouldn't be a great idea to back-patch a
completely arbitrary subset of our fixes into those branches, because
then it sort of gets confusing to understand what the status of that
branch is. I don't know that I'm terribly bothered by the idea that
the behavior of the branch might deviate from the last official
release, because most bug fixes are pretty minor and wouldn't really
affect testing much, but it would be a little annoying to explain to
users that those branches contain an arbitrary subset of newer fixes,
and a little hard for us to understand what is and is not there.

That being said, suppose that a new compiler version comes out and on
that new compiler version, 'make check' crashes on the older branch
due to a missing WhateverGetDatum() call that we rectified in a later,
back-patched commit. I would consider it reasonable to back-patch that
particular bug fix into an unsupported branch to make it testable,
just like we would do for a failure-to-build issue.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> I guess the point about user-visible bug fixes is that, as soon as we
> start doing that, we don't really want it to be hit-or-miss. We could
> make a decision to back-patch all bug fixes or those of a certain
> severity or whatever we like back to older branches, and then those
> branches would be supported or semi-supported depending on what rule
> we adopted, and we could even continue to do releases for them if we
> so chose. However, it wouldn't be a great idea to back-patch a
> completely arbitrary subset of our fixes into those branches, because
> then it sort of gets confusing to understand what the status of that
> branch is.

Yup, and also confusing to understand whether a given new fix should
be back-patched into the out-of-support-but-keep-buildable branches.
I want to settle on a reasonably well-defined policy for that.

I'm basically suggesting that the policy should be "back-patch the
minimal fix needed so that you can still get a clean build and clean
check-world run, using thus-and-such configure options".  (The point
of the configure options limitation being to exclude moving-target
external dependencies, such as Python.)  I think that Peter's
original suggestion could be read the same way except for the
adjective "clean".  He also said that only core regression needs
to pass not check-world; but if we're trying to test things like
pg_dump compatibility, I think we want the wider scope of what to
keep working.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Mark Dilger
Date:

> On Dec 7, 2021, at 8:33 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> However, it wouldn't be a great idea to back-patch a
> completely arbitrary subset of our fixes into those branches, because
> then it sort of gets confusing to understand what the status of that
> branch is. I don't know that I'm terribly bothered by the idea that
> the behavior of the branch might deviate from the last official
> release, because most bug fixes are pretty minor and wouldn't really
> affect testing much, but it would be a little annoying to explain to
> users that those branches contain an arbitrary subset of newer fixes,
> and a little hard for us to understand what is and is not there.

Wouldn't you be able to see what changed by comparing the last released tag for version X.Y against the RELX_Y_STABLE
branch? Something like `git diff REL8_4_22 origin/REL8_4_STABLE > buildability.patch`? 

Having such a patch should make reproducing old corruption bugs easier, as you could apply the buildability.patch to
the last branch that contained the bug.  If anybody did that work, would we want it committed somewhere?
REL8_4_19_BUILDABLE or such?  For patches that apply trivially, that might not be worth keeping, but if the merge is
difficult, maybe sharing with the community would make sense.
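
Spelled out, the workflow I have in mind is roughly this (the tag names
are only examples):

$ git diff REL8_4_22 origin/REL8_4_STABLE > buildability.patch
$ git checkout REL8_4_19
$ git apply buildability.patch

with the apply step being exactly where any nontrivial merge work would
surface.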

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company






Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Mark Dilger <mark.dilger@enterprisedb.com> writes:
> Wouldn't you be able to see what changed by comparing the last released tag for version X.Y against the RELX_Y_STABLE
> branch? Something like `git diff REL8_4_22 origin/REL8_4_STABLE > buildability.patch`?

> Having such a patch should make reproducing old corruption bugs easier, as you could apply the buildability.patch to
> the last branch that contained the bug.  If anybody did that work, would we want it committed somewhere?
> REL8_4_19_BUILDABLE or such?  For patches that apply trivially, that might not be worth keeping, but if the merge is
> difficult, maybe sharing with the community would make sense.

I'm not entirely following ... are you suggesting that each released minor
version needs to be kept buildable separately?  That seems like a huge
amount of extra committer effort with not much added value.  If someone
comes to me and wants to investigate a bug in a branch that's already
out-of-support, and they then say they're not running the last minor
release, I'm going to tell them to come back after updating.

It is (I suspect) true that diffing the last release against branch
tip would often yield a patch that could be used to make an older
minor release buildable again.  But when that patch doesn't work
trivially, I for one am not interested in making it work.  And
especially not interested in doing so "on spec", with no certainty
that anyone would ever need it.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Mark Dilger
Date:

> On Dec 7, 2021, at 10:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> I'm not entirely following ... are you suggesting that each released minor
> version needs to be kept buildable separately?

No.  I'm just wondering if we want to share the product of such efforts if anybody (me, for instance) volunteers to do
it for some subset of minor releases.  For my heap corruption checking work, I might want to be able to build a small
number of old minor releases that I know had corruption bugs.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company






Re: pg_dump versus ancient server versions

From
Andrew Dunstan
Date:
On 12/7/21 13:59, Mark Dilger wrote:
>> On Dec 7, 2021, at 10:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>
>> I'm not entirely following ... are you suggesting that each released minor
>> version needs to be kept buildable separately?
> No.  I'm just wondering if we want to share the product of such efforts if anybody (me, for instance) volunteers to
> do it for some subset of minor releases.  For my heap corruption checking work, I might want to be able to build a
> small number of old minor releases that I know had corruption bugs.
>

I doubt there's going to be a whole lot of changes. You should just be
able to cherry-pick them in most cases I suspect.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From
Peter Eisentraut
Date:
On 06.12.21 22:19, Tom Lane wrote:
> A minimal amount of maintenance would be "only back-patch fixes
> for issues that cause failure-to-build".  The next step up is "fix
> issues that cause failure-to-pass-regression-tests", and then above
> that is "fix developer-facing annoyances, such as compiler warnings
> or unwanted test output, as long as you aren't changing user-facing
> behavior".  I now think that it'd be reasonable to include this
> last group, although I'm pretty sure Peter didn't have that in mind
> in his policy sketch.

I would be okay with that.



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
[ mostly for the archives' sake ]

I wrote:
> I ran a new set of experiments concerning building back branches
> on modern platforms, this time trying Fedora 35 (gcc 11.2.1)
> on x86_64.  I widened the scope of the testing a bit by adding
> "--enable-nls --with-perl" and running check-world not just the
> core tests.  Salient results:

> * 9.1 fails with "conflicting types for 'base_yylex'", much as
> I saw on macOS except it's a hard error on this compiler.

I poked a little harder at what might be needed to get 9.1 compiled
on modern platforms.  It looks like the base_yylex issue is down
to newer versions of flex doing things differently.  We fixed
that in the v10 era via 72b1e3a21 (Build backend/parser/scan.l and
interfaces/ecpg/preproc/pgc.l standalone) and 92fb64983 (Use "%option
prefix" to set API names in ecpg's lexer), which were later back-patched
as far down as 9.2.  It might not be out of the question to back-patch
those further, but the 9.2 patches don't apply cleanly to 9.1, so some
effort would be needed.

Worrisomely, I also noted warnings like

parse_coerce.c:791:67: warning: array subscript 1 is above array bounds of 'Oid[1]' {aka 'unsigned int[1]'} [-Warray-bounds]
  791 |                 Assert(nargs < 2 || procstruct->proargtypes.values[1] == INT4OID);
      |                                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~

which remind me that 9.1 lacks 8137f2c32 (Hide most variable-length fields
from Form_pg_* structs).  We did stick -fno-aggressive-loop-optimizations
into 9.1 and older branches back in 2015, but I don't have a lot of
confidence that that'd be sufficient to prevent misoptimizations in
current-vintage compilers.  Back-patching 8137f2c32 and all the follow-on
work is very clearly not something to consider, so dialing down the -O
level might be necessary if you were interested in making this go.
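
Concretely, that might mean configuring 9.1 with something like:

$ ./configure CFLAGS="-O1 -fno-aggressive-loop-optimizations"

That's an untested sketch; finding the highest -O level that
current-vintage compilers can be trusted with on that code would take
some experimentation.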

In short then, there is a really large gap between 9.1 and 9.2 in terms
of how hard they are to build on current toolchains.  It's kind of
fortunate that Peter proposed 9.2 rather than some earlier cutoff.
In any case, I've completely lost interest in trying to move the
keep-it-buildable cutoff to any earlier than 9.2; it doesn't look
like the effort-to-benefit ratio would be attractive at all.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Andrew Dunstan
Date:
On 12/9/21 12:50, Tom Lane wrote:
>
> In short then, there is a really large gap between 9.1 and 9.2 in terms
> of how hard they are to build on current toolchains.  It's kind of
> fortunate that Peter proposed 9.2 rather than some earlier cutoff.
> In any case, I've completely lost interest in trying to move the
> keep-it-buildable cutoff to any earlier than 9.2; it doesn't look
> like the effort-to-benefit ratio would be attractive at all.


9.2 is how far back crake goes in testing pg_upgrade from old versions,
so that could well be a convenient stopping point. For older versions
there is still the possibility of building on older toolchains and
running on modern ones. Yes it's more cumbersome, but it does mean we
can test an awful long way back. I don't remember the last time I saw a
really old version in the wild, but I'm sure there are some out there
sitting in a cupboard humming along.

This might also be a good time to revive work on making the TAP test
framework backwards compatible via subclassing.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Andrew Dunstan <andrew@dunslane.net> writes:
> 9.2 is how far back crake goes in testing pg_upgrade from old versions,
> so that could well be a convenient stopping point. For older versions
> there is still the possibility of building on older toolchains and
> running on modern ones. Yes it's more cumbersome, but it does mean we
> can test an awful long way back.

Right.  I think the point of the current discussion is to ensure that,
if we expect new patches for pg_dump or psql to work against version-N
servers, that it's not too unpleasant for patch submitters to build
and test against version N.  There's a different discussion to be had
about what we do if we receive a bug report about compatibility with
some more-ancient-than-that version.  But that is, I hope, a far less
common scenario; so it's okay if it requires extra effort, and/or use
of setups that not everyone has handy.
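
To make "not too unpleasant" concrete, the expectation is roughly that a
patch submitter can do something like the following on a current
toolchain, without fighting build failures first (the paths, port, and
install layout here are illustrative only):

$ git checkout REL9_2_STABLE
$ ./configure --prefix=$HOME/pg92
$ make -s install
$ ~/pg92/bin/initdb -D ~/pg92/data
$ ~/pg92/bin/pg_ctl -D ~/pg92/data -l /tmp/pg92.log start
$ make installcheck              # leaves a regression database behind
$ /path/to/new/pg_dump -s regression >/dev/null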

Anyway, it seems like there's some consensus that 9.2 is a good
stopping place for today.  I'll push forward with
(1) back-patching as necessary to make 9.2 and up build cleanly
on the platforms I have handy;
(2) ripping out pg_dump's support for pre-9.2 servers;
(3) ripping out psql's support for pre-9.2 servers.

In a preliminary look, it did not seem that (3) would save very
much code, but it seems like we ought to do it if we're being
consistent.

A point we've not discussed is whether to drop any bits of libpq
that are only needed for such old servers.  I feel a bit more
uncomfortable about that, mainly because I'm pretty sure that
only a few lines of code would be involved, and it seems to have
more of an air of burning-the-bridges finality about it than (say)
dropping psql/describe.c support.  On the other hand, the point
about what's required to test future patches still applies.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
I wrote:
> Anyway, it seems like there's some consensus that 9.2 is a good
> stopping place for today.  I'll push forward with
> (1) back-patching as necessary to make 9.2 and up build cleanly
> on the platforms I have handy;

I've done as much as I plan to do in that direction.  As of the
respective branch tips, I see clean builds and check-world
results with minimal configure options in all branches back to 9.2
on Fedora 35 (gcc 11.2.1) and macOS Monterey (Apple clang 13.0.0).

A few notes for the archives' sake:

* As discussed, parallel check-world is unreliable before v10;
perhaps this is worth improving, but I doubt it.  I did find
that aggressive parallelism in the build process is fine.

* On some compilers, pre-v10 branches produce this warning:

scan.c: In function 'yy_try_NUL_trans':
scan.c:10189:23: warning: unused variable 'yyg' [-Wunused-variable]
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner; /* This var may be unused depending upon options. */

In principle we could back-patch 65d508fd4 to silence that,
but I think that fix is more invasive than what we want to
do in these branches.  We lived with that warning for years
before figuring out how to get rid of it, so I think we can
continue to accept it in these branches.

* 9.2's plperl fails to compile on macOS:

./plperl.h:53:10: fatal error: 'EXTERN.h' file not found
#include "EXTERN.h"

This is evidently because 9.2 predates the "sysroot" hacking
we did later (5e2217131 and many many subsequent tweaks).
I judge this not worth the trouble to fix, because the argument
for supporting --with-perl in these branches is basically that
we need a PL with transforms to test pg_dump ... but transforms
didn't come in until 9.5.  (Reviewing the commit log, I suppose
that 9.3 and 9.4 would also fail to build in some macOS
configurations, but by the same argument I see no need to work
further on those branches either.  9.5 does have all the
sysroot changes I can find in the log.)
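
(For anyone who does want to chase the EXTERN.h failure: the header
lives in Perl's architecture-specific library tree, which you can
locate with

$ perl -MConfig -e 'print "$Config{archlibexp}/CORE\n"'

and the sysroot work is what lets configure translate that path into
flags the SDK-based macOS toolchain accepts.  That's just a pointer;
I've not re-verified it on 9.2.)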

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Robert Haas
Date:
On Mon, Dec 13, 2021 at 12:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I've done as much as I plan to do in that direction.  As of the
> respective branch tips, I see clean builds and check-world
> results with minimal configure options in all branches back to 9.2
> on Fedora 35 (gcc 11.2.1) and macOS Monterey (Apple clang 13.0.0).

I think this is great. Thanks for being willing to work on it.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
I wrote:
> Anyway, it seems like there's some consensus that 9.2 is a good
> stopping place for today.  I'll push forward with
> (1) back-patching as necessary to make 9.2 and up build cleanly
> on the platforms I have handy;
> (2) ripping out pg_dump's support for pre-9.2 servers;
> (3) ripping out psql's support for pre-9.2 servers.

I've completed the pg_dump/pg_dumpall part of that, but while
updating the docs I started to wonder whether we shouldn't nuke
pg_dump's --no-synchronized-snapshots option.  As far as I can
make out, the remaining use case for that is to let you perform an
unsafe parallel dump from a standby server of an out-of-support
major version.  I'm not very clear why we allowed that at all,
ever, rather than saying you can't parallelize in such cases.
But for sure that remaining use case is paper thin, and leaving
the option available seems way more likely to let people shoot
themselves in the foot than to let them do anything helpful.

Barring objections, I'll remove that option in a day or two.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Justin Pryzby <pryzby@telsasoft.com> writes:
> On Tue, Dec 14, 2021 at 05:18:44PM -0500, Tom Lane wrote:
>> I've completed the pg_dump/pg_dumpall part of that, but while

> Is it possible to clean up pg_upgrade, too ?

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e469f0aaf3c586c8390bd65923f97d4b1683cd9f

I'm still working on psql.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Tom Lane
Date:
Justin Pryzby <pryzby@telsasoft.com> writes:
> I think you missed a few parts though ?

Um.  I think those are leftover from when I was intending the
cutoff to be 9.0 not 9.2.  I'll take a fresh look tomorrow.

            regards, tom lane



Re: pg_dump versus ancient server versions

From
Greg Stark
Date:
On Fri, 22 Oct 2021 at 19:27, Robert Haas <robertmhaas@gmail.com> wrote:
>
> Another thing to think about in that regard: how likely is that
> PostgreSQL 7.4 and PostgreSQL 15 both compile and run on the same
> operating system? I suspect the answer is "not very." I seem to recall
> Greg Stark trying to compile really old versions of PostgreSQL for a
> conference talk some years ago, and he got back to a point where it
> just became impossible to make work on modern toolchains even with a
> decent amount of hackery.

That was when I compared sorting performance over time. I was able to
get Postgres to build back to the point where 64-bit architecture
support was added. From Andrew Dunstan's comment later in this thread
I'm guessing that was 7.2.

It looks like the earliest date on the graphs in the talk is
2002-11-27, which matches the 7.3 release date. I think building
earlier versions would have been doable if I had built them in 32-bit
mode.

-- 
greg