Discussion: doc review for v14

doc review for v14

From
Justin Pryzby
Date:

Re: doc review for v14

From
Michael Paquier
Date:
On Mon, Dec 21, 2020 at 10:11:53PM -0600, Justin Pryzby wrote:
> As I did last 2 years, I reviewed docs for v14...

Thanks for gathering all that!

> This year I've started early, since it takes more than a little effort and it's
> not much fun to argue the change in each individual hunk.

0001-pgindent-typos.not-a-patch touches pg_bsd_indent.

>      /*
> -     * XmlTable returns table - set of composite values. The error context, is
> -     * used for producement more values, between two calls, there can be
> -     * created and used another libxml2 error context. It is libxml2 global
> -     * value, so it should be refreshed any time before any libxml2 usage,
> -     * that is finished by returning some value.
> +     * XmlTable returns a table-set of composite values. The error context is
> +     * used for providing more detail. Between two calls, other libxml2
> +     * error contexts might have been created and used ; since they're libxml2
> +     * global values, they should be refreshed each time before any libxml2 usage
> +     * that finishes by returning some value.
>       */

That's indeed incorrect, but I am not completely sure if what you have
here is correct either.  I'll try to study this code a bit more first,
though I have said that once in the past.  :p

> --- a/src/bin/pg_dump/pg_restore.c
> +++ b/src/bin/pg_dump/pg_restore.c
> @@ -305,7 +305,7 @@ main(int argc, char **argv)
>      /* Complain if neither -f nor -d was specified (except if dumping TOC) */
>      if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
>      {
> -        pg_log_error("one of -d/--dbname and -f/--file must be specified");
> +        pg_log_error("one of -d/--dbname, -f/--file or -l/--list must be specified");
>          exit_nicely(1);
>      }

You have forgotten to update the TAP test pg_dump/t/001_basic.pl.
The message does not seem completely incorrect to me either.  Hmm.
Restricting the set of options further is something to consider, though
it could be annoying.  I have discarded this one for now.

>          Specifies the amount of memory that should be allocated at server
> -        startup time for use by parallel queries.  When this memory region is
> +        startup for use by parallel queries.  When this memory region is
>          insufficient or exhausted by concurrent queries, new parallel queries
>          try to allocate extra shared memory temporarily from the operating
>          system using the method configured with
>          <varname>dynamic_shared_memory_type</varname>, which may be slower due
>          to memory management overheads.  Memory that is allocated at startup
> -        time with <varname>min_dynamic_shared_memory</varname> is affected by
> +        with <varname>min_dynamic_shared_memory</varname> is affected by
>          the <varname>huge_pages</varname> setting on operating systems where
>          that is supported, and may be more likely to benefit from larger pages
>          on operating systems where that is managed automatically.

The current formulation is not that confusing, but I agree that this
is an improvement.  Thomas, you are behind this one.  What do you
think?

I have applied most of it on HEAD, except 0011 and the things noted
above.  Thanks again.
--
Michael

Attachments

Re: doc review for v14

From
Justin Pryzby
Date:
On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:
> I have applied most of it on HEAD, except 0011 and the things noted
> above.  Thanks again.

Thank you.

I see that I accidentally included ZSTD_COMPRESSION in pg_backup_archiver.h
while cherry-picking from the branch where I first fixed this.  Sorry :(

> 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.

I'm hoping that someone will apply it there, but I realize that access to its
repository is tightly controlled :)

On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:
> Restraining more the set of options is something to consider, though
> it could be annoying.  I have discarded this one for now.

Even though its -d is unused, I guess rejecting it wouldn't serve any
significant purpose, so we shouldn't make pg_restore -l -d fail for no reason.

I think a couple of these should be backpatched.
doc/src/sgml/ref/pg_dump.sgml
doc/src/sgml/sources.sgml
doc/src/sgml/cube.sgml?
doc/src/sgml/func.sgml?

-- 
Justin

Attachments

Re: doc review for v14

From
Magnus Hagander
Date:


On Sun, Dec 27, 2020 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
On Thu, Dec 24, 2020 at 05:12:02PM +0900, Michael Paquier wrote:
> 0001-pgindent-typos.not-a-patch touches pg_bsd_indent.

I'm hoping that someone will apply it there, but I realize that access to its
repository is tightly controlled :)

Not as much "tightly controlled" as "nobody's really bothered to grant any permissions".

I've applied the patch, thanks! While at it, I fixed the indentation of the "target" row in the patch; I think you didn't take the fix all the way :)

You may also want to submit those fixes upstream to FreeBSD? The typos seem to be present at https://github.com/freebsd/freebsd/tree/master/usr.bin/indent as well. (If so, please include the updated version that I applied, so we don't diverge on that.)

--

Re: doc review for v14

From
Michael Paquier
Date:
On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:
> Not as much "tightly controlled" as "nobody's really bothered to grant any
> permissions".

Magnus, do I have access to that?  This is the second time I have run
into this issue, but I don't really know if I should act on it or not :)
--
Michael

Attachments

Re: doc review for v14

From
Thomas Munro
Date:
On Thu, Dec 24, 2020 at 9:12 PM Michael Paquier <michael@paquier.xyz> wrote:
> On Mon, Dec 21, 2020 at 10:11:53PM -0600, Justin Pryzby wrote:
> >          Specifies the amount of memory that should be allocated at server
> > -        startup time for use by parallel queries.  When this memory region is
> > +        startup for use by parallel queries.  When this memory region is
> >          insufficient or exhausted by concurrent queries, new parallel queries
> >          try to allocate extra shared memory temporarily from the operating
> >          system using the method configured with
> >          <varname>dynamic_shared_memory_type</varname>, which may be slower due
> >          to memory management overheads.  Memory that is allocated at startup
> > -        time with <varname>min_dynamic_shared_memory</varname> is affected by
> > +        with <varname>min_dynamic_shared_memory</varname> is affected by
> >          the <varname>huge_pages</varname> setting on operating systems where
> >          that is supported, and may be more likely to benefit from larger pages
> >          on operating systems where that is managed automatically.
>
> The current formulation is not that confusing, but I agree that this
> is an improvement.  Thomas, you are behind this one.  What do you
> think?

LGTM.



Re: doc review for v14

From
Michael Paquier
Date:
On Tue, Dec 29, 2020 at 01:59:58PM +1300, Thomas Munro wrote:
> LGTM.

Thanks, I have done this one then.
--
Michael

Attachments

Re: doc review for v14

From
Michael Paquier
Date:
On Sun, Dec 27, 2020 at 02:26:05PM -0600, Justin Pryzby wrote:
> I think a couple of these should be backpatched.
> doc/src/sgml/ref/pg_dump.sgml

This part can go down to 9.5.

> doc/src/sgml/sources.sgml

Yes, I have made an extra effort on those fixes where needed.  On top
of that, I have included catalogs.sgml, pgstatstatements.sgml,
explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.

> doc/src/sgml/cube.sgml?
> doc/src/sgml/func.sgml?

These two are just some beautification of the function format, so I
have left them out.
--
Michael

Attachments

Re: doc review for v14

From
Magnus Hagander
Date:


On Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <michael@paquier.xyz> wrote:
On Mon, Dec 28, 2020 at 11:42:03AM +0100, Magnus Hagander wrote:
> Not as much "tightly controlled" as "nobody's really bothered to grant any
> permissions".

Magnus, do I have access to that?  This is the second time I have run
into this issue, but I don't really know if I should act on it or not :)

No, at this point it's just Tom (who has all the commits) and me (who set it up, and now has one commit). It's all manually handled.

--

Re: doc review for v14

From
Tom Lane
Date:
Magnus Hagander <magnus@hagander.net> writes:
> On Tue, Dec 29, 2020 at 1:37 AM Michael Paquier <michael@paquier.xyz> wrote:
>> Magnus, do I have access to that?  This is the second time I have run
>> into this issue, but I don't really know if I should act on it or not :)

> No, at this point it's just Tom (who has all the commits) and me (who set
> it up, and now has one commit). It's all manually handled.

FTR, I have no objection to Michael (or any other PG committer) having
write access to that repo.  I think so far it's a matter of nobody's
bothered because there's so little need.

            regards, tom lane



Re: doc review for v14

From
Michael Paquier
Date:
On Tue, Dec 29, 2020 at 06:22:43PM +0900, Michael Paquier wrote:
> Yes, I have made an extra effort on those fixes where needed.  On top
> of that, I have included catalogs.sgml, pgstatstatements.sgml,
> explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.

Justin, I got to look at the libxml2 part, and finished by rewording
the comment block as follows:
+    * XmlTable returns a table-set of composite values.  This error context
+    * is used for providing more details, and needs to be reset between two
+    * internal calls of libxml2 as different error contexts might have been
+    * created or used.

What do you think?
--
Michael

Attachments

Re: doc review for v14

From
Justin Pryzby
Date:
On Sun, Jan 03, 2021 at 03:10:54PM +0900, Michael Paquier wrote:
> On Tue, Dec 29, 2020 at 06:22:43PM +0900, Michael Paquier wrote:
> > Yes, I have made an extra effort on those fixes where needed.  On top
> > of that, I have included catalogs.sgml, pgstatstatements.sgml,
> > explain.sgml, pg_verifybackup.sgml and wal.sgml in 13.
> 
> Justin, I got to look at the libxml2 part, and finished by rewording
> the comment block as follows:
> +    * XmlTable returns a table-set of composite values.  This error context
> +    * is used for providing more details, and needs to be reset between two
> +    * internal calls of libxml2 as different error contexts might have been
> +    * created or used.

I don't like "this error context", since "this" seems to be referring to the
"tableset of composite values" as an err context.

I guess you mean: "needs to be reset between each internal call to libxml2.."

So I'd suggest:

> +    * XmlTable returns a table-set of composite values.  The error context
> +    * is used for providing additional detail. It needs to be reset between each
> +    * call to libxml2, since different error contexts might have been
> +    * created or used since it was last set.


But actually, maybe we should just use the comment that exists everywhere else
for that.

        /* Propagate context related error context to libxml2 */
        xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);

Maybe we should elaborate and say:
    /*
     * Propagate context related error context to libxml2 (needs to be
     * reset before each call, in case other error contexts have been
     * assigned since it was first set)
     */
        xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);
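
For illustration, here is a minimal standalone sketch (not PostgreSQL code: the
context struct, handler and input are invented, and the exact structured-handler
signature varies a bit across libxml2 versions) of why that reset matters:
xmlSetStructuredErrorFunc() installs a process-global handler, so whichever
caller registered it last receives the error reports.

#include <stdio.h>
#include <libxml/parser.h>
#include <libxml/xmlerror.h>

/* Hypothetical per-caller error context, standing in for xtCxt->xmlerrcxt. */
typedef struct MyErrorContext
{
    const char *owner;
} MyErrorContext;

/* Structured handler: libxml2 hands back whatever userData was registered last. */
static void
my_error_handler(void *userData, xmlErrorPtr error)
{
    MyErrorContext *ecxt = (MyErrorContext *) userData;

    fprintf(stderr, "[%s] libxml2 error: %s", ecxt->owner, error->message);
}

int
main(void)
{
    MyErrorContext a = {"caller A"};
    MyErrorContext b = {"caller B"};
    xmlDocPtr   doc;

    /* Caller A installs its context, then caller B overwrites the global handler. */
    xmlSetStructuredErrorFunc(&a, my_error_handler);
    xmlSetStructuredErrorFunc(&b, my_error_handler);

    /*
     * Without refreshing the handler here, caller A's parse errors would be
     * attributed to caller B's context -- hence the reset before each batch
     * of libxml2 calls, as the comment above says.
     */
    xmlSetStructuredErrorFunc(&a, my_error_handler);
    doc = xmlReadMemory("<broken", 7, "noname.xml", NULL, 0);
    if (doc)
        xmlFreeDoc(doc);

    xmlCleanupParser();
    return 0;
}

(Something like "cc sketch.c $(xml2-config --cflags --libs)" should build it.)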

-- 
Justin



Re: doc review for v14

From
Michael Paquier
Date:
On Sun, Jan 03, 2021 at 12:33:54AM -0600, Justin Pryzby wrote:
>
> But actually, maybe we should just use the comment that exists everywhere else
> for that.
>
>         /* Propagate context related error context to libxml2 */
>         xmlSetStructuredErrorFunc((void *) xtCxt->xmlerrcxt, xml_errorHandler);

I quite like your suggestion to keep this as simple as possible here,
and the upstream docs also give a lot of context:
http://xmlsoft.org/html/libxml-xmlerror.html#xmlSetStructuredErrorFunc

So let's use this version and call it a day for this part.
--
Michael

Attachments

Re: doc review for v14

From
Michael Paquier
Date:
On Sun, Jan 03, 2021 at 09:05:09PM +0900, Michael Paquier wrote:
> So let's use this version and call it a day for this part.

This has been done as of b49154b.
--
Michael

Attachments

Re: doc review for v14

From
Masahiko Sawada
Date:
On Wed, Jan 6, 2021 at 10:37 AM Michael Paquier <michael@paquier.xyz> wrote:
>
> On Sun, Jan 03, 2021 at 09:05:09PM +0900, Michael Paquier wrote:
> > So let's use this version and call it a day for this part.
>
> This has been done as of b49154b.

It seems to me that all the work has been done. Can we mark this patch
entry as "Committed"? Or is it waiting on something from the author?

Regards,

-- 
Masahiko Sawada
EDB:  https://www.enterprisedb.com/



Re: doc review for v14

From
Michael Paquier
Date:
On Fri, Jan 22, 2021 at 09:53:13PM +0900, Masahiko Sawada wrote:
> It seems to me that all the work has been done. Can we mark this patch
> entry as "Committed"? Or is it waiting on something from the author?

Patch 0005, posted at [1] and related to the docs of replication slots,
still needs a look.

[1]: https://www.postgresql.org/message-id/20201227202604.GC26311@telsasoft.com
--
Michael

Attachments

Re: doc review for v14

From
Michael Paquier
Date:
Hi Justin,

On Sun, Dec 27, 2020 at 02:26:05PM -0600, Justin Pryzby wrote:
> Thank you.

I have been looking at 0005, the patch dealing with the docs of the
replication stats, and have some comments.

        <para>
         Number of times transactions were spilled to disk while decoding changes
-        from WAL for this slot. Transactions may get spilled repeatedly, and
-        this counter gets incremented on every such invocation.
+        from WAL for this slot. A given transaction may be spilled multiple times, and
+        this counter is incremented each time.
       </para></entry>
The original can be a bit hard to read, and I don't think that the new
formulation is an improvement.  I actually find it confusing that the
same sentence mixes the fact that a transaction can be spilled multiple
times with the fact that this counter is incremented each time.  What
about splitting that into two sentences?  Here is an idea:
"This counter is incremented each time a transaction is spilled.  The
same transaction may be spilled multiple times."

-        Number of transactions spilled to disk after the memory used by
-        logical decoding of changes from WAL for this slot exceeds
+        Number of transactions spilled to disk because the memory used by
+        logical decoding of changes from WAL for this slot exceeded
What does "logical decoding of changes from WAL" mean?  Here is an
idea to clarify all that:
"Number of transactions spilled to disk once the memory used by
logical decoding to decode changes from WAL has exceeded
logical_decoding_work_mem."

         Number of in-progress transactions streamed to the decoding output plugin
-        after the memory used by logical decoding of changes from WAL for this
-        slot exceeds <literal>logical_decoding_work_mem</literal>. Streaming only
+        because the memory used by logical decoding of changes from WAL for this
+        slot exceeded <literal>logical_decoding_work_mem</literal>. Streaming only
         works with toplevel transactions (subtransactions can't be streamed
-        independently), so the counter does not get incremented for subtransactions
+        independently), so the counter is not incremented for subtransactions.
 
I have the same issue here with "by logical decoding of changes from
WAL".  I'd say "after the memory used by logical decoding to decode
changes from WAL for this slot has exceeded logical_decoding_work_mem".

         output plugin while decoding changes from WAL for this slot. Transactions
-        may get streamed repeatedly, and this counter gets incremented on every
-        such invocation.
+        may be streamed multiple times, and this counter is incremented each time.
I would split this stuff into two sentences:
"This counter is incremented each time a transaction is streamed.  The
same transaction may be streamed multiple times."

          Resets statistics to zero for a single replication slot, or for all
-         replication slots in the cluster.  The argument can be either the name
-         of the slot to reset the stats or NULL.  If the argument is NULL, all
-         counters shown in the <structname>pg_stat_replication_slots</structname>
-         view for all replication slots are reset.
+         replication slots in the cluster.  The argument can be either NULL or the name
+         of a slot for which stats are to be reset.  If the argument is NULL, all
+         counters in the <structname>pg_stat_replication_slots</structname>
+         view are reset for all replication slots.
Here also, I find it rather confusing that this paragraph says multiple
times that NULL resets the stats for all the replication slots.  NULL
should use a <literal> markup, and it is cleaner to use "statistics"
rather than "stats" IMO.  So I guess we could simplify things as
follows:
"Resets statistics of the replication slot defined by the argument. If
the argument is NULL, resets statistics for all the replication
slots."
--
Michael

Attachments

Re: doc review for v14

From
Michael Paquier
Date:
On Sat, Jan 23, 2021 at 07:15:40PM +0900, Michael Paquier wrote:
> I have been looking at 0005, the patch dealing with the docs of the
> replication stats, and have some comments.

And attached is a patch to clarify all that.  I am letting that sleep
for a couple of days for now, so please let me know if you have any
comments.
--
Michael

Attachments

Re: doc review for v14

From
Michael Paquier
Date:
On Wed, Jan 27, 2021 at 02:52:14PM +0900, Michael Paquier wrote:
> And attached is a patch to clarify all that.  I am letting that sleep
> for a couple of days for now, so please let me know if you have any
> comments.

I have spent some time on that, and applied this stuff as of 2a5862f
after some extra tweaks.  As there is nothing left, this CF entry is
now closed.
--
Michael

Attachments

Re: doc review for v14

From
Justin Pryzby
Date:
Another round of doc fixen.

wdiff to follow

commit 389c4ac2febe21fd48480a86819d94fd2eb9c1cc
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Wed Feb 10 17:19:51 2021 -0600

    doc review for pg_stat_progress_create_index
    
    ab0dfc961b6a821f23d9c40c723d11380ce195a6
    
    should backpatch to v13

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index c602ee4427..16eb1d9e9c 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -5725,7 +5725,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
      </para>
      <para>
       When creating an index on a partitioned table, this column is set to
       the number of partitions on which the index has been [-completed.-]{+created.+}
      </para></entry>
     </row>
    </tbody>

commit bff6f0b557ff79365fc21d0ae261bad0fcb96539
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Feb 6 15:17:51 2021 -0600

    *an old and "deleted [has] happened"
    
    Heikki missed this in 6b387179baab8d0e5da6570678eefbe61f3acc79

diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index 3763b4b995..a51f2c9920 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -6928,8 +6928,8 @@ Delete
</term>
<listitem>
<para>
                Identifies the following TupleData message as [-a-]{+an+} old tuple.
                This field is present if the table in which the delete[-has-]
                happened has REPLICA IDENTITY set to FULL.
</para>
</listitem>

commit 9bd601fa82ceeaf09573ce31eb3c081b4ae7a45d
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Jan 23 21:03:37 2021 -0600

    doc review for logical decoding of prepared xacts
    
    0aa8a01d04c8fe200b7a106878eebc3d0af9105c

diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index b854f2ccfc..71e9f36b8e 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -791,9 +791,9 @@ typedef void (*LogicalDecodeMessageCB) (struct LogicalDecodingContext *ctx,
     <para>
       The optional <function>filter_prepare_cb</function> callback
       is called to determine whether data that is part of the current
       two-phase commit transaction should be considered for [-decode-]{+decoding+}
       at this prepare stage or {+later+} as a regular one-phase transaction at
       <command>COMMIT PREPARED</command> [-time later.-]{+time.+} To signal that
       decoding should be skipped, return <literal>true</literal>;
       <literal>false</literal> otherwise. When the callback is not
       defined, <literal>false</literal> is assumed (i.e. nothing is
@@ -820,11 +820,11 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx
      The required <function>begin_prepare_cb</function> callback is called
      whenever the start of a prepared transaction has been decoded. The
      <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback to
      check if the plugin has already received this [-prepare-]{+PREPARE+} in which case it
      can skip the remaining changes of the transaction. This can only happen
      if the user restarts the decoding after receiving the [-prepare-]{+PREPARE+} for a
      transaction but before receiving the [-commit prepared-]{+COMMIT PREPARED,+} say because of some
      error.
      <programlisting>
       typedef void (*LogicalDecodeBeginPrepareCB) (struct LogicalDecodingContext *ctx,
@@ -842,7 +842,7 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx
      decoded. The <function>change_cb</function> callback for all modified
      rows will have been called before this, if there have been any modified
      rows. The <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback.
      <programlisting>
       typedef void (*LogicalDecodePrepareCB) (struct LogicalDecodingContext *ctx,
                                               ReorderBufferTXN *txn,
@@ -856,9 +856,9 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx

     <para>
      The required <function>commit_prepared_cb</function> callback is called
      whenever a transaction [-commit prepared-]{+COMMIT PREPARED+} has been decoded. The
      <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback.
      <programlisting>
       typedef void (*LogicalDecodeCommitPreparedCB) (struct LogicalDecodingContext *ctx,
                                                      ReorderBufferTXN *txn,
@@ -872,15 +872,15 @@ typedef bool (*LogicalDecodeFilterPrepareCB) (struct LogicalDecodingContext *ctx

     <para>
      The required <function>rollback_prepared_cb</function> callback is called
      whenever a transaction [-rollback prepared-]{+ROLLBACK PREPARED+} has been decoded. The
      <parameter>gid</parameter> field, which is part of the
      <parameter>txn</parameter> [-parameter-]{+parameter,+} can be used in this callback. The
      parameters <parameter>prepare_end_lsn</parameter> and
      <parameter>prepare_time</parameter> can be used to check if the plugin
      has received this [-prepare transaction-]{+PREPARE TRANSACTION+} in which case it can apply the
      rollback, otherwise, it can skip the rollback operation. The
      <parameter>gid</parameter> alone is not sufficient because the downstream
      node can have {+a+} prepared transaction with same identifier.
      <programlisting>
       typedef void (*LogicalDecodeRollbackPreparedCB) (struct LogicalDecodingContext *ctx,
                                                        ReorderBufferTXN *txn,
@@ -1122,7 +1122,7 @@ OutputPluginWrite(ctx, true);
    the <function>stream_commit_cb</function> callback
    (or possibly aborted using the <function>stream_abort_cb</function> callback).
    If two-phase commits are supported, the transaction can be prepared using the
    <function>stream_prepare_cb</function> callback, [-commit prepared-]{+COMMIT PREPARED+} using the
    <function>commit_prepared_cb</function> callback or aborted using the
    <function>rollback_prepared_cb</function>.
   </para>

commit 7ddf562c7b384b4a802111ac1b0eab3698982c8e
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Jan 23 21:02:47 2021 -0600

    doc review for multiranges
    
    6df7a9698bb036610c1e8c6d375e1be38cb26d5f

diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml
index 6e3d82b85b..ec95b4eb01 100644
--- a/doc/src/sgml/extend.sgml
+++ b/doc/src/sgml/extend.sgml
@@ -448,7 +448,7 @@
     of <type>anycompatible</type> and <type>anycompatiblenonarray</type>
     inputs, the array element types of <type>anycompatiblearray</type>
     inputs, the range subtypes of <type>anycompatiblerange</type> inputs,
     and the multirange subtypes of [-<type>anycompatiablemultirange</type>-]{+<type>anycompatiblemultirange</type>+}
     inputs.  If <type>anycompatiblenonarray</type> is present then the
     common type is required to be a non-array type.  Once a common type is
     identified, arguments in <type>anycompatible</type>

commit 4fa1fd9769c93dbec71fa92097ebfea5f420bb09
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Jan 23 20:33:10 2021 -0600

    doc review: logical decode in prepare
    
    a271a1b50e9bec07e2ef3a05e38e7285113e4ce6

diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index cf705ed9cd..b854f2ccfc 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -1214,7 +1214,7 @@ stream_commit_cb(...);  <-- commit of the streamed transaction
   </para>

   <para>
    When a prepared transaction is [-rollbacked-]{+rolled back+} using the
    <command>ROLLBACK PREPARED</command>, then the
    <function>rollback_prepared_cb</function> callback is invoked and when the
    prepared transaction is committed using <command>COMMIT PREPARED</command>,

commit d27a74968b61354ad1186a4740063dd4ac0b1bea
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Jan 23 17:17:58 2021 -0600

    doc review for FDW bulk inserts
    
    b663a4136331de6c7364226e3dbf7c88bfee7145

diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml
index 854913ae5f..12e00bfc2f 100644
--- a/doc/src/sgml/fdwhandler.sgml
+++ b/doc/src/sgml/fdwhandler.sgml
@@ -672,9 +672,8 @@ GetForeignModifyBatchSize(ResultRelInfo *rinfo);

     Report the maximum number of tuples that a single
     <function>ExecForeignBatchInsert</function> call can handle for
     the specified foreign table.[-That is,-]  The executor passes at most
     the {+given+} number of tuples[-that this function returns-] to <function>ExecForeignBatchInsert</function>.
     <literal>rinfo</literal> is the <structname>ResultRelInfo</structname> struct describing
     the target foreign table.
     The FDW is expected to provide a foreign server and/or foreign
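
As a side illustration of the contract described in this hunk (a hypothetical
sketch: the function name and the constant are invented; only the signature and
the batching behaviour come from the documentation being patched), an FDW's
batch-size callback could look like this:

#include "postgres.h"

#include "nodes/execnodes.h"

/*
 * Hypothetical sketch: report how many tuples a single
 * ExecForeignBatchInsert() call may receive for this foreign table;
 * the executor then passes at most this many tuples per call.
 */
static int
my_fdw_GetForeignModifyBatchSize(ResultRelInfo *rinfo)
{
    /*
     * A real FDW would typically derive this from a foreign-server or
     * foreign-table option (postgres_fdw exposes "batch_size"); a constant
     * keeps the sketch short.  Returning 1 effectively disables batching.
     */
    return 100;
}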

commit 2b8fdcc91562045b6b2cec0e69a724e078cfbdb5
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Wed Feb 3 00:51:25 2021 -0600

    doc review: piecemeal construction of partitioned indexes
    
    5efd604ec0a3bdde98fe19d8cada69ab4ef80db3
    
    backpatch to v11

diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 1e9a4625cc..a8cbd45d35 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -3962,8 +3962,8 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
     As explained above, it is possible to create indexes on partitioned tables
     so that they are applied automatically to the entire hierarchy.
     This is very
     convenient, as not only[-will-] the existing partitions [-become-]{+will be+} indexed, but
     [-also-]{+so will+} any partitions that are created in the [-future will.-]{+future.+}  One limitation is
     that it's not possible to use the <literal>CONCURRENTLY</literal>
     qualifier when creating such a partitioned index.  To avoid long lock
     times, it is possible to use <command>CREATE INDEX ON ONLY</command>

commit 2f6d8a4d0157b632ad1e0ff3b0a54c4d38199637
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Jan 30 18:10:21 2021 -0600

    duplicate words
    
    commit 9c4f5192f69ed16c99e0d079f0b5faebd7bad212
        Allow pg_rewind to use a standby server as the source system.
    
    commit 4a252996d5fda7662b2afdf329a5c95be0fe3b01
        Add tests for tuplesort.c.
    
    commit 0a2bc5d61e713e3fe72438f020eea5fcc90b0f0b
        Move per-agg and per-trans duplicate finding to the planner.
    
    commit 623a9ba79bbdd11c5eccb30b8bd5c446130e521c
        snapshot scalability: cache snapshots using a xact completion counter.
    
    commit 2c03216d831160bedd72d45f712601b6f7d03f1c
        Revamp the WAL record format.

diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index e723253297..25d6df1659 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -433,8 +433,7 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,
 * NB: A redo function should normally not call this directly. To get a page
 * to modify, use XLogReadBufferForRedoExtended instead. It is important that
 * all pages modified by a WAL record are registered in the WAL records, or
 * they will be invisible to tools that[-that-] need to know which pages are[-*-] modified.
 */
Buffer
XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
diff --git a/src/backend/optimizer/prep/prepagg.c b/src/backend/optimizer/prep/prepagg.c
index 929a8ea13b..89046f9afb 100644
--- a/src/backend/optimizer/prep/prepagg.c
+++ b/src/backend/optimizer/prep/prepagg.c
@@ -71,7 +71,7 @@ static Datum GetAggInitVal(Datum textInitVal, Oid transtype);
 *
 * Information about the aggregates and transition functions are collected
 * in the root->agginfos and root->aggtransinfos lists.  The 'aggtranstype',
 * 'aggno', and 'aggtransno' fields [-in-]{+of each Aggref+} are filled [-in in each Aggref.-]{+in.+}
 *
 * NOTE: This modifies the Aggrefs in the input expression in-place!
 *
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index cf12eda504..b9fbdcb88f 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -2049,7 +2049,7 @@ GetSnapshotDataReuse(Snapshot snapshot)
     * holding ProcArrayLock) exclusively). Thus the xactCompletionCount check
     * ensures we would detect if the snapshot would have changed.
     *
     * As the snapshot contents are the same as it was before, it is[-is-] safe
     * to re-enter the snapshot's xmin into the PGPROC array. None of the rows
     * visible under the snapshot could already have been removed (that'd
     * require the set of running transactions to change) and it fulfills the
diff --git a/src/bin/pg_rewind/libpq_source.c b/src/bin/pg_rewind/libpq_source.c
index 86d2adcaee..ac794cf4eb 100644
--- a/src/bin/pg_rewind/libpq_source.c
+++ b/src/bin/pg_rewind/libpq_source.c
@@ -539,7 +539,7 @@ process_queued_fetch_requests(libpq_source *src)
                         chunkoff, rq->path, (int64) rq->offset);

            /*
             * We should not receive[-receive-] more data than we requested, or
             * pg_read_binary_file() messed up.  We could receive less,
             * though, if the file was truncated in the source after we
             * checked its size. That's OK, there should be a WAL record of
diff --git a/src/test/regress/expected/tuplesort.out b/src/test/regress/expected/tuplesort.out
index 3fc1998bf2..418f296a3f 100644
--- a/src/test/regress/expected/tuplesort.out
+++ b/src/test/regress/expected/tuplesort.out
@@ -1,7 +1,7 @@
-- only use parallelism when explicitly intending to do so
SET max_parallel_maintenance_workers = 0;
SET max_parallel_workers = 0;
-- A table with[-with-] contents that, when sorted, triggers abbreviated
-- key aborts. One easy way to achieve that is to use uuids that all
-- have the same prefix, as abbreviated keys for uuids just use the
-- first sizeof(Datum) bytes.
diff --git a/src/test/regress/sql/tuplesort.sql b/src/test/regress/sql/tuplesort.sql
index 7d7e02f02a..846484d561 100644
--- a/src/test/regress/sql/tuplesort.sql
+++ b/src/test/regress/sql/tuplesort.sql
@@ -2,7 +2,7 @@
SET max_parallel_maintenance_workers = 0;
SET max_parallel_workers = 0;

-- A table with[-with-] contents that, when sorted, triggers abbreviated
-- key aborts. One easy way to achieve that is to use uuids that all
-- have the same prefix, as abbreviated keys for uuids just use the
-- first sizeof(Datum) bytes.

commit 4920f9520d7ba1b420bcf03ae48178d74425a622
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sun Jan 17 10:57:21 2021 -0600

    doc review for checksum docs
    
    cf621d9d84db1e6edaff8ffa26bad93fdce5f830

diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 66de1ee2f8..02f576a1a9 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -237,19 +237,19 @@
  </indexterm>

  <para>
   [-Data-]{+By default, data+} pages are not[-checksum-] protected by [-default,-]{+checksums,+} but this can
   optionally be enabled for a cluster.  When enabled, each data page will be [-assigned-]{+ASSIGNED+} a
   checksum that is updated when the page is written and verified [-every-]{+each+} time
   the page is read. Only data pages are protected by [-checksums,-]{+checksums;+} internal data
   structures and temporary files are not.
  </para>

  <para>
   Checksums [-are-]{+verification is+} normally [-enabled-]{+ENABLED+} when the cluster is initialized using <link
   linkend="app-initdb-data-checksums"><application>initdb</application></link>.
   They can also be enabled or disabled at a later time as an offline
   operation. Data checksums are enabled or disabled at the full cluster
   level, and cannot be specified[-individually-] for {+individual+} databases or tables.
  </para>

  <para>
@@ -260,9 +260,9 @@
  </para>

  <para>
   When attempting to recover from corrupt [-data-]{+data,+} it may be necessary to bypass
   the checksum [-protection in order to recover data.-]{+protection.+} To do this, temporarily set the configuration
   parameter <xref linkend="guc-ignore-checksum-failure" />.
  </para>

  <sect2 id="checksums-offline-enable-disable">

commit fc69321a5ebc55cb1df9648bc28215672cffbf31
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Wed Jan 20 16:10:49 2021 -0600

    Doc review for psql \dX
    
    ad600bba0422dde4b73fbd61049ff2a3847b068a

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 13c1edfa4d..d0f397d5ea 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1930,8 +1930,9 @@ testdb=>
        </para>

        <para>
        The [-column-]{+status+} of [-the-]{+each+} kind of extended [-stats-]{+statistics is shown in a column+}
{+        named after the "kind"+} (e.g. [-Ndistinct) shows its status.-]{+Ndistinct).+}
        NULL means that it doesn't [-exists.-]{+exist.+} "defined" means that it was requested
        when creating the statistics.
        You can use pg_stats_ext if you'd like to know whether <link linkend="sql-analyze">
        <command>ANALYZE</command></link> was run and statistics are available to the

commit 78035a725e13e28bbae9e62fe7013bef435d70e3
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Feb 6 15:13:37 2021 -0600

    *an exclusive
    
    3c84046490bed3c22e0873dc6ba492e02b8b9051

diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml
index 85cf23bca2..b6d2c2014f 100644
--- a/doc/src/sgml/ref/drop_index.sgml
+++ b/doc/src/sgml/ref/drop_index.sgml
@@ -45,7 +45,7 @@ DROP INDEX [ CONCURRENTLY ] [ IF EXISTS ] <replaceable class="parameter">name</r
     <para>
      Drop the index without locking out concurrent selects, inserts, updates,
      and deletes on the index's table.  A normal <command>DROP INDEX</command>
      acquires {+an+} exclusive lock on the table, blocking other accesses until the
      index drop can be completed.  With this option, the command instead
      waits until conflicting transactions have completed.
     </para>

commit c36ac4c1f85f620ae9ce9cfa7c14b6c95dcdedc5
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Wed Dec 30 09:39:16 2020 -0600

    function comment: get_am_name

diff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c
index eff9535ed0..188109e474 100644
--- a/src/backend/commands/amcmds.c
+++ b/src/backend/commands/amcmds.c
@@ -186,7 +186,7 @@ get_am_oid(const char *amname, bool missing_ok)
}

/*
 * get_am_name - given an access method [-OID name and type,-]{+OID,+} look up its name.
 */
char *
get_am_name(Oid amOid)

commit 22e6f0e2d4eaf78e449393bf2bf8b3f8af2b71f8
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Mon Jan 18 14:37:17 2021 -0600

    One fewer (not one less)

diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 9abcee32af..f6760eb31e 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -338,7 +338,7 @@ tuple_data_split_internal(Oid relid, char *tupdata,
        attr = TupleDescAttr(tupdesc, i);

        /*
         * Tuple header can specify [-less-]{+fewer+} attributes than tuple descriptor as
         * ALTER TABLE ADD COLUMN without DEFAULT keyword does not actually
         * change tuples in pages, so attributes with numbers greater than
         * (t_infomask2 & HEAP_NATTS_MASK) should be treated as NULL.
diff --git a/doc/src/sgml/charset.sgml b/doc/src/sgml/charset.sgml
index cebc09ef91..1b00e543a6 100644
--- a/doc/src/sgml/charset.sgml
+++ b/doc/src/sgml/charset.sgml
@@ -619,7 +619,7 @@ SELECT * FROM test1 ORDER BY a || b COLLATE "fr_FR";
    name such as <literal>de_DE</literal> can be considered unique
    within a given database even though it would not be unique globally.
    Use of the stripped collation names is recommended, since it will
    make one [-less-]{+fewer+} thing you need to change if you decide to change to
    another database encoding.  Note however that the <literal>default</literal>,
    <literal>C</literal>, and <literal>POSIX</literal> collations can be used regardless of
    the database encoding.
diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml
index 0b24a55505..693423e524 100644
--- a/doc/src/sgml/ref/create_type.sgml
+++ b/doc/src/sgml/ref/create_type.sgml
@@ -867,7 +867,7 @@ CREATE TYPE <replaceable class="parameter">name</replaceable>
   Before <productname>PostgreSQL</productname> version 8.3, the name of
   a generated array type was always exactly the element type's name with one
   underscore character (<literal>_</literal>) prepended.  (Type names were
   therefore restricted in length to one [-less-]{+fewer+} character than other names.)
   While this is still usually the case, the array type name may vary from
   this in case of maximum-length names or collisions with user type names
   that begin with underscore.  Writing code that depends on this convention
diff --git a/doc/src/sgml/rules.sgml b/doc/src/sgml/rules.sgml
index e81addcfa9..aa172d102b 100644
--- a/doc/src/sgml/rules.sgml
+++ b/doc/src/sgml/rules.sgml
@@ -1266,7 +1266,7 @@ CREATE [ OR REPLACE ] RULE <replaceable class="parameter">name</replaceable> AS
<para>
    The query trees generated from rule actions are thrown into the
    rewrite system again, and maybe more rules get applied resulting
    in [-more-]{+additional+} or [-less-]{+fewer+} query trees.
    So a rule's actions must have either a different
    command type or a different result relation than the rule itself is
    on, otherwise this recursive process will end up in an infinite loop.
diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c
index 24a27e387d..0b56b0fa5a 100644
--- a/src/backend/access/common/heaptuple.c
+++ b/src/backend/access/common/heaptuple.c
@@ -719,11 +719,11 @@ heap_copytuple_with_tuple(HeapTuple src, HeapTuple dest)
}

/*
 * Expand a tuple which has [-less-]{+fewer+} attributes than required. For each attribute
 * not present in the sourceTuple, if there is a missing value that will be
 * used. Otherwise the attribute will be set to NULL.
 *
 * The source tuple must have [-less-]{+fewer+} attributes than the required number.
 *
 * Only one of targetHeapTuple and targetMinimalTuple may be supplied. The
 * other argument must be NULL.
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 7295cf0215..64908ac39c 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1003,7 +1003,7 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 * As of May 2004 we use a new two-stage method:  Stage one selects up
 * to targrows random blocks (or all blocks, if there aren't so many).
 * Stage two scans these blocks and uses the Vitter algorithm to create
 * a random sample of targrows rows (or [-less,-]{+fewer,+} if there are [-less-]{+fewer+} in the
 * sample of blocks).  The two stages are executed simultaneously: each
 * block is processed as soon as stage one returns its number and while
 * the rows are read stage two controls which ones are to be inserted
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 4d185c27b4..078aaef539 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -263,7 +263,7 @@ static int    compareDatetime(Datum val1, Oid typid1, Datum val2, Oid typid2,
 *        implement @? and @@ operators, which in turn are intended to have an
 *        index support.  Thus, it's desirable to make it easier to achieve
 *        consistency between index scan results and sequential scan results.
 *        So, we throw as [-less-]{+few+} errors as possible.  Regarding this function,
 *        such behavior also matches behavior of JSON_EXISTS() clause of
 *        SQL/JSON.  Regarding jsonb_path_match(), this function doesn't have
 *        an analogy in SQL/JSON, so we define its behavior on our own.
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 47ca4ddbb5..52314d3aa1 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -645,7 +645,7 @@ scalarineqsel(PlannerInfo *root, Oid operator, bool isgt, bool iseq,

            /*
             * The calculation so far gave us a selectivity for the "<=" case.
             * We'll have one [-less-]{+fewer+} tuple for "<" and one additional tuple for
             * ">=", the latter of which we'll reverse the selectivity for
             * below, so we can simply subtract one tuple for both cases.  The
             * cases that need this adjustment can be identified by iseq being
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index fa2b49c676..55c9445898 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -1497,7 +1497,7 @@ GetCatCacheHashValue(CatCache *cache,
 *        It doesn't make any sense to specify all of the cache's key columns
 *        here: since the key is unique, there could be at most one match, so
 *        you ought to use SearchCatCache() instead.  Hence this function takes
 *        one [-less-]{+fewer+} Datum argument than SearchCatCache() does.
 *
 *        The caller must not modify the list object or the pointed-to tuples,
 *        and must call ReleaseCatCacheList() when done with the list.
diff --git a/src/backend/utils/misc/sampling.c b/src/backend/utils/misc/sampling.c
index 0c327e823f..7348b86682 100644
--- a/src/backend/utils/misc/sampling.c
+++ b/src/backend/utils/misc/sampling.c
@@ -42,7 +42,7 @@ BlockSampler_Init(BlockSampler bs, BlockNumber nblocks, int samplesize,
    bs->N = nblocks;            /* measured table size */

    /*
     * If we decide to reduce samplesize for tables that have [-less-]{+fewer+} or not much
     * more than samplesize blocks, here is the place to do it.
     */
    bs->n = samplesize;
diff --git a/src/backend/utils/mmgr/freepage.c b/src/backend/utils/mmgr/freepage.c
index e4ee1aab97..10a1effb74 100644
--- a/src/backend/utils/mmgr/freepage.c
+++ b/src/backend/utils/mmgr/freepage.c
@@ -495,7 +495,7 @@ FreePageManagerDump(FreePageManager *fpm)
 * if we search the parent page for the first key greater than or equal to
 * the first key on the current page, the downlink to this page will be either
 * the exact index returned by the search (if the first key decreased)
 * or one [-less-]{+fewer+} (if the first key increased).
 */
static void
FreePageBtreeAdjustAncestorKeys(FreePageManager *fpm, FreePageBtree *btp)
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index a4a3f40048..627a244fb7 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -6458,7 +6458,7 @@ threadRun(void *arg)

            /*
             * If advanceConnectionState changed client to finished state,
             * that's one [-less-]{+fewer+} client that remains.
             */
            if (st->state == CSTATE_FINISHED || st->state == CSTATE_ABORTED)
                remains--;
diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h
index d27c8601fa..e3d2e751ea 100644
--- a/src/include/pg_config_manual.h
+++ b/src/include/pg_config_manual.h
@@ -21,7 +21,7 @@

/*
 * Maximum length for identifiers (e.g. table names, column names,
 * function names).  Names actually are limited to one [-less-]{+fewer+} byte than this,
 * because the length must include a trailing zero byte.
 *
 * Changing this requires an initdb.
@@ -87,7 +87,7 @@

/*
 * MAXPGPATH: standard size of a pathname buffer in PostgreSQL (hence,
 * maximum usable pathname length is one [-less).-]{+fewer).+}
 *
 * We'd use a standard system header symbol for this, if there weren't
 * so many to choose from: MAXPATHLEN, MAX_PATH, PATH_MAX are all
diff --git a/src/interfaces/ecpg/include/sqlda-native.h b/src/interfaces/ecpg/include/sqlda-native.h
index 67d3c7b4e4..9e73f1f1b1 100644
--- a/src/interfaces/ecpg/include/sqlda-native.h
+++ b/src/interfaces/ecpg/include/sqlda-native.h
@@ -7,7 +7,7 @@

/*
 * Maximum length for identifiers (e.g. table names, column names,
 * function names).  Names actually are limited to one [-less-]{+fewer+} byte than this,
 * because the length must include a trailing zero byte.
 *
 * This should be at least as much as NAMEDATALEN of the database the
diff --git a/src/test/regress/expected/geometry.out b/src/test/regress/expected/geometry.out
index 84f7eabb66..9799cfbdbd 100644
--- a/src/test/regress/expected/geometry.out
+++ b/src/test/regress/expected/geometry.out
@@ -4325,7 +4325,7 @@ SELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
 <(100,1),115>  |
((-15,1),(18.6827201635,82.3172798365),(100,116),(181.317279836,82.3172798365),(215,1),(181.317279836,-80.3172798365),(100,-114),(18.6827201635,-80.3172798365))
(6 rows)

-- Too [-less-]{+few+} points error
SELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
ERROR:  must request at least 2 points
-- Zero radius error
diff --git a/src/test/regress/sql/geometry.sql b/src/test/regress/sql/geometry.sql
index 96df0ab05a..b0ab6d03ec 100644
--- a/src/test/regress/sql/geometry.sql
+++ b/src/test/regress/sql/geometry.sql
@@ -424,7 +424,7 @@ SELECT f1, f1::polygon FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';
-- To polygon with less points
SELECT f1, polygon(8, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';

-- Too [-less-]{+few+} points error
SELECT f1, polygon(1, f1) FROM CIRCLE_TBL WHERE f1 >= '<(0,0),1>';

-- Zero radius error

commit 1c00249319faf6dc23aadf4568ead5adc65ff57f
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Wed Feb 10 17:45:07 2021 -0600

    comment typos

diff --git a/src/include/lib/simplehash.h b/src/include/lib/simplehash.h
index 395be1ca9a..99a03c8f21 100644
--- a/src/include/lib/simplehash.h
+++ b/src/include/lib/simplehash.h
@@ -626,7 +626,7 @@ restart:
        uint32        curoptimal;
        SH_ELEMENT_TYPE *entry = &data[curelem];

        /* any empty bucket can[-directly-] be used {+directly+} */
        if (entry->status == SH_STATUS_EMPTY)
        {
            tb->members++;

commit 2ac95b66e30785d480ef04c11d12b1075548045e
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Nov 14 23:09:21 2020 -0600

    typos in master

diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index 7c341c8e3f..fe88c2273a 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -639,7 +639,7 @@ NUMERIC

    <para>
     The <literal>NaN</literal> (not a number) value is used to represent
     undefined [-calculational-]{+computational+} results.  In general, any operation with
     a <literal>NaN</literal> input yields another <literal>NaN</literal>.
     The only exception is when the operation's other inputs are such that
     the same output would be obtained if the <literal>NaN</literal> were to

commit d6d3499f52e664b7da88a3f2c94701cae6d76609
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Dec 5 22:43:12 2020 -0600

    pg_restore: "must be specified" and --list
    
    This was discussed here, but the idea got lost.

https://www.postgresql.org/message-id/flat/20190612170201.GA11881%40alvherre.pgsql#2984347ab074e6f198bd294fa41884df

diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index 589b4aed53..f6e6e41329 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -305,7 +305,7 @@ main(int argc, char **argv)
    /* Complain if neither -f nor -d was specified (except if dumping TOC) */
    if (!opts->cparams.dbname && !opts->filename && !opts->tocSummary)
    {
        pg_log_error("one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be
specified");
        exit_nicely(1);
    }

diff --git a/src/bin/pg_dump/t/001_basic.pl b/src/bin/pg_dump/t/001_basic.pl
index 083fb3ad08..8280914c2a 100644
--- a/src/bin/pg_dump/t/001_basic.pl
+++ b/src/bin/pg_dump/t/001_basic.pl
@@ -63,8 +63,8 @@ command_fails_like(

command_fails_like(
    ['pg_restore'],
    qr{\Qpg_restore: error: one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified\E},
    'pg_restore: error: one of [--d/--dbname and -f/--file-]{+-d/--dbname, -f/--file, or -l/--list+} must be specified');

command_fails_like(
    [ 'pg_restore', '-s', '-a', '-f -' ],

commit 7c2dee70b0450bac5cfa2c3db52b4a2b2e535a9e
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Sat Feb 15 15:53:34 2020 -0600

    Update comment obsolete since 69c3936a

diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 601b6dab03..394b4e667b 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2064,8 +2064,7 @@ initialize_hash_entry(AggState *aggstate, TupleHashTable hashtable,
}

/*
 * Look up hash entries for the current tuple in all hashed grouping [-sets,-]
[- * returning an array of pergroup pointers suitable for advance_aggregates.-]{+sets.+}
 *
 * Be aware that lookup_hash_entry can reset the tmpcontext.
 *

commit 4b81f9512395cb321730e0a3dba1c659b9c2fee3
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Date:   Fri Jan 8 13:09:55 2021 -0600

    doc: pageinspect
    
    d6061f83a166b015657fda8623c704fcb86930e9
    
    backpatch to 9.6?

diff --git a/doc/src/sgml/pageinspect.sgml b/doc/src/sgml/pageinspect.sgml
index a0be779940..a7bce41b7c 100644
--- a/doc/src/sgml/pageinspect.sgml
+++ b/doc/src/sgml/pageinspect.sgml
@@ -211,7 +211,7 @@ test=# SELECT tuple_data_split('pg_class'::regclass, t_data, t_infomask, t_infom
     </para>
     <para>
      If <parameter>do_detoast</parameter> is <literal>true</literal>,
      [-attribute that-]{+attributes+} will be detoasted as needed. Default value is
      <literal>false</literal>.
     </para>
    </listitem>

Attachments

Re: doc review for v14

From
Justin Pryzby
Date:
Another round of doc review, not yet including all of yesterday's commits.

29c8d614c3 duplicate words
diff --git a/src/include/lib/sort_template.h b/src/include/lib/sort_template.h
index 771c789ced..24d6d0006c 100644
--- a/src/include/lib/sort_template.h
+++ b/src/include/lib/sort_template.h
@@ -241,7 +241,7 @@ ST_SCOPE void ST_SORT(ST_ELEMENT_TYPE *first, size_t n
 
 /*
  * Find the median of three values.  Currently, performance seems to be best
- * if the the comparator is inlined here, but the med3 function is not inlined
+ * if the comparator is inlined here, but the med3 function is not inlined
  * in the qsort function.
  */
 static pg_noinline ST_ELEMENT_TYPE *
e7c370c7c5 pg_amcheck: remove Double semi-colon
diff --git a/src/bin/pg_amcheck/t/004_verify_heapam.pl b/src/bin/pg_amcheck/t/004_verify_heapam.pl
index 36607596b1..2171d236a7 100644
--- a/src/bin/pg_amcheck/t/004_verify_heapam.pl
+++ b/src/bin/pg_amcheck/t/004_verify_heapam.pl
@@ -175,7 +175,7 @@ sub write_tuple
     seek($fh, $offset, 0)
         or BAIL_OUT("seek failed: $!");
     defined(syswrite($fh, $buffer, HEAPTUPLE_PACK_LENGTH))
-        or BAIL_OUT("syswrite failed: $!");;
+        or BAIL_OUT("syswrite failed: $!");
     return;
 }
 
b745e9e60e a statistics objects
diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c
index 463d44a68a..4674168ff8 100644
--- a/src/backend/statistics/extended_stats.c
+++ b/src/backend/statistics/extended_stats.c
@@ -254,7 +254,7 @@ BuildRelationExtStatistics(Relation onerel, double totalrows,
  * that would require additional columns.
  *
  * See statext_compute_stattarget for details about how we compute statistics
- * target for a statistics objects (from the object target, attribute targets
+ * target for a statistics object (from the object target, attribute targets
  * and default statistics target).
  */
 int
e7d5c5d9dc guc.h: remove mention of "doit"
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 1892c7927b..1126b34798 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -90,8 +90,7 @@ typedef enum
  * dividing line between "interactive" and "non-interactive" sources for
  * error reporting purposes.
  *
- * PGC_S_TEST is used when testing values to be used later ("doit" will always
- * be false, so this never gets stored as the actual source of any value).
+ * PGC_S_TEST is used when testing values to be used later.
  * For example, ALTER DATABASE/ROLE tests proposed per-database or per-user
  * defaults this way, and CREATE FUNCTION tests proposed function SET clauses
  * this way.  This is an interactive case, but it needs its own source value
ad5f9a2023 Caller
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 9961d27df4..09fcff6729 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -1651,7 +1651,7 @@ push_null_elements(JsonbParseState **ps, int num)
  * this path. E.g. the path [a][0][b] with the new value 1 will produce the
  * structure {a: [{b: 1}]}.
  *
- * Called is responsible to make sure such path does not exist yet.
+ * Caller is responsible to make sure such path does not exist yet.
  */
 static void
 push_path(JsonbParseState **st, int level, Datum *path_elems,
@@ -4887,7 +4887,7 @@ IteratorConcat(JsonbIterator **it1, JsonbIterator **it2,
  * than just one last element, this flag will instruct to create the whole
  * chain of corresponding objects and insert the value.
  *
- * JB_PATH_CONSISTENT_POSITION for an array indicates that the called wants to
+ * JB_PATH_CONSISTENT_POSITION for an array indicates that the caller wants to
  * keep values with fixed indices. Indices for existing elements could be
  * changed (shifted forward) in case if the array is prepended with a new value
  * and a negative index out of the range, so this behavior will be prevented
9acedbd4af as
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index 20e7d57d41..40a54ad0bd 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -410,7 +410,7 @@ CopyMultiInsertBufferCleanup(CopyMultiInsertInfo *miinfo,
  * Once flushed we also trim the tracked buffers list down to size by removing
  * the buffers created earliest first.
  *
- * Callers should pass 'curr_rri' is the ResultRelInfo that's currently being
+ * Callers should pass 'curr_rri' as the ResultRelInfo that's currently being
  * used.  When cleaning up old buffers we'll never remove the one for
  * 'curr_rri'.
  */
9f78de5042 exist
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 5bdaceefd5..182a133033 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -617,7 +617,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
      *
      * We assume that VACUUM hasn't set pg_class.reltuples already, even
      * during a VACUUM ANALYZE.  Although VACUUM often updates pg_class,
-     * exceptions exists.  A "VACUUM (ANALYZE, INDEX_CLEANUP OFF)" command
+     * exceptions exist.  A "VACUUM (ANALYZE, INDEX_CLEANUP OFF)" command
      * will never update pg_class entries for index relations.  It's also
      * possible that an individual index's pg_class entry won't be updated
      * during VACUUM if the index AM returns NULL from its amvacuumcleanup()
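
For reference, the command this comment mentions (sketch; the table name is made up):

    VACUUM (ANALYZE, INDEX_CLEANUP OFF) some_table;
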
a45af383ae rebuilt
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 096a06f7b3..6487a9e3fc 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -1422,7 +1422,7 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
                                  PROGRESS_CLUSTER_PHASE_FINAL_CLEANUP);
 
     /*
-     * If the relation being rebuild is pg_class, swap_relation_files()
+     * If the relation being rebuilt is pg_class, swap_relation_files()
      * couldn't update pg_class's own pg_class entry (check comments in
      * swap_relation_files()), thus relfrozenxid was not updated. That's
      * annoying because a potential reason for doing a VACUUM FULL is a
f24c2c1075 docs review: logical replication
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a382258aee..bc4a8b2279 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4137,7 +4137,7 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
          On the subscriber side, specifies how many replication origins (see
          <xref linkend="replication-origins"/>) can be tracked simultaneously,
          effectively limiting how many logical replication subscriptions can
-         be created on the server. Setting it a lower value than the current
+         be created on the server. Setting it to a lower value than the current
          number of tracked replication origins (reflected in
          <link linkend="view-pg-replication-origin-status">pg_replication_origin_status</link>,
          not <link linkend="catalog-pg-replication-origin">pg_replication_origin</link>)
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 3fad5f34e6..7645ee032c 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -602,13 +602,12 @@
   </para>
 
   <para>
-   The subscriber also requires the <varname>max_replication_slots</varname>
-   be set to configure how many replication origins can be tracked.  In this
-   case it should be set to at least the number of subscriptions that will be
-   added to the subscriber, plus some reserve for table synchronization.
-   <varname>max_logical_replication_workers</varname> must be set to at least
-   the number of subscriptions, again plus some reserve for the table
-   synchronization.  Additionally the <varname>max_worker_processes</varname>
+   <varname>max_replication_slots</varname> must also be set on the subscriber.
+   It should be set to at least the number of
+   subscriptions that will be added to the subscriber, plus some reserve for
+   table synchronization.  <varname>max_logical_replication_workers</varname>
+   must be set to at least the number of subscriptions, again plus some reserve
+   for the table synchronization.  Additionally the <varname>max_worker_processes</varname>
    may need to be adjusted to accommodate for replication workers, at least
    (<varname>max_logical_replication_workers</varname>
    + <literal>1</literal>).  Note that some extensions and parallel queries
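
As a sketch of the subscriber-side settings this paragraph describes (the numbers are made up; all three require a server restart):

    ALTER SYSTEM SET max_replication_slots = 10;
    ALTER SYSTEM SET max_logical_replication_workers = 10;
    ALTER SYSTEM SET max_worker_processes = 16;  -- at least max_logical_replication_workers + 1
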
83f9954468 accessmtd
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9f6303266f..ba03e8aa8f 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1119,6 +1119,7 @@ AddNewRelationType(const char *typeName,
  *    reltypeid: OID to assign to rel's rowtype, or InvalidOid to select one
  *    reloftypeid: if a typed table, OID of underlying type; else InvalidOid
  *    ownerid: OID of new rel's owner
+ *    accessmtd: OID of new rel's access method
  *    tupdesc: tuple descriptor (source of column definitions)
  *    cooked_constraints: list of precooked check constraints and defaults
  *    relkind: relkind for new rel
573eeb8666 language fixen
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 3bbae6dd91..4adb34a21b 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -185,7 +185,7 @@
     never issue <command>VACUUM FULL</command>.  In this approach, the idea
     is not to keep tables at their minimum size, but to maintain steady-state
     usage of disk space: each table occupies space equivalent to its
-    minimum size plus however much space gets used up between vacuumings.
+    minimum size plus however much space gets used up between vacuum runs.
     Although <command>VACUUM FULL</command> can be used to shrink a table back
     to its minimum size and return the disk space to the operating system,
     there is not much point in this if the table will just grow again in the
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index d1af624f44..89ff58338e 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1899,7 +1899,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
     much faster.  The following are configuration changes you can make
     to improve performance in such cases.  Except as noted below, durability
     is still guaranteed in case of a crash of the database software;
-    only abrupt operating system stoppage creates a risk of data loss
+    only an abrupt operating system crash creates a risk of data loss
     or corruption when these settings are used.
 
     <itemizedlist>
diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml
index 4d60dc2cda..17579e50af 100644
--- a/doc/src/sgml/ref/createuser.sgml
+++ b/doc/src/sgml/ref/createuser.sgml
@@ -44,7 +44,7 @@ PostgreSQL documentation
    If you wish to create a new superuser, you must connect as a
    superuser, not merely with <literal>CREATEROLE</literal> privilege.
    Being a superuser implies the ability to bypass all access permission
-   checks within the database, so superuserdom should not be granted lightly.
+   checks within the database, so superuser access should not be granted lightly.
   </para>
 
   <para>
d37a8a04f7 wal_compression
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index cf4e82e8b5..a382258aee 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -3098,7 +3098,7 @@ include_dir 'conf.d'
       <listitem>
        <para>
         When this parameter is <literal>on</literal>, the <productname>PostgreSQL</productname>
-        server compresses a full page image written to WAL when
+        server compresses full page images written to WAL when
         <xref linkend="guc-full-page-writes"/> is on or during a base backup.
         A compressed page image will be decompressed during WAL replay.
         The default value is <literal>off</literal>.
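
For reference, the parameter can be flipped without a restart, e.g. (sketch):

    ALTER SYSTEM SET wal_compression = on;
    SELECT pg_reload_conf();
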
e6025e2e81 amcheck
diff --git a/doc/src/sgml/amcheck.sgml b/doc/src/sgml/amcheck.sgml
index a2571d33ae..30fcb033e3 100644
--- a/doc/src/sgml/amcheck.sgml
+++ b/doc/src/sgml/amcheck.sgml
@@ -457,14 +457,13 @@ SET client_min_messages = DEBUG1;
    </listitem>
    <listitem>
     <para>
-     File system or storage subsystem faults where checksums happen to
-     simply not be enabled.
+     File system or storage subsystem faults where checksums are
+     not enabled.
     </para>
     <para>
-     Note that <filename>amcheck</filename> examines a page as represented in some
-     shared memory buffer at the time of verification if there is only a
-     shared buffer hit when accessing the block. Consequently,
-     <filename>amcheck</filename> does not necessarily examine data read from the
+     Note that <filename>amcheck</filename> examines a page as represented in a
+     shared memory buffer at the time of verification.  If the page is cached,
+     <filename>amcheck</filename> will not examine data read from the
      file system at the time of verification. Note that when checksums are
      enabled, <filename>amcheck</filename> may raise an error due to a checksum
      failure when a corrupt block is read into a buffer.
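
For context, the checks this section is about are run roughly like this (sketch; relation names are made up):

    CREATE EXTENSION amcheck;
    SELECT bt_index_check('some_index'::regclass);
    SELECT * FROM verify_heapam('some_table');  -- reads pages via shared buffers, as described above
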
d987f0505e spell: vacuum
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 44e50620fd..d7fffddbce 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1520,7 +1520,7 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
-   <varlistentry id="reloption-autovacuum-vauum-scale-factor" xreflabel="autovacuum_vacuum_scale_factor">
+   <varlistentry id="reloption-autovacuum-vacuum-scale-factor" xreflabel="autovacuum_vacuum_scale_factor">
     <term><literal>autovacuum_vacuum_scale_factor</literal>, <literal>toast.autovacuum_vacuum_scale_factor</literal> (<type>floating point</type>)
 
     <indexterm>
      <primary><varname>autovacuum_vacuum_scale_factor</varname> </primary>
@@ -1610,7 +1610,7 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
-   <varlistentry id="reloption-autovacuum-vauum-cost-limit" xreflabel="autovacuum_vacuum_cost_limit">
+   <varlistentry id="reloption-autovacuum-vacuum-cost-limit" xreflabel="autovacuum_vacuum_cost_limit">
     <term><literal>autovacuum_vacuum_cost_limit</literal>, <literal>toast.autovacuum_vacuum_cost_limit</literal> (<type>integer</type>)
     <indexterm>
      <primary><varname>autovacuum_vacuum_cost_limit</varname></primary>
69e597176b doc review: Fix use of cursor sensitivity terminology
diff --git a/doc/src/sgml/ref/declare.sgml b/doc/src/sgml/ref/declare.sgml
index 8a2b8cc892..aa3d1d1fa1 100644
--- a/doc/src/sgml/ref/declare.sgml
+++ b/doc/src/sgml/ref/declare.sgml
@@ -335,7 +335,7 @@ DECLARE liahona CURSOR FOR SELECT * FROM films;
   <para>
    According to the SQL standard, changes made to insensitive cursors by
    <literal>UPDATE ... WHERE CURRENT OF</literal> and <literal>DELETE
-   ... WHERE CURRENT OF</literal> statements are visibible in that same
+   ... WHERE CURRENT OF</literal> statements are visible in that same
    cursor.  <productname>PostgreSQL</productname> treats these statements like
    all other data changing statements in that they are not visible in
    insensitive cursors.
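
A sketch of the statements the paragraph refers to, reusing the films table from the example above (the column name is assumed):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM films;
    FETCH NEXT FROM c;
    UPDATE films SET kind = 'Drama' WHERE CURRENT OF c;  -- not visible through c in PostgreSQL
    COMMIT;
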
3399caf133 doc review: Make use of in-core query id added by commit 5fd9dfa5f5
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 04712769ca..cf4e82e8b5 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -7732,7 +7732,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
         The <xref linkend="pgstatstatements"/> extension also requires a query
         identifier to be computed.  Note that an external module can
         alternatively be used if the in-core query identifier computation
-        specification isn't acceptable.  In this case, in-core computation
+        method isn't acceptable.  In this case, in-core computation
         must be disabled.  The default is <literal>off</literal>.
        </para>
        <note>
567b33c755 doc review: Move pg_stat_statements query jumbling to core.
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index ae1a38b8bc..04712769ca 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -7737,7 +7737,7 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </para>
        <note>
         <para>
-         To ensure that a only one query identifier is calculated and
+         To ensure that only one query identifier is calculated and
          displayed, extensions that calculate query identifiers should
          throw an error if a query identifier has already been computed.
         </para>
diff --git a/doc/src/sgml/pgstatstatements.sgml b/doc/src/sgml/pgstatstatements.sgml
index 5ad4f0aed2..e235504e9a 100644
--- a/doc/src/sgml/pgstatstatements.sgml
+++ b/doc/src/sgml/pgstatstatements.sgml
@@ -406,7 +406,7 @@
   <note>
    <para>
     The following details about constant replacement and
-    <structfield>queryid</structfield> only applies when <xref
+    <structfield>queryid</structfield> only apply when <xref
     linkend="guc-compute-query-id"/> is enabled.  If you use an external
     module instead to compute <structfield>queryid</structfield>, you
     should refer to its documentation for details.
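
For context, a sketch of enabling the in-core computation and looking at the identifiers (assumes pg_stat_statements is installed; compute_query_id may need to be set by a superuser or in postgresql.conf):

    SET compute_query_id = on;
    SELECT pg_stat_statements_reset();
    SELECT queryid, calls, query FROM pg_stat_statements;
    SELECT pid, query_id, query FROM pg_stat_activity;
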
e292ee3e35 doc review: Add function to log the memory contexts of specified backend process.
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index be22f4b61b..679738f615 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -24926,12 +24926,12 @@ SELECT collation for ('foo' COLLATE "de_DE");
         <returnvalue>boolean</returnvalue>
        </para>
        <para>
-        Requests to log the memory contexts whose backend process has
-        the specified process ID.  These memory contexts will be logged at
+        Requests to log the memory contexts of the backend with the
+        specified process ID.  These memory contexts will be logged at
         <literal>LOG</literal> message level. They will appear in
         the server log based on the log configuration set
         (See <xref linkend="runtime-config-logging"/> for more information),
-        but will not be sent to the client whatever the setting of
+        but will not be sent to the client regardless of
         <xref linkend="guc-client-min-messages"/>.
         Only superusers can request to log the memory contexts.
        </para></entry>
@@ -25037,9 +25037,9 @@ SELECT collation for ('foo' COLLATE "de_DE");
 
    <para>
     <function>pg_log_backend_memory_contexts</function> can be used
-    to log the memory contexts of the backend process. For example,
+    to log the memory contexts of a backend process. For example,
 <programlisting>
-postgres=# SELECT pg_log_backend_memory_contexts(pg_backend_pid());
+postgres=# SELECT pg_log_backend_memory_contexts(pg_backend_pid()); -- XXX
  pg_log_backend_memory_contexts 
 --------------------------------
  t
@@ -25061,8 +25061,8 @@ LOG:  level: 1; TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0
 LOG:  level: 1; ErrorContext: 8192 total in 1 blocks; 7928 free (3 chunks); 264 used
 LOG:  Grand total: 1651920 bytes in 201 blocks; 622360 free (88 chunks); 1029560 used
 </screen>
-    For more than 100 child contexts under the same parent one,
-    100 child contexts and a summary of the remaining ones will be logged.
+    If there are more than 100 child contexts under the same parent, the first
+    100 child contexts are logged, along with a summary of the remaining contexts.
     Note that frequent calls to this function could incur significant overhead,
     because it may generate a large number of log messages.
    </para>
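
For the cross-backend case, something like this works (sketch; superuser required):

    SELECT pg_log_backend_memory_contexts(pid)
      FROM pg_stat_activity
     WHERE backend_type = 'client backend'
       AND pid <> pg_backend_pid();
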
85330eeda7 doc review: Stop archive recovery if WAL generated with wal_level=minimal is found.
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 26628f3e6d..ae1a38b8bc 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2723,7 +2723,7 @@ include_dir 'conf.d'
         Note that changing <varname>wal_level</varname> to
         <literal>minimal</literal> makes any base backups taken before
         unavailable for archive recovery and standby server, which may
-        lead to database loss.
+        lead to data loss.
        </para>
        <para>
         In <literal>logical</literal> level, the same information is logged as
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index e0d3f246e9..d1af624f44 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1747,7 +1747,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
     <xref linkend="guc-max-wal-senders"/> to zero.
     But note that changing these settings requires a server restart,
     and makes any base backups taken before unavailable for archive
-    recovery and standby server, which may lead to database loss.
+    recovery and standby server, which may lead to data loss.
    </para>
 
    <para>
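
A sketch of the settings change discussed here (restart required):

    ALTER SYSTEM SET wal_level = minimal;
    ALTER SYSTEM SET max_wal_senders = 0;
    -- base backups taken before this point can no longer be used for
    -- archive recovery or a standby
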
dfdae1597d doc review: Add unistr function
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 7b75e0bca2..be22f4b61b 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -3560,7 +3560,7 @@ repeat('Pg', 4) <returnvalue>PgPgPgPg</returnvalue>
         <returnvalue>text</returnvalue>
        </para>
        <para>
-        Evaluate escaped Unicode characters in argument.  Unicode characters
+        Evaluate escaped Unicode characters in the argument.  Unicode characters
         can be specified as
         <literal>\<replaceable>XXXX</replaceable></literal> (4 hexadecimal
         digits), <literal>\+<replaceable>XXXXXX</replaceable></literal> (6
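
For reference, the function with the escape forms listed here (sketch):

    SELECT unistr('d\0061t\+000061');      -- 'data'
    SELECT unistr('d\u0061t\U00000061');   -- same result with the \u and \U forms
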
491445e3c9 doc review: postgres_fdw: Add option to control whether to keep connections open.
diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml
index fd34956936..e8cb679164 100644
--- a/doc/src/sgml/postgres-fdw.sgml
+++ b/doc/src/sgml/postgres-fdw.sgml
@@ -551,8 +551,8 @@ OPTIONS (ADD password_required 'false');
     <title>Connection Management Options</title>
 
     <para>
-     By default all the open connections that <filename>postgres_fdw</filename>
-     established to the foreign servers are kept in local session for re-use.
+     By default, all connections that <filename>postgres_fdw</filename>
+     establishes to foreign servers are kept open for re-use in the local session.
     </para>
  
     <variablelist>
@@ -562,11 +562,11 @@ OPTIONS (ADD password_required 'false');
       <listitem>
        <para>
         This option controls whether <filename>postgres_fdw</filename> keeps
-        the connections to the foreign server open so that the subsequent
+        the connections to the foreign server open so that subsequent
         queries can re-use them. It can only be specified for a foreign server.
         The default is <literal>on</literal>. If set to <literal>off</literal>,
         all connections to this foreign server will be discarded at the end of
-        transaction.
+        each transaction.
       </para>
       </listitem>
      </varlistentry>
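
If I have the option name right, the server-level switch looks like this (sketch; the server name is made up):

    ALTER SERVER foreign_server OPTIONS (ADD keep_connections 'off');
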
95a43e5c2d doc review: BRIN minmax-multi indexes
diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml
index d2476481af..ce7c210575 100644
--- a/doc/src/sgml/brin.sgml
+++ b/doc/src/sgml/brin.sgml
@@ -730,7 +730,7 @@ LOG:  request for BRIN range summarization for index "brin_wi_idx" page 128 was
      for <xref linkend="sql-altertable"/>. When set to a positive value,
      each block range is assumed to contain this number of distinct non-null
      values. When set to a negative value, which must be greater than or
-     equal to -1, the number of distinct non-null is assumed linear with
+     equal to -1, the number of distinct non-null values is assumed to grow linearly with
      the maximum possible number of tuples in the block range (about 290
      rows per block). The default value is <literal>-0.1</literal>, and
      the minimum number of distinct non-null values is <literal>16</literal>.
@@ -1214,7 +1214,7 @@ typedef struct BrinOpcInfo
 
  <para>
   The minmax-multi operator class is also intended for data types implementing
-  a totally ordered sets, and may be seen as a simple extension of the minmax
+  a totally ordered set, and may be seen as a simple extension of the minmax
   operator class. While minmax operator class summarizes values from each block
   range into a single contiguous interval, minmax-multi allows summarization
   into multiple smaller intervals to improve handling of outlier values.
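
A minimal sketch of using the opclass described here, assuming I have the opclass and parameter names right (table and column are made up):

    CREATE TABLE events (id bigint, ts timestamptz);
    CREATE INDEX events_ts_brin ON events
        USING brin (ts timestamptz_minmax_multi_ops (values_per_range = 32));
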
e53ac30d44 doc review: Track total amounts of times spent writing and syncing WAL data to disk.
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index 0f13c43095..24cf567ee2 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -797,7 +797,7 @@
    <literal>fsync</literal>, or <literal>fsync_writethrough</literal>,
    the write operation moves WAL buffers to kernel cache and
    <function>issue_xlog_fsync</function> syncs them to disk. Regardless
-   of the setting of <varname>track_wal_io_timing</varname>, the numbers
+   of the setting of <varname>track_wal_io_timing</varname>, the number
    of times <function>XLogWrite</function> writes and
    <function>issue_xlog_fsync</function> syncs WAL data to disk are also
    counted as <literal>wal_write</literal> and <literal>wal_sync</literal>
6e0c552d1c doc review: Be clear about whether a recovery pause has taken effect.
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 0606b6a9aa..7b75e0bca2 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -25576,7 +25576,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
         Returns recovery pause state.  The return values are <literal>
         not paused</literal> if pause is not requested, <literal>
         pause requested</literal> if pause is requested but recovery is
-        not yet paused and, <literal>paused</literal> if the recovery is
+        not yet paused, and <literal>paused</literal> if the recovery is
         actually paused.
        </para></entry>
       </row>
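
For reference, the functions involved (run on a standby; sketch):

    SELECT pg_wal_replay_pause();
    SELECT pg_get_wal_replay_pause_state();  -- 'not paused', 'pause requested', or 'paused'
    SELECT pg_wal_replay_resume();
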
4bbf35a579 doc review: Add pg_amcheck, a CLI for contrib/amcheck.
diff --git a/doc/src/sgml/ref/pg_amcheck.sgml b/doc/src/sgml/ref/pg_amcheck.sgml
index fcc96b430a..d01e26faa8 100644
--- a/doc/src/sgml/ref/pg_amcheck.sgml
+++ b/doc/src/sgml/ref/pg_amcheck.sgml
@@ -460,7 +460,7 @@ PostgreSQL documentation
      <term><option>--skip=<replaceable class="parameter">option</replaceable></option></term>
      <listitem>
       <para>
-       If <literal>"all-frozen"</literal> is given, table corruption checks
+       If <literal>all-frozen</literal> is given, table corruption checks
        will skip over pages in all tables that are marked as all frozen.
       </para>
       <para>
c7bf0bcc61 doc review: Pass all scan keys to BRIN consistent function at once
diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml
index d2f12bb605..d2476481af 100644
--- a/doc/src/sgml/brin.sgml
+++ b/doc/src/sgml/brin.sgml
@@ -833,7 +833,7 @@ typedef struct BrinOpcInfo
       Returns whether all the ScanKey entries are consistent with the given
       indexed values for a range.
       The attribute number to use is passed as part of the scan key.
-      Multiple scan keys for the same attribute may be passed at once, the
+      Multiple scan keys for the same attribute may be passed at once; the
       number of entries is determined by the <literal>nkeys</literal> parameter.
      </para>
     </listitem>
50454d9cf5 doc review: Add support for PROVE_TESTS and PROVE_FLAGS in MSVC scripts
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index 64687b12e6..cb6bb05dc5 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -499,8 +499,8 @@ $ENV{PERL5LIB}=$ENV{PERL5LIB} . ';c:\IPC-Run-0.94\lib';
 
   <para>
    The TAP tests run with <command>vcregress</command> support the
-   environment variables <varname>PROVE_TESTS</varname>, that is expanded
-   automatically using the name patterns given, and
+   environment variables <varname>PROVE_TESTS</varname>, which is
+   expanded as a glob pattern, and
    <varname>PROVE_FLAGS</varname>. These can be set on a Windows terminal,
    before running <command>vcregress</command>:
 <programlisting>
633e7a3b54 doc review: VACUUM (PROCESS_TOAST)
diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index 6a0028a514..949ca23797 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -219,7 +219,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
       corresponding <literal>TOAST</literal> table for each relation, if one
       exists. This is normally the desired behavior and is the default.
       Setting this option to false may be useful when it is only necessary to
-      vacuum the main relation. This option is required when the
+      vacuum the main relation. This option may not be disabled when the
       <literal>FULL</literal> option is used.
      </para>
     </listitem>
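
For reference (sketch; the table name is made up):

    VACUUM (PROCESS_TOAST FALSE) some_table;
    -- VACUUM (FULL, PROCESS_TOAST FALSE) some_table;  -- rejected, per the sentence above
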
7e84a06724 doc review: Multiple xacts during table sync in logical replication
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index e95d446dac..3fad5f34e6 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -490,9 +490,9 @@
      any changes that happened during the initial data copy using standard
      logical replication.  During this synchronization phase, the changes
      are applied and committed in the same order as they happened on the
-     publisher.  Once the synchronization is done, the control of the
+     publisher.  Once synchronization is done, control of the
      replication of the table is given back to the main apply process where
-     the replication continues as normal.
+     replication continues as normal.
     </para>
   </sect2>
  </sect1>
8259924473 doc review: pg_stat_progress_create_index
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index f637fe0415..8287587f61 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -5890,7 +5890,7 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       </para>
       <para>
        When creating an index on a partitioned table, this column is set to
-       the number of partitions on which the index has been completed.
+       the number of partitions on which the index has been created.
        This field is <literal>0</literal> during a <literal>REINDEX</literal>.
       </para></entry>
      </row>
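
The column can be watched while an index build runs, e.g. (sketch):

    SELECT relid::regclass, phase, partitions_total, partitions_done
      FROM pg_stat_progress_create_index;
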
576580e6c3 doc review: piecemeal construction of partitioned indexes
diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml
index 30e4170963..354f9e57bd 100644
--- a/doc/src/sgml/ddl.sgml
+++ b/doc/src/sgml/ddl.sgml
@@ -3957,8 +3957,8 @@ ALTER TABLE measurement ATTACH PARTITION measurement_y2008m02
      As explained above, it is possible to create indexes on partitioned tables
      so that they are applied automatically to the entire hierarchy.
      This is very
-     convenient, as not only will the existing partitions become indexed, but
-     also any partitions that are created in the future will.  One limitation is
+     convenient, as not only the existing partitions will be indexed, but
+     so will any partitions that are created in the future.  One limitation is
      that it's not possible to use the <literal>CONCURRENTLY</literal>
      qualifier when creating such a partitioned index.  To avoid long lock
      times, it is possible to use <command>CREATE INDEX ON ONLY</command>
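
The workflow this paragraph outlines, as a sketch (names follow the measurement example used elsewhere in this chapter):

    CREATE INDEX measurement_logdate_idx ON ONLY measurement (logdate);
    CREATE INDEX CONCURRENTLY measurement_y2008m02_logdate_idx
        ON measurement_y2008m02 (logdate);
    ALTER INDEX measurement_logdate_idx
        ATTACH PARTITION measurement_y2008m02_logdate_idx;
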
1384db4053 doc review: psql \dX
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index ddb7043362..a3cfd3b557 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1927,9 +1927,10 @@ testdb=>
         </para>
 
         <para>
-        The column of the kind of extended stats (e.g. Ndistinct) shows its status.
-        NULL means that it doesn't exists. "defined" means that it was requested
-        when creating the statistics.
+        The status of each kind of extended statistics is shown in a column
+        named after its statistic kind (e.g. Ndistinct).
+        "defined" means that it was requested when creating the statistics,
+        and NULL means it wasn't requested. 
         You can use pg_stats_ext if you'd like to know whether <link linkend="sql-analyze">
         <command>ANALYZE</command></link> was run and statistics are available to the
         planner.
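
For context (sketch): the command being documented, and the pg_stats_ext check the last sentence suggests:

    \dX
    SELECT statistics_name, n_distinct IS NOT NULL AS analyzed
      FROM pg_stats_ext;
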


Attachments

Re: doc review for v14

From
Michael Paquier
Date:
On Thu, Apr 08, 2021 at 11:40:08AM -0500, Justin Pryzby wrote:
> Another round of doc review, not yet including all of yesterday's commits.

Thanks for compiling all that.  I got through the whole set and
applied the most relevant parts on HEAD.  Some of them applied down to
9.6, so I have back-patched them where needed, for the parts that did not
conflict too heavily.
--
Michael

Attachments

Re: doc review for v14

From
Justin Pryzby
Date:
On Fri, Apr 09, 2021 at 02:03:27PM +0900, Michael Paquier wrote:
> On Thu, Apr 08, 2021 at 11:40:08AM -0500, Justin Pryzby wrote:
> > Another round of doc review, not yet including all of yesterday's commits.
> 
> Thanks for compiling all that.  I got through the whole set and
> applied the most relevant parts on HEAD.  Some of them applied down to
> 9.6, so I have back-patched them where needed, for the parts that did not
> conflict too heavily.

Thanks.  Rebased with the remaining queued fixes.

-- 
Justin

Attachments

Re: doc review for v14

From
Justin Pryzby
Date:

Re: doc review for v14

From
Michael Paquier
Date:
On Fri, Apr 16, 2021 at 02:03:10AM -0500, Justin Pryzby wrote:
> A bunch more found with things like this.

Thanks, applied most of it!
--
Michael

Attachments