> On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ) <mailto:Markus.Zwettler@zuerich.ch> wrote:
>> -----Original Message-----
>> From: Michael Paquier <mailto:michael@paquier.xyz>
>> Sent: Friday, December 6, 2019 02:43
>> To: Zwettler Markus (OIZ) <mailto:Markus.Zwettler@zuerich.ch>
>> Cc: Stephen Frost <mailto:sfrost@snowman.net>; mailto:pgsql-general@lists.postgresql.org
>> Subject: Re: archiving question
>>
>> On Thu, Dec 05, 2019 at 03:04:55PM +0000, Zwettler Markus (OIZ) wrote:
>> > What do you mean here?
>> >
>> > AFAIK, Postgres runs the archive_command once per WAL segment, i.e. one log file at a time.
>> >
>> > How should we parallelize this?
>>
>> You can, in theory, skip the archiving for a couple of segments and then do the
>> operation at once without the need to patch Postgres.
>> --
>> Michael
>
>
>Sorry, I am still confused.
>
>Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory away and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?
>
>No, *absolutely* not.
>
>What you can do is have archive_command copy things one by one to a local directory (still sequentially), and then you can have a separate process that sends these to the archive -- and *this* process can be parallelized.
>
>//Magnus
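The two-stage scheme Magnus describes can be sketched as follows. This is only an illustration, not anything from the thread: the spool and archive directories are simulated with mktemp, the example `archive_command` in the comments is a common pattern from the PostgreSQL docs, and the parallel shipper here just uses `xargs -P` with local `cp` (a real setup would use scp, rsync, or an object-store client instead).

```shell
#!/bin/sh
# Stage 1: archive_command copies each WAL segment into a local spool
# directory (cheap, still sequential). A typical postgresql.conf entry:
#   archive_command = 'test ! -f /wal_spool/%f && cp %p /wal_spool/%f'
#
# Stage 2: a separate shipper process drains the spool to the real
# archive in parallel -- sketched below with hypothetical directories.
set -e

SPOOL=$(mktemp -d)    # stand-in for the local spool directory
ARCHIVE=$(mktemp -d)  # stand-in for the remote archive destination

# Simulate a few spooled WAL segment files.
for i in 1 2 3 4; do
    printf 'segment %s\n' "$i" > "$SPOOL/00000001000000000000000$i"
done

# Ship up to 4 segments concurrently; remove each from the spool only
# after its copy succeeded, so a failed transfer is retried next run.
ls "$SPOOL" | xargs -P 4 -I {} \
    sh -c 'cp "$0/{}" "$1/{}" && rm "$0/{}"' "$SPOOL" "$ARCHIVE"

a=$(ls "$ARCHIVE" | wc -l | tr -d ' ')
s=$(ls "$SPOOL" | wc -l | tr -d ' ')
echo "archived: $a, spool left: $s"   # → archived: 4, spool left: 0
```

Because stage 2 runs outside of PostgreSQL, its parallelism is independent of how fast archive_command hands segments over; note that `xargs -P` is a GNU/BSD extension rather than strict POSIX.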
That was my initial question.
Is there a way to speed up this sequential, log-by-log archive_command copy when there are tons of logs in the pg_xlog directory?
Markus