Re: parallelizing the archiver
From: Andrey Borodin
Subject: Re: parallelizing the archiver
Date:
Msg-id: 94D6DCA3-4DB1-4BEE-AD1F-F477E3812C8C@yandex-team.ru
In reply to: Re: parallelizing the archiver (Julien Rouhaud <rjuju123@gmail.com>)
List: pgsql-hackers
> On 10 Sep 2021, at 11:11, Julien Rouhaud <rjuju123@gmail.com> wrote:
>
> On Fri, Sep 10, 2021 at 2:03 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:
>>
>>> On 10 Sep 2021, at 10:52, Julien Rouhaud <rjuju123@gmail.com> wrote:
>>>
>>> Yes, but it also means that it's up to every single archiving tool to
>>> implement a somewhat hackish parallel version of an archive_command,
>>> hoping that core won't break it.
>>
>> I'm not proposing to remove the existing archive_command, just to deprecate its one-WAL-per-call form.
>
> Which is a big API break.

It's a huge extension, not a break.

>> It's a very simplistic approach. If some GUC is set, the archiver will just feed ready files to the stdin of the archive command. What fundamental design changes do we need?
>
> I'm talking about the commands themselves. Your suggestion is to
> change archive_command to be able to spawn a daemon, and it looks like
> a totally different approach. I'm not saying that having a daemon
> based approach to take care of archiving is a bad idea, I'm saying
> that trying to fit that with the current archive_command + some new
> GUC looks like a bad idea.

It fits nicely, even in corner cases. E.g. the restore_command run from pg_rewind seems compatible with this approach.

One more example: after a failover the DBA can just run `ls | wal-g wal-push` to archive all the WALs left unarchived before the network partition.

This is a simple yet powerful approach, without any contradiction to the existing archive_command API. Why is it a bad idea?

Best regards, Andrey Borodin.
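To make the proposed mode concrete, here is a minimal sketch of what a stdin-driven archive command could look like under this design: the archiver writes one ready WAL segment name per line, and the command archives each segment before reading the next. The function name, directory variables, and the copy-then-rename scheme are illustrative assumptions, not something specified in the thread.

```shell
#!/bin/sh
# Hypothetical stdin-driven archive command (sketch, not from the thread).
# The archiver is assumed to feed one ready WAL segment name per line;
# we copy each segment to the archive via a temp file, then rename, so a
# partially written segment is never visible under its final name.
archive_from_stdin() {
    # WAL_DIR and ARCHIVE_DIR are illustrative; a real tool would take
    # its destination from configuration.
    while IFS= read -r segment; do
        cp "$WAL_DIR/$segment" "$ARCHIVE_DIR/$segment.tmp" &&
        mv "$ARCHIVE_DIR/$segment.tmp" "$ARCHIVE_DIR/$segment" || return 1
    done
}
```

A daemon-style tool would simply keep this loop running for the life of the archiver, which is what makes the single-GUC extension compatible with batch use such as piping `ls` output into it after a failover.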