Re: WIP/PoC for parallel backup
From | Asim R P
Subject | Re: WIP/PoC for parallel backup
Date | |
Msg-id | CANXE4Tc=YmPC7R+WWd6U8MOc4K2p0J7gM9DTk+LTaH693rphxg@mail.gmail.com
In reply to | WIP/PoC for parallel backup (Asif Rehman <asifr.rehman@gmail.com>)
Responses | Re: WIP/PoC for parallel backup; Re: WIP/PoC for parallel backup
List | pgsql-hackers
Hi Asif
Interesting proposal. The bulk of the work in a backup is transferring files from the source data directory to the destination. Your patch breaks this task down into multiple sets of files and transfers each set in parallel. This seems correct; however, your patch also creates a new process to handle each set. Is that necessary? I think we should try to achieve this using multiple asynchronous libpq connections from a single basebackup process, that is, by using the PQconnectStartParams() interface instead of PQconnectdbParams(), which basebackup currently uses. On the server side, it may still result in multiple backend processes, one per connection, and an attempt should be made to avoid that as well, but that seems complicated.
What do you think?
Asim