Re: Finalizing logical replication limitations as well as potential features
From | Joshua D. Drake
---|---
Subject | Re: Finalizing logical replication limitations as well as potential features
Date |
Msg-id | aabe9032-4a58-19a9-de2e-aae83abf2420@commandprompt.com
In response to | Re: Finalizing logical replication limitations as well as potential features (Alvaro Herrera <alvherre@alvh.no-ip.org>)
Responses | Re: Finalizing logical replication limitations as well as potential features
List | pgsql-hackers
On 01/04/2018 01:26 PM, Alvaro Herrera wrote:
> Joshua D. Drake wrote:
>
>> We just queue/audit the changes as they happen and sync up the changes
>> after the initial sync completes.
>
> This already happens. There is an initial sync, and there's logical
> decoding that queues any changes that exist "after" the sync's snapshot.
>
> What you seem to want is to have multiple processes doing the initial
> COPY in parallel -- each doing one fraction of the table. Of course,
> they would have to use the same snapshot. That would make sense only
> if the COPY itself is the bottleneck and not the network, or the I/O
> speed of the origin server. This doesn't sound like a common scenario to me.

Not quite, but close. My thought process is that we don't want to sync a 100-500 million row table (or worse) within a single snapshot. Unless I am missing something there, that has the potential to be a very long-running transaction, especially if we are syncing more than one relation.

JD

--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
***** Unless otherwise stated, opinions are my own. *****
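[Editor's note: the "same snapshot" mechanism Alvaro refers to is already exposed at the SQL level via snapshot export. The sketch below is illustrative only, not part of the original thread; the table name, id column, slice ranges, and snapshot identifier are placeholders.]

```sql
-- A minimal sketch of sharing one snapshot across sessions so that several
-- workers could each COPY a slice of a large table consistently.

-- Coordinator session: open a transaction and export its snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- returns an id such as '00000003-0000001B-1'

-- Each worker session: adopt the exported snapshot as its first action,
-- then copy out one slice of the (hypothetical) big_table.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
COPY (SELECT * FROM big_table WHERE id BETWEEN 1 AND 100000000) TO STDOUT;
COMMIT;
```

Note that SET TRANSACTION SNAPSHOT must run before any query in a REPEATABLE READ (or SERIALIZABLE) transaction, and the exported snapshot stays valid only while the coordinator's transaction remains open.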