Re: Fastest way to duplicate a quite large database
From | Adrian Klaver
---|---
Subject | Re: Fastest way to duplicate a quite large database
Date |
Msg-id | 570E552C.8090908@aklaver.com
In response to | Re: Fastest way to duplicate a quite large database (Edson Richter <edsonrichter@hotmail.com>)
Responses | Re: Fastest way to duplicate a quite large database
List | pgsql-general
On 04/13/2016 06:58 AM, Edson Richter wrote:

> Another trouble I've found: I've used "pg_dump" and "pg_restore" to
> create the new CustomerTest database in my cluster. Immediately,
> replication started to replicate the 60GB of data into the slave,
> causing big trouble.
> Does marking it as "template" avoid replication of that "copied" database?
> How can I mark a database as "do not replicate"?

With the Postgres built-in binary replication you can't; it replicates
the entire cluster. There are third-party solutions that offer that choice:

http://www.postgresql.org/docs/9.5/interactive/different-replication-solutions.html

Table 25-1. High Availability, Load Balancing, and Replication Feature Matrix

As has been mentioned before, running a non-production database on the
same cluster as the production database is generally not a good idea.
Per previous suggestions, I would host your CustomerTest database on
another instance/cluster of Postgres listening on a different port. Then
all your customers have to do is create a connection that points at the
new port.

> Thanks,
>
> Edson

--
Adrian Klaver
adrian.klaver@aklaver.com
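[Editor's note] The separate-cluster approach above can be sketched roughly as follows. This is a minimal outline, not from the thread itself: the data directory path, port 5433, the database name, and the dump filename are all placeholder assumptions to adapt to your own setup.

```shell
# Initialize a second, independent cluster in its own data directory
# (path is a placeholder -- choose one your postgres user can write to).
initdb -D /var/lib/postgresql/test_cluster

# Start the new cluster on a non-default port so it cannot collide with
# the production instance (5432); 5433 here is an arbitrary choice.
pg_ctl -D /var/lib/postgresql/test_cluster -o "-p 5433" -l test_cluster.log start

# Create the test database in the NEW cluster and restore the dump there.
# Nothing in this cluster is touched by the production cluster's
# streaming replication, which only replicates its own instance.
createdb -p 5433 CustomerTest
pg_restore -p 5433 -d CustomerTest customer_test.dump
```

Clients then only need to change the port in their connection settings to reach the test copy.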