Re: Fastest way to duplicate a quite large database
| From | Adrian Klaver |
|---|---|
| Subject | Re: Fastest way to duplicate a quite large database |
| Date | |
| Msg-id | 570D0E70.5050208@aklaver.com |
| In reply to | Re: Fastest way to duplicate a quite large database (Edson Richter <edsonrichter@hotmail.com>) |
| List | pgsql-general |
On 04/12/2016 07:51 AM, Edson Richter wrote:
> Same machine, same cluster - just different database name.

Hmm, running tests against the same cluster you are running the production
database on would seem to be a performance hit against the production
database, and potentially dangerous should the tests trip a bug that crashes
the server.

>
> Best regards,
>
> Edson Carlos Ericksson Richter
>
> On 12/04/2016 11:46, John R Pierce wrote:
>> On 4/12/2016 7:25 AM, Edson Richter wrote:
>>>
>>> I have a database "Customer" with about 60 GB of data.
>>> I know I can back up and restore, but this seems too slow.
>>>
>>> Is there any other option to duplicate this database as
>>> "CustomerTest" as fast as possible (even faster than backup/restore),
>>> ideally in one operation (something like "copy database A to B")?
>>> I would like to run this every day, overnight, with minimal impact, to
>>> prepare a test environment based on production data.
>>
>> Copy to the same machine, or copy to a different test server?
>> Different answers.

--
Adrian Klaver
adrian.klaver@aklaver.com
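For the same-cluster case discussed above, a minimal sketch of the kind of
single-operation copy being asked about, assuming the database names
"Customer" and "CustomerTest" and that no other sessions are connected to the
source database while the copy runs, could look like this:

    -- Remove yesterday's test copy, then clone the production database by
    -- using it as a template. CREATE DATABASE ... TEMPLATE performs a
    -- file-level copy and requires that no other sessions are connected to
    -- the template database for the duration of the command.
    DROP DATABASE IF EXISTS "CustomerTest";
    CREATE DATABASE "CustomerTest" TEMPLATE "Customer";

This is usually much faster than a dump and restore for a 60 GB database, but
it still competes for I/O on the production server and, as noted above, keeps
the test copy on the production cluster.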