Re: Fastest way to duplicate a quite large database
From | Louis Battuello
---|---
Subject | Re: Fastest way to duplicate a quite large database
Date |
Msg-id | 96BD513D-CAD7-4041-BE2D-C3BEC29BB644@etasseo.com
In response to | Re: Fastest way to duplicate a quite large database (John R Pierce <pierce@hogranch.com>)
List | pgsql-general
> On Apr 12, 2016, at 11:14 AM, John R Pierce <pierce@hogranch.com> wrote:
>
>> On 4/12/2016 7:55 AM, John McKown wrote:
>>> Hum, I don't know exactly how to do it, but on Linux, you could put the "Customer" database in a tablespace which resides on a BTRFS filesystem. BTRFS can do a quick "snapshot" of the filesystem....
>
> except, tablespaces aren't standalone, and there's no provision for importing the contents of the tablespace. all the metadata remains in the default tablespace, which leaves all sorts of room for problems if you do this.
>
> the /best/ way to achieve what the OP is asking for would likely be to run the tests on a separate server (or at least a separate postgres instance aka cluster), and use pg_basebackup to rebuild this test instance.
>
> --
> john r pierce, recycling bits in santa cruz

I agree with John's post. I should have mentioned that my template database is never production. It's an obfuscated copy of the production data on separate hardware. I use "create with template" to spin up copies for developers/testers to provide a representative data set (not identical to production). And, since the create doesn't copy table statistics, I have to kick off a post-copy background process to gather them:

    nohup vacuumdb --analyze-only --quiet --dbname=${DATABASE} &>/dev/null &

Still, with all that, users can drop and recreate a test database within a coffee break.