Re: Backup hot-standby database.

From    | Stephen Rees
Subject | Re: Backup hot-standby database.
Date    |
Msg-id  | A71C2574-38E9-4EC4-A118-20E11546B7C8@pandora.com
In response to | Re: Backup hot-standby database. (Robert Treat <rob@xzilla.net>)
Responses | Hot-standby/Reporting database.
          | Re: Backup hot-standby database.
List    | pgsql-admin
Robert,

Thank you for the reply. I had the wrong end of the stick regarding pg_dump and hot-standby. I will take a look at omnipitr, as you suggest.

Per your comment

> You have to stop replay while you are doing the dumps like this

how do I stop, then resume, replay with both the master and hot-standby available throughout?

- Steve

On Mar 15, 2011, at 3:04 PM, Robert Treat wrote:

> On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees <srees@pandora.com> wrote:
>> Using PostgreSQL 9.0.x
>>
>> I cannot use pg_dump to generate a backup of a database on a hot-standby
>> server, because it is, by definition, read-only.
>
> That really makes no sense :-) You can use pg_dump on a read-only
> slave, but I think the issue that people tend to run into is that the
> pg_dump operations get canceled out by incoming changes before it can
> finish. You can of course modify the configs to work around this
> somewhat, but eventually it becomes a problem.
>
>> However, it seems that I can use COPY TO within a serializable
>> transaction to create a consistent set of data file(s). For example,
>>
>> BEGIN TRANSACTION;
>> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
>> COPY t1 TO '/tmp/t1';
>> COPY t2 TO '/tmp/t2';
>>
>> ... etc ...
>>
>> COPY t<n> TO '/tmp/t<n>';
>> COMMIT TRANSACTION;
>>
>> I can then use pg_dump to export the corresponding database schema
>> from the master DBMS.
>>
>> Is this going to scale to a multi-GB database, where it will take
>> hours to export the data from all of the tables, or are there
>> scalability issues of which I should be aware?
>
> Well, basically that's it in a nutshell. You have to stop replay while
> you are doing the dumps like this, so eventually that delay becomes
> unbearable for most people (especially on the order of hours).
>
> There are several ways to work around this... you can use filesystem
> snapshots to make copies and dump from there; great if you have the
> option.
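[Editor's note: the stop/resume question became straightforward in a later release. PostgreSQL 9.1 added replay pause/resume functions callable from a standby session; they do not exist in the 9.0.x discussed in this thread. A minimal sketch, assuming 9.1 or later:]

```sql
-- On the standby (PostgreSQL 9.1+ only).
-- Pause WAL replay: read-only queries keep working, but the standby stops
-- applying incoming changes, so a long-running dump is not cancelled by
-- replication conflicts. The master continues unaffected.
SELECT pg_xlog_replay_pause();

-- Optional: confirm replay is paused.
SELECT pg_is_xlog_replay_paused();

-- ... run pg_dump or the COPY transaction against the standby here ...

-- Resume replay; the standby then catches up from the accumulated WAL.
SELECT pg_xlog_replay_resume();
```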
> If you don't, you might want to look into omnipitr; it can
> create filesystem-level backups from a slave (not the same as a
> logical export, but it might do).
>
> Robert Treat
> play: xzilla.net
> work: omniti.com
> hiring: l42.org/lg
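[Editor's note: the config workaround Robert alludes to ("modify the configs") is the standby's query-conflict delay settings, which do exist in 9.0. A sketch of that approach; the trade-off is that replay falls behind for the duration of the dump:]

```
# postgresql.conf on the standby (PostgreSQL 9.0)
# How long replay waits before cancelling standby queries that conflict
# with incoming changes. -1 means wait indefinitely, so replay stalls
# behind a long dump instead of cancelling it.
max_standby_streaming_delay = -1
max_standby_archive_delay = -1
```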