Re: Hot Standby vs slony
From | bricklen
Subject | Re: Hot Standby vs slony
Date |
Msg-id | CAGrpgQ8X4Q2s2jGHK=OcTf7kNfPFtgthtXzo1JA3zqj9mMSzFg@mail.gmail.com
In response to | Hot Standby vs slony (Mark Steben <mark.steben@drivedominion.com>)
List | pgsql-admin
On Thu, Feb 8, 2018 at 1:09 PM, Mark Steben <mark.steben@drivedominion.com> wrote:
> Good afternoon,
>
> We currently run postgres 9.4, in the following configuration:
>
>     production --> slony (reporting, high availability)
>     production --> hot standby (DR)
>
> We would like to replace slony with another instance of hot standby, as follows:
>
>     production --> hot standby1 (reporting, HA)
>     production --> hot standby2 (DR)
>
> Is this possible? I see in the documentation it is possible for warm standby, but I don't see a confirmation in the section on hot standby.
Yes, you can run multiple hot standbys from the primary, or cascade hot standbys from each other (and combinations of both).
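For reference, on 9.4 the topology you drew needs only a handful of settings. A minimal sketch, assuming a replication user named "replicator" and a primary reachable as "production" (both placeholders):

    # postgresql.conf on the primary
    wal_level = hot_standby
    max_wal_senders = 5          # enough for both standbys plus a base backup
    wal_keep_segments = 256      # or use replication slots (new in 9.4)

    # postgresql.conf on each standby
    hot_standby = on             # accept read-only queries during recovery

    # recovery.conf on each standby
    standby_mode = 'on'
    primary_conninfo = 'host=production port=5432 user=replicator'

Seed each standby from a base backup of the primary (e.g. pg_basebackup), then point your reporting traffic at standby1.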
I can say that with confidence, as one of the common configurations I'm running (across roughly 1500 servers) consists of a primary PG cluster with a hot standby using streaming replication (async) within the same data centre, plus a remote "primary" hot standby fed by WAL shipping, and a remote hot standby streaming off that. The remote primary runs with delayed WAL application, varying between 1 and 4 hours depending on the class of replica set. This configuration covers basic DR and HA, and in the case of user error we can fail over (promote the remote primary replica before any user-destructive changes are applied to the remote hot standby). One caveat is that a sudden interruption between DCs followed by a failover could result in some data loss, depending on the archive_timeout / WAL switch rate, etc., but that's a business RPO we've agreed upon with clients.
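The delayed remote replica in that setup is likewise just recovery.conf. A sketch under the same assumptions (the archive path, host name, and delay value are placeholders; pick a delay that matches your RPO):

    # recovery.conf on the remote "primary" replica, fed by WAL shipping
    standby_mode = 'on'
    restore_command = 'cp /wal_archive/%f %p'   # hypothetical archive location
    recovery_min_apply_delay = '2h'             # hold back WAL application (9.4+)
    trigger_file = '/tmp/promote_me'            # touch this (or run pg_ctl promote) to fail over

    # recovery.conf on the remote hot standby cascading off it
    standby_mode = 'on'
    primary_conninfo = 'host=remote-replica port=5432 user=replicator'

Note that recovery_min_apply_delay only delays application on the server where it is set; WAL is still received (and can be cascaded downstream) as soon as it arrives.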