[SPAM] Re: Best way to replicate to large number of nodes
From:        Ben Chobot
Subject:     [SPAM] Re: Best way to replicate to large number of nodes
Date:
Msg-id:      8B36C1FC-1BCE-4A7A-9FAC-C64EC6B1416A@silentmedia.com
In reply to: Best way to replicate to large number of nodes (Brian Peschel <brianp@occinc.com>)
Responses:   Re: [SPAM] Re: Best way to replicate to large number of nodes
List:        pgsql-general
On Apr 21, 2010, at 1:41 PM, Brian Peschel wrote:

> I have a replication problem I am hoping someone has come across before and can provide a few ideas.
>
> I am looking at a configuration of one 'writable' node and anywhere from 10 to 300 'read-only' nodes. Almost all of these nodes will be across a WAN from the writable node (some over slow VPN links too). I am looking for a way to replicate as quickly as possible from the writable node to all the read-only nodes. I can pretty much guarantee the read-only nodes will never become master nodes. Also, the updates to the writable node are batched and happen at known times (i.e. it is only updated when I want it updated, not with constant updates), but when changes occur, there are a lot of them at once.

Two things you didn't address are the acceptable latency of keeping the read-only nodes in sync with the master - can they be different for a day? A minute? Do you need things to stay synchronous? Also, how big is your dataset? A simple pg_dump and some hot scp action after your batched updates might be able to solve your problem.
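For what it's worth, the dump-and-copy approach is easy to sketch. Below is a minimal illustration in Python; the hostnames, the database name (mydb), the dump path, and passwordless SSH to each node are all assumptions for the example, not anything stated in the thread:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical read-only hosts; substitute your real WAN nodes.
    READ_ONLY_NODES = [f"ro-{i:03d}.example.com" for i in range(1, 301)]
    DUMP_FILE = "/tmp/master.dump"

    # Dump the master once, right after a batch of updates has landed.
    # -Fc writes a compressed custom-format archive, which helps on
    # slow VPN links.
    subprocess.run(["pg_dump", "-Fc", "-f", DUMP_FILE, "mydb"], check=True)

    def push(node: str) -> None:
        # Copy the dump over, then restore it remotely with pg_restore.
        subprocess.run(["scp", DUMP_FILE, f"{node}:{DUMP_FILE}"], check=True)
        subprocess.run(
            ["ssh", node, f"pg_restore --clean -d mydb {DUMP_FILE}"],
            check=True,
        )

    # Fan the copies out in parallel so 10-300 nodes don't serialize
    # on the WAN.
    with ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(push, READ_ONLY_NODES))

Whether shipping a full dump like this beats proper replication depends entirely on the dataset size and the acceptable latency asked about above.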