Re: [SPAM] Re: Best way to replicate to large number of nodes
From: Brian Peschel
Subject: Re: [SPAM] Re: Best way to replicate to large number of nodes
Date:
Msg-id: 4BD09667.5040908@occinc.com
In reply to: [SPAM] Re: Best way to replicate to large number of nodes (Ben Chobot <bench@silentmedia.com>)
Responses: Re: [SPAM] Re: Best way to replicate to large number of nodes
List: pgsql-general
On 04/22/2010 10:12 AM, Ben Chobot wrote:
> On Apr 21, 2010, at 1:41 PM, Brian Peschel wrote:
>
>> I have a replication problem I am hoping someone has come across before and can provide a few ideas.
>>
>> I am looking at a configuration of one 'writable' node and anywhere from 10 to 300 'read-only' nodes. Almost all of these nodes will be across a WAN from the writable node (some over slow VPN links too). I am looking for a way to replicate as quickly as possible from the writable node to all the read-only nodes. I can pretty much guarantee the read-only nodes will never become master nodes. Also, the updates to the writable node are batched and happen at known times (i.e. only when I want them, not constant updates), but when changes occur, there are a lot of them at once.
>
> Two things you didn't address are the acceptable latency of keeping the read-only nodes in sync with the master - can they be different for a day? A minute? Do you need things to stay synchronous? Also, how big is your dataset? A simple pg_dump and some hot scp action after your batched updates might be able to solve your problem.

Latency is important. I would say 10 to 15 minutes max, but the shorter the better. I don't have an exact size, but I believe the entire DB is about 10 GB.

We had an idea of having our apps write the SQL statements to a file, rather than using an ODBC driver to change the DBs directly. Then we could scp/rsync the files to the remote machines and execute them there. This just seems like a very manual process, though.

- Brian
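
P.S. For reference, Ben's pg_dump/scp idea might look roughly like this in practice. This is only a sketch: the host names (node01..node03), database name (mydb), and paths are made up, and it assumes passwordless ssh from the writable node to each read-only node.

    # On the master, after a batch of updates finishes:
    # dump the whole database in custom format, then ship it
    # to each read-only node and restore it there.
    pg_dump -Fc mydb > /tmp/mydb.dump
    for host in node01 node02 node03; do
        scp /tmp/mydb.dump "$host:/tmp/mydb.dump"
        ssh "$host" "pg_restore --clean --dbname=mydb /tmp/mydb.dump"
    done

With a roughly 10 GB database and some nodes on slow VPN links, shipping a full dump every time is probably the weak point of this approach.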
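
And the SQL-file idea would be something like the following (again, hypothetical host names, database name, and file paths):

    # Apps append their statements to a batch file instead of
    # writing through ODBC; push the file to each node and
    # replay it there in one transaction.
    for host in node01 node02 node03; do
        rsync -z /var/batches/updates.sql "$host:/tmp/updates.sql"
        ssh "$host" "psql --single-transaction -d mydb -f /tmp/updates.sql"
    done

Running the file with --single-transaction at least keeps a node consistent if a VPN link drops partway through applying a batch.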