using pg_comparator as replication
| From | Erik Aronesty |
|---|---|
| Subject | using pg_comparator as replication |
| Date | |
| Msg-id | ccd588d90911050748r23d908bek751b36056ff6c3fb@mail.gmail.com |
| List | pgsql-hackers |
An update on pg_comparator as an efficient way to do master-slave replication.

I have been using it for 2 years on a "products" table that has grown from 12,000 rows to 24,000 rows. There are 3 slaves and 1 master. It is synced every 10 minutes.

It has never failed or caused problems.

On 23,039 rows, with under 100 rows changed, over a 3 Mbit internet connection, the sync takes 3.3 seconds, 0.94 seconds of which is CPU time (1.86 GHz Intel dual core). Most of the time is spent waiting for the network, and that could be sped up considerably with compression (maybe 5-10 times for my data)... I don't think the Postgres communications protocol considers compression an option.

I do not synchronize all the columns... just the 15 most important ones.

Average number of bytes per row is 284.

Primary key is an autoincrement integer id.

Databases are all on the internet at cheap colocation centers with supposedly 10 Mbit high-speed connections that realistically get about 3 Mbit.

I ship a backup and restore of the table every week... in case there are tons of changes and the system burps when there are too many. I also schedule scripts that might make lots of changes to happen before the dump/restore.

In my 15 years as a DBA, I have never had "replication" (which some say this isn't, and I say that's a matter of how you define it) work so well.

(Apologies for the hasty post with the wrong subject... please ignore/delete.)
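For readers unfamiliar with the tool: the reason a sync of 23,000 rows moves so little data is that pg_comparator compares checksums of rows, folded into checksums of groups of rows, so only the groups that differ are drilled into and only the differing rows cross the network. A minimal Python sketch of that idea follows; the function names, the MD5 choice, and the single-level bucket scheme are illustrative stand-ins, not pg_comparator's actual implementation (which builds its checksum trees in SQL on each server):

```python
import hashlib
from collections import defaultdict

def checksum(value: str) -> str:
    # Stand-in for a per-row or per-bucket checksum computed server-side.
    return hashlib.md5(value.encode()).hexdigest()

def summarize(table: dict, nbuckets: int = 4):
    """Per-row checksums folded into per-bucket checksums (one tree level).

    `table` maps primary key -> the concatenated payload of the synced columns.
    """
    rows = {pk: checksum(payload) for pk, payload in table.items()}
    buckets = defaultdict(list)
    for pk, h in rows.items():
        buckets[pk % nbuckets].append((pk, h))
    bucket_sums = {b: checksum("".join(h for _, h in sorted(pairs)))
                   for b, pairs in buckets.items()}
    return rows, buckets, bucket_sums

def diff(master: dict, slave: dict, nbuckets: int = 4):
    """Return primary keys whose rows must be shipped to the slave."""
    m_rows, m_buckets, m_sums = summarize(master, nbuckets)
    s_rows, s_buckets, s_sums = summarize(slave, nbuckets)
    changed = []
    for b in set(m_sums) | set(s_sums):
        if m_sums.get(b) == s_sums.get(b):
            continue  # bucket checksums agree: skip every row inside it
        keys = ({pk for pk, _ in m_buckets.get(b, [])}
                | {pk for pk, _ in s_buckets.get(b, [])})
        for pk in keys:
            if m_rows.get(pk) != s_rows.get(pk):
                changed.append(pk)  # changed, inserted, or deleted row
    return sorted(changed)
```

With a low change rate (under 100 changed rows out of 23,000 here), almost every bucket checksum matches and is skipped in one comparison, which is why the wall-clock cost is dominated by the network round-trips rather than by row data.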