performance hit for replication
From: Matthew Nuzum
Subject: performance hit for replication
Date:
Msg-id: 425bf664.1fac7adf.258a.5c79@mx.gmail.com
Responses: Re: performance hit for replication
           Re: performance hit for replication
List: pgsql-performance
I'd like to create a fail-over server in case of a problem. Ideally, it would be synchronized with our main database server, but I don't see any major problem with having a delay of up to 4 hours between syncs.

My database is a little shy of 10 Gigs, with much of that data being in an archived log table. Every day a batch job is run which adds 100,000 records over the course of 3 hours (the batch job does a lot of pre/post processing).

Doing a restore of the db backup in vmware takes about 3 hours. I suspect a powerful server with a better disk setup could do it faster, but I don't have servers like that at my disposal, so I need to assume a worst case of 3-4 hours is typical.

So, my question is this: my server currently works great, performance-wise. I need to add fail-over capability, but I'm afraid that introducing a stressful task such as replication will hurt my server's performance. Is there any foundation to my fears?

I don't need to replicate the archived log data, because I can easily restore that in a separate step from the nightly backup if disaster occurs. Also, my database load is largely selects.

My application works great with PostgreSQL 7.3 and 7.4, but I'm currently using 7.3.

I'm eager to hear your thoughts and experiences,

--
Matthew Nuzum <matt@followers.net>
www.followers.net - Makers of "Elite Content Management System"

Earn a commission of $100 - $750 by recommending Elite CMS.
Visit http://www.elitecms.com/Contact_Us.partner for details.
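[Editor's note: since a lag of up to four hours is acceptable and the archived log table can be restored separately, the simplest approach that avoids any replication load on the primary is a periodic dump-and-restore to the standby. Below is a minimal sketch of such a sync script. The host names, database name, and the table name archived_log are assumptions for illustration, and the -T/--exclude-table flag requires pg_dump 8.2 or later, not the 7.3 the poster is running (on 7.3 the large table would have to be dumped around by listing the other tables explicitly).]

```python
#!/usr/bin/env python
"""Hypothetical nightly sync: dump the primary, skipping the large
archived-log table, then restore into the fail-over server.

Assumptions (not from the original post): host names, database name,
the table name 'archived_log', and pg_dump >= 8.2 for -T.
"""
import subprocess

PRIMARY = "db1.example.com"      # assumed primary host
STANDBY = "db2.example.com"      # assumed fail-over host
DBNAME = "appdb"                 # assumed database name
DUMPFILE = "/var/backups/appdb.dump"

def run(cmd):
    """Echo and execute a command, raising on failure."""
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# Custom-format dump of everything except the archived log table;
# that table can be restored in a separate step from the nightly
# backup if disaster strikes.
run(["pg_dump", "-h", PRIMARY, "-Fc",
     "-T", "archived_log", "-f", DUMPFILE, DBNAME])

# Recreate the database on the standby, then load the dump.
run(["dropdb", "-h", STANDBY, DBNAME])
run(["createdb", "-h", STANDBY, DBNAME])
run(["pg_restore", "-h", STANDBY, "-d", DBNAME, DUMPFILE])
```

Run nightly from cron, this keeps the standby within the stated 4-hour window only if the dump and restore together fit in that window; trigger-based replication (e.g. Slony-I, which existed for 7.3) would be the alternative if they do not.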