High Reliability without High Availability?
From | Al Cohen |
---|---|
Subject | High Reliability without High Availability? |
Date | |
Msg-id | rP6cncobCOMEesfdRWPC-w@speakeasy.net |
Responses | Re: High Reliability without High Availability? |
List | pgsql-general |
We've been using PostgreSQL for some time, and it's been very, very reliable. However, we're starting to think about preparing for something bad happening: dead drives, fires, locusts, and whatnot. In our particular situation, being down for two hours or so is OK; what's really bad is losing data.

The PostgreSQL replication solutions that we're seeing are very clever, but they seem to require significant effort to set up and keep running. Since we don't care whether a slave DB is ready to take over at a moment's notice, I'm wondering if there is some way to generate data, in real time, that would allow an offline rebuild in the event of catastrophe. We could copy this data across the 'net as it becomes available, so we could be OK even if the place burned down.

Is there a log file that does, or could do, this? Or some internal system table that we could use to generate something?

Thanks!

Al Cohen
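[A note not in the original post: what is described here, real-time data that permits an offline rebuild without a hot standby, is roughly what PostgreSQL's continuous WAL archiving provides (available from 8.0 onward). A minimal sketch, assuming a recent-enough server and placeholder paths, might look like this in postgresql.conf:]

```
# postgresql.conf -- hedged sketch of continuous WAL archiving.
# archive_mode exists from 8.3; on 8.0-8.2, setting archive_command alone
# enables archiving. Paths below are placeholders, not recommendations.
archive_mode = on
# %p expands to the WAL segment's path, %f to its file name.
# The command must return zero only if the copy truly succeeded.
archive_command = 'cp %p /mnt/offsite/wal/%f'
```

Restoring then means taking a base backup plus the archived WAL segments and replaying them, so recovery within a couple of hours, without a live slave, fits the scenario described above.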