Re: Linux ready for high-volume databases?
From: Vivek Khera
Subject: Re: Linux ready for high-volume databases?
Date:
Msg-id: x7n0dvoygi.fsf@yertle.int.kciLink.com
In response to: ("Gregory S. Williamson" <gsw@globexplorer.com>)
List: pgsql-general
>>>>> "GS" == Greg Stark <gsstark@mit.edu> writes:

GS> the first approach. I'm thinking taking a pg_dump regularly
GS> (nightly if I can get away with doing it that infrequently)
GS> keeping the past n dumps, and burning a CD with those dumps.

Basically what I do. I burn a set of CDs from one of my dumps once a
week, and keep the rest online for a few days. I'm really getting close
to splurging for a DVD writer, since my dumps are way too big for a
single CD.

GS> This doesn't provide what online backups do, of recovery to the
GS> minute of the crash. And I get nervous having only logical pg_dump
GS> output, no backups of the actual blocks on disk. But is that what
GS> everybody does?

Well, if you want backups of the actual blocks on disk, then you need
to shut down the postmaster so that the copy is consistent; you can't
copy the table files "live". So yes, a regular pg_dump is pretty much
your safest bet for a consistent backup. Using a replicated slave
with, e.g., eRServer, is another option, but that requires more
hardware.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD  +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/
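For concreteness, here is a minimal Python sketch of the nightly-dump
rotation described above. The database name, dump directory, and
retention count are illustrative assumptions, not details from the
original setup.

    #!/usr/bin/env python3
    """Nightly pg_dump rotation: keep only the most recent N dumps."""
    import datetime
    import pathlib
    import subprocess

    DB_NAME = "mydb"                               # assumption: database name
    DUMP_DIR = pathlib.Path("/var/backups/pgsql")  # assumption: dump location
    KEEP = 7                                       # keep the past n dumps

    def nightly_dump() -> None:
        DUMP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().isoformat()
        out = DUMP_DIR / f"{DB_NAME}-{stamp}.dump"
        # pg_dump takes a consistent logical snapshot without stopping
        # the server; -Fc writes the compressed custom archive format.
        subprocess.run(["pg_dump", "-Fc", "-f", str(out), DB_NAME],
                       check=True)
        # Drop the oldest dumps beyond the retention window.
        for old in sorted(DUMP_DIR.glob(f"{DB_NAME}-*.dump"))[:-KEEP]:
            old.unlink()

    if __name__ == "__main__":
        nightly_dump()

Run it from cron each night; the dumps it keeps on disk are the ones
you would burn to CD (or DVD) once a week.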
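And a similar sketch of the cold file-level copy mentioned above: the
postmaster is stopped before the data directory is copied, so the
copied blocks are consistent. The paths and the pg_ctl invocation are
assumptions for a typical install, not a prescription.

    #!/usr/bin/env python3
    """Cold file-level backup: stop postmaster, copy PGDATA, restart."""
    import datetime
    import shutil
    import subprocess

    PGDATA = "/var/lib/pgsql/data"            # assumption: data directory
    BACKUP_ROOT = "/var/backups/pgsql-files"  # assumption: destination

    def cold_backup() -> None:
        stamp = datetime.date.today().isoformat()
        dest = f"{BACKUP_ROOT}/data-{stamp}"
        # The postmaster must be down while the files are copied;
        # a "live" copy of the table files is not consistent.
        subprocess.run(["pg_ctl", "-D", PGDATA, "stop", "-m", "fast"],
                       check=True)
        try:
            shutil.copytree(PGDATA, dest)
        finally:
            # Restart even if the copy fails.
            subprocess.run(["pg_ctl", "-D", PGDATA, "start"], check=True)

    if __name__ == "__main__":
        cold_backup()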