Re: Implementing incremental backup
From | Claudio Freire |
---|---|
Subject | Re: Implementing incremental backup |
Date | |
Msg-id | CAGTBQpZhBifNKP+g4PeKFAVV=yNEXeczq8an0gqBSvoXbUS4Ag@mail.gmail.com |
In reply to | Implementing incremental backup (Tatsuo Ishii <ishii@postgresql.org>) |
Responses | Re: Implementing incremental backup, Re: Implementing incremental backup |
List | pgsql-hackers |
On Wed, Jun 19, 2013 at 7:13 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>
> For now, my idea is pretty vague.
>
> - Record info about modified blocks. We don't need to remember the
>   whole history of a block if the block was modified multiple times.
>   We just remember that the block was modified since the last
>   incremental backup was taken.
>
> - The info could be obtained by trapping calls to mdwrite() etc. We need
>   to be careful to avoid such blocks used in xlogs and temporary
>   tables to not waste resources.
>
> - If many blocks were modified in a file, we may be able to condense
>   the info as "the whole file was modified" to reduce the amount of
>   info.
>
> - How to take a consistent incremental backup is an issue. I can't
>   think of a clean way other than "locking whole cluster", which is
>   obviously unacceptable. Maybe we should give up "hot backup"?

I don't see how this is better than snapshotting at the filesystem level.

I have no experience with TB-scale databases (I've been limited to only
hundreds of GB), but from my limited mid-size-db experience, filesystem
snapshotting is pretty much the same thing you propose there (xfs_freeze),
and it works pretty well. There are even automated tools to do that, like
bacula, and they can handle incremental snapshots.
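
For what it's worth, the freeze step that xfs_freeze performs is just the
FIFREEZE/FITHAW ioctl pair, so the rough shape of a freeze/snapshot/thaw
cycle is something like the sketch below. Linux-only, needs root, and both
the mount point path and the snapshot step are placeholders, not anything
specific to PostgreSQL:

```c
/*
 * Minimal sketch of a freeze/snapshot/thaw cycle at the syscall level;
 * this is roughly what xfs_freeze -f / -u boils down to.  Requires
 * CAP_SYS_ADMIN.  The mount point and the snapshot mechanism (LVM, ZFS,
 * btrfs, SAN, ...) are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>        /* FIFREEZE, FITHAW */

int
main(int argc, char **argv)
{
    const char *mountpoint = (argc > 1) ? argv[1] : "/var/lib/pgsql";
    int fd = open(mountpoint, O_RDONLY);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    /* Block new writes and flush dirty data to disk. */
    if (ioctl(fd, FIFREEZE, 0) != 0)
    {
        perror("FIFREEZE");
        close(fd);
        return 1;
    }

    /*
     * Take the snapshot here (e.g. lvcreate --snapshot ...).  While the
     * filesystem is frozen, the data directory is crash-consistent on
     * disk, which is what makes the snapshot usable as a backup.
     */

    /* Resume normal writes. */
    if (ioctl(fd, FITHAW, 0) != 0)
    {
        perror("FITHAW");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}
```

The window where writes are blocked is only as long as the snapshot
creation itself, which is why this tends to work fine even on busy systems.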
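
Going back to the quoted proposal for a moment, the block-tracking idea
could look roughly like the following. This is purely illustrative: names
like FileDirtyMap and mark_block_modified are made up, and a real
implementation would have to live near the smgr/md layer (mdwrite() etc.)
and deal with shared memory, crash safety, and per-segment files. It only
shows the "remember modified since last incremental backup, condense to
whole-file when dense" shape:

```c
/*
 * Hypothetical per-file modified-block tracking: remember only whether a
 * block changed since the last incremental backup, and fall back to
 * "whole file modified" when the bitmap gets dense.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_BLOCKS_TRACKED  131072          /* 1 GB segment / 8 kB blocks */
#define CONDENSE_THRESHOLD  (MAX_BLOCKS_TRACKED / 2)

typedef struct FileDirtyMap
{
    bool     whole_file_modified;            /* condensed representation   */
    uint32_t ndirty;                         /* how many blocks are marked */
    uint8_t  bitmap[MAX_BLOCKS_TRACKED / 8]; /* one bit per block          */
} FileDirtyMap;

/* Would be called from the write path for every block written to a data file. */
void
mark_block_modified(FileDirtyMap *map, uint32_t blkno)
{
    if (map->whole_file_modified || blkno >= MAX_BLOCKS_TRACKED)
        return;

    uint8_t mask = (uint8_t) (1 << (blkno % 8));
    if ((map->bitmap[blkno / 8] & mask) == 0)
    {
        map->bitmap[blkno / 8] |= mask;
        map->ndirty++;

        /* Too many dirty blocks: cheaper to just copy the whole file. */
        if (map->ndirty >= CONDENSE_THRESHOLD)
            map->whole_file_modified = true;
    }
}

/* Would be called after an incremental backup completes: forget the history. */
void
reset_after_backup(FileDirtyMap *map)
{
    memset(map, 0, sizeof(*map));
}

int
main(void)
{
    FileDirtyMap map;
    reset_after_backup(&map);          /* start with a clean slate */

    mark_block_modified(&map, 0);
    mark_block_modified(&map, 42);
    mark_block_modified(&map, 42);     /* repeated writes count only once */

    printf("dirty blocks: %u, whole file: %s\n",
           map.ndirty, map.whole_file_modified ? "yes" : "no");
    return 0;
}
```

Even so, the consistency problem you mention is the hard part; the bitmap
only tells you *what* to copy, not how to get a consistent image of it,
which is exactly what a filesystem-level snapshot gives you for free.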