Re: pg_dump and XID limit
From:        Elliot Chance
Subject:     Re: pg_dump and XID limit
Date:
Msg-id:      C7D1CBBF-B65F-4047-B9BC-933D9C02D502@gmail.com
In reply to: Re: pg_dump and XID limit (Tom Lane <tgl@sss.pgh.pa.us>)
Responses:   Re: pg_dump and XID limit
             Re: pg_dump and XID limit
List:        pgsql-admin
On 24/11/2010, at 5:07 PM, Tom Lane wrote:

> Elliot Chance <elliotchance@gmail.com> writes:
>> This is a hypothetical problem but not an impossible situation. Just
>> curious about what would happen.
>>
>> Let's say you have an OLTP server that keeps very busy on a large
>> database. In this large database you have one or more tables on super
>> fast storage, like a Fusion-io card, which is handling (for the sake
>> of argument) 1 million transactions per second.
>>
>> Even though only one or a few tables are using almost all of the I/O,
>> pg_dump has to export a consistent snapshot of all the tables to
>> somewhere else every 24 hours. But because it's such a large dataset
>> (or perhaps just network congestion) the daily backup takes 2 hours.
>>
>> Here's the question: during those 2 hours, more than 4 billion
>> transactions could have occurred - so what's going to happen to your
>> backup and/or database?
>
> The DB will shut down to prevent wraparound once it gets 2 billion XIDs
> in front of the oldest open snapshot.
>
> 			regards, tom lane

Wouldn't that mean at some point it would be advisable to be using
64-bit transaction IDs? Or would that change too much of the codebase?
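[Editor's note, not part of the original thread: the arithmetic behind the
scenario is that 1,000,000 transactions/second over a 2-hour dump consumes
about 1,000,000 x 7,200 = 7.2 billion XIDs, far past the roughly 2 billion
(2^31) of headroom Tom describes for 32-bit XIDs. As a hedged illustration
of how one might watch a real cluster approach that limit, the query below
uses age() and pg_database.datfrozenxid, both standard PostgreSQL, to show
how many XIDs old each database's frozen-XID horizon is:

    -- How far each database's oldest unfrozen XID lags behind current.
    -- Values approaching ~2 billion mean wraparound shutdown is near.
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;

Autovacuum's anti-wraparound freezing normally keeps these ages low; a
long-held snapshot, such as a multi-hour pg_dump, prevents that horizon
from advancing.]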