Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
From | Craig Ringer
---|---
Subject | Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
Date |
Msg-id | 5084F055.70706@ringerc.id.au
In reply to | Somewhat automated method of cleaning table of corrupt records for pg_dump (Heiko Wundram <modelnine@modelnine.org>)
Responses | Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
List | pgsql-general
On 10/19/2012 10:31 PM, Heiko Wundram wrote:
> Hey!
>
> I'm currently in the situation that due to (probably) broken memory in a
> server, I have a corrupted PostgreSQL database. Getting at the data
> that's in the DB is not time-critical (because backups have restored the
> largest part of it), but I'd still like to restore what can be restored
> from the existing database to fill in the remaining data. VACUUM FULL
> runs successfully (i.e., I've fixed the blocks with broken block
> headers, removed rows that have invalid OIDs as recorded by the VACUUM,
> etc.), but dumping the DB from the rescue system (which is PostgreSQL
> 8.3.21) to transfer it to another still fails with "invalid memory alloc
> request size 18446744073709551613", i.e., most probably one of the TEXT
> columns in the respective tables contains invalid sizings.

Working strictly with a *copy*, does REINDEXing then CLUSTERing the tables
help? VACUUM FULL on 8.3 won't rebuild indexes, so if index damage is the
culprit a reindex may help. Then, if CLUSTER is able to rewrite the tables
in index order you might be able to recover.

--
Craig Ringer
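A minimal sketch of that sequence, run only against a throwaway copy of the
cluster; the names "damaged_table" and "damaged_table_pkey" are placeholders
for whichever table and index pg_dump chokes on:

    -- Rebuild the indexes first; VACUUM FULL on 8.3 leaves indexes untouched.
    REINDEX TABLE damaged_table;

    -- Then rewrite the heap in index order. If this completes, the table's
    -- rows have been copied into a fresh file, which may skip the damage.
    CLUSTER damaged_table USING damaged_table_pkey;

If CLUSTER finishes, retry a dump of just that table (e.g.
"pg_dump -Fc -t damaged_table yourdb > damaged_table.dump") and see whether
the invalid alloc error is gone before moving on to the next table.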