I don't think a VACUUM can handle what a pg_dump can't. But maybe I'm wrong.
----- Original Message -----
From: wambacher@posteo.de
To: pgsql-admin@lists.postgresql.org
Sent: Tuesday, 28 August 2018 15:19:28
Subject: Re: tuple concurrently updated
Because it's an OpenStreetMap full-planet database, which is huge (~2 TB), I don't have many backups; the last one is from 2018-06-08.
OK, I can restore that and replay the incremental updates, or reload the full data starting from the raw data. Both will take some days.
If I could just get rid of the bad data record, it would be much easier to recover.
Running a VACUUM or a VACUUM FULL is my last resort.
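If the corruption really is limited to a single tuple, a commonly suggested approach on this list is to pin down the unreadable row and delete just that row by its ctid, keeping zero_damaged_pages as the very last resort. A rough sketch, with "your_table" as a placeholder for the affected table and the ctid value purely hypothetical; try it on a copy first if at all possible:

```sql
-- Hypothetical sketch; "your_table" stands in for the affected table.

-- 1. Bisect for the broken row: read contiguous slices and widen/narrow
--    the range until the failing span is pinned down (the query errors
--    out when it hits the damaged tuple).
SELECT ctid FROM your_table OFFSET 0 LIMIT 100000;

-- 2. Once the bad ctid is known, remove only that row:
DELETE FROM your_table WHERE ctid = '(42,7)';  -- replace with the real ctid

-- 3. Last resort when the whole page header is damaged: have PostgreSQL
--    zero the page instead of erroring (loses every row on that page;
--    superuser only).
SET zero_damaged_pages = on;
VACUUM your_table;
```

Note that step 3 silently discards all rows on the damaged page, so it only makes sense once the data that can be read has been dumped or copied elsewhere.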
Regards
walter
Am 28.08.2018 um 15:10 schrieb 066ce286@free.fr:
> BTW, you have data corruption.
>
> So forget your problem about concurrent updates; that's not the real issue.
>
> Your new question should be how to recover a corrupted table.
>
> Sorry, I have no skill for that problem other than "how are your backups/archivelogs?"
>