Thread: Data problem - error "invalid attribute number # for "
We've received a client's database which was giving errors and refusing to be backed up; at this point we do not know what's happened to the database to corrupt it (likely a hardware or power failure, etc.).

The error that is given when trying to do selects on the affected table or doing a pg_dump is:

ERROR: invalid attribute number 32533 for largedata

We've tried to see if we can resolve this error, but:
- there's no error number associated with it, so it's difficult to reference
- there doesn't seem to be any information on it (searches for 'invalid attribute number' only return past troubles for which this error was only a symptom, not the main issue)

As I mentioned, we've tried doing a pg_dump, vacuuming the affected table, and trying to select anything from it, all resulting in the error above. At this point, we're left wondering:
- what does this error mean exactly (is it a problem with the data? with the schema?)
- how should we go about dealing with it (what else should we try?)

Of note: we ARE able to pg_dump the database if we exclude the affected table; right now our plan (short of any successful solutions) is to recover the table's data from a known good backup.

Any suggestions/info appreciated.

Seb
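(For reference, the "dump everything except the affected table" approach mentioned above can be done with pg_dump's table-exclusion option, roughly as sketched below; the database name and output file are placeholders, and whether the option is spelled -T/--exclude-table depends on the pg_dump version in use.)

    # dump the whole database, skipping the damaged table
    pg_dump --exclude-table=largedata -f dump_without_largedata.sql dbname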
Sebastien Boisvert <sebastienboisvert@yahoo.com> writes:
> We've received a client's database which was giving errors and refusing to be backed up; at this point we do not know what's happened to the database to corrupt it (likely a hardware or power failure, etc.)
> The error that is being given when trying to do selects on the affected table or doing a pg_dump is:
> ERROR: invalid attribute number 32533 for largedata

Some poking through the source code finds only one match for that error message, which is in relcache.c. It looks like you have a row in pg_attribute that claims to belong to that relation, but has attnum 32533. It would be interesting to see the results for

select * from pg_attribute where attrelid = 'largedata'::regclass

What PG version is this exactly, on what platform?

regards, tom lane
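(Along the same lines, a quick sanity check for obviously out-of-range pg_attribute entries across the whole database might look like the sketch below, assuming the usual limits of 1..1600 for user columns and small negative numbers for system columns; note that if pg_attribute itself is damaged, even this scan may error out.)

    select attrelid::regclass, attnum, attname
    from pg_attribute
    where attnum > 1600 or attnum < -7;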
>>> Some poking through the source code finds only one match for that error
>>> message, which is in relcache.c. It looks like you have a row in
>>> pg_attribute that claims to belong to that relation, but has attnum
>>> 32533. It would be interesting to see the results for
>>> select * from pg_attribute where attrelid = 'largedata'::regclass
>>
>> [see attached]
>
> Hmm, no sign of any such row here ... try reindexing pg_attribute.

It gets more interesting:

MCS=# reindex table pg_attribute;
ERROR: could not access status of transaction 276828288
DETAIL: Could not open file "pg_subtrans/1080": No such file or directory.

Couldn't force the reindex, and vacuuming didn't help. I've checked the directory and there's only one file in it ('0004').

There's lots of previous info about that error, but none that I've found (yet) has the same detail description (it usually says 'Invalid argument' instead).

At this point this looks pretty serious, and unless there's a specific way to deal with this error (without hopefully running into other new ones), I might just go with our plan to recover the data using the backup, instead of spending more time on a possibly lost cause.
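(A band-aid that is sometimes tried for this kind of missing-SLRU-file error is to create the missing segment zero-filled so the transaction-status lookup can at least complete, as sketched below. This is risky and only worth attempting on a throwaway file-level copy of the data directory with the server stopped; it may simply move the error somewhere else. The size assumes the standard 8 kB block size and 32 pages per segment, i.e. 256 kB.)

    dd if=/dev/zero of=$PGDATA/pg_subtrans/1080 bs=8192 count=32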
Sebastien Boisvert <sebastienboisvert@yahoo.com> writes:
>> Hmm, no sign of any such row here ... try reindexing pg_attribute.

> It gets more interesting:
> MCS=# reindex table pg_attribute;
> ERROR: could not access status of transaction 276828288
> DETAIL: Could not open file "pg_subtrans/1080": No such file or directory.

So you've got some amount of corruption in pg_attribute --- probably more than just one row affected. You're fortunate that only one table was not dumpable. If you have a reasonably recent backup for that table, I agree that further effort probably isn't warranted.

regards, tom lane
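(If the table does get restored from backup, one cheap cross-check afterwards is to compare pg_class.relnatts against what is actually present in pg_attribute; tables where the two disagree deserve a closer look. The sketch below assumes relnatts counts user columns, including dropped ones, as it normally does, so treat any hits as "worth investigating" rather than proof of corruption.)

    select c.oid::regclass as rel, c.relnatts,
           count(a.attnum) as attr_rows, coalesce(max(a.attnum), 0) as max_attnum
    from pg_class c
    left join pg_attribute a on a.attrelid = c.oid and a.attnum > 0
    where c.relkind = 'r'
    group by c.oid, c.relnatts
    having count(a.attnum) <> c.relnatts
        or coalesce(max(a.attnum), 0) <> c.relnatts;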