Discussion: Error message using pg_dump with tar format
Hi-

I'm getting the following error message:

pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted 166)

Here are the particulars:

-I'm running this command: "pg_dump -Ft prod | prod.dump.tar" (The database is named prod)

-The dump gets about 1/4 of the way through, and then gives me the error message and stops.

-I'm running PostgreSQL version 7.3.2.

-There is plenty of disk space available.

-The same command on the same database and server with same specs worked last week when I was on V7.2.1.

-Since upgrading, more data has been added, but the structure of the database is unchanged.

-Using the -v switch shows me that it always quits on the same table, but otherwise adds no new information.

-The part of the error message in parentheses changes on each run. For instance, on the last run, I got "(wrote 64, attempted 174)" The rest of the message remains consistent.

-The table it quits on is fairly large- about 2.6GB. It is both "wide" because it contains a text field that is usually a few sentences of text, and "long", containing 9,137,808 records. This is also the only table in our database that is split into multiple files.

-A text dump using this command works fine: "pg_dump prod > prod.dump.text"

I found a reference to this message in the admin list archives on 3/28/2003, but it was in the context of a database containing large blobs (mine has no blobs), and the suggestion was to upgrade to 7.3. I couldn't find a resolution in that thread, so I'm not sure if it ever got worked out.

Any thoughts??

Thanks!
-Nick

---------------------------------------------------------------------
Nick Fankhauser
nickf@doxpop.com  Phone 1.765.965.7363  Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/
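A single-table test might narrow this down. A sketch, with a placeholder name since the failing table isn't named above (-t restricts pg_dump to one table):

  pg_dump -Ft -t failing_table prod > failing_table.dump.tar   # tar format, one table
  pg_dump -t failing_table prod > failing_table.dump.text      # plain text, same table

If the tar-format dump of just that table fails while the text dump of it succeeds, that would isolate the problem to the tar archiver rather than the data.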
Sorry about the recent duplicate post on this- my first post had the wrong return email address & must have just gotten deferred for a bit rather than deleted. But... since I'm still having the problem & no workable solution has been suggested yet, I'll add a second appeal for ideas.

The only information that I can add since the original post is that I have thoroughly tested the text-dump files being created (since I'm now completely dependent on them!), and found that they are being created flawlessly and I can get a clean restore from them if I'm willing to restore the entire database rather than a particular table.

Since nobody has jumped in with a simple solution, I'm guessing that I may have a bug. Is there a pg_dump guru among the developers that I should run this past before posting to the bugs list?

I'm doing a clean install of 7.3.2 on a second non-production box so I can ensure that it isn't just an install glitch or hardware-related issue, and also have a safe sandbox to play in if somebody comes up with an idea. I'll report back if the behavior changes on the second box.

-Nick

---------------------------------------------------------------------
Nick Fankhauser
nickf@doxpop.com  Phone 1.765.965.7363  Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/
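The restore path being relied on here is the standard plain-text one; roughly the following, with an arbitrary scratch database name:

  createdb prod_restore                 # fresh database to restore into
  psql prod_restore < prod.dump.text    # replay the plain-text dump

This is also why the tar format mattered in the first place: a tar or custom-format archive can be fed to pg_restore selectively, while a text dump restores all-or-nothing like this.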
> Are you sure you're not running:
>
> pg_dump -Ft prod > prod.dump.tar

You are correct- I mis-typed the command in my message.

> Maybe a file size limit on that box is being hit?

No, the text dump is creating a larger file successfully, so I don't think that is the cause. There's plenty of disk space, and the file is well under 1GB when the dump quits. The file size limit on the OS is 2GB in our case.

Thanks for the correction & idea.

-Nick
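As a quick sanity check on the limits mentioned above (not a diagnosis), the per-process limit can be read from the shell:

  ulimit -f   # per-process file size limit; "unlimited" or a block count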
On Thu, 10 Jul 2003, Nick Fankhauser wrote:

> > Are you sure you're not running:
> >
> > pg_dump -Ft prod > prod.dump.tar
>
> You are correct- I mis-typed the command in my message.
>
> > Maybe a file size limit on that box is being hit?
>
> No, the text dump is creating a larger file successfully, so I don't think
> that is the cause. There's plenty of disk space, and the file is well under
> 1GB when the dump quits. The file size limit on the OS is 2GB in our case.
>
> Thanks for the correction & idea.

Make sure you're not hitting any quota limits. Try just writing a file (i.e. dd if=/dev/zero of=/backupdir/filename count=5000000, or something like that). Try backing up to /dev/null and see if you get the same error.
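Spelled out, those two checks might look like this (the /backupdir path is just an example; point it at wherever the dump is being written):

  # Write a few GB by hand to see whether a size limit or quota kicks in:
  dd if=/dev/zero of=/backupdir/testfile bs=1024k count=3000

  # Run the same tar-format dump with no output file at all:
  pg_dump -Ft prod > /dev/null

If the dump to /dev/null still fails, the problem is in pg_dump itself rather than the filesystem.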
On Mon, 7 Jul 2003, Nick Fankhauser - Doxpop wrote:

> Hi-
>
> I'm getting the following error message:
>
> pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted
> 166)
>
> Here are the particulars:
>
> -I'm running this command: "pg_dump -Ft prod | prod.dump.tar" (The database
> is named prod)

Are you sure you're not running:

pg_dump -Ft prod > prod.dump.tar

?? Using a | should try to run a program called prod.dump.tar, which should either error out as not existing, or, if it does exist, error out as not being executable or because the shell can't find its interpreter.

> -The table it quits on is fairly large- about 2.6GB. It is both "wide"
> because it contains a text field that is usually a few sentences of text,
> and "long", containing 9,137,808 records. This is also the only table in our
> database that is split into multiple files.

Maybe a file size limit on that box is being hit?

> -A text dump using this command works fine: "pg_dump prod > prod.dump.text"
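In other words, the two commands do very different things:

  pg_dump -Ft prod | prod.dump.tar   # pipe: the shell tries to execute prod.dump.tar as a program
  pg_dump -Ft prod > prod.dump.tar   # redirect: the tar archive is written into the file prod.dump.tar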
Have you checked the remaining disk space already?

Regds
Mallah.

On Monday 07 Jul 2003 10:00 pm, Nick Fankhauser - Doxpop wrote:

> Hi-
>
> I'm getting the following error message:
>
> pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted
> 166)
>
> [remainder of the original problem report quoted in full above]

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com , India's Leading B2B eMarketplace.
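The usual quick check for that, for completeness (the mount point below is illustrative):

  df -h              # free space on each mounted filesystem
  df -h /backupdir   # or just the filesystem the dump is written to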