Discussion: pg_restore error
I am trying to load a database using pg_restore. We are migrating to a new server and I want to see if I can make the load go faster by loading with pg_restore instead of psql.

The old server is CentOS upgraded to 5.4, 64-bit, with the stock PostgreSQL 8.1. The dump file was created with:

  pg_dump --format=c --file=ec.restore ec

where ec is the name of the database. It is encoded SQL_ASCII. I plan on converting to UTF-8, but I can't do that quite yet.

The new server is CentOS 5.4, 64-bit, with PostgreSQL 8.4 installed from the YUM repo on the PostgreSQL site. The restore commands are:

  createdb -E SQL_ASCII -T template0 ec
  pg_restore -d ec -j2 ec.restore

I get this error:

  pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)
  pg_restore: [archiver] worker process failed: exit code 1

It seems to run if I omit the -j2 option (though as of this moment it has not completed).

Is there a known problem with using parallel loading in 8.4 from a file created with an 8.1 database?

--
Bryan White
Bryan White wrote:
> I am trying to load a database using pg_restore ... Is there a known
> problem with using parallel loading in 8.4 from a file created with an
> 8.1 database?

It was unclear from your email how you are creating the dump. Did you create the dump with the 8.1 or the 8.4 version of pg_dump? When upgrading, the docs recommend using the newer version of the dump utility: "It is recommended that you use the pg_dump and pg_dumpall programs from the newer version of PostgreSQL, to take advantage of any enhancements that might have been made in these programs. Current releases of the dump programs can read data from any server version back to 7.0."

How many cores are available on the new server?

Cheers,
Steve
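For reference, a minimal sketch of dumping with the newer binaries, assuming the 8.4 pg_dump installed on the new server can reach the old 8.1 server over the network (the host name oldserver is a placeholder):

  # Run the 8.4 pg_dump from the new server against the old 8.1 server:
  pg_dump --format=c --host=oldserver --file=ec.restore ec

  # Then create the target database and restore locally:
  createdb -E SQL_ASCII -T template0 ec
  pg_restore -d ec -j2 ec.restore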
Bryan White <nicktook@gmail.com> writes:
> I get this error:
> pg_restore: [custom archiver] dumping a specific TOC data block out of
> order is not supported without ID on this input stream (fseek
> required)

I can't duplicate that here using latest 8.1.x pg_dump and 8.4.x pg_restore ... what versions are you using exactly? Also, it might be dependent on the set of objects in your database --- would you be willing to show us the output of "pg_restore --list ec.restore"?

regards, tom lane
On Sat, Apr 17, 2010 at 2:46 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Bryan White <nicktook@gmail.com> writes:
>> I get this error:
>> pg_restore: [custom archiver] dumping a specific TOC data block out of
>> order is not supported without ID on this input stream (fseek
>> required)
>
> I can't duplicate that here using latest 8.1.x pg_dump and 8.4.x
> pg_restore ... what versions are you using exactly? Also, it might
> be dependent on the set of objects in your database --- would you
> be willing to show us the output of "pg_restore --list ec.restore"?

I think it was an error on my part. I am retrying the whole operation now.

--
Bryan White
> It was unclear from your email how you are creating the dump. Did you
> create the dump with the 8.1 or the 8.4 version of pg_dump? When
> upgrading, the docs recommend using the newer version of the dump
> utility: "It is recommended that you use the pg_dump and pg_dumpall
> programs from the newer version of PostgreSQL, to take advantage of any
> enhancements that might have been made in these programs. Current
> releases of the dump programs can read data from any server version
> back to 7.0."

I messed up. I had sent the stderr output of the dump to a text file and never looked at it. There was some sort of deadlock condition on a particular table: another process drops and recreates that table, and it did so during the dump. I was using the pg_dump from 8.1. I have now dumped the database with the 8.4 version. It will be a couple of hours before I have any results.

> How many cores are available on the new server?

The new server is a dual-socket Nehalem: 8 cores, 16 threads, 48 GB of RAM, 2 WAL drives in RAID1, and 12 database drives in RAID10.

--
Bryan White
Bryan White wrote:
> ...
> The new server is a dual-socket Nehalem: 8 cores, 16 threads, 48 GB of
> RAM, 2 WAL drives in RAID1, and 12 database drives in RAID10.

I would try setting -j higher - in your case try 8 for starters. Also turn off fsync and autovacuum (and turn them back on afterwards!). See these and more tips at:
http://it.toolbox.com/blogs/database-soup/using-84-parallel-restore-with-your-83-or-82-database-31575

Cheers,
Steve
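A rough sketch of what that looks like, assuming the new cluster is doing nothing but this restore (running with fsync off is only safe if you can afford to re-run the restore from scratch after a crash):

  # In postgresql.conf on the new server, for the duration of the restore only:
  fsync = off
  autovacuum = off

  # Reload the server so the settings take effect, then restore with more
  # parallel workers:
  pg_restore -d ec -j8 ec.restore

  # When the restore finishes, set fsync and autovacuum back to on and
  # reload again.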
On Sat, Apr 17, 2010 at 5:53 PM, Steve Crawford <scrawford@pinpointresearch.com> wrote:
> Bryan White wrote:
>> ...
>> The new server is a dual-socket Nehalem: 8 cores, 16 threads, 48 GB of
>> RAM, 2 WAL drives in RAID1, and 12 database drives in RAID10.
>
> I would try setting -j higher - in your case try 8 for starters. Also
> turn off fsync and autovacuum (and turn them back on afterwards!). See
> these and more tips at:
> http://it.toolbox.com/blogs/database-soup/using-84-parallel-restore-with-your-83-or-82-database-31575

Success. The -j6 run completed. The time to restore went from 152 minutes to 45 minutes. I was already doing some of the things on the above page. I plan on implementing the others and then doing a few test runs to find the optimal number of jobs.

Thanks!

--
Bryan White