Discussion: problems with pg_restore
I have a 200GB archive I created like this:
% pg_dump -f db1.pga -Fc -b db1
To check that it is good, I tried to restore to another database:
% pg_restore -d db2 db1.pga
[...quickly restores sequences, then...]
pg_restore: [archiver (db)] could not execute query: ERROR: function plpgsql_call_handler already exists with same argument types
Ok, I had installed PLPGSQL in template1, which has been useful in other situations. I drop db2, recreate it, and drop the PLPGSQL language and call handler, then try again:
% pg_restore -d db2 db1.pga
[...restores seqs, tables, keys, then spends several quiet hours (on blobs I assume)...]
pg_restore: [archiver (db)] could not execute query: ERROR: Database comments may only be applied to the current database
I'm not sure how to get around this one. Can an archived database with comments be restored to a database with a different name? Are the comments the only thing missing from the restored database, or could there be other things?
thanks,
Dylan Kuhn
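[Editor's note: the recovery steps Dylan describes (drop and recreate the target database, remove the plpgsql language and its call handler, retry the restore) can be sketched as a shell session. This is a sketch only, assuming 7.3-era client tools; `droplang` and the zero-argument handler signature are assumptions based on how plpgsql was installed at the time.]

```
% dropdb db2
% createdb db2
% droplang plpgsql db2                 # remove the language inherited from template1
% psql -d db2 -c "DROP FUNCTION plpgsql_call_handler();"   # remove the handler too
% pg_restore -d db2 db1.pga
```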
"Kuhn, Dylan K (4520500D)" <Dylan.Kuhn@navy.mil> writes:
> [ tries to restore a dump into a database with a different name ]
> pg_restore: [archiver (db)] could not execute query: ERROR: Database comments may only be applied to the current database
> I'm not sure how to get around this one. Can an archived database with comments be restored to a database with a different name?

Hm. Evidently not :-(. The COMMENT ON DATABASE facility is a bit bogus anyway (since there's no way to make the comments visible across databases). You might be best advised not to use it.

Hackers: this seems like an extremely bad side-effect of what we thought was a simple addition of a helpful check. I am thinking we should either remove the check again, or downgrade it to a WARNING (though I'm not quite sure how to phrase the warning ...). Any thoughts?

regards, tom lane
> Hm. Evidently not :-(. The COMMENT ON DATABASE facility is a bit bogus
> anyway (since there's no way to make the comments visible across
> databases). You might be best advised not to use it.

I agree. I plan to delete it and try again, and avoid using database comments in the future. This seems to work:

DELETE FROM ONLY pg_description
WHERE pg_description.objoid =
      (SELECT pg_database.oid FROM pg_database
       WHERE pg_database.datname = current_database())
  AND pg_description.classoid =
      (SELECT pg_class.relfilenode FROM pg_class
       WHERE pg_class.relname = 'pg_database');

> (though I'm not quite sure how to phrase the warning ...). Any thoughts?

I would have been happy enough with the existing message as a warning - probably anyone who tries to use database comments has noticed the limitations.

-dylan-
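[Editor's note: an alternative to deleting the pg_description row by hand is to clear the comment through the same command that set it, while connected to the source database, so the next pg_dump emits no COMMENT ON DATABASE at all. A sketch, assuming the source database is named db1 as above; the name given must match the database you are connected to, or the same error is raised:]

```sql
-- Run this while connected to db1, before re-dumping.
COMMENT ON DATABASE db1 IS NULL;  -- IS NULL removes an existing comment
```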
Rod Taylor <rbt@rbt.ca> writes:
>> Hackers: this seems like an extremely bad side-effect of what we thought
>> was a simple addition of a helpful check. I am thinking we should
>> either remove the check again, or downgrade it to a WARNING (though I'm
>> not quite sure how to phrase the warning ...). Any thoughts?

> How about going the other way and removing the requirement to explicitly
> state the database?
> COMMENT ON DATABASE IS 'This comment is on the current database.';

Won't help us for reading existing pg_dump scripts, although perhaps it would be useful going forward.

Given the current implementation, it seems like there are three possible behaviors for COMMENT ON DATABASE when the database name isn't the same as the current database:

1. Raise error (what we're doing now). Simple but breaks dump scripts for the restore-into-different-DB scenario.

2. Do nothing, store the comment in the current DB's pg_description (what we did in 7.2). Now that I think about it, this also fails for different-database restore, since very possibly the attempt to look up the DB name will fail --- you'll get a no-such-database error instead of the present error, but it's still unhelpful.

3. Ignore the specified DB name, store the comment as the description of the current DB; possibly give a warning saying we're doing so. This would allow correct restoration of dumps into different DBs, but I think people would find it awfully surprising :-(

regards, tom lane
> Given the current implementation, it seems like there are three possible
> behaviors for COMMENT ON DATABASE when the database name isn't the same
> as the current database:
>
> 1. Raise error (what we're doing now). Simple but breaks dump scripts
>    for the restore-into-different-DB scenario.
>
> 2. Do nothing, store the comment in the current DB's pg_description
>    (what we did in 7.2). Now that I think about it, this also fails
>    for different-database restore, since very possibly the attempt
>    to look up the DB name will fail --- you'll get a no-such-database
>    error instead of the present error, but it's still unhelpful.
>
> 3. Ignore the specified DB name, store the comment as the description
>    of the current DB; possibly give a warning saying we're doing so.
>    This would allow correct restoration of dumps into different DBs,
>    but I think people would find it awfully surprising :-(

The behavior and syntax of COMMENT ON DATABASE was surprising to me from the start. IMHO, a warning that doesn't affect my current transaction is a more pleasant surprise than an error that aborts it. It would be nice if there were an easier way to undo a mistake, but again I would rather finish my transaction with an incorrect comment than have to do it over again, so I'd take door #3.

Could there be a #4: drop support for COMMENT ON DATABASE, ignoring it and issuing a warning that it is no longer supported? It's hard to argue that it's very useful as it stands, and you've already recommended against using it.

-dylan-
Alvaro Herrera <alvherre@dcc.uchile.cl> writes:
> On Tue, Jul 15, 2003 at 04:03:13PM -0400, Tom Lane wrote:
>> Given the current implementation, it seems like there are three possible
>> behaviors for COMMENT ON DATABASE when the database name isn't the same
>> as the current database:

> There's a fourth possibility: ignore the command and issue a WARNING.

Hmm, that seems like a reasonable choice. Anyone have an objection?

regards, tom lane
> Alvaro Herrera <alvherre@dcc.uchile.cl> writes:
>> On Tue, Jul 15, 2003 at 04:03:13PM -0400, Tom Lane wrote:
>>> Given the current implementation, it seems like there are three possible
>>> behaviors for COMMENT ON DATABASE when the database name isn't the same
>>> as the current database:
>> There's a fourth possibility: ignore the command and issue a WARNING.
>
> Hmm, that seems like a reasonable choice. Anyone have an objection?
>
> regards, tom lane

My personal experience would lead me to believe that this is the best option. Count it a vote in favor.

Andrew Biagioni
I am trying to migrate a database from Oracle to PostgreSQL. I am using the PgAdmin database migration wizard (1.5.60-Dev) to do this. Using ODBC I am able to migrate small tables, but I have a few tables with over 10 million rows that are failing. I know the reason, as I am getting a "Snapshot too old" error from Oracle, but am unable to come up with a workaround. I prefer using the wizard as it takes care of nulls etc., which I might have to handle if I went the dump-and-load route. Can someone give me an idea on getting around this (other than preventing updates to Oracle)?

Also, is there a way to turn off WAL while loading bulk data?

thanks
dilan
"Dilan Arumainathan" <dilan_a@impark.com> writes:
> [ Oracle problem ]

You'd probably have better luck asking about that in an Oracle-specific list.

> Also, is there a way to turn off WAL while loading bulk data.

No. You could consider turning off fsync though.

regards, tom lane
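[Editor's note: turning off fsync is a postgresql.conf change followed by a server restart or reload. A sketch of the fragment, with the standard caveat that a crash while fsync is off can corrupt the cluster, so it should be re-enabled as soon as the bulk load finishes:]

```
# postgresql.conf -- bulk-load sketch only; re-enable afterwards
fsync = false
```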
Dilan Arumainathan wrote:
> I am trying to migrate a database from Oracle to Postgresql. I am using the
> PgAdmin database migration wizard (1.5.60-Dev) to do this. Using ODBC I am
> able to migrate small tables but I have a few tables that have over 10
> million rows that are failing. I know the reason as I am getting an
> "Snapshot too old" error from Oracle but am unable to come up with a
> workaround. I prefer using the wizard as it takes care of nulls etc. which I
> might have to handle if I went the dump and load route. Can someone give me
> an idea on getting around this (other than preventing updates to Oracle).
>
> Also, is there a way to turn off WAL while loading bulk data.

Hi Dilan,

this is a well-known problem in Oracle. (I know it's off topic, but it might interest other people.) You have several options around it:

- Increase the size of your rollback segments (the message means that Oracle cannot give you a consistent view). See http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76965/c23cnsis.htm#2599
- Stop all other transactions
- Do the work in several steps
- Use a script like "unload.sql" (in the attachment, from http://www.evergreen-database.com/); it does not prevent the problem, but is quite performant.

I hope I could help you with that.

Cheers,
Dani

-------------------------------------------------------------------------
-- SQL Script Name: unload_fixed.sql
--
-- Function:        generates a sql*plus script to unload a table to a
--                  file and a SQL*Loader script to reload the same
--                  data. Intent is to create a faster alternative
--                  to export/import.
--
--                  Initial testing indicates that the unload takes about
--                  20% longer than EXP, but that the reload takes only
--                  45% as long as IMP. Under Oracle7 r7.1, the capability
--                  of parallelizing direct loads (but not IMP) should
--                  provide much faster load times, reasonably 10% of the
--                  time for IMP.
--
-- WORD OF WARNING Re: PERFORMANCE:
--                  Performance is very specific to the data distribution
--                  of the table data. Much poorer performance has been
--                  seen in the following cases:
--                  - many long varchar columns which actually contain
--                    short data values;
--                  - many number columns without precision or scale
--                    which are defaulted to large numbers;
--                  - lots of null values;
--                  All of these cases lead to inefficient use of the
--                  fixed record format of the unload file. Padding the
--                  unload file with leading/trailing zeros or trailing
--                  blanks yields files 4X the size of an export dmp file
--                  and unload times 4X those of export. (Even in these
--                  extreme test situations, the load time was still
--                  between 80% and 90% of IMP.)
--
--                  This unload/reload utility has some other advantages
--                  besides speed. The unload can easily select a subset
--                  of the original table (for statistical sampling or
--                  retrieving rows for a particular department or business
--                  date for instance) whereas EXP/IMP deals with entire
--                  tables. Additionally, if desired, unload can sort
--                  the output in order to speed index builds and/or
--                  optimize cache hits on the reloaded data based on
--                  loading frequently used rows contiguously. This may
--                  provide an additional benefit in some reorg efforts.
--                  Finally, the unload might have a GROUP BY appended
--                  which would facilitate development of summary tables.
--
--                  By editing the generated unload2.sql and SQL*Loader
--                  .CTL scripts, one could additionally remove columns
--                  or modify them with SQL (or PL/SQL functions in r7.1)
--                  during the extract. Just be sure to adjust the length
--                  of the COLUMN and PAGESIZE in unload2.sql and the
--                  input field in .CTL to reflect whatever changes.
--
--                  This utility can also unload data from a view, which
--                  is not possible via EXP. This facility may be used
--                  to do subsets--selection (specific rows), projection
--                  (specific columns), joins, GROUP BY summaries or
--                  function application without having to edit this
--                  script or its generated output scripts.
--
-- Arguments IN:    prompts for table_owner, table_name as well as
--                  default_precision and default_scale for columns
--                  defined as NUMBER or FLOAT (without precision or
--                  scale defined).
--
-- Arguments OUT:   none
--
-- Calls:           none
--
-- Called by:       none
--
-- Change History:  05/25/94 gcdodge original development
--
-- Limitations:     Doesn't handle long, raw, long raw, rowid, mlslabel
--                  datatypes.
--
--                  This utility has been tested in a Solaris 2.3
--                  environment, but is expected to be fully portable
--                  to any ASCII platform. Unlike EXP/IMP, however,
--                  it will not automatically make translations when
--                  the unload file is moved from ASCII to EBCDIC or
--                  vice versa. Since all data is written in external
--                  formats, one should expect that file transfer
--                  utilities that do such conversions should work.
--                  As an alternative, one could edit the SQL*Loader
--                  .CTL script to specify the alternative encoding
--                  scheme.
--
--                  If a numeric column is encountered which has no
--                  defined precision or scale, then this script will
--                  use default values (prompted for); this poses three
--                  risks: 1) you may overspecify the precision
--                  and thereby waste space in the unload file; 2)
--                  you may underspecify the precision and thereby
--                  get overflow indicators in the unloaded data which
--                  may not be caught prior to loading; 3) you may
--                  underspecify the scale and introduce truncation
--                  which will not be found by either the unload or
--                  load processes. For this reason, it is strongly
--                  recommended that numeric table columns be defined
--                  with appropriate precision and scale values.
--
--                  The generated SQL*Loader script assumes that fields
--                  of blanks should be loaded as NULLS...if the table
--                  has columns for which SPACES are valid values, then
--                  it will be necessary to edit the generated unload2.sql
--                  script to concatenate double quotes before and after
--                  the affected column(s) along with changing the length
--                  (pagesize in unload2.sql and the individual field's
--                  length in the generated .CTL file) by two bytes.
--
-------------------------------------------------------------------------
set tab off
set heading off feedback off echo off verify off space 1 pagesize 0 linesize 120

accept owner prompt 'What schema owns the table to be unloaded? '
accept table_name prompt 'What table is to be unloaded? '
accept default_precision prompt 'What TOTAL number of digits should be reserved for numbers without defined precision? '
accept default_scale prompt 'What number of DECIMAL digits should be reserved for numbers without defined scale? '

---------------------------------------------------
-- Generate the unload script
---------------------------------------------------
spool unload_fixed2.sql

select 'SET HEADING OFF FEEDBACK OFF ECHO OFF VERIFY OFF SPACE 0 PAGESIZE 0 TERMOUT OFF'
from dual
/

-- Calculate the sum of all output field lengths and set the output record size
select 'SET LINESIZE ' ||
       (sum(decode(data_type,
            'CHAR',data_length,
            'VARCHAR',data_length,
            'VARCHAR2',data_length,
            'DATE',14,
            'NUMBER',decode(data_precision,
                 '',&default_precision+2,
                 greatest(data_precision-data_scale,1)
                   + decode(data_scale,0,0,1) + data_scale)+1,
            'FLOAT',&default_precision+2,
            data_length)))
from dba_tab_columns
where owner=upper('&&owner') and table_name=upper('&&table_name')
/

-- Generate an appropriate SQL*Plus COLUMN command to control formatting
-- of each output field
select 'COLUMN ' || rpad('"'||column_name||'"',32) || ' FORMAT ' ||
       rpad(decode(data_type,
            'CHAR','A'||data_length,
            'VARCHAR2','A'||data_length,
            'VARCHAR','A'||data_length,
            'DATE','A14',
            'NUMBER',decode(data_precision,
                 '', rpad('0',&default_precision-&default_scale,'9')||'.'||rpad('9',&default_scale,'9'),
                 rpad('0',greatest(data_precision-data_scale,1),'9')
                   || decode(data_scale,0,'','.')
                   || decode(data_scale,0,'',rpad('9',data_scale,'9'))),
            'FLOAT',rpad('0',&default_precision-&default_scale,'9')||'.'||rpad('9',&default_scale,'9'),
            'ERROR'),40) ||
       ' HEADING ''X'''
from dba_tab_columns
where owner=upper('&&owner') and table_name=upper('&&table_name')
order by column_id
/

-- Generate the actual SELECT statement to unload table data
select 'SPOOL /tmp/&&owner..&&table_name..DAT' from dual
/
column var1 noprint
column var2 noprint
select 'a' var1, 0 var2, 'SELECT ' from dual
union
select 'b', column_id,
       decode(column_id, 1, '    ', '  , ') ||
       decode(data_type,
            'DATE','to_char('||'"'||column_name||'"'||',''YYYYMMDDHH24MISS'') '||'"'||column_name||'"',
            '"'||column_name||'"')
from dba_tab_columns
where owner=upper('&&owner') and table_name=upper('&&table_name')
union
select 'c', 0, 'FROM &&owner..&&table_name' from dual
union
select 'd', 0, ';' from dual
order by 1,2
/
select 'SPOOL OFF' from dual
/
select 'SET TERMOUT ON' from dual
/
spool off

-----------------------------------------------------------------------------
-- Generate the SQL*Loader control file
-----------------------------------------------------------------------------
set lines 120 pages 0
spool &&owner..&&table_name..CTL
select 'a' var1, 0 var2, 'OPTIONS(DIRECT=TRUE)' from dual
union
select 'b', 0, 'LOAD DATA' from dual
union
select 'c', 0, 'INFILE ''/tmp/&&owner..&&table_name..DAT''' from dual
union
select 'd', 0, 'BADFILE &&owner..&&table_name..BAD' from dual
union
select 'e', 0, 'DISCARDFILE &&owner..&&table_name..DSC' from dual
union
select 'f', 0, 'DISCARDMAX 999' from dual
union
select 'm', 0, 'INTO TABLE &&owner..&&table_name' from dual
union
select 'n', column_id,
       rpad(decode(column_id,1,'(',',')||'"'||column_name||'"',31) ||
       decode(data_type,
            'CHAR','CHAR('||data_length||')',
            'VARCHAR','CHAR('||data_length||')',
            'VARCHAR2','CHAR('||data_length||')',
            'DATE','DATE(14) "YYYYMMDDHH24MISS"',
            'NUMBER','DECIMAL EXTERNAL('||decode(data_precision,
                 '',&default_precision+2,
                 greatest(data_precision-data_scale,1)+decode(data_scale,0,0,1)+data_scale+1)||')',
            'FLOAT','DECIMAL EXTERNAL('||to_char(&default_precision+2)||')',
            'ERROR--'||data_type) ||
       ' NULLIF ("'||column_name||'" = BLANKS)'
from dba_tab_columns
where owner = upper('&&owner') and table_name = upper('&&table_name')
union
select 'z', 0, ')' from dual
order by 1, 2
/
spool off

-----------------------------------------------------------------------------
-- Cleanup
-----------------------------------------------------------------------------
clear column
clear break
clear compute
undef owner
undef table_name
undef default_precision
undef default_scale
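[Editor's note: for completeness, a sketch of how a generator script like this is typically driven, following the SPOOL targets above. The schema/table (SCOTT.EMP) and the scott/tiger credentials are placeholders, and an Oracle client environment with SQL*Plus and SQL*Loader is assumed:]

```
# 1. Run the generator; it prompts for owner, table, precision, scale
#    and writes unload_fixed2.sql plus SCOTT.EMP.CTL
sqlplus scott/tiger @unload_fixed.sql
# 2. Run the generated script to spool the data to /tmp/SCOTT.EMP.DAT
sqlplus scott/tiger @unload_fixed2.sql
# 3. Reload with SQL*Loader (direct path, per OPTIONS(DIRECT=TRUE))
sqlldr scott/tiger control=SCOTT.EMP.CTL
```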
> Hm. Evidently not :-(. The COMMENT ON DATABASE facility is a bit bogus
> anyway (since there's no way to make the comments visible across
> databases). You might be best advised not to use it.
>
> Hackers: this seems like an extremely bad side-effect of what we thought
> was a simple addition of a helpful check. I am thinking we should
> either remove the check again, or downgrade it to a WARNING (though I'm
> not quite sure how to phrase the warning ...). Any thoughts?

How about going the other way and removing the requirement to explicitly state the database?

COMMENT ON DATABASE IS 'This comment is on the current database.';
Attachments
> 3. Ignore the specified DB name, store the comment as the description
>    of the current DB; possibly give a warning saying we're doing so.
>    This would allow correct restoration of dumps into different DBs,
>    but I think people would find it awfully surprising :-(

I like this one for 7.4 (with the warning), but also remove the requirement to supply a dbname at all, with a warning about the deprecated syntax in 7.4. 7.4 pg_dump should not provide the dbname. Remove the ability to supply a database name completely in 7.5.
On Tue, Jul 15, 2003 at 04:03:13PM -0400, Tom Lane wrote:
> Given the current implementation, it seems like there are three possible
> behaviors for COMMENT ON DATABASE when the database name isn't the same
> as the current database:

There's a fourth possibility: ignore the command and issue a WARNING. That restores the database in both cases (into the same database and into a different one), and sets the correct comment only if the database name is correct, giving a hint that the comment should otherwise be set manually.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"I can't go to a restaurant and order food because I keep looking at the fonts on the menu. Five minutes later I realize that it's also talking about food" (Donald Knuth)