Re: Performance issues with compaq server
From | Holger Marzen
---|---
Subject | Re: Performance issues with compaq server
Date | |
Msg-id | Pine.LNX.4.44.0205082155550.7269-100000@bluebell.marzen.de
In reply to | Re: Performance issues with compaq server (Doug McNaught <doug@wireboard.com>)
Responses | Re: Performance issues with compaq server
List | pgsql-general
On 8 May 2002, Doug McNaught wrote:

> Holger Marzen <holger@marzen.de> writes:
>
> > ACK. On a given hardware I get about 150 inserts per second. Using a
> > begin/end transaction for a group of 100 inserts speeds it up to about
> > 450 inserts per second.
>
> COPY is even faster as there is less query parsing to be done, plus
> you get a transaction per COPY statement even without BEGIN/END.

Yes, but I wanted to change something in some rows, so I used perl and
insert.

> > But beware: if one insert fails (duplicate key, faulty data) then you
> > have to re-insert the remaining rows as single transactions, else all
> > rows of the previous transaction are discarded.
>
> Hmm, don't you have to ROLLBACK and redo the whole transaction without
> the offending row(s), since you can't commit while in ABORT state? Or
> am I misunderstanding?

Postgres complains and doesn't accept the following inserts after a
failed one until the end of the transaction. I didn't have the time yet
to figure out whether it rolls back the preceding inserts.

Is there a rule in the SQL standard that describes what should happen if
some statements in a transaction fail and the program issues a COMMIT?

--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
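A minimal sketch of the batch-then-retry pattern discussed above, using Python's standard-library sqlite3 module as a stand-in (the table and data are hypothetical; note that PostgreSQL behaves more strictly than SQLite here, since a failed statement puts the whole Postgres transaction into an aborted state that requires a ROLLBACK before anything else is accepted):

```python
import sqlite3

# In-memory database as a stand-in for a real PostgreSQL connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, f"row {i}") for i in range(100)]
rows.append((50, "duplicate"))  # one faulty row: duplicate key

try:
    with conn:  # one transaction for the whole batch (faster than autocommit)
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
except sqlite3.IntegrityError:
    # The batch failed and was rolled back as a whole; redo each row as
    # its own transaction so only the offending row is lost.
    for row in rows:
        try:
            with conn:
                conn.execute("INSERT INTO t VALUES (?, ?)", row)
        except sqlite3.IntegrityError:
            pass  # skip the duplicate

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 100 good rows survive
```

The `with conn:` block commits on success and rolls back on an exception, mirroring the BEGIN/END grouping the thread describes; against a real Postgres server the per-row retry loop is exactly the "re-insert the remaining rows as single transactions" fallback.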