Re: Optimising inside transactions
From | John Taylor
---|---
Subject | Re: Optimising inside transactions
Date | |
Msg-id | 0206121642460A.03223@splash.hq.jtresponse.co.uk
In reply to | Re: Optimising inside transactions (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-novice
On Wednesday 12 June 2002 16:36, Tom Lane wrote:
> John Taylor <postgres@jtresponse.co.uk> writes:
> > I'm running a transaction with about 1600 INSERTs.
> > Each INSERT involves a subselect.
>
> > I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
> > 1/2 the time expected.
>
> > Is postgresql optimising the inserts, knowing that it will rollback at the end ?
>
> > If not, why do the queries run faster after the failure ?
>
> Queries after the failure aren't run at all; they're only passed through
> the parser's grammar so it can look for a COMMIT or ROLLBACK command.
> Normal processing resumes after ROLLBACK. If you were paying attention
> to the return codes you'd notice complaints like
>
> regression=# begin;
> BEGIN
> regression=# select 1/0;
> ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero
> -- subsequent queries will be rejected like so:
> regression=# select 1/0;
> WARNING: current transaction is aborted, queries ignored until end of transaction block
> *ABORT STATE*

Well, I'm using JDBC, and it isn't throwing any exceptions, so I assumed it was working :-/

> I'd actually expect much more than a 2:1 speed differential, because the
> grammar is not a significant part of the runtime AFAICT. Perhaps you
> are including some large amount of communication overhead in that
> comparison?

Yes, now that I think about it, I am getting a bigger differential.
I'm actually running queries to update two slightly different databases in parallel, so the failing one is taking almost no time at all.

Thanks
JohnT
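The behaviour Tom describes can be sketched as a tiny state machine: once any statement fails inside a transaction block, every later statement is rejected until ROLLBACK (or COMMIT, which also rolls back) ends the block. This is only a toy model for illustration, not PostgreSQL server code; the class name `TxSession` and the return strings are invented here.

```java
// Toy model of PostgreSQL's aborted-transaction behaviour (illustration only).
// After an error inside BEGIN...COMMIT, further statements are not executed;
// they are rejected until ROLLBACK (or COMMIT, which also rolls back) ends the block.
class TxSession {
    private boolean inTx = false;
    private boolean aborted = false;

    String execute(String sql) {
        String cmd = sql.trim().toUpperCase();
        if (cmd.startsWith("BEGIN")) {
            inTx = true;
            aborted = false;
            return "BEGIN";
        }
        if (cmd.startsWith("ROLLBACK") || cmd.startsWith("COMMIT")) {
            // COMMIT of an aborted transaction behaves as a rollback.
            String result = (aborted || cmd.startsWith("ROLLBACK")) ? "ROLLBACK" : "COMMIT";
            inTx = false;
            aborted = false;
            return result;
        }
        if (inTx && aborted) {
            // Mirrors: "WARNING: current transaction is aborted, queries ignored
            // until end of transaction block"
            return "ERROR: current transaction is aborted";
        }
        if (cmd.contains("1/0")) { // stand-in for any failing statement
            if (inTx) aborted = true;
            return "ERROR: division by zero";
        }
        return "OK";
    }
}
```

This also shows why a JDBC client that ignores per-statement errors sees the remaining statements "succeed" quickly: they are rejected almost for free, not executed.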