Re: Transaction atomicity
From | Albe Laurenz
---|---
Subject | Re: Transaction atomicity
Date |
Msg-id | AFCCBB403D7E7A4581E48F20AF3E5DB20191CB7D@EXADV1.host.magwien.gv.at
In reply to | Transaction atomicity (Giuseppe Sacco <giuseppe@eppesuigoccas.homedns.org>)
Responses | Re: Transaction atomicity
List | pgsql-jdbc
> I have a question about a simple piece of code that should automatically
> insert a record in a table. The primary key for this record is generated
> automatically during the INSERT statement, as the max value plus one.
>
> The (simplified, since the real number of fields is 5) code is:
>
>     final String query = "INSERT INTO table (docId, seqNr) " +
>         "VALUES (?, (SELECT 1 + coalesce(max(seqNr), 0) " +
>         "FROM table " +
>         "WHERE seqNr BETWEEN 0 AND (9223372036854775807-1) ))";
>     Connection conn = daof.getConnection();
>     try {
>         conn.setAutoCommit(true);
>         PreparedStatement st = conn.prepareStatement(query);
>         st.setString(1, "myDocId");
>         st.executeUpdate();
>         conn.close();
>     }
>     catch (SQLException e) {
>         try { conn.close(); } catch (SQLException ee) {}
>         throw e;
>     }
>
> I then create Runtime.getRuntime().availableProcessors() threads that
> create a DataSource and start executing that code in parallel. What
> happens is that I get many DUPLICATE KEY errors. I thought that in
> autocommit mode all inserts are supposed to be serialized, so each one
> should produce a different seqNr, since MAX(seqNr) is different at each
> call.
>
> Is my reasoning correct? Or is it a different problem, maybe with JDBC
> or PostgreSQL?

Does the behaviour change if you set the prepare threshold of the statement to 0?

Yours,
Laurenz Albe
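A minimal sketch of what that suggestion could look like with the stock PostgreSQL JDBC driver (pgjdbc): the prepare threshold can be set to 0 per statement through the driver-specific org.postgresql.PGStatement interface. The table and column names are taken from Giuseppe's simplified example; the connection URL and credentials are placeholders, and the original code obtained its connection from a DataSource factory ("daof") instead of DriverManager.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    import org.postgresql.PGStatement;

    public class PrepareThresholdSketch {
        public static void main(String[] args) throws SQLException {
            // Same simplified statement as in the question
            // (table/column names taken from the example).
            final String query = "INSERT INTO table (docId, seqNr) " +
                "VALUES (?, (SELECT 1 + coalesce(max(seqNr), 0) " +
                "FROM table " +
                "WHERE seqNr BETWEEN 0 AND (9223372036854775807-1) ))";

            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
                conn.setAutoCommit(true);
                try (PreparedStatement st = conn.prepareStatement(query)) {
                    // Set the prepare threshold to 0 for this statement:
                    // a threshold <= 0 keeps the driver from switching the
                    // query over to a named server-side prepared statement.
                    ((PGStatement) st).setPrepareThreshold(0);
                    st.setString(1, "myDocId");
                    st.executeUpdate();
                }
            }
        }
    }

Setting the prepareThreshold connection parameter to 0 (for example by appending ?prepareThreshold=0 to the JDBC URL) should have the same effect for every statement on that connection.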