Re: Revisited: Transactions, insert unique.
From | Lincoln Yeoh |
---|---|
Subject | Re: Revisited: Transactions, insert unique. |
Date | |
Msg-id | 3.0.5.32.20000427110851.00a9fdd0@pop.mecomb.po.my |
In reply to | Re: Revisited: Transactions, insert unique. (Ed Loehr <eloehr@austin.rr.com>) |
Responses | Re: Revisited: Transactions, insert unique. |
List | pgsql-general |
At 11:59 AM 26-04-2000 -0500, Ed Loehr wrote:
>Joachim Achtzehnter wrote:
>>
>> In a message to Ed Loehr and pgsql-general, Lincoln Yeoh wrote:
>> > Then you do a commit on both, and you end up with two rows.
>>
>> This is dissapointing indeed! What this means is that Postgresql
>> transactions are, in fact, not truely serializable. The concurrent

Well, it's not that disappointing, as it's not easily avoidable: transactions usually consist of multiple consecutive, nonzero-time, order-sensitive statements. :)

So the database has to figure out how much information to share amongst partially completed transactions in order to achieve maximum serialisation/atomicity. There will also be transactions which, to be serialised, must actually occur serially. That is, a whole transaction has to go through before the other one (or every other one), or at least a group of statements within the transaction has to.

Stuff like select for update gives the database engine some hints, but in the case I mentioned it is not enough (unless modified). Basically the database needs to get a need-to-know summary of the transaction in a single atomic statement; then it can better decide what to block. The application is in the best position to give this summary.

So in the end, for maximum serialisation, it may be a good idea to have an arbitrary lock mechanism to cover the troublesome cases. e.g.

getlock('arbitrary string'[,timeout]);
getlock('all temporary product tables, created or about to be')
getlock('all rows potential or real on all tables where key1=X or key2=Y')

If another transaction tries to get the same lock, it will wait for the timeout, then abort with an error (possible timeout values: -1 = wait forever, 0 or greater = time to wait in seconds).

Ok, so I'm biased towards how MySQL does it (it's simple and has a good chance of working well). Yes, it shifts a lot to the application.
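For illustration only, here is a minimal application-side sketch of those getlock semantics in Python. The `LockManager`, `getlock`, and `releaselock` names are mine, not an actual PostgreSQL or MySQL API (MySQL's real function is `GET_LOCK(str, timeout)`); this just shows the wait-then-abort behaviour described above.

```python
import threading
from collections import defaultdict

class LockManager:
    """Hands out locks keyed by arbitrary strings, mimicking
    getlock('arbitrary string'[, timeout]) as sketched above."""

    def __init__(self):
        self._locks = defaultdict(threading.Lock)  # one lock per name
        self._guard = threading.Lock()             # protects the dict itself

    def getlock(self, name, timeout=-1):
        """Acquire the lock for `name`.

        timeout=-1 waits forever; timeout >= 0 waits that many seconds,
        then aborts with an error (TimeoutError), as in the proposal."""
        with self._guard:
            lock = self._locks[name]
        # threading.Lock.acquire treats timeout=-1 as "block indefinitely"
        if not lock.acquire(timeout=timeout):
            raise TimeoutError("could not get lock %r within %ss" % (name, timeout))

    def releaselock(self, name):
        """Release a previously acquired named lock."""
        with self._guard:
            lock = self._locks[name]
        lock.release()
```

A second transaction asking for the same string would block until the first releases it or the timeout expires, which is exactly the "need-to-know summary in a single atomic statement" idea: the string itself is the summary.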
But if people have to do things like issue their multiple select-for-updates in the right order (to prevent deadlocks), they might as well start using something like this instead (or fix their architecture if possible ;) ). And it's likely to be faster!

Anyone else here like this arbitrary lock thingy? I'm very interested to know of other ways to achieve good serialisation, especially database-centric methods.

Cheerio,
Link.

p.s. Would anyone actually need timeouts of a day (86400) or greater?
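The "right order" trick mentioned above is the classic deadlock-avoidance rule: if every transaction acquires its locks in one globally agreed order, no two can ever wait on each other in a cycle. A hedged Python sketch (the helper names are mine; real databases apply the same idea to row locks taken by select for update):

```python
import threading

def acquire_in_order(*locks):
    """Acquire all locks in a fixed global order (here: by object id).

    Because every caller sorts the same way, two threads can never each
    hold a lock the other needs next, so circular waits are impossible."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    """Release a set of locks acquired via acquire_in_order."""
    for lock in locks:
        lock.release()
```

Two threads that name the same locks in opposite orders would deadlock with naive acquisition, but complete fine through `acquire_in_order`, because both end up locking in the same sorted sequence.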