Re: Moving from MySQL to PGSQL....some questions (multilevel

From:        Karl O. Pinc
Subject:     Re: Moving from MySQL to PGSQL....some questions (multilevel
Date:
Msg-id:      20040304084840.E18208@mofo.meme.com
In reply to: Re: Moving from MySQL to PGSQL....some questions (multilevel (Bruno Wolff III <bruno@wolff.to>)
Responses:   Re: Moving from MySQL to PGSQL....some questions (multilevel
List:        pgsql-general
On 2004.03.03 22:48 Bruno Wolff III wrote:
> On Wed, Mar 03, 2004 at 17:22:44 -0600,
>   "Karl O. Pinc" <kop@meme.com> wrote:
> >
> > To make it fast, you'd want to keep the max(id2) value on the table
> > keyed by id1. Your trigger would update the max(id2) value as well
> > as alter the row being inserted. To keep from having problems with
> > concurrent inserts, you'd need to perform all inserts inside
> > serialized transactions. The only problem I see is that there's
> > a note in the documentation that says that postgresql's serialization
> > doesn't always work. Anybody know if it would work in this case?
>
> There was a discussion about predicate locking some time ago (I think
> last summer). Postgres doesn't do this and it is possible for two
> parallel transactions to get results that aren't consistent with
> one transaction occurring before the other. I think the particular
> example was inserting some rows and then counting them in each of
> two parallel transactions. The answer you get won't be the same as
> if either of the two transactions occurred entirely before the other.
> This might be what you are referring to.

Yes. That's it. So it sounds like this would be a concurrency-safe way
to perform the operation. It also sounds like it might be a good idea
to do SELECT FOR UPDATE on the table/row keyed by id1 to keep the
serialized transactions from stepping on each other's toes.

Karl <kop@meme.com>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein
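A minimal PL/pgSQL sketch of the approach discussed above: a BEFORE INSERT
trigger that locks the parent row with SELECT ... FOR UPDATE, assigns the
next id2, and updates the stored max(id2). The table and column names
(parent, child, max_id2) are hypothetical and not taken from the thread.

```sql
-- Sketch only: names are illustrative, not from the original thread.
-- parent keeps the running max(id2) per id1.
CREATE TABLE parent (
    id1     integer PRIMARY KEY,
    max_id2 integer NOT NULL DEFAULT 0
);

CREATE TABLE child (
    id1  integer NOT NULL REFERENCES parent (id1),
    id2  integer NOT NULL,
    data text,
    PRIMARY KEY (id1, id2)
);

CREATE OR REPLACE FUNCTION child_next_id2() RETURNS trigger AS $$
DECLARE
    current_max integer;
BEGIN
    -- Lock the parent row; a concurrent insert for the same id1 blocks
    -- here until this transaction commits or rolls back.
    -- Assumes a row for NEW.id1 already exists in parent.
    SELECT max_id2 INTO current_max
      FROM parent
     WHERE id1 = NEW.id1
       FOR UPDATE;

    NEW.id2 := current_max + 1;

    UPDATE parent
       SET max_id2 = NEW.id2
     WHERE id1 = NEW.id1;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER child_id2_trigger
    BEFORE INSERT ON child
    FOR EACH ROW EXECUTE PROCEDURE child_next_id2();
```

Because the parent row is locked before the new id2 is computed, concurrent
inserts for the same id1 queue behind one another rather than stepping on
each other's toes.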