Re: Duplicate Unique Key constraint error
From | Tom Allison |
---|---|
Subject | Re: Duplicate Unique Key constraint error |
Date | |
Msg-id | D985E6C3-0D3F-480E-B8F6-459486A05C55@tacocat.net |
In reply to | Re: Duplicate Unique Key constraint error (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: Duplicate Unique Key constraint error |
List | pgsql-general |
On Jul 10, 2007, at 3:09 PM, Tom Lane wrote:

> "Harpreet Dhaliwal" <harpreet.dhaliwal01@gmail.com> writes:
>> Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31.
>> Now the time when Transaction 2 started and read max(dig_id) it was still 30,
>> and by the time it tried to insert 31, 31 was already inserted by
>> Transaction 1 and hence the unique key constraint error.
>
> This is exactly why you're recommended to use sequences (ie serial
> columns) for generating IDs. Taking max()+1 does not work, unless
> you're willing to lock the whole table and throw away vast amounts of
> concurrency.

I wonder how SQL Server is handling this? Are they locking the table? I realize it's off-topic, but I'm still curious.

Sequences are your friend. They come in INT and BIGINT flavors, but BIGINT is a lot of rows. Can you set sequences to automatically roll over back to zero?
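For anyone hitting the same race, here is a minimal sketch of the sequence-based approach. The table and column names (digs, dig_id, note, dig_id_seq) are just illustrative, not from the original thread. A serial column draws its values from a sequence via nextval(), so concurrent inserts never see the same value, and a standalone sequence can be declared with CYCLE so it wraps around when it reaches its maximum:

    -- Hypothetical table: dig_id is filled from an implicit sequence,
    -- so two concurrent INSERTs each get a distinct id without any table lock.
    CREATE TABLE digs (
        dig_id  serial PRIMARY KEY,
        note    text
    );

    INSERT INTO digs (note) VALUES ('from transaction 1');
    INSERT INTO digs (note) VALUES ('from transaction 2');  -- no duplicate-key error

    -- Standalone sequence that wraps back to its MINVALUE after MAXVALUE,
    -- i.e. "roll over back to zero". Note the values will repeat, so this
    -- only makes sense where reuse of old numbers is acceptable.
    CREATE SEQUENCE dig_id_seq MINVALUE 0 MAXVALUE 999999 CYCLE;
    SELECT nextval('dig_id_seq');

One caveat on the rollover question: letting the sequence behind a unique or primary key column CYCLE would eventually hand out values that already exist in the table and reproduce the duplicate-key error, so wrapping is really only useful for non-unique counters.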