Re: serial columns & loads misfeature?
From | Lee Harr
Subject | Re: serial columns & loads misfeature?
Date |
Msg-id | afld9c$1eni$1@news.hub.org
In reply to | serial columns & loads misfeature? (Kevin Brannen <kevinb@nurseamerica.net>)
List | pgsql-general
> After I created the DB, I inserted the data (thousands of inserts) via
> psql. All went well. Then I started testing the changed code (Perl)
> and when I went to insert, I got a "dup key" error.
>
> It took me awhile to figure out what was going on, but I can recreate
> the problem with:
>
> create table test (s serial, i int);
> insert into test values (1,1);
> insert into test values (2,2);
> insert into test values (3,3);
> insert into test (i) values (4);
> ERROR: Cannot insert a duplicate key into unique index test_s_key

With these inserts, you are bypassing the SERIAL mechanism
(it uses a DEFAULT value).

> I was expecting the system to realize new "keys" had been inserted, and
> so when the "nextval" that implicitly happens on a serial field is run,
> it would "know" that it was too small and return "max(s)+1". [FWIW, my
> expectations in this area were set by my experience with Informix and
> mysql, both do this; not sure if other RDBMs do.]

I can certainly see the advantage of having the SERIAL columns set
properly by some kind of OtherDB --> Postgres conversion tool, but I do
not think there is a need for a different mechanism in the usual case.
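For anyone hitting the same error: a minimal sketch of the mechanism and
the usual fix, assuming the sequence got the default name test_s_seq
(the <table>_<column>_seq convention) when the table was created:

-- SERIAL is essentially an integer column with a sequence-backed default,
-- roughly: s int DEFAULT nextval('test_s_seq')
-- Inserting explicit values for s never advances the sequence, so the
-- next default insert hands out 1 again and collides in the unique index.

-- Resynchronize the sequence with the data already in the table:
SELECT setval('test_s_seq', (SELECT max(s) FROM test));

-- Default inserts then continue from max(s)+1:
INSERT INTO test (i) VALUES (4);

A load or conversion tool can issue the same kind of setval call after
copying the data in, which is the "conversion tool" approach mentioned
above.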