Re: Generating Lots of PKs with nextval(): A Feature Proposal
From | Peter Crabtree
Subject | Re: Generating Lots of PKs with nextval(): A Feature Proposal
Date |
Msg-id | AANLkTilq_4JxDIU_u-F7W2fWfttE21A5GugQKnp0_Tzw@mail.gmail.com
In response to | Re: Generating Lots of PKs with nextval(): A Feature Proposal (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-hackers
On Fri, May 14, 2010 at 5:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Peter Crabtree <peter.crabtree@gmail.com> writes:
>> Now, I was reminded that I could simply do this:
>
>> SELECT nextval('my_seq') FROM generate_series(1, 500);
>
>> But of course then I would have no guarantee that I would get a
>> contiguous block of ids,
>
> The existing "cache" behavior will already handle that for you,
> I believe. I don't really see a need for new features here.

I don't see how that works for this case, because the "cache" setting is static, and also shared between sessions. So if I have 10 records one time, and 100 records the next, and 587 the third time, what should my CACHE be set to for that sequence?

And if I do ALTER SEQUENCE SET CACHE each time, I have either killed concurrency (because I'm locking other sessions out of using that sequence until I'm finished with it), or I have a race condition (if someone else issues an ALTER SEQUENCE before I call nextval()). The same problem exists with using ALTER SEQUENCE SET INCREMENT BY.
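Concretely, the two variants I have in mind would look roughly like this (just a sketch, reusing the my_seq sequence and the 587-row batch from above):

    -- Variant A: ALTER inside a transaction. The ALTER holds its lock on
    -- my_seq until COMMIT, so other sessions calling nextval('my_seq')
    -- wait behind us -- concurrency on the sequence is gone.
    BEGIN;
    ALTER SEQUENCE my_seq CACHE 587;
    SELECT nextval('my_seq');      -- this session now caches a block of 587
    ALTER SEQUENCE my_seq CACHE 1; -- restore the normal setting
    COMMIT;

    -- Variant B: ALTER outside a transaction block. Nothing stops another
    -- session from issuing its own ALTER SEQUENCE (or calling nextval with
    -- the enlarged cache) between these statements, so there is a race.
    ALTER SEQUENCE my_seq CACHE 587;
    SELECT nextval('my_seq');
    ALTER SEQUENCE my_seq CACHE 1;

Peter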