Re: Can Postgres Not Do This Safely ?!?
| From | Peter Geoghegan |
|---|---|
| Subject | Re: Can Postgres Not Do This Safely ?!? |
| Date | |
| Msg-id | AANLkTim-Pv3E0Q5Od-ue9P1p3R7Tboj1n1D1u5rkyAi4@mail.gmail.com |
| In reply to | Can Postgres Not Do This Safely ?!? (Karl Pickett <karl.pickett@gmail.com>) |
| Responses | Re: Can Postgres Not Do This Safely ?!? |
| List | pgsql-general |
On 29 October 2010 03:04, Karl Pickett <karl.pickett@gmail.com> wrote:
> Hello Postgres Hackers,
>
> We have a simple 'event log' table that is insert only (by multiple
> concurrent clients). It has an integer primary key. We want to do
> incremental queries of this table every 5 minutes or so, i.e. "select
> * from events where id > LAST_ID_I_GOT" to insert into a separate
> reporting database. The problem is, this simple approach has a race
> that will forever skip uncommitted events. I.e., if 5000 was
> committed sooner than 4999, and we get 5000, we will never go back and
> get 4999 when it finally commits. How can we solve this? Basically
> it's a phantom row problem, but it spans transactions.
>
> I looked at checking the internal 'xmin' column, but the docs say that
> is 32 bit, and something like 'txid_current_snapshot' returns a 64 bit
> value. I don't get it. All I want to do is make sure I skip over any
> rows that are newer than the oldest currently running transaction.
> Has nobody else run into this before?

If I understand your question correctly, you want a "gapless" PK:
http://www.varlena.com/GeneralBits/130.php

--
Regards,
Peter Geoghegan
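A minimal sketch of the snapshot-based variant the question reaches for, as an illustration rather than anything from the thread: the `events` schema and the `:last_xmin` / `:new_xmin` psql-style placeholders below are assumptions. Writers stamp each row with `txid_current()`, and the reader only takes rows whose writing transaction is strictly older than every transaction still in flight, so a slow commit of row 4999 can never be skipped:

```sql
-- Assumed schema (not from the thread): writers record their 64-bit
-- transaction id alongside each event.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    txid    bigint NOT NULL DEFAULT txid_current(),
    payload text
);

-- Step 1: capture the new watermark BEFORE fetching the batch.
-- txid_snapshot_xmin() returns the oldest transaction id still running,
-- so every row below it was written by a transaction that has finished.
SELECT txid_snapshot_xmin(txid_current_snapshot()) AS new_xmin;

-- Step 2: fetch everything between the old and new watermarks.
-- :last_xmin is the watermark persisted after the previous run
-- (use 0 on the very first run).  Rows from still-in-flight
-- transactions fall above :new_xmin and are picked up next time,
-- which closes the race described above.
SELECT id, payload
FROM events
WHERE txid >= :last_xmin
  AND txid <  :new_xmin
ORDER BY txid, id;
```

Rows below the snapshot's xmin can only come from transactions that have already committed or aborted, so a late commit is never skipped; the gapless-PK approach in the linked article avoids this bookkeeping but serializes inserts instead.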