Re: queries on xmin

| From | Matt Amos |
|---|---|
| Subject | Re: queries on xmin |
| Date | |
| Msg-id | 79d9e4e90906110912u1b6b3d51j1e7ddd2cc647fdbf@mail.gmail.com |
| In response to | Re: queries on xmin (Marko Kreen <markokr@gmail.com>) |
| Responses | Re: queries on xmin |
| List | pgsql-general |
On Thu, Jun 11, 2009 at 2:48 PM, Marko Kreen<markokr@gmail.com> wrote:
> On 6/11/09, Matt Amos <zerebubuth@gmail.com> wrote:
>> On Thu, Jun 11, 2009 at 1:13 PM, Brett Henderson<brett@bretth.com> wrote:
>> >> See pgq.batch_event_sql() function in Skytools [2] for how to
>> >> query txids between snapshots efficiently and without being affected
>> >> by long transactions.
>> >
>> > I'll take a look.
>>
>> it was looking at the skytools stuff which got me thinking about using
>> txids in the first place. someone on the osm-dev list had suggested
>> using PgQ, but we weren't keen on the schema changes that would have
>> been necessary.
>
> Except the trigger, PgQ does not need any schema changes?

i've been having a look and it seems to me that PgQ requires some extra
tables as well as the trigger. am i missing something?

PgQ might be a good solution, but i'm worried that after calling
pgq.finish_batch() the batch is released. this would mean it wouldn't be
possible to regenerate older files (e.g. a few days to a week old) in
case something unexpected went wrong. it might not be a major problem,
though.

i think we could get the same functionality without the extra daemons by
putting a trigger on those tables for insert and recording the object
id, version and 64-bit txid in another table. but if we're going to
alter the schema we might as well put the txid column directly into
those tables...

cheers,

matt
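For reference, the trigger approach discussed above could be sketched roughly as follows. This is a minimal sketch, not the actual OSM schema: the `nodes` table and its `id`/`version` columns, the `node_txids` side table, and the function name are all hypothetical; only `txid_current()` is the real PostgreSQL/Skytools 64-bit txid function.

```sql
-- Hypothetical side table recording which transaction wrote each row
-- version, keeping the main tables' schema untouched apart from the trigger.
CREATE TABLE node_txids (
    object_id bigint NOT NULL,
    version   int    NOT NULL,
    txid      bigint NOT NULL  -- 64-bit epoch-extended transaction id
);

CREATE OR REPLACE FUNCTION record_txid() RETURNS trigger AS $$
BEGIN
    -- txid_current() returns the current transaction's 64-bit txid,
    -- which (unlike raw xmin) does not suffer from wraparound.
    INSERT INTO node_txids (object_id, version, txid)
        VALUES (NEW.id, NEW.version, txid_current());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER nodes_record_txid
    AFTER INSERT ON nodes
    FOR EACH ROW EXECUTE PROCEDURE record_txid();
```

A dump run could then record `txid_current_snapshot()` at its start and, on the next run, select rows from `node_txids` whose txid became visible between the two snapshots (e.g. via `txid_visible_in_snapshot()`), which is essentially what pgq.batch_event_sql() does internally.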