Re: using a postgres table as a multi-writer multi-updater queue
From | Andy Colson
---|---
Subject | Re: using a postgres table as a multi-writer multi-updater queue
Date |
Msg-id | 5653594F.3050906@squeakycode.net
In reply to | using a postgres table as a multi-writer multi-updater queue (Chris Withers <chris@simplistix.co.uk>)
List | pgsql-general
On 11/23/2015 4:41 AM, Chris Withers wrote:
> Hi All,
>
> I wondered if any of you could recommend best practices for using a
> postgres table as a queue. Roughly speaking, 100-200 workers will vomit
> rows at rates of a few hundred per second into the table, leaving the
> status as new, and then as many workers as needed to keep up with the
> load will plough through the queue changing the status to something
> other than new.
>
> My naive implementation would be something along the lines of:
>
> CREATE TABLE event (
>     ts timestamp,
>     event char(40),
>     status char(10),
>     CONSTRAINT pkey PRIMARY KEY(ts, event)
> );
>
> ...with writers doing INSERT or COPY to get data into the table and
> readers doing something like:
>
> SELECT * FROM event WHERE status='new' LIMIT 1000 FOR UPDATE;
>
> ...so, grabbing batches of 1,000, working on them and then setting their
> status.
>
> But, am I correct in thinking that SELECT FOR UPDATE will not prevent
> multiple workers selecting the same rows?
>
> Anyway, is this approach reasonable? If so, what tweaks/optimisations
> should I be looking to make?
>
> If it's totally wrong, how should I be looking to approach the problem?
>
> cheers,
>
> Chris

Have you tried Redis? It's really good at that sort of thing.

-Andy
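[Editor's note, not part of the original thread: on the locking question itself, plain `SELECT ... FOR UPDATE` does prevent two workers from processing the same rows, but concurrent workers will block waiting on each other's locked rows rather than move on to unclaimed ones. Since PostgreSQL 9.5, `FOR UPDATE SKIP LOCKED` lets each worker claim a distinct batch without blocking. A minimal sketch, assuming the `event` table from the question and a hypothetical `'processing'` status value:]

```sql
-- Claim a batch of up to 1000 unclaimed rows and mark them as being
-- worked on. Rows locked by other workers are skipped, not waited on.
-- Requires PostgreSQL 9.5+ for SKIP LOCKED.
BEGIN;

WITH batch AS (
    SELECT ts, event
    FROM event
    WHERE status = 'new'
    ORDER BY ts
    LIMIT 1000
    FOR UPDATE SKIP LOCKED   -- skip rows already claimed by other workers
)
UPDATE event e
SET status = 'processing'    -- hypothetical in-progress status
FROM batch b
WHERE e.ts = b.ts AND e.event = b.event
RETURNING e.ts, e.event;

COMMIT;
```

[At the insert/poll rates described, a partial index such as `CREATE INDEX ON event (ts) WHERE status = 'new'` would likely keep the claiming query cheap as the table grows, since only unprocessed rows are indexed.]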