Re: [HACKERS] Thread-safe queueing?
From | Tim Holloway
Subject | Re: [HACKERS] Thread-safe queueing?
Date |
Msg-id | 382E0926.2C3F536E@southeast.net
In reply to | Re: [HACKERS] Thread-safe queueing? (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: [HACKERS] Thread-safe queueing?
List | pgsql-hackers
Tom Lane wrote:
> Tim Holloway <mtsinc@southeast.net> writes:
> > I need to create a cross-process producer/consumer data queue
> > (e.g. singly-linked list). That is - Processes A, B, and C add nodes
> > to a controlled list and process D removes them. Not sure if the
> > creation of the nodes would be best done by the producers or
> > consumers, but destruction would have to be done by the consumer, as
> > the producers don't wait for processing. For optimal results, the
> > consumer process should sleep until item(s) are added to its queue.
>
> > Query: within the existing backend framework, what's the best way to
> > accomplish this?
>
> More context, please. What are you trying to accomplish? Is this
> really a communication path between backends (and if so, what backend
> code needs it?), or are you trying to set up a queue between SQL
> clients? How much data might need to be in the queue at one time?
>
> regards, tom lane

This is for the logging subsystem I'm developing. The backends call pg_log(), which is like elog(), except that the message is a resource ID plus any parameters, in order to support locales and custom message formatting. These ID+parameter packets are then pipelined down to the logging channels via the log engine to be formatted and output according to rules in the configuration file.

I *think* that the log engine should be a distinct process. I'm not sure I can trust the output not to come out sliced and diced if each backend can run the engine directly -- and for that matter, I see problems if the engine is reconfigured on the fly, owing to the need for each backend to replicate the configuration process (among other things).

The basic singly-linked list component is all I need to handle the FIFO, but obviously I need guards to preserve its integrity. As to the amount of data involved, I sincerely hope the queue would stay pretty shallow!
I have the configuration parser and logging engine operational, so the last significant hurdles are making sure that A) the data to be logged is accessible/addressable by the engine, and B) that the process runs in the proper sequence. A description of what it all will look like is now online at http://postgres.mousetech.com/index.html (with apologies for the ugly formatting).

Thanks,
Tim Holloway