Re: Event-driven programming?
From: shakahshakah@gmail.com
Subject: Re: Event-driven programming?
Date:
Msg-id: 1189630471.367782.134120@57g2000hsv.googlegroups.com
In reply to: Re: Event-driven programming? ("D. Dante Lorenso" <dante@lorenso.com>)
List: pgsql-general
On Sep 12, 3:05 pm, da...@lorenso.com ("D. Dante Lorenso") wrote:
> Pavel Stehule wrote:
> > 2007/9/12, Jay Dickon Glanville <dickon.glanvi...@gmail.com>:
> >> - I write a function (it doesn't matter what language it's in:
> >> PL/pgSQL, PL/Java, etc)
> >> - I register that function as a "post-commit" callback function
> >> - when a client commits a transaction, the function gets called, and
> >> the database passes the function some general information as to the
> >> content of the transaction
> >>
> >> Note how similar this process is to writing triggers. The only
> >> problem I have with triggers is that events get generated per-table.
> >> I'd like to get notifications based on transactions, not table
> >> changes.
> >>
> >> What I'd like to be able to do with this event is to notify any
> >> applications of this change, so they can update their cached view of
> >> the database.
>
> Although I'm happy to use triggers as-is (not per transaction, etc) I've
> also wondered about firing events from the database. I'm curious to
> know if anyone has attempted to write a trigger that will open a socket
> and send an event packet to an application server on the network.
>
> I've considered using a message queue like JMS to manage events on my
> network and have PostgreSQL fire off UDP messages to a socket server
> that would insert jobs into the message queue as triggers get fired in
> the database. Doing this would be an alternative to storing the queue
> as a database table and having to use polling to constantly check the
> database for events in the queue.
>
> I am interested what anybody might contribute to this thread. Let us
> know what you tried whether it worked or not, it might be useful.
>
> -- Dante

Depending on your reliability requirements UDP may not be a great choice. But, since you asked about what's been tried, my (successful so far) production setup is along the lines of:

1. Process A accepts multiple data flows, inserts "work to be done" items into a table in batches, and calls NOTIFY.
2. Process B LISTENs for notifications (with a blocking read on the socket connection to Postgres) and takes them as a signal to look for "work items to be done". It also checks every N minutes of idle time for "work items to be done" in case the NOTIFY/LISTEN mechanism is broken (haven't seen that situation yet).

As for recovery, process B looks for work items on startup, then drops into the LISTEN / blocking_read mode.
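[Editorial sketch] The two-process setup above can be sketched roughly as follows, assuming Python with psycopg2. The table name `work_items`, the channel name `work_queue`, and all helper names are illustrative, not taken from the original post; the driver import is deferred so the module loads without a running server.

```python
import select

CHANNEL = "work_queue"          # illustrative NOTIFY channel name
IDLE_TIMEOUT_SECS = 300         # the "check every N minutes of idle time" fallback

def pending_work_sql(table="work_items"):
    # Query process B runs to find "work items to be done".
    return f"SELECT id, payload FROM {table} WHERE done = false ORDER BY id"

def handle(payload):
    pass  # application-specific work goes here

def drain(conn):
    # Handle every queued item, then mark it done.
    with conn.cursor() as cur:
        cur.execute(pending_work_sql())
        for item_id, payload in cur.fetchall():
            handle(payload)
            cur.execute("UPDATE work_items SET done = true WHERE id = %s",
                        (item_id,))

def enqueue(conn, payloads, table="work_items"):
    # Process A: insert a batch of work items, then wake any listeners.
    with conn.cursor() as cur:
        cur.executemany(f"INSERT INTO {table} (payload) VALUES (%s)",
                        [(p,) for p in payloads])
        cur.execute(f"NOTIFY {CHANNEL};")
    conn.commit()

def listen_loop(dsn):
    # Process B: drain the backlog on startup, then block on LISTEN.
    import psycopg2, psycopg2.extensions  # deferred: needs a live server
    conn = psycopg2.connect(dsn)
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    conn.cursor().execute(f"LISTEN {CHANNEL};")
    drain(conn)  # recovery step: look for work items on startup
    while True:
        # Blocking read on the connection's socket; the timeout covers the
        # case where the NOTIFY/LISTEN mechanism is broken or a wake-up lost.
        if select.select([conn], [], [], IDLE_TIMEOUT_SECS)[0]:
            conn.poll()
            del conn.notifies[:]  # payload unused; NOTIFY is only a wake-up
        drain(conn)
```

Note the design choice the post implies: NOTIFY carries no work itself, it only signals "go look at the table", so a missed notification degrades to polling rather than lost work.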