Discussion: Re: trouble creating log table with rules
> Date: Thu, 24 Jun 1999 01:53:05 -0500
> From: Jim Rowan <jmr@computing.com>
> Subject: trouble creating log table with rules
>
> I've read the docs in the programmers manual, and can create rules like this:
>
> CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
>     do (
>     INSERT INTO machine_log (who, date, machnum, col, newval)
>         SELECT getpgusername(), 'now'::text, old.machnum,
>                'host', new.host
>         WHERE (new.host != old.host) or
>               (old.host IS NOT NULL and new.host IS NULL) or
>               (old.host IS NULL and new.host IS NOT NULL);
>
>     INSERT INTO machine_log (who, date, machnum, col, newval)
>         SELECT getpgusername(), 'now'::text, old.machnum,
>                'serial_num_cpu', new.serial_num_cpu
>         WHERE (new.serial_num_cpu != old.serial_num_cpu) or
>               (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
>               (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
>     );
>
> My big problem is that if I replicate this enough times to cover the fields I
> want, I get this error:
>
> pqReadData() -- backend closed the channel unexpectedly.
>     This probably means the backend terminated abnormally
>     before or while processing the request.
> We have lost the connection to the backend, so further processing is
> impossible. Terminating.
>
> Is there a way I can avoid this error? Is there a better way to code these
> rules?

Hi,

I've seen similar kinds of behaviour in 6.4.2 with triggers/rules/procedures
and so on, where the backend would die randomly. I'm not sure if it is still
there in 6.5 -- I haven't used it yet. Have a look at the error log from the
postmaster and see if there is anything interesting in there, and I might be
able to help you some more. Sometimes you might get a BTP_CHAIN fault, or
another one (I can't remember -- I haven't seen it in a while).

The solution I found was, just before adding your procedures or whatever, to
do a VACUUM ANALYZE pg_proc, which vacuums one of the internal system tables;
after that it would work. I found that without the vacuum, postgres would die
every third or fourth time I tried to reload my triggers, etc.

Also, I haven't reported this yet (because I can't reproduce it), but every so
often you'll do the vacuum and it will return "Blowaway_relation_buffers
returned -2" and the vacuum dies. This is really bad: normally you would dump
the data and reload, but you can't do that for pg_proc, so the dbms is screwed
and you have to reload the whole thing. It turns out that one of the indices,
or the table itself, has this BTP_CHAIN problem. I did some experiments trying
to trick postgres into letting me dump and reload it (i.e. create a new table
called pg_proc_2 with the same data and indices and move it into place), but
it won't let you do that, to protect itself. The worst part with this kind of
death is that my database is about 1.1 Gb on disk, so reloading is NOT
something I want to have to do :)

Anyone got any advice for this, or know of a problem? As mentioned in another
email posted to the hackers list, I am getting lots of BTP_CHAIN problems and
having to reload tables, which is not something I want to do during the day
when staff are trying to use the database and I have to shut it down. I've
heard there is a patch for this, but I haven't got anything back on whether
it's OK to use it or not.
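To make the VACUUM workaround described above concrete, here is a minimal
sketch of the sequence, assuming a 6.4/6.5-era psql session and the
machine/machine_log schema from the quoted post (only one of the rule actions
is shown):

    -- Vacuum the system catalog that holds procedure definitions first; per
    -- the report above, skipping this sometimes led to the backend dying
    -- while (re)loading triggers, rules, or procedures.
    VACUUM ANALYZE pg_proc;

    -- ...then (re)create the rules/procedures as usual, e.g.:
    CREATE RULE "m_log_change" AS ON UPDATE TO "machine" DO (
        INSERT INTO machine_log (who, date, machnum, col, newval)
            SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
            WHERE (new.host != old.host) OR
                  (old.host IS NOT NULL AND new.host IS NULL) OR
                  (old.host IS NULL AND new.host IS NOT NULL);
    );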
bye,
Wayne

------------------------------------------------------------------------------
Wayne Piekarski                                 Tel:   (08) 8221 5221
Research & Development Manager                  Fax:   (08) 8221 5220
SE Network Access Pty Ltd                       Mob:   0407 395 889
222 Grote Street                                Email: wayne@senet.com.au
Adelaide SA 5000                                WWW:   http://www.senet.com.au
> > Date: Thu, 24 Jun 1999 01:53:05 -0500
> > From: Jim Rowan <jmr@computing.com>
> > Subject: trouble creating log table with rules
> >
> > I've read the docs in the programmers manual, and can create rules like this:
> >
> > CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
> >     do (
> >     INSERT INTO machine_log (who, date, machnum, col, newval)
> >         SELECT getpgusername(), 'now'::text, old.machnum,
> >                'host', new.host
> >         WHERE (new.host != old.host) or
> >               (old.host IS NOT NULL and new.host IS NULL) or
> >               (old.host IS NULL and new.host IS NOT NULL);
> >
> >     INSERT INTO machine_log (who, date, machnum, col, newval)
> >         SELECT getpgusername(), 'now'::text, old.machnum,
> >                'serial_num_cpu', new.serial_num_cpu
> >         WHERE (new.serial_num_cpu != old.serial_num_cpu) or
> >               (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
> >               (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
> >     );
> >
> > My big problem is that if I replicate this enough times to cover the fields I
> > want, I get this error:
> >
> > pqReadData() -- backend closed the channel unexpectedly.
> >     This probably means the backend terminated abnormally
> >     before or while processing the request.
> > We have lost the connection to the backend, so further processing is
> > impossible. Terminating.

You didn't tell us which version of PostgreSQL and (more important) if the
error occurs during CREATE RULE or when updating machine.

If it occurs during the CREATE RULE (which is what I hope for you), it doesn't
happen in the rewriter itself. For the rule actions in the example above it
isn't important in which order they are processed, so you could set up
single-action rules per field to get (mostly) the same results.

If you can create the entire multi-action rule but get the backend crash
during UPDATE of machine, then it's a problem in the rewriter, which I cannot
imagine looking at your rules.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #
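For reference, a sketch of the single-action-rule-per-field variant Jan
suggests above, reusing the machine/machine_log schema from the original post
(the rule names are made up for illustration):

    CREATE RULE "m_log_host" AS ON UPDATE TO "machine"
        DO INSERT INTO machine_log (who, date, machnum, col, newval)
            SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
            WHERE (new.host != old.host) OR
                  (old.host IS NOT NULL AND new.host IS NULL) OR
                  (old.host IS NULL AND new.host IS NOT NULL);

    CREATE RULE "m_log_serial_num_cpu" AS ON UPDATE TO "machine"
        DO INSERT INTO machine_log (who, date, machnum, col, newval)
            SELECT getpgusername(), 'now'::text, old.machnum,
                   'serial_num_cpu', new.serial_num_cpu
            WHERE (new.serial_num_cpu != old.serial_num_cpu) OR
                  (old.serial_num_cpu IS NOT NULL AND new.serial_num_cpu IS NULL) OR
                  (old.serial_num_cpu IS NULL AND new.serial_num_cpu IS NOT NULL);

    -- ...one more CREATE RULE for each remaining column of "machine".

Note, though, that the follow-up below reports a "more than 10" rewrite-loop
error at UPDATE time with many separate rules, so this approach may run into
that limit.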
I said:
> > CREATE RULE "m_log_change" AS ON UPDATE TO "machine"
> >     do (
> >     INSERT INTO machine_log (who, date, machnum, col, newval)
> >         SELECT getpgusername(), 'now'::text, old.machnum,
> >                'host', new.host
> >         WHERE (new.host != old.host) or
> >               (old.host IS NOT NULL and new.host IS NULL) or
> >               (old.host IS NULL and new.host IS NOT NULL);
> >
> >     INSERT INTO machine_log (who, date, machnum, col, newval)
> >         SELECT getpgusername(), 'now'::text, old.machnum,
> >                'serial_num_cpu', new.serial_num_cpu
> >         WHERE (new.serial_num_cpu != old.serial_num_cpu) or
> >               (old.serial_num_cpu IS NOT NULL and new.serial_num_cpu IS NULL) or
> >               (old.serial_num_cpu IS NULL and new.serial_num_cpu IS NOT NULL);
> >     );
> >
> > My big problem is that if I replicate this enough times to cover the fields I
> > want, I get this error:
> >
> > pqReadData() -- backend closed the channel unexpectedly.
> >     This probably means the backend terminated abnormally
> >     before or while processing the request.
> > We have lost the connection to the backend, so further processing is
> > impossible. Terminating.

wieck> You didn't tell us which version of PostgreSQL and (more
wieck> important) if the error occurs during CREATE RULE or when updating
wieck> machine.

Duhhh. Sorry! PostgreSQL 6.5; FreeBSD 3.2-STABLE, recent. The error occurs
during CREATE RULE.

wieck> So you could set up single-action rules per field to get (mostly)
wieck> the same results.

I previously had tried to do the same thing with many (more than 10) distinct
single-action rules (sorry, I don't have the exact syntax of what I used, but
it was very similar to this example). In that case the CREATE RULE worked
properly, but it bombed out at update time (again, I don't have the details
anymore). The error message indicated that it thought there was a loop in my
rules, something about "more than 10"... In that case, as I remember, the
backend did not crash -- it just declined to execute the update.

I'll try multiple multi-action rules to see if I can do what I want. Is this
(the way I'm writing the rules) the best approach?

Jim Rowan           DCSI  DCE/DFS/Sysadmin Consulting
jmr@computing.com   (512) 374-1143
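A sketch of what the "multiple multi-action rules" idea mentioned above might
look like: splitting the columns across a few smaller rules instead of one big
one, so no single CREATE RULE has to carry every field. The rule names are
illustrative, and only the two columns from the original example are shown:

    -- Each rule carries the INSERT actions for a small group of columns.
    CREATE RULE "m_log_change_1" AS ON UPDATE TO "machine" DO (
        INSERT INTO machine_log (who, date, machnum, col, newval)
            SELECT getpgusername(), 'now'::text, old.machnum, 'host', new.host
            WHERE (new.host != old.host) OR
                  (old.host IS NOT NULL AND new.host IS NULL) OR
                  (old.host IS NULL AND new.host IS NOT NULL);

        INSERT INTO machine_log (who, date, machnum, col, newval)
            SELECT getpgusername(), 'now'::text, old.machnum,
                   'serial_num_cpu', new.serial_num_cpu
            WHERE (new.serial_num_cpu != old.serial_num_cpu) OR
                  (old.serial_num_cpu IS NOT NULL AND new.serial_num_cpu IS NULL) OR
                  (old.serial_num_cpu IS NULL AND new.serial_num_cpu IS NOT NULL);
    );

    -- "m_log_change_2", "m_log_change_3", ... would each cover the next few
    -- columns of "machine" in the same way.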