Discussion: catalog corruption causes
I am trying to get my head around what causes catalog corruption. I have posted before with regard to recovering from corruptions (if that is what indeed happened to me), and was given much help.

Does anyone know why a database catalog will get corrupted? As I mentioned previously, my db involves considerable use of temporary tables created by php-psql connections. Other than that, I don't know of too much that is "odd" about my use (misuse) of the database.

Thanks,
r.b.

Robert W. Burgholzer
Surface Water Modeler
Office of Water Supply and Planning
Virginia Department of Environmental Quality
rwburgholzer@deq.virginia.gov
804-698-4405
Open Source Modeling Tools:
http://sourceforge.net/projects/npsource/
"Burgholzer, Robert (DEQ)" <Robert.Burgholzer@deq.virginia.gov> wrote: > I am trying to get my head around what causes catalog corruption. From what I've seen many are caused by things outside of PostgreSQL -- like bad disk, bad drivers, OS bugs, running on network drives which aren't reliable, write caches without battery back-up, anti-virus software, etc. Another common cause is using faster-but-unreliable PostgreSQL configuration settings like fsync=off or full_page_writes=off. If that's couple with a kill -9 or a crash of hardware or OS, you can get corruption. Then there's the possibility of PostgreSQL bugs. Make sure you use a recent minor release of whatever major release you're on, so you get the benefit of bug fixes. And I recommend staying away from VACUUM FULL -- among the many other reasons there are to avoid it, it seems to have more than it's share of odd corner cases where things can go wrong. Less likely, but still possible, is that there's some malicious element involved. Keep your security tight to minimize the risk of that. -Kevin
Thanks Kevin, that gives me more than enough things to pepper my sysadmin with. :)

r.b.

Robert W. Burgholzer
Surface Water Modeler
Office of Water Supply and Planning
Virginia Department of Environmental Quality
rwburgholzer@deq.virginia.gov
804-698-4405
Open Source Modeling Tools:
http://sourceforge.net/projects/npsource/
On Fri, Aug 6, 2010 at 12:28 PM, Burgholzer, Robert (DEQ) <Robert.Burgholzer@deq.virginia.gov> wrote:

> I am trying to get my head around what causes catalog corruption. I
> have posted before with regard to recovering from corruptions (if that
> is what indeed happened to me), and was given much help.
>
> Does anyone know why a database catalog will get corrupted? As I
> mentioned previously, my db involves considerable use of temporary
> tables created by php-psql connections. Other than that, I don't know
> of too much that is "odd" about my use (misuse) of the database.

The two most common causes are bad memory / hard drives / cpu, and a machine that doesn't fsync properly crashing and losing part of a write to the disks. memtest86+ will give you an idea of whether your hardware (cpu / mem) is stable and reliable. SMART can tell you if your hard drives are acting up. pgbench can tell you if your system is lying about fsync (a single SATA drive shouldn't be able to do more than a few hundred tps).
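A rough sketch of the disk and fsync checks Scott suggests (memtest86+ itself is run from boot media, so it isn't shown here). The device name and test database name are placeholders, and smartctl and pgbench are assumed to be installed:

    # SMART health and error counters for a suspect drive
    smartctl -a /dev/sda

    # pgbench write test: initialize a scale-10 test database, then run a short
    # write-heavy benchmark; on a single SATA drive with honest fsync, sustaining
    # much more than a few hundred tps is suspicious
    pgbench -i -s 10 test
    pgbench -c 4 -T 60 test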
On Fri, Aug 6, 2010 at 12:28 PM, Burgholzer, Robert (DEQ) <Robert.Burgholzer@deq.virginia.gov> wrote:

> I am trying to get my head around what causes catalog corruption. I
> have posted before with regard to recovering from corruptions (if that
> is what indeed happened to me), and was given much help.
>
> Does anyone know why a database catalog will get corrupted? As I
> mentioned previously, my db involves considerable use of temporary
> tables created by php-psql connections. Other than that, I don't know
> of too much that is "odd" about my use (misuse) of the database.

Note that it's also possible you've found some uncommon corner case in the pgsql code. If you have tested memory / cpu and can consistently get corrupted catalogs by running a particular sequence of commands against pgsql over and over, then definitely report it and provide a test case. Guidance on reporting problems is here: http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
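If you suspect the temporary-table workload specifically, one way to build a test case for such a report is a loop that creates, fills, and drops a temp table over and over. The table and database names below are made up for illustration:

    # hammer temp-table creation; if this ever corrupts the catalogs on
    # verified-good hardware, include the script in your bug report
    while true; do
      psql -d yourdb -c "CREATE TEMP TABLE tmp_repro (id int, val text);
                         INSERT INTO tmp_repro SELECT g, md5(g::text) FROM generate_series(1,1000) g;
                         DROP TABLE tmp_repro;"
    done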
High-water Mark for number of sessions/connections reached in Postgres

I was wondering if we can query/obtain the high-water mark of the number of sessions or connections reached in a Postgres database. Is there a view or command that can provide this information? The pg_stat_database view shows the current number of connections, but not the high-water mark a database had reached.

Thanks in advance.

Husam
Re: High-water Mark for number of sessions/connections reached in Postgres
From: "Kevin Grittner"
"Tomeh, Husam" <HTomeh@corelogic.com> wrote: > I was wondering if we can query/obtain the high-water mark of > number of sessions or connections reached in a Postgres database. I'm not aware of anything like that, although it seems as though it would have obvious uses in database administration. If you had such a thing, would you want the ability to reset it? -Kevin
Thanks Scott, I will look into these testing programs and file a bug if I have in fact found a special occurrence. I suspect that there might be some issues with our postgres install, as the initial install was via yum, and I recall that our sysadmin had some difficulty removing it in order to do a compile-from-source install. We have also experienced one or two cases of strange hard drive behavior, so this gives us some avenues to explore.

r.b.

Robert W. Burgholzer
Surface Water Modeler
Office of Water Supply and Planning
Virginia Department of Environmental Quality
rwburgholzer@deq.virginia.gov
804-698-4405
Open Source Modeling Tools:
http://sourceforge.net/projects/npsource/
On Fri, Aug 6, 2010 at 6:19 PM, Tomeh, Husam <HTomeh@corelogic.com> wrote:

> I was wondering if we can query/obtain the high-water mark of number of sessions or connections reached in a Postgres database. Is there a view or command that can provide this information. The pg_stat_database shows the current number of connections, but not the high-water mark a database had reached.

It's a pretty easy thing to approximate with a shell script:

while true; do
  ps ax | grep postgres: | grep -v grep | wc -l
  sleep 10
done > connects.log &

then just tail the connects.log file. It's a dirty hack and it'll be a few counts over due to counting the postmaster and a few other processes, but it'll give you a good idea of what your system is doing. Add a date in there if you need to know the time it was happening.
If you want a date stamp, you can change the ps ax stuff to look like this:

date +"%Y-%m-%d %H:%M:%S" | tr "\n" ":"; ps ax | grep postgres: | grep -v grep | wc -l

On Mon, Aug 9, 2010 at 4:21 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Fri, Aug 6, 2010 at 6:19 PM, Tomeh, Husam <HTomeh@corelogic.com> wrote:
>> I was wondering if we can query/obtain the high-water mark of number of sessions or connections reached in a Postgres database. Is there a view or command that can provide this information. The pg_stat_database shows the current number of connections, but not the high-water mark a database had reached.
>
> It's a pretty easy thing to approximate with a shell script.
>
> while true;do ps ax|grep postgres:|grep -v grep|wc -l ;sleep 10;done > connects.log &
>
> then just tail the connects.log file. It's a dirty hack and it'll be
> a few counts over due to counting the postmaster and a few other
> processes, but it'll give you a good idea of what your system is
> doing. Add a date in there if you need to know the time it was
> happening.

--
To understand recursion, one must first understand recursion.
Excerpts from Scott Marlowe's message of Mon Aug 09 18:29:58 -0400 2010:

> If you want a date stamp, you can change the ps ax stuff to look like this:
>
> date +"%Y-%m-%d %H:%M:%S"|tr "\n" ":";ps ax|grep postgres:|grep -v grep|wc -l

FWIW the "grep" business is best solved by ps itself, something like ps ax -C postgres:

date +"%Y-%m-%d %H:%M:%S" | tr "\n" ":"; ps ax -C postgres | wc -l

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Thanks for these workarounds, I have something similar implemented. It would've been nice to have Postgres maintain the high-water mark inside the database in a pg view such as pg_stat_database. This is useful from a capacity perspective, where it can be monitored to alert DBAs when a threshold is reached. Other DB engines, such as Oracle, have such a feature built in. Perhaps something to be on the Postgres TODO list soon :)

Regards,
Husam
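Until core PostgreSQL tracks this, one workaround is to maintain your own high-water mark in a small table that a scheduled job updates. A minimal sketch, assuming a database named yourdb and a hypothetical table connection_hwm; run the UPDATE from cron (or any scheduler) as often as you want samples:

    # one-time setup
    psql -d yourdb -c "CREATE TABLE connection_hwm (max_backends int NOT NULL DEFAULT 0);
                       INSERT INTO connection_hwm VALUES (0);"

    # from cron, e.g. every minute: raise the stored maximum if the current
    # backend count exceeds it
    psql -d yourdb -c "UPDATE connection_hwm
                       SET max_backends = GREATEST(max_backends,
                           (SELECT count(*) FROM pg_stat_activity));"

Like the ps-based scripts above, this only catches peaks that coincide with a sample, so it can understate the true high-water mark between runs.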
On Mon, Aug 9, 2010 at 5:33 PM, Tomeh, Husam <HTomeh@corelogic.com> wrote:

> This is useful from a capacity perspective where that can be monitored to alert DBAs when a threshold is reached.

If alerting based on connection counts is your concern, you might get some benefit from the Bucardo project's check_postgres.pl script. One of the tests it performs is a backend count. It works with any monitoring package that speaks NRPE, so you can issue alerts based on either a percentage of your max_connections setting or on the actual backend count, as indicated by the arguments with which it's invoked. It would also be fairly straightforward to store whatever value the test returns in a db table, log file, Cacti/other RRD-based tools, or whatever else strikes your fancy, for historical and trending analysis.

<http://bucardo.org/check_postgres/>

rls

--
:wq
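For reference, the backends check is typically invoked along these lines; the flag spellings and thresholds below are assumptions from memory, so confirm them against the check_postgres documentation before relying on them:

    # warn at 80% of max_connections, go critical at 90% (example thresholds)
    ./check_postgres.pl --action=backends --host=localhost --dbname=postgres \
        --warning='80%' --critical='90%'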