Discussion: Continuing issues... Can't vacuum!
Hello,

I posted yesterday about a corrupt index, and I'm continuing to experience problems. I know that part of my problem is that the databases have not been vacuumed as they should have been. I've tried to vacuum them, but it's not working. The message I'm getting is as follows:

vacuumdb: vacuuming database "km"
NOTICE:  number of page slots needed (2275712) exceeds max_fsm_pages (200000)
HINT:  Consider increasing the configuration parameter "max_fsm_pages" to a value over 2275712.

The problem is that I've found the max_fsm_pages parameter in the postgresql.conf file, but changing it doesn't seem to be having any effect. So I'm going to ask some questions that are probably pretty silly, but I hope you'll help me. First of all, perhaps the postgresql.conf file that I am editing is not the one that is being read when the database server starts. There are several postgresql.conf files on this system. How do I tell which one is being read?

The comment for the max_fsm_pages parameter in the file says what the minimum setting is. Is there also a maximum?

I can't back up my database right now because of the error. I have a backup, but I'm afraid to restore it. I feel like I have many indices that are bloated, and I want to get the databases vacuumed before I start trying to create and load a new database. I'm concerned that there won't be enough index space for the system to sort and copy the indices.

Please, help.

Carol
On Fri, 2008-05-23 at 13:54 -0400, Carol Walter wrote:
> Hello,
>
> vacuumdb: vacuuming database "km"
> NOTICE:  number of page slots needed (2275712) exceeds max_fsm_pages (200000)
> HINT:  Consider increasing the configuration parameter "max_fsm_pages"
> to a value over 2275712.
>
> The problem is that I've found the max_fsm_pages parameter in the
> postgresql.conf file, but changing it doesn't seem to be having any
> effect.

You have to restart the database.

> So I'm going to ask some questions that are probably pretty
> silly, but I hope you'll help me.  First of all, perhaps the
> postgresql.conf file that I am editing is not the one that is
> being read when the database server starts.  There are several
> postgresql.conf files on this system.  How do I tell which one is
> being read?

Whoa... :)  From psql:  show config_file;

> The comment for the max_fsm_pages parameter in the file says what the
> minimum setting is.  Is there also a maximum?

It is limited by your shared memory, but that is not your problem. Your problem is that you don't vacuum enough. My suggestion is to do the following:

Up max_fsm_pages a considerable amount
Restart
Vacuum the entire database
Back up
Reinitialize with the backup
Make sure you are vacuuming properly from then on

Sincerely,

Joshua D. Drake
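For readers following along, that sequence might look roughly like the shell commands below. This is only a sketch: the data directory and database name are the ones that come up elsewhere in this thread, the max_fsm_pages value is just an illustration (anything comfortably above the number in the HINT), and the restart assumes the server was started with pg_ctl so the PID file exists.

  # 1. In the postgresql.conf the server actually reads, raise max_fsm_pages,
  #    e.g.:  max_fsm_pages = 3000000

  # 2. Restart so the new setting takes effect
  pg_ctl -D /dbpdisk/postgres/prod_823 -m fast restart

  # 3. Vacuum (and analyze) every database in the cluster
  vacuumdb --all --analyze

  # 4. Take a fresh dump once the vacuum completes
  pg_dump -Fc km -f km.dump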
Carol Walter wrote:
>
> vacuumdb: vacuuming database "km"
> NOTICE:  number of page slots needed (2275712) exceeds max_fsm_pages (200000)
> HINT:  Consider increasing the configuration parameter "max_fsm_pages"
> to a value over 2275712.
>
> The problem is that I've found the max_fsm_pages parameter in the
> postgresql.conf file, but changing it doesn't seem to be having any
> effect.  So I'm going to ask some questions that are probably pretty
> silly, but I hope you'll help me.  First of all, perhaps the
> postgresql.conf file that I am editing is not the one that is
> being read when the database server starts.  There are several
> postgresql.conf files on this system.  How do I tell which one is
> being read?

Carol,

Do the following in psql:

  show config_file;

You should get some output like so:

             config_file
 -------------------------------------
  /var/lib/pgsql/data/postgresql.conf
 (1 row)

Go forth and edit that file. :-)

> The comment for the max_fsm_pages parameter in the file says what the
> minimum setting is.  Is there also a maximum?

I'm not sure I understand the question. max_fsm_pages is the maximum number of pages to be used in the free space map. See Jim Nasby's article here: http://decibel.org/~decibel/pervasive/fsm.html

> I can't back up my database right now because of the error.  I have a
> backup, but I'm afraid to restore it.  I feel like I have many indices
> that are bloated, and I want to get the databases vacuumed before I
> start trying to create and load a new database.  I'm concerned that
> there won't be enough index space for the system to sort and copy the
> indices.

What's the error? All I see in this email is a warning about max_fsm_pages, and that should not stop you from doing a pg_dump.

--
Jeff Frost, Owner   <jeff@frostconsultingllc.com>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908   FAX: 650-649-1954
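As an aside, the same SHOW trick works for the other file-location settings, which is handy when several installations live on one box. A quick psql session might look like this (these settings are typically visible only when connected as a superuser, and the paths will of course differ per installation):

  km=# show config_file;
  km=# show data_directory;
  km=# show hba_file;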
Oh, yes. I have restarted it several times - or at least I thought that I did. Actually, it's not stopping. I grepped for "post" after I did the stop, and it is still running. I really don't like the idea of issuing the kill command on the database processes. Is there another option?

Carol

On May 23, 2008, at 2:00 PM, Joshua D. Drake wrote:
> You have to restart the database.
On Fri, 2008-05-23 at 14:35 -0400, Carol Walter wrote:
> Oh, yes.  I have restarted it several times - or at least I thought
> that I did.  Actually, it's not stopping.  I grepped for "post" after
> I did the stop, and it is still running.  I really don't like the idea
> of issuing the kill command on the database processes.  Is there
> another option?

pg_ctl -D /path/to/data/dir -m fast stop

Sincerely,

Joshua D. Drake
Does this do an orderly, safe stop? Also, do I then bring it back up with the usual command?

Thanks,
Carol

On May 23, 2008, at 2:39 PM, Joshua D. Drake wrote:
> pg_ctl -D /path/to/data/dir -m fast stop
>>> Carol Walter <walterc@indiana.edu> wrote:
> On May 23, 2008, at 2:39 PM, Joshua D. Drake wrote:
>>
>> pg_ctl -D /path/to/data/dir -m fast stop
>>
> Does this do an orderly, safe stop?  Also, do I then bring it back up
> with the usual command?

"Three different shutdown methods can be selected with the -m option:

"Smart" mode waits for all the clients to disconnect. This is the default.

"Fast" mode does not wait for clients to disconnect. All active transactions are rolled back and clients are forcibly disconnected, then the server is shut down.

"Immediate" mode will abort all server processes without a clean shutdown. This will lead to a recovery run on restart."

From the docs:

http://www.postgresql.org/docs/8.2/interactive/app-pg-ctl.html

-Kevin
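Spelled out against the data directory used in this thread, the three modes (plus a normal start afterwards) would look roughly like this:

  # "smart" stop: waits for all clients to disconnect (the default)
  pg_ctl -D /dbpdisk/postgres/prod_823 stop -m smart

  # "fast" stop: rolls back open transactions, disconnects clients, then shuts down
  pg_ctl -D /dbpdisk/postgres/prod_823 stop -m fast

  # "immediate" stop: aborts the server processes; recovery runs at the next start
  pg_ctl -D /dbpdisk/postgres/prod_823 stop -m immediate

  # bring the server back up the usual way
  pg_ctl -D /dbpdisk/postgres/prod_823 start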
Jeff,

Thank you very much. I didn't know about this command. There are lots of times this would have saved me looking it up in my documentation. I've got a couple of different instances of postgres running, and I always have to check the docs to find out where my config file is.

Well, the table I told everyone about yesterday won't let me run pg_dump against it. My hesitancy comes from not knowing what the result will be of dumping the other databases. I've had the experience of dumping my "can of worms" out and not getting them back in the same "can" when I was done with them. =)

Carol

On May 23, 2008, at 2:02 PM, Jeff Frost wrote:
> What's the error?  All I see in this email is a warning about
> max_fsm_pages, and that should not stop you from doing a pg_dump.
On Fri, May 23, 2008 at 1:08 PM, Carol Walter <walterc@indiana.edu> wrote:
> Jeff,
>
> Thank you very much.  I didn't know about this command.  There are lots of
> times this would have saved me looking it up in my documentation.  I've got
> a couple of different instances of postgres running, and I always have to
> check the docs to find out where my config file is.
>
> Well, the table I told everyone about yesterday won't let me run pg_dump
> against it.  My hesitancy comes from not knowing what the result will be of
> dumping the other databases.  I've had the experience of dumping my "can of
> worms" out and not getting them back in the same "can" when I was done with
> them. =)

What error are you getting when you try to dump that table?
Hi, again.

I tried this and got the same message. It is as follows:

-bash-3.00$ /opt/csw/postgresql/bin/pg_ctl -D /dbpdisk/postgres/prod_823 -m fast stop
pg_ctl: PID file "/dbpdisk/postgres/prod_823/postmaster.pid" does not exist
Is server running?

This is the same error I got with the plain stop command. Are there other ideas?

Carol

On May 23, 2008, at 2:58 PM, Kevin Grittner wrote:
> "Three different shutdown methods can be selected with the -m option: ..."
> http://www.postgresql.org/docs/8.2/interactive/app-pg-ctl.html
Carol Walter wrote:
> Hi, again.
>
> I tried this and got the same message.  It is as follows:
>
> -bash-3.00$ /opt/csw/postgresql/bin/pg_ctl -D /dbpdisk/postgres/prod_823 -m fast stop
> pg_ctl: PID file "/dbpdisk/postgres/prod_823/postmaster.pid" does not exist
> Is server running?
>
> Are there other ideas?

Carol, what does:

  ps -ef | grep postgres

return?

If postgres is really running, there should be a postmaster process that shows which directory it's using as the data directory. It'll look something like this:

postgres 24080     1  0 09:49 ?  00:00:07 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data

If there isn't a postmaster, it's possible someone started postgres with the -D option directly.

If you have multiple postgres servers running on the same machine, you'll see multiple postmasters.

BTW: What operating system is this running under? If Linux, which distribution?
Hi, Jeff,

Here's the output from the command.

-bash-3.00$ ps -ef | grep post
postgres  9631  9629  0   Sep 24 ?      30:28 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
postgres  9633  9629  0   Sep 24 ?      96:31 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
postgres  9634  9629  0   Sep 24 ?      67:28 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
postgres  9629  9396  0   Sep 24 ?      13:26 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
postgres 12082 12029  0 17:57:06 pts/17  0:00 ps -ef
postgres 12029  3950  0 17:55:05 pts/17  0:00 -bash
postgres 12083 12029  0 17:57:06 pts/17  0:00 grep post

I'm running under Solaris 10. I don't have anything that looks exactly like what you have here, but postgres has got to be running. I can go in as the postgres user, connect to different databases, and select data from them. This installation was loaded by my colleague using a product called Blastwave, hence the csw directories.

By the way, and not incidentally, thank you for the help.

Carol

On May 23, 2008, at 5:15 PM, Jeff Frost wrote:
> If postgres is really running, there should be a postmaster process
> that shows which directory it's using as the data directory.
>
> BTW: What operating system is this running under?  If Linux, which distribution?
Carol Walter wrote:
> -bash-3.00$ ps -ef | grep post
> postgres  9631  9629  0   Sep 24 ?  30:28 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
> postgres  9633  9629  0   Sep 24 ?  96:31 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
> postgres  9634  9629  0   Sep 24 ?  67:28 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
> postgres  9629  9396  0   Sep 24 ?  13:26 /opt/csw/postgresql/bin/sparcv8/postgres -D /dbpdisk/postgres/prod_823
> postgres 12082 12029  0 17:57:06 pts/17  0:00 ps -ef
> postgres 12029  3950  0 17:55:05 pts/17  0:00 -bash
> postgres 12083 12029  0 17:57:06 pts/17  0:00 grep post
>
> I'm running under Solaris 10.  I don't have anything that looks exactly
> like what you have here, but postgres has got to be running.  I can go
> in as the postgres user, connect to different databases, and select
> data from them.  This installation was loaded by my colleague using a
> product called Blastwave, hence the csw directories.
>
> By the way, and not incidentally, thank you for the help.

Looks like pid 9629 is the parent of all of them, so if you simply:

  kill 9629

they should all exit gracefully. You can check this with the same ps command. Then you should be able to run:

  pg_ctl -D /dbpdisk/postgres/prod_823 start

as the postgres user to start it back up.

I think you might want to have a look at /etc/init.d/postgresql (or whatever the startup script is), because it doesn't look to be using pg_ctl to start postgresql since the pid file was missing, or ask your colleague how it was started in the first place.

I'm not sure whether on Solaris the ps output gets changed like it does on Linux to reflect each process's role. On Linux those would probably look more like this:

postgres  2525     1  0 May02 ?  00:01:04 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres  2649  2525  0 May02 ?  00:00:21 postgres: logger process
postgres  2662  2525  0 May02 ?  00:03:18 postgres: writer process
postgres  2663  2525  0 May02 ?  00:02:12 postgres: wal writer process
postgres  2664  2525  0 May02 ?  00:07:16 postgres: autovacuum launcher process
postgres  2665  2525  0 May02 ?  00:04:56 postgres: stats collector process

--
Jeff Frost, Owner   <jeff@frostconsultingllc.com>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908   FAX: 650-649-1954
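Put together, the recovery sequence described above would look something like the sketch below, run as the postgres user. The PID and paths are the ones from the ps output earlier in this thread; a plain kill sends SIGTERM, which the postmaster treats as a "smart" shutdown request.

  kill 9629                                     # ask the postmaster to shut down cleanly
  ps -ef | grep post                            # repeat until the postgres processes are gone
  pg_ctl -D /dbpdisk/postgres/prod_823 start    # start the cluster back up with pg_ctl
  vacuumdb --all --analyze                      # then vacuum with the larger max_fsm_pages in effect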