Re: database question
From | john.crawford@sirsidynix.com
Subject | Re: database question
Date |
Msg-id | 22b026dd-e233-44c0-8098-1719b296fd01@x41g2000hsb.googlegroups.com
In reply to | database question (john.crawford@sirsidynix.com)
List | pgsql-general
> So the answer is you've got something that's gone hog-wild on creating
> large objects and not deleting them; or maybe the application *is*
> deleting them but pg_largeobject isn't getting vacuumed.
>
> regards, tom lane

Hi all, thanks for the advice.

I ran the script for large files and the largest is 3GB, followed by one of 1GB, then another 18 files that total about 3GB between them. So roughly 7GB in total, on a 100GB partition that has 99GB used. All of this is in the data/base/16450 directory, in these large 1GB files.

If I look in the Postgres logs I can see a vacuum happening every 20 minutes, in that it says "autovacuum: processing database "db name"", but nothing else. How do I know if the vacuum is actually doing anything? What is pg_largeobject and what can I check with it? (Sorry, I did say I was a real novice.)

Really appreciate your help guys.

John
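For what it's worth, here is a minimal sketch of the kind of checks being suggested, run from psql while connected to the database in question (the one behind data/base/16450). It assumes a server recent enough to have pg_total_relation_size()/pg_size_pretty() and the last_vacuum/last_autovacuum columns in pg_stat_all_tables (roughly 8.1/8.2 onward); adjust to taste.

    -- 1. Which relations do those 1GB segment files actually belong to?
    SELECT c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
    FROM pg_class c
    WHERE c.relkind IN ('r', 't', 'i')   -- tables, TOAST tables, indexes
    ORDER BY pg_total_relation_size(c.oid) DESC
    LIMIT 10;

    -- 2. How big is pg_largeobject, and has (auto)vacuum ever touched it?
    SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_all_tables
    WHERE relname = 'pg_largeobject';

    -- 3. If it has never been vacuumed, a manual verbose vacuum
    --    (typically needs superuser) reports how many dead rows it finds
    --    and whether space can be reclaimed.
    VACUUM VERBOSE pg_largeobject;

If the first query shows pg_largeobject taking up most of the space and the second shows it never being vacuumed, that would line up with Tom's second explanation above.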