Re: Postgres performance slowly gets worse over a month
From: Michael G. Martin
Subject: Re: Postgres performance slowly gets worse over a month
Date:
Msg-id: 3D4153A4.3080604@vpmonline.com
In reply to: Re: Postgres performance slowly gets worse over a month (Joshua Daniel Franklin <joshuadfranklin@yahoo.com>)
Responses: Re: Postgres performance slowly gets worse over a month
List: pgsql-admin
I believe the more frequently you vacuum, the faster it goes, so that may be the driving factor in deciding. Personally, each day I'd add the new tuples, then remove the no-longer-needed tuples, make sure max_fsm_pages is large enough to handle all the pages touched in the largest table, and then run a vacuum analyze on the table or the entire database. Run it each night and it will be nice and fast, and you shouldn't ever need to worry about locking the entire table with a vacuum full or spend time re-creating the table and indexes.
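Something along these lines, run from cron each night, is a minimal sketch of that routine; big_table, created_at, and the six-month window are made-up names for illustration, not anything from this thread:

-- Nightly maintenance sketch (PostgreSQL 7.2-era settings); big_table and
-- created_at are hypothetical names.

-- 1. Load the day's new tuples here (COPY or INSERT, however you load them).

-- 2. Remove the tuples that are no longer needed.
DELETE FROM big_table WHERE created_at < now() - interval '6 months';

-- 3. Lazy vacuum plus analyze: freed pages go back on the free-space map and
--    planner statistics stay current, with no exclusive table lock.
VACUUM ANALYZE big_table;

-- In postgresql.conf, make sure the free-space map covers every page the
-- DELETE touches in the largest table (the default is 10000):
--   max_fsm_pages = 100000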
That's what I do, and I think it's the most automated, maintenance-free solution. I currently run a lazy vacuum analyze each night after making my large changes. My tables don't change enough during the day to require mid-day vacuums.

--Michael
Joshua Daniel Franklin wrote:
> > I played with this tonight, writing a small insert/update routine and
> > frequent vacuums. Here is what I came up with (PostgreSQL 7.2.1).
>
> This is some great info, thanks.
>
> > In addition, max_fsm_pages has an impact on how many pages will be
> > available to be marked as re-usable. If you have a huge table and changes
> > are impacting more than the default 10,000 pages this is set to, you will
> > want to bump this number up. My problem was that I saw my UnUsed tuples
> > always growing and not being re-used until I bumped this value up. As I
> > watched the vacuum verbose output each run, I noticed more than 10k pages
> > were in fact changing between vacuums.
>
> This has made me think about something we've been doing. We've got one db
> that is used basically read-only; every day ~15000 records are added, but
> very rarely are any deleted. What we've been doing is just letting it sit
> until it gets close to too big for the filesystem, then lopping off the
> earliest 6 months' worth of records. The question is: is it best to do this,
> then set max_fsm_pages to a huge number and vacuum full? Or should I change
> it so scripts remove the oldest day and vacuum before adding the next day's?
> Or just rebuild the db every time. :)
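For what it's worth, the page counts in the verbose output are the easiest way to judge whether max_fsm_pages is big enough; a rough sketch, again with a placeholder table name:

-- Run after the nightly delete; the verbose output reports, per table, how
-- many pages had tuples removed and how many tuples remain UnUsed.
VACUUM VERBOSE ANALYZE big_table;

-- If more pages change between vacuums than max_fsm_pages covers (default
-- 10000), the freed space stops being re-used and UnUsed keeps growing.
-- Raise it in postgresql.conf and restart the postmaster, since the setting
-- only takes effect at server start:
--   max_fsm_pages = 100000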