Discussion: reclaiming disk space after major updates
Our usage pattern has recently left me with some very bloated database clusters. In the past I have scheduled downtime to run VACUUM FULL (and have tried CLUSTER as well), followed by a REINDEX on all tables. This does work, but the exclusive lock has become a real thorn in my side. As our system grows, I am having trouble scheduling enough downtime for either of these operations or for a full dump/reload.

I do run VACUUM regularly; it's just that sometimes we need to go back and update a huge percentage of rows in a single batch due to changing customer requirements, leaving us with significant table bloat. Within the last few days my db cluster has grown from 290GB to 370GB, and because of some other major data updates on my TO-DO list, I expect this to double and I'll be bumping up against my storage capacity.

The root of my question is that I don't understand why the tables can't be in read-only mode while one of these operations is running. Since most of our usage is OLAP, this really wouldn't matter much as long as the users could still query their data in the meantime. Is there some way I can allow users read-only access to this data while things are cleaned up in the background? INSERTs can wait; SELECTs cannot.

So how do other people handle such a problem when downtime is heavily frowned upon? We have 24/7 access (but again, the users only read data).
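For reference, the downtime maintenance described above amounts to something like the following sketch; the table and index names (events, events_pkey) are placeholders, not anything from the actual schema:

    -- VACUUM FULL rewrites the table and holds an exclusive lock while it runs
    VACUUM FULL events;
    -- CLUSTER is an alternative rewrite that also orders rows by an index
    CLUSTER events USING events_pkey;
    -- rebuild the indexes afterwards, as described in the post above
    REINDEX TABLE events;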
On Wed, Jun 06, 2007 at 04:04:44PM -0600, Dan Harris wrote:
> of these operations or for a full dump/reload. I do run VACUUM regularly;
> it's just that sometimes we need to go back and update a huge percentage of
> rows in a single batch due to changing customer requirements, leaving us
> with significant table bloat.

Do you need to update those rows in one transaction (i.e. is the requirement that they all get updated such that the change only becomes visible at once)? If not, you can do this in batches and vacuum in between. Batch updates are the prime sucky area in Postgres.

Another trick, if the table is otherwise mostly static, is to do the updating in a copy of the table, and then use the transactional DDL features of Postgres to change the table names.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
Everything that happens in the world happens at some place.
    --Jane Jacobs
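A rough sketch of the copy-and-rename trick described above, assuming a mostly static table; the names (events, events_new, events_old) and the rewrite itself are placeholders:

    BEGIN;
    -- build a compact, rewritten copy of the table
    CREATE TABLE events_new AS
        SELECT *        -- apply the bulk rewrite here instead of a plain copy
        FROM events;
    -- recreate any indexes and constraints on events_new before the swap
    -- the renames are DDL, so the swap becomes visible atomically at COMMIT
    ALTER TABLE events RENAME TO events_old;
    ALTER TABLE events_new RENAME TO events;
    COMMIT;
    -- DROP TABLE events_old;  -- once the new table checks out

Readers only see the new table at COMMIT; anything that writes into the table concurrently would have to be paused or redirected during the rebuild, which is the limitation raised in the reply below.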
Andrew Sullivan wrote:
> On Wed, Jun 06, 2007 at 04:04:44PM -0600, Dan Harris wrote:
>> of these operations or for a full dump/reload. I do run VACUUM regularly;
>> it's just that sometimes we need to go back and update a huge percentage of
>> rows in a single batch due to changing customer requirements, leaving us
>> with significant table bloat.
>
> Do you need to update those rows in one transaction (i.e. is the
> requirement that they all get updated such that the change only
> becomes visible at once)? If not, you can do this in batches and
> vacuum in between. Batch updates are the prime sucky area in
> Postgres.

They don't always have to be in a single transaction. Breaking it up and vacuuming in between is a good idea; I'll consider that. Thanks.

> Another trick, if the table is otherwise mostly static, is to do the
> updating in a copy of the table, and then use the transactional DDL
> features of Postgres to change the table names.

I thought of this, but it seems to break other application logic that feeds a steady stream of inserts into the tables.

Thanks again for your thoughts. I guess I'll just have to work around this problem in application logic.
On Thu, Jun 07, 2007 at 03:26:56PM -0600, Dan Harris wrote:
>
> They don't always have to be in a single transaction. Breaking it up and
> vacuuming in between is a good idea; I'll consider that. Thanks.

If you can do it this way, it helps _a lot_. I've had to do this sort of thing, and breaking the update into groups of a couple thousand rows or so really made the difference.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
Unfortunately reformatting the Internet is a little more painful than
reformatting your hard drive when it gets out of whack.
    --Scott Morris
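As an illustration of the batching approach discussed above (not from the thread itself), one way to spread a large UPDATE over keyed batches with a plain VACUUM between them; the table, column, key ranges, and batch size are made up:

    -- first batch of a few thousand rows
    UPDATE events SET status = 'archived' WHERE id BETWEEN 1 AND 5000;
    -- VACUUM cannot run inside a transaction block, so issue it separately;
    -- it marks the dead row versions left by the batch above as reusable space
    VACUUM events;

    UPDATE events SET status = 'archived' WHERE id BETWEEN 5001 AND 10000;
    VACUUM events;

    -- ...and so on across the remaining key range

Plain VACUUM between batches does not shrink the files, but it lets each batch reuse the space freed by the previous one, so the table stops growing the way a single huge UPDATE would make it grow.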