Re: Weird issue with planner choosing seq scan
From | Sean Leach |
---|---|
Subject | Re: Weird issue with planner choosing seq scan |
Date | |
Msg-id | C2823FCB-7B73-4838-B792-E1B73C65A254@wiggum.com |
In reply to | Re: Weird issue with planner choosing seq scan ("Stephen Denne" <Stephen.Denne@datamail.co.nz>) |
Replies | Re: Weird issue with planner choosing seq scan, Re: Weird issue with planner choosing seq scan |
List | pgsql-performance |
On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:
>
>> So should I do a vacuum full and then hope this doesn't happen again?
>> Or should I run a VACUUM FULL after each aggregation run?
>
> If your usage pattern results in generating all of that unused space
> in one transaction, and no further inserts or updates to that table
> till next time you run the same process, then my guess is that you
> probably should run a vacuum full on that table after each
> aggregation run. In that case you wouldn't have to increase
> max_fsm_pages solely to keep track of large amount of unused space
> in that table, since you're cleaning it up as soon as you're
> generating it.
>
> You earlier had 5.5 million row versions, 2 million of them dead but
> not yet removable, and you said (even earlier) that the table had
> 3.3 million rows in it.
> You now say you've got 6.2 million row versions (with the same 2M
> dead). So it looks like you're creating new row versions at quite a
> pace, in which case increasing max_fsm_pages, and not worrying about
> doing a vacuum full _every_ time is probably a good idea.

So 281727 should be the minimum I bump it to, correct?

> Have you checked Scott Marlowe's note:
>
>>> unless you've got a long running transaction
>
> How come those 2 million dead rows are not removable yet? My guess
> (based on a quick search of the mailing lists) would be that they
> were generated from your aggregation run, and that a long running
> transaction started before your aggregation run committed (possibly
> even before it started), and that transaction is still alive.
>
> Alternatively, it may be a different 2 million dead row versions now
> than earlier, and may simply be a side effect of your particular
> usage, and nothing to worry about. (Though it is exactly the same
> number of rows, which strongly hints at being exactly the same rows.)

Great detective work, you are correct.
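For anyone else chasing "dead but not yet removable" rows, one way to spot the long-running transaction Stephen describes is to query pg_stat_activity. This is just a sketch: the xact_start column exists from 8.3 on, and the procpid/current_query column names are 8.x-era (later major versions renamed them to pid/query):

```sql
-- Sketch only, assuming an 8.3-era pg_stat_activity.
-- Sessions whose transaction started long ago pin dead row
-- versions, so VACUUM reports them as "not yet removable".
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;
```

Any row here with an xact_start from before your aggregation run is a candidate for the transaction holding those 2 million dead rows.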
We have a daemon that runs constantly, adding new data to that table, and then we aggregate it daily (I said weekly before; I was incorrect), which deletes several rows as it updates a bunch of others. So it sounds like upping max_fsm_pages is the best option. Thanks again everyone!
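As a rough sketch of the change being discussed (values illustrative; 281727 is the page figure from earlier in this thread, and how much headroom to add on top of it is a judgment call):

```
# postgresql.conf -- illustrative values only
# max_fsm_pages must cover the peak number of pages with free space
# across ALL tables; round the reported figure up for headroom.
max_fsm_pages = 500000

# Raise this too if you have more than ~1000 tables/indexes.
max_fsm_relations = 1000
```

Note that max_fsm_pages can only be changed with a server restart in 8.x, and the tail of a database-wide VACUUM VERBOSE reports whether the current setting is large enough.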