Re: R: R: slow seqscan after vacuum analize
From | Christopher Browne |
Subject | Re: R: R: slow seqscan after vacuum analize |
Date | |
Msg-id | m3d68unops.fsf@wolfe.cbbrowne.com |
In reply to | Re: R: slow seqscan after vacuum analize ("David F. Skoll" <dfs@roaringpenguin.com>) |
Responses | Re: R: R: slow seqscan after vacuum analize |
List | pgsql-admin |
In the last exciting episode, eddy@axa.it ("Edoardo Ceccarelli") wrote:

> Yes, you are right but it wasn't the case this time, I have run the
> explain plenty of times with same results. I think that the reason
> was that I made a simple VACUUM, after a VACUUM FULL ANALYZE (1h!!)
> things are ok

It sounds as though you weren't vacuuming ("just plain vacuum") often enough.

What tends to go wrong is that the table winds up with a LOT of empty space, because many updates accumulate without the dead tuples being cleaned out. The table ends up big, with no way to shrink it short of the cost of a VACUUM FULL. If you vacuumed more often, the table would likely stay smaller, which is sure to be helpful.

Another factor worth considering: if a few values are very common in the field you are selecting on, the query optimizer can become (wrongly) convinced that a Seq Scan is the best choice. Using ALTER TABLE T ALTER COLUMN C SET STATISTICS [some value] to increase the number of "bins" can be helpful in such cases. (My pet theory is that the present default value of 10 is a little low, and that a lot of optimizer errors might be resolved by bumping it up a bit...)

--
(format nil "~S@~S" "cbbrowne" "ntlug.org")
http://www.ntlug.org/~cbbrowne/sgml.html
But what can you do with it? -- ubiquitous cry from Linux-user partner.
-- Andy Pearce, <ajp@hpopd.pwd.hp.com>
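[Editor's note: the advice above can be sketched in SQL as follows; the table `t`, column `c`, and the statistics value 100 are hypothetical placeholders, not from the original thread.]

```sql
-- Routine maintenance: a plain VACUUM marks dead tuples reusable
-- without an exclusive lock; run it often enough to avoid bloat.
VACUUM ANALYZE t;

-- Once bloat has accumulated, only VACUUM FULL compacts the table,
-- at the cost of a heavy lock and a long run (the "1h!!" above).
VACUUM FULL ANALYZE t;

-- If the planner wrongly favors a Seq Scan because a few values
-- dominate column c, raise the per-column statistics target
-- (default 10 at the time) so ANALYZE keeps more "bins":
ALTER TABLE t ALTER COLUMN c SET STATISTICS 100;
ANALYZE t;  -- re-gather statistics under the new target
```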