Re: Out of Memory - 8.2.4
| From | Alvaro Herrera |
| --- | --- |
| Subject | Re: Out of Memory - 8.2.4 |
| Date | |
| Msg-id | 20070829184944.GP7911@alvh.no-ip.org |
| In reply to | Re: Out of Memory - 8.2.4 (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: Out of Memory - 8.2.4 |
| List | pgsql-general |
Tom Lane wrote:

> Alvaro Herrera <alvherre@commandprompt.com> writes:
> >> Given that the worst-case consequence is extra index vacuum passes,
> >> which don't hurt that much when a table is small, maybe some smaller
> >> estimate like 100 TIDs per page would be enough.  Or, instead of
> >> using a hard-wired constant, look at pg_class.reltuples/relpages
> >> to estimate the average tuple density ...
>
> > This sounds like a reasonable compromise.
>
> Do you want to make it happen?

I'm not having much luck, really.  I think the problem is that ANALYZE stores reltuples as the number of live tuples, so if you delete a big portion of a big table, then ANALYZE and then VACUUM, there's a huge misestimation and extra index cleanup passes happen, which is a bad thing.  There seems to be no way to estimate the dead space, is there?  We could go to pgstats, but that seems backwards.

I was having a problem at first with estimating for small tables which had no valid info in pg_class.reltuples, but I worked around that by using MaxHeapTuplesPerPage.  (I was experimenting with the code that estimates average tuple width in estimate_rel_size(), but then figured it was too much work.)  So this part is fine AFAICS.

I attach the patch I am playing with, and the simple test I've been examining (on which I comment out the ANALYZE on some runs, change the conditions on the DELETE, put the CREATE INDEX before insertion instead of after it, etc).

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
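[Editor's note] For readers following along, here is a minimal C sketch of the kind of estimate being discussed. It is not the patch attached to this message; the function name and the exact clamping are hypothetical, and it only illustrates using pg_class.reltuples/relpages as an average tuple density with a MaxHeapTuplesPerPage fallback for tables that have no valid stats yet. It assumes the usual backend headers (postgres.h, access/htup.h, storage/itemptr.h, utils/rel.h, miscadmin.h).

```c
/*
 * Hypothetical sketch (not the attached patch): size the dead-TID array
 * for lazy VACUUM from the average tuple density implied by
 * pg_class.reltuples / pg_class.relpages, falling back to the worst case
 * (MaxHeapTuplesPerPage) when the relation has no valid stats yet.
 */
static long
guess_max_dead_tuples(Relation onerel, BlockNumber nblocks)
{
	Form_pg_class	classForm = onerel->rd_rel;
	double			tuples_per_page;
	long			maxtuples;

	if (classForm->relpages > 0 && classForm->reltuples > 0)
		tuples_per_page = classForm->reltuples / classForm->relpages;
	else
		tuples_per_page = MaxHeapTuplesPerPage;	/* no stats: assume worst case */

	maxtuples = (long) (tuples_per_page * nblocks);

	/* Never ask for more TIDs than maintenance_work_mem can hold ... */
	maxtuples = Min(maxtuples,
					(long) ((maintenance_work_mem * 1024L) / sizeof(ItemPointerData)));
	/* ... and always leave room for at least one page's worth. */
	maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);

	return maxtuples;
}
```

Under a scheme like this, the problem described above falls out directly: ANALYZE records only live tuples in reltuples, so right after a large DELETE the density underestimates the dead tuples VACUUM will actually find, and extra index cleanup passes result.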
Attachments