Re: Deteriorating performance when loading large objects
From        | Mario Weilguni
Subject     | Re: Deteriorating performance when loading large objects
Date        |
Msg-id      | 492FAC30.7080409@sime.com
In reply to | Re: Deteriorating performance when loading large objects  (Tom Lane <tgl@sss.pgh.pa.us>)
List        | pgsql-performance
Tom Lane schrieb:
> "Vegard Bønes" <vegard.bones@met.no> writes:
>> Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the
>> output:
>>
>> INFO:  vacuuming "pg_catalog.pg_largeobject"
>> INFO:  index "pg_largeobject_loid_pn_index" now contains 11060658 row
>> versions in 230587 pages
>> DETAIL:  178683 index pages have been deleted, 80875 are currently reusable.
>> CPU 0.92s/0.10u sec elapsed 199.38 sec.
>> INFO:  "pg_largeobject": found 0 removable, 11060658 nonremovable row
>> versions in 6849398 pages
>> DETAIL:  0 dead row versions cannot be removed yet.
>> There were 84508215 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 0.98s/0.10u sec elapsed 4421.17 sec.
>> VACUUM
>
> Hmm ... although you have no dead rows now, the very large number of
> unused item pointers suggests that there were times in the past when
> pg_largeobject didn't get vacuumed often enough. You need to look at
> your vacuuming policy. If you're using autovacuum, it might need to have
> its parameters adjusted. Otherwise, how often are you vacuuming, and
> are you doing it as superuser?
>
>> I will try to run VACUUM ANALYZE FULL after the next delete tonight, as
>> suggested by Ivan Voras in another post.
>
> Actually, a CLUSTER might be more effective.
>
> 			regards, tom lane

Does CLUSTER really help here? On my 8.2 database, I get:

    CLUSTER pg_largeobject_loid_pn_index on pg_largeobject;
    ERROR:  "pg_largeobject" is a system catalog

Has this changed in >= 8.3?
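As a rough sketch of the check Tom is suggesting, one way to see whether autovacuum is reaching pg_largeobject at all is to query the statistics views; the sample autovacuum settings in the comments are illustrative assumptions, not values recommended anywhere in this thread:

    -- When was pg_largeobject last vacuumed, manually or by autovacuum?
    SELECT relname, last_vacuum, last_autovacuum
      FROM pg_stat_sys_tables
     WHERE relname = 'pg_largeobject';

    -- Illustrative postgresql.conf settings for more frequent autovacuum
    -- runs (assumed values; tune for the actual workload):
    --   autovacuum = on
    --   autovacuum_naptime = 60                 # seconds
    --   autovacuum_vacuum_scale_factor = 0.05

Note that, as Tom's question implies, a manual VACUUM of pg_largeobject only works when run as a superuser, so a cron-driven vacuum running as an ordinary role would silently skip it.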