Re: new vacuum is slower for small tables
From | Pavel Stehule
---|---
Subject | Re: new vacuum is slower for small tables
Date |
Msg-id | 162867790812080740i2e77acd4g59fcbe71ef9d072c@mail.gmail.com
In reply to | Re: new vacuum is slower for small tables (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
Responses | Re: new vacuum is slower for small tables
List | pgsql-hackers
2008/12/8 Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>:
> Pavel Stehule wrote:
>> I did some small tests and found that for small tables (fewer than 1000
>> rows) VACUUM based on the visibility map is slower than the old
>> implementation: about 5 ms vs. 20 ms.
>
> How did you measure that?

It's a simple test:

create table x(a integer, b integer);
insert into x select i, i from generate_series(1,1000) g(i);

and then vacuum on 8.3.5 and vacuum on current CVS HEAD. In both cases the
table is read from cache.

> I tried to reproduce that here, and indeed it seems to be much slower on
> CVS HEAD than on PG 8.3. I tried to short-circuit the vacuum completely,
> making it a no-op:
>
> diff --git a/src/backend/commands/vacuumlazy.c
> b/src/backend/commands/vacuumlazy.c
> index 475c38a..c31917d 100644
> --- a/src/backend/commands/vacuumlazy.c
> +++ b/src/backend/commands/vacuumlazy.c
> @@ -275,6 +275,7 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
>
>         lazy_space_alloc(vacrelstats, nblocks);
>
> +       nblocks = 0;
>         for (blkno = 0; blkno < nblocks; blkno++)
>         {
>                 Buffer buf;
>
> but that made no difference at all; vacuuming a one-page table on CVS HEAD
> with that hack is still slower than PG 8.3 without that hack. Which
> suggests that the slowdown is not related to the visibility map.
>
> Oprofile suggests that most of the time is actually spent in
> pgstat_vacuum_stat, and more precisely in pgstat_collect_oids, which is
> called by pgstat_vacuum_stat.
>
> We added support for tracking call counts and elapsed runtime of
> user-defined functions back in May. That added code to pgstat_vacuum_stat
> to tell the stats collector about deleted functions, which involves
> populating a hash table of all functions in the database. It looks like
> *that's* what's causing the slowdown.
>
> I think we can live with the extra overhead.
>
> --
> Heikki Linnakangas
> EnterpriseDB   http://www.enterprisedb.com
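[Editor's note: Heikki's diagnosis is that the regression comes from a fixed per-vacuum cost in pgstat_vacuum_stat that does not depend on table size, which is why only small tables show it. The toy model below (plain Python, not PostgreSQL code; all timings are illustrative assumptions loosely matching the 5 ms vs. 20 ms figures in the thread) sketches why a constant overhead dominates small-table vacuums but vanishes into the noise for large ones.]

```python
# Toy model of the reported regression: total vacuum time is a fixed
# per-call overhead plus a per-page scan cost. The fixed overhead on
# HEAD includes building a hash table of all functions in the database
# (pgstat_vacuum_stat); on 8.3 that step did not exist. All numbers
# here are illustrative, not measured PostgreSQL values.

def vacuum_time_ms(pages, fixed_overhead_ms, per_page_ms=0.05):
    """Model: time = fixed per-vacuum overhead + pages * per-page cost."""
    return fixed_overhead_ms + pages * per_page_ms

# Hypothetical fixed overheads: 2 ms (8.3-style) vs. 17 ms (HEAD-style,
# including the function-OID hash build).
small_old = vacuum_time_ms(5, fixed_overhead_ms=2.0)    # tiny table
small_new = vacuum_time_ms(5, fixed_overhead_ms=17.0)

big_old = vacuum_time_ms(100_000, fixed_overhead_ms=2.0)   # large table
big_new = vacuum_time_ms(100_000, fixed_overhead_ms=17.0)

print(f"small table: {small_new / small_old:.1f}x slower")
print(f"large table: {big_new / big_old:.3f}x slower")
```

Under these assumed numbers the small-table vacuum is several times slower while the large-table vacuum is essentially unchanged, matching the observation that short-circuiting the page scan made no difference.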