Discussion: Strange VACUUM behaviour
Hi
We started a VACUUM (not a VACUUM FULL) on one of our postgres 7.4.9
databases a few days ago. It's still running, and says the
following about once per second:
INFO: index "pg_toast_2144146_index" now contains 1971674 row versions
in 10018 pages
DETAIL: 4 index row versions were removed.
2489 index pages have been deleted, 0 are currently reusable.
The number of row versions decreases by 4 each time the message is logged.
The file belonging to pg_toast_2144146_index is about 80MB;
for pg_toast_2144146 there are 6 files, five of them are
1GB, and the last one is about 5MB in size. The "original" relation
(the one that references pg_toast_2144146 in its reltoastrelid field)
has one datafile of 11MB.
The "original" relation is called image, and is defined the following:
Table "public.image"
Column | Type | Modifiers
---------------+------------------------+-----------
id | bigint | not null
image_code_id | bigint |
mandant_id | bigint |
name | text |
dat | text |
mime | text |
size | bigint |
md5 | bytea |
path | text |
copyright | character varying(255) |
Indexes:
"image_pkey" primary key, btree (id)
"i_image_id" btree (id)
Triggers:
_gti_denyaccess_17 BEFORE INSERT OR DELETE OR UPDATE ON image FOR
EACH ROW EXECUTE PROCEDURE _gti.denyaccess('_gti')
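For reference, the toast relation and its index (and their approximate
sizes) can also be read from the catalogs instead of the data files -
roughly something like this, assuming the usual 7.4 pg_class columns and
reasonably fresh statistics:

-- Sketch: find the toast table and toast index for "image" via
-- pg_class.reltoastrelid / reltoastidxid; relpages is counted in 8 kB
-- blocks and is only as current as the last VACUUM/ANALYZE.
SELECT c.relname  AS base_table,
       t.relname  AS toast_table,
       t.relpages AS toast_pages,
       i.relname  AS toast_index,
       i.relpages AS toast_index_pages
  FROM pg_class c
  JOIN pg_class t ON t.oid = c.reltoastrelid
  JOIN pg_class i ON i.oid = t.reltoastidxid
 WHERE c.relname = 'image';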
The table is part of a slony table set that this database is subscribed to.
Is there a reason that this vacuum takes so long? Maybe some lock
contention because slony replicates into this table?
greetings, Florian Pflug
One issue is that pg_toast tables can't vacuum rows until their
respective rows have been deleted by vacuuming the base table. But it's
still odd that the count decreases by 4 each time you run it.
As for the length of time, that could be due to heavily loaded hardware.
You might do better if you increase vacuum_memory (or whatever the
setting was called in 7.4...)
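If memory serves, in 7.4 it's vacuum_mem, and the value is specified in
kB, so roughly something like this (just a sketch):

-- vacuum_mem is measured in kB; 262144 kB = 256 MB (set per-session here)
SET vacuum_mem = 262144;
VACUUM VERBOSE image;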
That index does have about 20% bloat, though, so a reindex would probably
be a good idea.
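Something along these lines, for example (a sketch - REINDEX takes an
exclusive lock while it runs, and touching the toast index directly may
require superuser):

-- Rebuild the bloated toast index reported in the log output:
REINDEX INDEX pg_toast.pg_toast_2144146_index;
-- Or rebuild the base table's indexes:
REINDEX TABLE image;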
You might ask on the slony list...
On Fri, Nov 25, 2005 at 02:34:45PM +0100, Florian G. Pflug wrote:
> Hi
>
> We started a VACUUM (not a VACUUM FULL) on one of our postgres 7.4.9
> databases a few days ago. It's still running, and says the
> following about once per second:
>
> INFO: index "pg_toast_2144146_index" now contains 1971674 row versions
> in 10018 pages
> DETAIL: 4 index row versions were removed.
> 2489 index pages have been deleted, 0 are currently reusable.
>
> The number of row versions decreases by 4 each time the message is logged.
>
> The file belonging to pg_toast_2144146_index is about 80MB;
> for pg_toast_2144146 there are 6 files, five of them are
> 1GB, and the last one is about 5MB in size. The "original" relation
> (the one that references pg_toast_2144146 in its reltoastrelid field)
> has one datafile of 11MB.
>
> The "original" relation is called image, and is defined the following:
> Table "public.image"
> Column | Type | Modifiers
> ---------------+------------------------+-----------
> id | bigint | not null
> image_code_id | bigint |
> mandant_id | bigint |
> name | text |
> dat | text |
> mime | text |
> size | bigint |
> md5 | bytea |
> path | text |
> copyright | character varying(255) |
> Indexes:
> "image_pkey" primary key, btree (id)
> "i_image_id" btree (id)
> Triggers:
> _gti_denyaccess_17 BEFORE INSERT OR DELETE OR UPDATE ON image FOR
> EACH ROW EXECUTE PROCEDURE _gti.denyaccess('_gti')
>
> The table is part of a slony table set that this database is subscribed to.
>
> Is there a reason that this vacuum takes so long? Maybe some lock
> contention because slony replicates into this table?
>
> greetings, Florian Pflug
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
On Tue, November 29, 2005 0:37, Jim C. Nasby said:
> One issue is that pg_toast tables can't vacuum rows until their
> respective rows have been deleted by vacuuming the base table. But it's
> still odd that the count decreases by 4 each time you run it.

So, VACUUM <big-table> would first vacuum <big-table>, then
pg_toast_<big-table-oid>, and finally pg_toast_<big-table-oid>_index?

> As for the length of time, that could be due to heavily loaded hardware.
> You might do better if you increase vacuum_memory (or whatever the
> setting was called in 7.4...)

Well, the hardware is a few years old, and vacuum runs used to "take
their time" - but always in the range of a few hours, never a few days.
vacuum_mem is already set to 256MB. The CPU load was quite high, though
(the VACUUM process continuously used about 30% CPU) - which is strange,
since VACUUM is supposed to be IO-bound, isn't it?

> That index does have about 20% bloat though; so a reindex would probably
> be a good idea.

Will it help if I REINDEX the <big-table>? Will that automatically
REINDEX the toast indexes too?

BTW - where do I find information about the internal workings of
TOAST tables? I learned during this problem that I don't really know
how these things work.

greetings, Florian Pflug
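PS: the value the backend actually ended up with can be double-checked
with a plain SHOW, e.g.:

SHOW vacuum_mem;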
Jim C. Nasby wrote:
>> We started a VACUUM (not a VACUUM FULL) on one of our postgres 7.4.9
>> databases a few days ago. It's still running, and says the
>> following about once per second:
>>
>> INFO: index "pg_toast_2144146_index" now contains 1971674 row versions
>> in 10018 pages
>> DETAIL: 4 index row versions were removed.
>> 2489 index pages have been deleted, 0 are currently reusable.
>>
>> The number of row versions decreases by 4 each time the message is logged.
>>
>> The file belonging to pg_toast_2144146_index is about 80MB;
>> for pg_toast_2144146 there are 6 files, five of them are
>> 1GB, and the last one is about 5MB in size. The "original" relation
>> (the one that references pg_toast_2144146 in its reltoastrelid field)
>> has one datafile of 11MB.
> One issue is that pg_toast tables can't vacuum rows until their
> respective rows have been deleted by vacuuming the base table. But it's
> still odd that the count decreases by 4 each time you run it.
>
> As for the length of time, that could be due to heavily loaded hardware.
> You might do better if you increase vacuum_memory (or whatever the
> setting was called in 7.4...)
>
> That index does have about 20% bloat though; so a reindex would probably
> be a good idea.

Hi

Just for the archives - I finally solved the problem, and of course it
was me being a dumbass. I wanted to set vacuum_mem to 256MB, but
overlooked that the setting is in kB, not in bytes - so the value I set
was actually 256GB, which seems to have overflowed to some awfully
small value and caused the vacuum to run in really small steps...

Might be worth an overflow check and/or some notice in the docs, though.

greetings, Florian Pflug
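PS: in other words, roughly what was intended versus what got configured
(illustrative values only - the exact bytes figure isn't shown above):

-- Intended: 256 MB, with vacuum_mem measured in kB
SET vacuum_mem = 262144;
-- A bytes-style figure instead works out to ~256 GB once interpreted as
-- kB; 7.4 accepted it and it overflowed internally to a tiny value
SET vacuum_mem = 256000000;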
"Florian G. Pflug" <fgp@phlo.org> writes:
> INFO: index "pg_toast_2144146_index" now contains 1971674 row versions
> in 10018 pages
> DETAIL: 4 index row versions were removed.
> 2489 index pages have been deleted, 0 are currently reusable.
> Just for the archives - I finally solved the problem, and of course
> it was me being a dumbass. I wanted to set vacuum_mem to 256MB, but
> overlooked that the setting is in kB, not in bytes - so the value
> I set was actually 256GB, which seems to have overflowed to some
> awfully small value and caused the vacuum to run in really small steps...
Ah-hah, I should have thought of that :-(. I thought the 4-at-a-time
thing was pretty odd ...
> Might be worth an overflow check and/or some notice in the docs, though
There is an overflow check for this and other memory-size parameters in
PG 8.1.
regression=# set maintenance_work_mem = 256000000;
ERROR: 256000000 is outside the valid range for parameter "maintenance_work_mem" (1024 .. 2097151)
regards, tom lane