Discussion: autovacuum doesn't run on the pg_toast_id table
Hey,
I have a table with 3 columns, one of which is of type bytea: A(int, int, bytea).
Every row that I insert is pretty big, which is why PostgreSQL decided to store that column in a TOAST table (pg_toasted_Aid). I had a lot of bloat issues with that table, so I set the vacuum threshold of the original table (A) to 0.05. Table A usually has about 1000+ rows, but the TOAST table has more than 25M. Now, I realized from the autovacuum logging that when autovacuum runs on the original table (A) it doesn't necessarily run on the TOAST table, and this is very weird.
I tried to set the same threshold for the TOAST table but got an error that it is a catalog table and therefore permission is denied.
2019-01-17 12:04:15 EST db116109 ERROR: permission denied: "pg_toast_13388392" is a system catalog
2019-01-17 12:04:15 EST db116109 STATEMENT: alter table pg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);
An example for the autovacuum run :
2019-01-17 00:00:51 EST 15652 LOG: automatic vacuum of table "db1.public.A": index scans: 1
pages: 0 removed, 117 remain
tuples: 142 removed, 1466 remain
buffer usage: 162 hits, 34 misses, 29 dirtied
avg read rate: 1.356 MiB/s, avg write rate: 1.157 MiB/s
--
2019-01-17 00:07:51 EST 25666 LOG: automatic vacuum of table "db1.public.A": index scans: 1
pages: 0 removed, 117 remain
tuples: 144 removed, 1604 remain
buffer usage: 157 hits, 41 misses, 27 dirtied
avg read rate: 1.651 MiB/s, avg write rate: 1.087 MiB/s
--
2019-01-17 00:12:39 EST 3902 LOG: automatic vacuum of table "db1.pg_toast.pg_toast_13388392": index scans: 17
pages: 459 removed, 25973888 remain
tuples: 45130560 removed, 54081616 remain
buffer usage: 30060044 hits, 43418591 misses, 37034834 dirtied
avg read rate: 2.809 MiB/s, avg write rate: 2.396 MiB/s
--
2019-01-17 00:13:51 EST 2684 LOG: automatic vacuum of table "db1.public.A": index scans: 1
pages: 0 removed, 117 remain
tuples: 122 removed, 1470 remain
buffer usage: 152 hits, 41 misses, 30 dirtied
avg read rate: 2.981 MiB/s, avg write rate: 2.181 MiB/s
--
2019-01-17 00:19:51 EST 10935 LOG: automatic vacuum of table "db1.public.A": index scans: 1
pages: 0 removed, 117 remain
tuples: 120 removed, 1471 remain
buffer usage: 145 hits, 41 misses, 28 dirtied
avg read rate: 3.637 MiB/s, avg write rate: 2.484 MiB/s
--
2019-01-17 00:42:51 EST 24385 LOG: automatic vacuum of table "db1.public.A": index scans: 1
pages: 0 removed, 117 remain
tuples: 130 removed, 1402 remain
buffer usage: 175 hits, 76 misses, 34 dirtied
Any idea why autovacuum doesn't vacuum both tables?
On Thu, Jan 17, 2019 at 07:28:52PM +0200, Mariel Cherkassky wrote:
...
> Now, I realized from the autovacuum logging, that when autovacuum runs on
> the original table (A) it doesn't necessarily run on the toasted table and
> this is very weird.
...
> Any idea why the autovacuum doesn't vacuum both tables ?

It *does* vacuum both, just not *necessarily*, as you saw. The toast is a
separate table, so it's tracked separately. Note that:

|If a table parameter value is set and the
|equivalent <literal>toast.</literal> parameter is not, the TOAST table
|will use the table's parameter value.

You could look in pg_stat_all_tables, to see how frequently the toast is
being autovacuumed relative to its table.

Justin
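A minimal sketch of that check, using the table names from this thread (public.A and pg_toast_13388392; substitute your own — an unquoted table name would normally be lowercase in pg_class):

```sql
-- Find the TOAST table attached to A via pg_class.reltoastrelid:
SELECT c.relname, t.relname AS toast_relname
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'A';

-- Compare autovacuum activity on the main table and its TOAST table:
SELECT relname, last_autovacuum, autovacuum_count, n_dead_tup
FROM pg_stat_all_tables
WHERE relname IN ('A', 'pg_toast_13388392');
```

If autovacuum_count grows much faster on the main table than on the TOAST table, the two are simply crossing their thresholds at different rates.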
On 2019-Jan-17, Mariel Cherkassky wrote:

> I tried to set the same threshold for the toasted table but got an error
> that it is a catalog table and therefore permission is denied.
> 2019-01-17 12:04:15 EST db116109 ERROR: permission denied:
> "pg_toast_13388392" is a system catalog
> 2019-01-17 12:04:15 EST db116109 STATEMENT: alter table
> pg_toast.pg_toast_13388392 set (autovacuum_vacuum_scale_factor=0.05);

The right way to do this is

alter table main_table set (toast.autovacuum_vacuum_scale_factor = 0.05);

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I did it for the original table. But I see in the logs that the autovacuum on the toasted table isn't synced with the autovacuum of the original table. Therefore I thought it was worth setting it also for the toasted table. Can you explain why in the logs I see more vacuums of the original table than of the toasted table? Should they be vacuumed together?
On 2019-Jan-17, Mariel Cherkassky wrote:

> I did it for the original table. But I see in the logs that the autovacuum
> on the toasted table isn't synced with the autovacuum of the original
> table. Therefore I thought that it worth to set it also for the toasted
> table. Can you explain why in the logs I see more vacuums of the original
> table then the toasted table ? Should they vacuumed together ?

No, they are processed separately, according to the formula explained in
the documentation.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
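The formula in question: autovacuum triggers on a table once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, evaluated per table. A quick sketch with the row counts from the log above, assuming the default threshold of 50 and the 0.05 scale factor from this thread:

```sql
-- Why the main table is vacuumed far more often than its TOAST table:
-- with the same scale factor, the trigger point scales with table size.
SELECT 50 + 0.05 * 1600     AS main_table_trigger,   -- ~130 dead tuples
       50 + 0.05 * 54081616 AS toast_table_trigger;  -- ~2.7M dead tuples
```

That matches the log: each vacuum of A removes roughly 120-145 tuples, while the TOAST table accumulates millions of dead tuples before its own vacuum fires.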
But you said that the threshold chosen for the toasted table is identical to the original table's threshold, right? Is it normal behavior that the original table has 1000 records but the toasted one has more than 10M? How can I set a different threshold for the toasted table? As it seems right now, the threshold for the original table is set to 0.05, which triggers often enough for the original table, but for the toasted table it isn't enough because it has more than 10M records.
On 2019-Jan-17, Mariel Cherkassky wrote:

> But you said that the threshold that is chosen for the toasted table is
> identical to the original table's threshold, right ?

You can configure them identical, or different. Up to you.

> Is that a normal behavior that the original table has 1000 records but
> the toasted has more than 10m?

Sure -- each large record in the main table is split into many 2kb
records in the toast table.

> How can I set a different threshold for the toasted table ?

Just choose a different value in the command I showed.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
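Putting the thread's pieces together, a sketch of setting independent thresholds (the table name "A" is from this thread and the numeric values are illustrative, not recommendations):

```sql
-- Main table: small, so a 5% scale factor fires at roughly 130 dead tuples.
ALTER TABLE "A" SET (autovacuum_vacuum_scale_factor = 0.05);

-- TOAST table: 25M+ rows, so give it its own, tighter settings via the
-- toast. prefix on the main table (ALTER on pg_toast.* directly is refused).
ALTER TABLE "A" SET (toast.autovacuum_vacuum_scale_factor = 0.01,
                     toast.autovacuum_vacuum_threshold   = 10000);
```

Both relations keep being vacuumed independently; the toast.* parameters just let the TOAST table cross its threshold sooner than the inherited 0.05 would allow on 54M tuples.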
Got it, I didn't see the toast keyword in the command. Thanks!