On Tue, 7 Nov 2023 at 11:17, Robert Leach <rleach@princeton.edu> wrote:
> ALTER TABLE "DataRepo_peakdata" SET (autovacuum_vacuum_scale_factor = 0.0);
> ALTER TABLE "DataRepo_peakdata" SET (autovacuum_vacuum_threshold = 5000);
> ALTER TABLE "DataRepo_peakdata" SET (autovacuum_analyze_scale_factor = 0.0);
> ALTER TABLE "DataRepo_peakdata" SET (autovacuum_analyze_threshold = 5000);
>
> I still don't like the fact that we have this database-architecture-specific code in our code base that ties it to a specific database. I'm not sure that where I put it is the best place for it either. Is there a config file I can put these settings in?
You can apply those changes globally in postgresql.conf, but pinning
autovacuum_vacuum_threshold / autovacuum_analyze_threshold to that
constant is unlikely to be a good fit for all tables. Perhaps there is
some scale_factor above 0.0 and below 0.2 that makes it run fast
enough.
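For context on why a flat threshold behaves so differently from a scale factor: autovacuum vacuums a table once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples (the analyze trigger uses the analogous *_analyze_* settings). A rough sketch of that trade-off in plain Python; the table sizes are invented for illustration:

```python
# Dead-tuple count at which autovacuum triggers a VACUUM of a table:
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
def vacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Return the dead-tuple count that triggers autovacuum."""
    return threshold + scale_factor * reltuples

# Invented table sizes, comparing the PG defaults, the flat 5000-tuple
# setting from above, and a smaller non-zero scale factor.
for n in (10_000, 1_000_000, 100_000_000):
    default = vacuum_trigger(n)                                   # 50 + 0.2*n
    flat = vacuum_trigger(n, threshold=5000, scale_factor=0.0)    # always 5000
    tuned = vacuum_trigger(n, threshold=50, scale_factor=0.02)    # 50 + 0.02*n
    print(f"{n:>11,}: default={default:>12,.0f} flat={flat:>6,.0f} tuned={tuned:>11,.0f}")
```

The flat setting keeps big tables freshly vacuumed but makes autovacuum revisit them very often; a small non-zero scale factor grows the trigger point with the table instead.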
> And I'm still curious why this wasn't necessary in postgres 10?
If you show us the output of the following two queries:
SELECT relname, reltuples, relpages, pg_relation_size(oid)
FROM pg_class
WHERE oid = '"DataRepo_peakdata"'::regclass;

SELECT c.relname, c.reltuples, c.relpages, pg_relation_size(c.oid)
FROM pg_class c
INNER JOIN pg_index i ON c.oid = i.indexrelid
WHERE i.indrelid = '"DataRepo_peakdata"'::regclass;
run directly before the query in question, on both PG10 and PG13, both
when the query runs quickly and when it runs slowly, we might see
something there that helps indicate what's going on.
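One reason those pg_class columns matter: the planner does not use reltuples as-is; it takes the stored tuple density (reltuples / relpages) and scales it by the relation's current size in pages, so a stale reltuples/relpages pair taken before a bulk load can skew row estimates. A rough sketch of that scaling (it mirrors the idea behind the planner's estimate_rel_size(); the numbers are invented):

```python
def planner_row_estimate(stored_reltuples, stored_relpages, current_pages):
    """Approximate how the planner scales possibly-stale pg_class stats:
    keep the stored tuple density and apply it to the relation's
    current physical size in pages."""
    if stored_relpages == 0:
        # A never-analyzed table; the real planner falls back to a
        # guess based on tuple width, which we don't model here.
        return 0.0
    density = stored_reltuples / stored_relpages
    return density * current_pages

# Invented example: stats recorded at 1M rows in 10k pages
# (100 tuples/page), but the table has since grown to 20k pages.
est = planner_row_estimate(1_000_000, 10_000, 20_000)
print(est)
```

So comparing reltuples/relpages against pg_relation_size() in the fast and slow runs may show whether the stats had drifted far from the table's real size when the plan went bad.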
David