Re: Linear slow-down while inserting into a table with an ON INSERT trigger ?
| From | Justin Pryzby |
|---|---|
| Subject | Re: Linear slow-down while inserting into a table with an ON INSERT trigger ? |
| Date | |
| Msg-id | 20210717044026.GA19498@telsasoft.com |
| In response to | Linear slow-down while inserting into a table with an ON INSERT trigger ? (Tobias Gierke <tobias.gierke@code-sourcery.de>) |
| Responses | Re: Linear slow-down while inserting into a table with an ON INSERT trigger ? |
| List | pgsql-performance |
On Fri, Jul 16, 2021 at 11:27:24PM +0200, Tobias Gierke wrote:

> CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger LANGUAGE plpgsql
> AS $function$
> BEGIN
>     UPDATE data_sync SET last_parent_table_change=CURRENT_TIMESTAMP;
>     RETURN NEW;
> END;
> $function$
>
> I'm trying to insert 30k rows (inside a single transaction) into the parent

The problem is that you're doing 30k updates of data_sync within one transaction. Ideally it starts with 1 tuple in 1 page, but every row updated requires scanning the previous N row versions, which haven't been vacuumed (and cannot be while the transaction is open). An update is essentially a delete plus an insert, so the table grows with each update until the transaction ends and it can be vacuumed:

    pages: 176 removed, 1 remain, 0 skipped due to pins, 0 skipped frozen
    tuples: 40000 removed, 1 remain, 0 are dead but not yet removable, oldest xmin: 2027

You could run a single UPDATE rather than 30k triggers. Or switch to an INSERT on the table, with an index on it, and have whatever needs to ingest it call max(last_parent_table_change). Then prune the old entries and vacuum the table outside the transaction.

Maybe someone else will have a better suggestion.

--
Justin
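The two alternatives above can be sketched in SQL. This is a minimal sketch, not a tested solution: the parent table name (`parent_table`), the trigger name, and the log-table name (`data_sync_log`) are assumptions, and `EXECUTE FUNCTION` requires PostgreSQL 11+ (older releases use `EXECUTE PROCEDURE`).

```sql
-- Option 1: fire the trigger once per statement instead of once per row,
-- so a 30k-row INSERT does a single UPDATE of data_sync.
CREATE OR REPLACE FUNCTION parent_table_changed() RETURNS trigger
LANGUAGE plpgsql AS $function$
BEGIN
    UPDATE data_sync SET last_parent_table_change = CURRENT_TIMESTAMP;
    RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$function$;

CREATE TRIGGER parent_table_changed_trg
    AFTER INSERT ON parent_table          -- assumed parent table name
    FOR EACH STATEMENT
    EXECUTE FUNCTION parent_table_changed();

-- Option 2: append instead of update, so no single row is rewritten
-- thousands of times inside the transaction.
CREATE TABLE data_sync_log (              -- hypothetical log table
    last_parent_table_change timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX ON data_sync_log (last_parent_table_change);

-- The consumer reads the most recent change time via the index:
SELECT max(last_parent_table_change) FROM data_sync_log;

-- Periodically, outside the big transaction, prune and vacuum:
DELETE FROM data_sync_log
 WHERE last_parent_table_change < now() - interval '1 hour';
VACUUM data_sync_log;
```

Option 1 keeps the existing single-row data_sync design but removes the per-row update amplification; Option 2 trades a small amount of read-side work (the `max()` lookup) for append-only writes that dead-tuple accumulation doesn't slow down.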