Re: Deleting millions of rows
| From | Robert Haas |
|---|---|
| Subject | Re: Deleting millions of rows |
| Date | |
| Msg-id | 603c8f070902021526x67d34095gff54c36295f504e0@mail.gmail.com |
| In reply to | Re: Deleting millions of rows (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: Deleting millions of rows |
| List | pgsql-performance |
> It's the pending trigger list. He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory. Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
>
> There's a TODO item to spill that list to disk when it gets too large,
> but the reason nobody's done it yet is that actually executing that many
> FK check trigger events would take longer than you want to wait anyway.

Have you ever given any thought to whether it would be possible to implement referential integrity constraints with statement-level triggers instead of row-level triggers? IOW, instead of planning this and executing it N times:

DELETE FROM ONLY <fktable> WHERE $1 = fkatt1 [AND ...]

...we could join the original query against fktable with join clauses on the correct pairs of attributes and then execute it once. Is this insanely difficult to implement?

...Robert
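To illustrate the contrast being proposed, here is a rough sketch (not anything in PostgreSQL at the time of this message; the `deleted_pk_rows` transition-table name and the column names are purely illustrative):

```sql
-- Status quo: the ON DELETE row-level FK trigger queues one event per
-- deleted row and runs this prepared plan N times, once per row:
DELETE FROM ONLY fktable WHERE $1 = fkatt1;

-- The idea: a statement-level trigger with access to the full set of
-- deleted PK rows could instead execute a single joined delete:
DELETE FROM ONLY fktable
USING deleted_pk_rows d              -- hypothetical set of deleted rows
WHERE fktable.fkatt1 = d.pkatt1;     -- one pair per FK/PK attribute
```

The second form avoids accumulating millions of pending trigger events, at the cost of needing the deleted-row set to be visible to a statement-level trigger.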