Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT
| From | Tom Lane |
|---|---|
| Subject | Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT |
| Date | |
| Msg-id | 17467.1426090533@sss.pgh.pa.us |
| In reply to | Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT (Gunnlaugur Thor Briem <gunnlaugur@gmail.com>) |
| Responses | Re: EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT |
| List | pgsql-performance |
Gunnlaugur Thor Briem <gunnlaugur@gmail.com> writes:
> Yes, I think that's it: I've just realized that immediately prior to the
> INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e.
> the whole table is being rewritten. Then the INSERT is issued ... with a
> WHERE clause on non-existence in the (now empty) table.

> In that case of course the WHERE clause is unnecessary, as it will always
> evaluate as true (and we've locked the whole table for writes). Looks like
> it is a lot worse than unnecessary, though, if it triggers this performance
> snafu in EXPLAIN INSERT.

Ah-hah. So what's happening is that the planner is doing an indexscan over
the entire table of now-dead rows, looking vainly for an undeleted maximal
row. Ouch.

I wonder how hard it would be to make the indexscan give up after hitting
N consecutive dead rows, for some suitable N, maybe ~1000. From the
planner's viewpoint it'd be easy enough to fall back to using whatever it
had in the histogram after all. But that's all happening down inside
index_getnext, and I'm hesitant to stick some kind of wart into that
machinery for this purpose.

			regards, tom lane
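[A minimal sketch of the pattern discussed in this thread. The table and column names (target_table, staging_table, id, payload) are hypothetical, not taken from the original report; the shape of the transaction is what matters.]

```sql
BEGIN;

-- Unfiltered DELETE: every row in the table becomes a dead tuple,
-- but the index entries for those rows remain until vacuumed.
DELETE FROM target_table;

-- Even without ANALYZE, EXPLAIN of the following INSERT can stall:
-- while estimating the anti-join, the planner scans the index for an
-- actual (live) maximal key value and has to step over all the dead
-- entries before giving up.
EXPLAIN
INSERT INTO target_table (id, payload)
SELECT s.id, s.payload
FROM   staging_table s
WHERE  NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id);

COMMIT;
```

As the quoted message notes, the NOT EXISTS clause is redundant in this particular transaction (the table is empty and locked), so dropping it avoids triggering the slow planning path altogether.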