Re: Out of memory error when doing an update with IN clause
From | Tom Lane
---|---
Subject | Re: Out of memory error when doing an update with IN clause
Date |
Msg-id | 16162.1072724644@sss.pgh.pa.us
In response to | Re: Out of memory error when doing an update with IN clause (Sean Shanny <shannyconsulting@earthlink.net>)
Responses | Re: Out of memory error when doing an update with IN clause
List | pgsql-general
Sean Shanny <shannyconsulting@earthlink.net> writes:
> There are no FK's or triggers on this or any of the tables in our
> warehouse schema.  Also I should have mentioned that this update will
> produce 0 rows as these values do not exist in this table.

Hm, that makes no sense at all ...

> Here is output from the /usr/local/pgsql/data/serverlog when this fails:
> ...
> DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
> 534742296 used

Okay, so here's the problem: this hash table has expanded to 500+MB, which is enough to overflow your ulimit setting.  Some digging in the source code shows only two candidates for such a hash table: a tuple hash table used for grouping/aggregating, which doesn't seem likely for this query, or a tuple-pointer hash table used for detecting already-visited tuples in a multiple index scan.

Could we see the EXPLAIN output (no ANALYZE, since it would fail) for the problem query?  That should tell us which of these possibilities it is.

			regards, tom lane
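[Editor's note: for readers following along, the kind of plan output being requested looks like the sketch below. The table, column, and literal values are hypothetical stand-ins, since the actual query was not shown in this message; the point is that plain EXPLAIN prints the plan without executing the statement, so it succeeds even when running the query would exhaust memory.]

```sql
-- Hypothetical query shape for an UPDATE with an IN clause;
-- names here are placeholders, not from the original thread.
EXPLAIN
UPDATE fact_table
   SET processed = true
 WHERE session_id IN (1234, 5678, 9012);
```

The resulting plan would reveal, for example, whether the IN list is being resolved through multiple index scans, which is one of the two hash-table candidates identified above.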