Re: hash_search and out of memory
From: Tom Lane
Subject: Re: hash_search and out of memory
Date:
Msg-id: 27721.1350574509@sss.pgh.pa.us
In reply to: Re: hash_search and out of memory (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: hash_search and out of memory
List: pgsql-hackers
I wrote:
> Hitoshi Harada <umi.tanuki@gmail.com> writes:
>> If OOM happens during expand_table() in hash_search_with_hash_value()
>> for RelationCacheInsert,

> What OOM?  expand_table is supposed to return without doing anything
> if it can't expand the table.  If that's not happening, that's a bug
> in the hash code.

Oh, wait, I take that back --- the palloc-based allocator does throw
errors.  I think that when that was designed, we were thinking that
palloc-based hash tables would be thrown away anyway after an error,
but of course that's not true for long-lived tables such as the
relcache hash table.

I'm not terribly comfortable with trying to use a PG_TRY block to catch
an OOM error - there are too many ways that could break, and this code
path is by definition not very testable.  I think moving up the
expand_table action is probably the best bet.  Will you submit a patch?

			regards, tom lane
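
A minimal sketch of the ordering idea being discussed, assuming a
simplified chained hash table (this is not PostgreSQL's dynahash code;
Table, grow_table() and table_insert() are hypothetical names): the
growth step, which is the part that can fail on out-of-memory, runs
before the new entry is linked in, so a failure leaves the table
exactly as it was.

    /*
     * Sketch only: illustrates "expand before insert" so an OOM during
     * expansion cannot leave a freshly inserted entry in an
     * inconsistent table.  Not PostgreSQL source code.
     */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Entry
    {
        struct Entry *next;
        char         *key;
    } Entry;

    typedef struct Table
    {
        Entry  **buckets;
        size_t   nbuckets;      /* assumed >= 1 after table creation */
        size_t   nentries;
    } Table;

    static size_t
    hash_key(const char *key, size_t nbuckets)
    {
        size_t      h = 5381;

        while (*key)
            h = h * 33 + (unsigned char) *key++;
        return h % nbuckets;
    }

    /* Doubling the bucket array may fail; on failure the table is untouched. */
    static bool
    grow_table(Table *t)
    {
        size_t      newsize = t->nbuckets * 2;
        Entry     **newbuckets = calloc(newsize, sizeof(Entry *));

        if (newbuckets == NULL)
            return false;       /* stand-in for a palloc OOM error */

        /* rehash existing entries into the larger bucket array */
        for (size_t i = 0; i < t->nbuckets; i++)
        {
            Entry      *e = t->buckets[i];

            while (e != NULL)
            {
                Entry      *next = e->next;
                size_t      h = hash_key(e->key, newsize);

                e->next = newbuckets[h];
                newbuckets[h] = e;
                e = next;
            }
        }
        free(t->buckets);
        t->buckets = newbuckets;
        t->nbuckets = newsize;
        return true;
    }

    static Entry *
    table_insert(Table *t, const char *key)
    {
        /*
         * Grow first.  If this step fails, nothing has been inserted and
         * the caller sees a clean failure instead of a half-built entry
         * left behind in the table.
         */
        if (t->nentries + 1 > t->nbuckets && !grow_table(t))
            return NULL;

        Entry      *e = malloc(sizeof(Entry));

        if (e == NULL)
            return NULL;
        e->key = strdup(key);
        if (e->key == NULL)
        {
            free(e);
            return NULL;
        }

        size_t      h = hash_key(key, t->nbuckets);

        e->next = t->buckets[h];
        t->buckets[h] = e;
        t->nentries++;
        return e;
    }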