Re: handling out of memory conditions when fetching row descriptions
From        | Tom Lane
Subject     | Re: handling out of memory conditions when fetching row descriptions
Date        |
Msg-id      | 2585.1325539596@sss.pgh.pa.us
In reply to | handling out of memory conditions when fetching row descriptions ("'Isidor Zeuner'" <postgresql@quidecco.de>)
Responses   | Re: handling out of memory conditions when fetching row descriptions
List        | pgsql-general
"'Isidor Zeuner'" <postgresql@quidecco.de> writes:
> using the latest git source code, I found that libpq will let the
> connection stall when getRowDescriptions breaks on an out of memory
> condition. I think this should better be handled differently to allow
> application code to handle such situations gracefully.

The basic assumption in there is that if we wait and retry, eventually
there will be enough memory.  I agree that that's not ideal, since the
application may not be releasing memory elsewhere.  But what you propose
doesn't seem like an improvement: you're converting a maybe-failure into
a guaranteed-failure, and one that's much more difficult to recover from
than an ordinary query error.

Also, this patch breaks async operation, in which a failure return from
getRowDescriptions normally means that we have to wait for more data to
arrive.  The test would really need to be inserted someplace else.

In any case, getRowDescriptions is really an improbable place for an
out-of-memory to occur: it would be much more likely to happen while
absorbing the body of a large query result.  There already is some logic
in getAnotherTuple for dealing with that case, which I suggest is a
better model for what to do than "break the connection".  But probably
making things noticeably better here would require going through all the
code to check for other out-of-memory cases, and developing some more
uniform method of representing an already-known-failed query result.
(For instance, it looks like getAnotherTuple might not work very well if
it fails to get memory for one tuple and then succeeds on later ones.
We probably ought to have some explicit state that says "we are
absorbing the remaining data traffic for a query result that we already
ran out of memory for".)

			regards, tom lane