Re: Add the ability to limit the amount of memory that can be allocated to backends.
From: Drouvot, Bertrand
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date:
Msg-id: 3b9b90c6-f4ae-a7df-6519-847ea9d5fe1e@amazon.com
In reply to: Add the ability to limit the amount of memory that can be allocated to backends. (Reid Thompson <reid.thompson@crunchydata.com>)
Replies: Re: Add the ability to limit the amount of memory that can be allocated to backends.
List: pgsql-hackers
Hi,

On 8/31/22 6:50 PM, Reid Thompson wrote:
> Hi Hackers,
>
> Add the ability to limit the amount of memory that can be allocated to
> backends.

Thanks for the patch. +1 on the idea.

> Specifies a limit to the amount of memory (MB) that may be allocated to
> backends in total (i.e. this is not a per user or per backend limit).
> If unset, or set to 0, it is disabled. It is intended as a resource to
> help avoid the OOM killer. A backend request that would push the total
> over the limit will be denied with an out of memory error, causing that
> backend's current query/transaction to fail.

I'm not sure we are choosing the right victims here (i.e., the ones making the request that pushes the total over the limit). Imagine an extreme case where a single backend consumes, say, 99% of the limit: shouldn't that backend be the one to be "punished" (and somehow forced to give the memory back)? The problem I see with the current approach is that a "bad" backend could impact all the others and continue to do so.

What about punishing, say, the highest consumer instead? What do you think? (I'm just speaking about the general idea here, not about the implementation.)

Regards,

--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
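For illustration, the admission check described in the patch summary boils down to something like the minimal C sketch below. All names here (max_total_backend_memory_mb, total_backend_memory_bytes, exceeds_max_total_backend_memory) are assumptions made for this example, not identifiers taken from the actual patch; a real implementation would keep the counter in shared memory and report the failure as an out-of-memory error via ereport().

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed GUC-style setting: total limit in MB across all
     * backends; 0 disables the check. */
    static int max_total_backend_memory_mb = 0;

    /* Running total of bytes currently allocated by all backends
     * (in a real implementation, a counter in shared memory). */
    static uint64_t total_backend_memory_bytes = 0;

    /* Return true if granting request_bytes would push the total
     * over the configured limit. */
    static bool
    exceeds_max_total_backend_memory(uint64_t request_bytes)
    {
        uint64_t limit_bytes;

        if (max_total_backend_memory_mb == 0)
            return false;       /* limit disabled */

        limit_bytes = (uint64_t) max_total_backend_memory_mb * 1024 * 1024;
        return total_backend_memory_bytes + request_bytes > limit_bytes;
    }

    int
    main(void)
    {
        max_total_backend_memory_mb = 100;              /* 100 MB total limit */
        total_backend_memory_bytes = 99 * 1024 * 1024;  /* one backend already holds 99 MB */

        /* Any other backend's modest request is now the one denied. */
        printf("2 MB request denied: %s\n",
               exceeds_max_total_backend_memory(2 * 1024 * 1024) ? "yes" : "no");
        return 0;
    }

Note how, once one backend has driven the total near the limit, the denial lands on whichever backend happens to allocate next, which is exactly the victim-selection concern raised above.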