Re: Automatically setting work_mem
From | Tom Lane
---|---
Subject | Re: Automatically setting work_mem
Date | |
Msg-id | 28640.1142606776@sss.pgh.pa.us
In reply to | Re: Automatically setting work_mem ("Qingqing Zhou" <zhouqq@cs.toronto.edu>)
Replies | Re: Automatically setting work_mem
List | pgsql-hackers
"Qingqing Zhou" <zhouqq@cs.toronto.edu> writes:
> So what's the difference between these two strategies?
> (1) Running time: do they use the same amount of memory? Why is option 2
> better than 1?
> (2) Idle time: after the sort is done, option 1 will return all 1024 to
> the OS and 2 will still keep 512?

Point 2 is actually a serious flaw in Simon's proposal, because there is no portable way to make malloc return freed memory to the OS. Some mallocs will do that ... in some cases ... but many simply don't ever move the brk address down. It's not an easy thing to do when the arena gets cluttered with a lot of different alloc chunks and only some of them get freed.

So the semantics we'd have to adopt is that once a backend claims some "shared work mem", it keeps it until process exit. I don't think that makes the idea worthless, because there's usually a clear distinction between processes doing expensive stuff and processes doing cheap stuff. But it's definitely a limitation. Also, if you've got a process doing expensive stuff, it's certainly reasonable to expect the user to just increase work_mem locally.

(BTW, given that work_mem is locally increasable, I'm not sure what the point is of requiring shared_work_mem to be SUSET. It's not going to prevent users from blowing out memory.)

My own thoughts about the problems with our work_mem arrangement are that the real problem is the rule that we can allocate work_mem per sort or hash operation; this makes the actual total memory use per backend pretty unpredictable for nontrivial queries. I don't know how to fix this, though. The planner needs to know the work_mem that will be used for any one of these operations in order to estimate costs, so simply trying to divide up work_mem among the operations of a completed plan tree is not going to improve matters.

			regards, tom lane