Re: Direct I/O
From | Joe Conway
---|---
Subject | Re: Direct I/O
Date |
Msg-id | f73117ef-efad-2a2d-3f9c-205c258ac8ec@joeconway.com
In response to | Re: Direct I/O (Robert Haas <robertmhaas@gmail.com>)
List | pgsql-hackers
On 4/19/23 10:11, Robert Haas wrote:
> On Tue, Apr 18, 2023 at 3:35 PM Greg Stark <stark@mit.edu> wrote:
>> Well.... I'm more optimistic... That may not always be impossible.
>> We've already added the ability to add more shared memory after
>> startup. We could implement the ability to add or remove shared buffer
>> segments after startup. And it wouldn't be crazy to imagine a kernel
>> interface that lets us judge whether the kernel memory pressure makes
>> it reasonable for us to take more shared buffers or makes it necessary
>> to release shared memory to the kernel.
>
> On this point specifically, one fairly large problem that we have
> currently is that our buffer replacement algorithm is terrible. In
> workloads I've examined, either almost all buffers end up with a usage
> count of 5 or almost all buffers end up with a usage count of 0 or 1.
> Either way, we lose all or nearly all information about which buffers
> are actually hot, and we are not especially unlikely to evict some
> extremely hot buffer.

That has been my experience as well, although admittedly I have not
looked in quite a while.

> I'm not saying that it isn't possible to fix this. I bet it is, and I
> hope someone does.

I keep looking at this blog post about Transparent Memory Offloading
and thinking that we could learn from it:

https://engineering.fb.com/2022/06/20/data-infrastructure/transparent-memory-offloading-more-memory-at-a-fraction-of-the-cost-and-power/

Unfortunately, it is very Linux specific and requires a really up to
date OS -- cgroup v2, kernel >= 5.19

> I'm just making the point that even if we knew the amount of kernel
> memory pressure and even if we also had the ability to add and remove
> shared_buffers at will, it probably wouldn't help much as things
> stand today, because we're not in a good position to judge how large
> the cache would need to be in order to be useful, or what we ought to
> be storing in it.
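[The usage-count saturation Robert describes can be seen in a toy model. The sketch below is a simplified clock-sweep cache with a usage count capped at 5, as in PostgreSQL's buffer manager; the class and constants are illustrative, not PostgreSQL's actual code.]

```python
# Simplified model of clock-sweep buffer replacement with a capped
# usage count. Illustrative sketch only; not PostgreSQL source code.

BUFFERS = 8
MAX_USAGE = 5  # PostgreSQL caps usage_count at 5

class ClockSweep:
    def __init__(self, nbuffers=BUFFERS):
        self.pages = [None] * nbuffers   # page tag held by each buffer slot
        self.usage = [0] * nbuffers      # usage count, capped at MAX_USAGE
        self.hand = 0                    # clock hand position

    def access(self, page):
        """On a hit, bump the usage count; on a miss, sweep the clock,
        decrementing counts until a zero-count victim is found."""
        if page in self.pages:
            i = self.pages.index(page)
            self.usage[i] = min(self.usage[i] + 1, MAX_USAGE)
            return i
        while True:
            if self.usage[self.hand] == 0:
                victim = self.hand
                self.pages[victim] = page
                self.usage[victim] = 1
                self.hand = (self.hand + 1) % len(self.pages)
                return victim
            self.usage[self.hand] -= 1
            self.hand = (self.hand + 1) % len(self.pages)

# A working set that fits in the cache and is touched repeatedly drives
# every buffer to the cap, so the counts can no longer distinguish a
# warm page from an extremely hot one -- the information loss described
# above.
cache = ClockSweep()
for _ in range(10):
    for p in range(BUFFERS):
        cache.access(p)
print(cache.usage)  # → [5, 5, 5, 5, 5, 5, 5, 5]
```

[Once every count sits at the cap, the next eviction sweep has to decrement all of them back toward zero, at which point any buffer, hot or not, is an equally likely victim.]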
The tactic TMO uses is basically to tune the available memory to get a
target memory pressure. That seems like it could work.

--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
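[The control loop that tactic implies can be sketched in a few lines. Everything below is a hypothetical illustration: the function name, step sizes, and the toy pressure model are invented for the example. Real TMO reads the kernel's PSI metrics (e.g. /proc/pressure/memory) and acts through cgroup v2 rather than anything like this.]

```python
# Hedged sketch of a TMO-style feedback loop: nudge a memory budget up
# or down so that a measured pressure signal settles near a target.
# All names and constants are illustrative assumptions.

def adjust_budget(budget_mb, pressure_pct, target_pct,
                  step_mb=64, min_mb=256, max_mb=16384):
    """Shrink the budget when pressure exceeds the target; grow it
    (reclaiming memory for the cache) when pressure is below target."""
    if pressure_pct > target_pct:
        budget_mb = max(min_mb, budget_mb - step_mb)
    elif pressure_pct < target_pct:
        budget_mb = min(max_mb, budget_mb + step_mb)
    return budget_mb

# Toy simulation: pretend pressure rises linearly with the memory we
# claim (1% per 100 MB). The loop oscillates in a narrow band around
# the budget that produces the target pressure.
budget = 1024
for _ in range(20):
    observed_pressure = budget / 100.0
    budget = adjust_budget(budget, observed_pressure, target_pct=10.0)
print(budget)
```

[For shared_buffers the analogous knob would be adding or removing buffer segments, with the kernel's pressure signal as the feedback input; the hard part, as noted upthread, is that the replacement policy must preserve enough heat information for a shrunken cache to keep the right pages.]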