Re: Scaling shared buffer eviction
From: Merlin Moncure
Subject: Re: Scaling shared buffer eviction
Date:
Msg-id: CAHyXU0xRwOK9kvPsuLwS93vqP3hvw3a3y9bizuwYiBhXEUOLiQ@mail.gmail.com
In reply to: Re: Scaling shared buffer eviction (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Scaling shared buffer eviction, Re: Scaling shared buffer eviction
List: pgsql-hackers
On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> 1. To see the effect of reduce-replacement-locking.patch, compare the
> first TPS number in each line to the third, or the second to the
> fourth. At scale factor 1000, the patch wins in all of the cases with
> 32 or more clients and exactly half of the cases with 1, 8, or 16
> clients. The variations at low client counts are quite small, and the
> patch isn't expected to do much at low concurrency levels, so that's
> probably just random variation. At scale factor 3000, the situation
> is more complicated. With only 16 bufmappinglocks, the patch gets its
> biggest win at 48 clients, and by 96 clients it's actually losing to
> unpatched master. But with 128 bufmappinglocks, it wins - often
> massively - on everything but the single-client test, which is a small
> loss, hopefully within experimental variation.
>
> Comments?

Why stop at 128 mapping locks? Theoretical downsides to having more
mapping locks have been mentioned a few times, but has this ever been
measured? I'm starting to wonder if the number of mapping locks should
be dependent on some other value, perhaps the number of shared
buffers...

merlin
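P.S. Purely to illustrate the "scale it with shared_buffers" idea, and not
taken from this thread or any posted patch: a minimal sketch of how a
partition count could be derived from NBuffers. The helper name
choose_buf_mapping_partitions, the one-partition-per-1024-buffers ratio,
and the 1024-partition cap are all made-up numbers for illustration; only
the starting value of 16 corresponds to the current NUM_BUFFER_PARTITIONS.

#include <stdio.h>

/*
 * Hypothetical sketch only -- not from any posted patch.
 * Scale the number of buffer mapping partitions with the number of
 * shared buffers: start at the current 16 and double up to a cap,
 * aiming (arbitrarily) for roughly one partition per 1024 buffers.
 * Keeping it a power of two preserves cheap masking in the hash
 * partition lookup.
 */
static int
choose_buf_mapping_partitions(int nbuffers)
{
    int     target = nbuffers / 1024;   /* made-up ratio */
    int     parts = 16;                 /* current NUM_BUFFER_PARTITIONS */

    while (parts < target && parts < 1024)   /* 1024 cap is also made up */
        parts *= 2;

    return parts;
}

int
main(void)
{
    /* a few shared_buffers sizes at 8kB pages: 128MB, 1GB, 8GB */
    int     sizes[] = {16384, 131072, 1048576};

    for (int i = 0; i < 3; i++)
        printf("NBuffers=%d -> partitions=%d\n", sizes[i],
               choose_buf_mapping_partitions(sizes[i]));
    return 0;
}

With those made-up constants this yields 16, 128, and 1024 partitions for
128MB, 1GB, and 8GB of shared_buffers respectively; the actual ratio and
cap would be exactly the thing to benchmark.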