Re: [WIP PATCH] for Performance Improvement in Buffer Management
From: Pavan Deolasee
Subject: Re: [WIP PATCH] for Performance Improvement in Buffer Management
Date:
Msg-id: CABOikdOxCMgVXZ1To3sy02QeaAiW52N9+CwnyNfL+KjL5kn9Pg@mail.gmail.com
In response to: Re: [WIP PATCH] for Performance Improvement in Buffer Management (Amit kapila <amit.kapila@huawei.com>)
Responses: Re: [WIP PATCH] for Performance Improvement in Buffer Management
List: pgsql-hackers
On Mon, Nov 19, 2012 at 8:52 PM, Amit kapila <amit.kapila@huawei.com> wrote:
> On Monday, November 19, 2012 5:53 AM Jeff Janes wrote:
>> On Sun, Oct 21, 2012 at 12:59 AM, Amit kapila <amit.kapila@huawei.com> wrote:
>>> On Saturday, October 20, 2012 11:03 PM Jeff Janes wrote:
>>>> Run the modes in reciprocating order?
>>> Sorry, I didn't understand this. What do you mean by modes in reciprocating order?
>> Sorry for the long delay. In your scripts, it looks like you always
>> run the unpatched first, and then the patched second.
>> By reciprocating, I mean to run them in the reverse order, or in random order.
>
> Yes, that's true. Today, for some configurations, I ran them in reciprocating order.
> Below are the readings.
>
> Configuration:
> 16GB (Database) - 7GB (Shared Buffers)
>
> Here I ran in the following order:
> 1. Run perf report with patch for 32 clients
> 2. Run perf report without patch for 32 clients
> 3. Run perf report with patch for 16 clients
> 4. Run perf report without patch for 16 clients
>
> Each execution is 5 minutes.
>         16 clients / 16 threads   |   32 clients / 32 threads
>       @mv-free-lst     @9.3devl   |   @mv-free-lst     @9.3devl
> ----------------------------------------------------------------
>           3669            4056    |       5356            5258
>           3987            4121    |       4625            5185
>           4840            4574    |       4502            6796
>           6465            6932    |       4558            8233
>           6966            7222    |       4955            8237
>           7551            7219    |       9115            8269
>           8315            7168    |      43171            8340
>           9102            7136    |      57920            8349
> ----------------------------------------------------------------
> Avg:      6362            6054    |      16775            7333
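As an aside, "reciprocating" the runs the way Jeff suggested could be scripted roughly as below. This is only a sketch: the build locations, data directory, and database name are placeholders rather than anything from the scripts in this thread, and the choice to alternate the order on every run is one interpretation of "reverse or random order"; only pgbench's -c/-j/-T options and pg_ctl's -D/-w flags are standard.

#!/usr/bin/env python3
import subprocess

# Hypothetical install locations of the two server builds under test;
# the data directory and database name are placeholders as well.
BUILDS = {
    "mv-free-lst": "/opt/pg-patched",
    "9.3devel": "/opt/pg-master",
}
DATADIR = "/srv/pgdata"

def run_once(label, clients):
    bindir = BUILDS[label] + "/bin"
    # Start the server from this build, run one 5-minute pgbench pass,
    # then stop it again so the other build can take over.
    subprocess.run([bindir + "/pg_ctl", "-D", DATADIR, "-w", "start"], check=True)
    try:
        out = subprocess.run(
            [bindir + "/pgbench", "-c", str(clients), "-j", str(clients),
             "-T", "300", "postgres"],
            capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith("tps"):
                print(label, clients, "clients:", line)
    finally:
        subprocess.run([bindir + "/pg_ctl", "-D", DATADIR, "-w", "stop"], check=True)

for clients in (16, 32):
    for run in range(8):
        # Alternate which build goes first on every run, so ordering
        # effects (cache warm-up, checkpoints) do not always favour
        # whichever build happens to run second.
        order = ["mv-free-lst", "9.3devel"]
        if run % 2:
            order.reverse()
        for label in order:
            run_once(label, clients)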
Sorry, I haven't followed this thread at all, but the numbers (43171 and 57920) in the last two runs of @mv-free-list for 32 clients look like aberrations, no? I wonder if they are skewing the average.
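As a rough check with the 32-client @mv-free-lst numbers from the table above, the mean and the median tell quite different stories:

from statistics import mean, median

# 32-client @mv-free-lst results from the table above.
runs = [5356, 4625, 4502, 4558, 4955, 9115, 43171, 57920]

print(mean(runs))                # 16775.25 -- the averaged figure in the table
print(median(runs))              # 5155.5   -- much closer to the typical run
print(mean(sorted(runs)[:-2]))   # 5518.5   -- mean with the two outlying runs dropped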
I also looked at the Results.htm file downthread. There seems to be a steep degradation when shared buffers are increased from 5GB to 10GB, both with and without the patch. Is that expected? If so, isn't it worth investigating and possibly even fixing before we do anything else?
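One way to start narrowing that down might be to rerun the same workload while stepping shared_buffers between the two sizes; a sketch follows, with the binary location, data directory, and database name as placeholders rather than anything taken from the thread.

#!/usr/bin/env python3
import subprocess

BINDIR = "/opt/pg-master/bin"   # hypothetical install location
DATADIR = "/srv/pgdata"         # hypothetical data directory

for shared_buffers in ("5GB", "6GB", "7GB", "8GB", "9GB", "10GB"):
    # Restart the server with the candidate setting, then run the same
    # 5-minute, 32-client pgbench pass used elsewhere in the thread.
    subprocess.run([BINDIR + "/pg_ctl", "-D", DATADIR, "-w", "restart",
                    "-o", "-c shared_buffers=" + shared_buffers], check=True)
    out = subprocess.run([BINDIR + "/pgbench", "-c", "32", "-j", "32",
                          "-T", "300", "postgres"],
                         capture_output=True, text=True, check=True).stdout
    tps = [line for line in out.splitlines() if line.startswith("tps")]
    print(shared_buffers, tps)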
Thanks,
Pavan