Re: Priority table or Cache table

From: Amit Kapila
Subject: Re: Priority table or Cache table
Date:
Msg-id: CAA4eK1L3HkZ8-M=ksVYTc98A4tOi0=Tg-HW_6bHj9EJo81f_6A@mail.gmail.com
In reply to: Re: Priority table or Cache table  (Haribabu Kommi <kommi.haribabu@gmail.com>)
Responses: Re: Priority table or Cache table  (Haribabu Kommi <kommi.haribabu@gmail.com>)
List: pgsql-hackers
On Thu, Aug 6, 2015 at 12:24 PM, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:
>
> On Mon, Jun 30, 2014 at 11:08 PM, Beena Emerson <memissemerson@gmail.com> wrote:
> >
> > I also ran the test script after making the same configuration changes that
> > you have specified. I found that I was not able to get the same performance
> > difference that you have reported.
> >
> > Following table lists the tps in each scenario and the % increase in
> > performance.
> >
> > Threads      Head      Patched      Diff
> >    1         1669       1718          3%
> >    2         2844       3195         12%
> >    4         3909       4915         26%
> >    8         7332       8329         14%
> >
>
>
> Coming back to this old thread.
>
> I just tried a new approach for this priority table: instead of an
> entirely separate buffer pool, it uses some portion of shared buffers
> for priority tables, controlled by a GUC variable
> "buffer_cache_ratio" (0-75) that specifies what percentage of shared
> buffers should be used.
>
> Syntax:
>
> create table tbl(f1 int) with(buffer_cache=true);
>
> Compared with the earlier approach, I thought this approach would be
> easier to implement. But during the performance run, it didn't show
> much improvement in performance.
> Here are the test results.
>

What is the configuration for the test (RAM of the machine, shared_buffers, scale_factor, etc.)?

>  Threads      Head      Patched      Diff
>     1         3123       3238        3.68%
>     2         5997       6261        4.40%
>     4        11102      11407        2.75%
>
> I suspect that this may be because of buffer locks causing the
> problem, whereas in the older approach of separate buffer pools, each
> buffer pool has its own locks.
> I will try to collect the profile output and analyze it.
>
> Any better ideas?
>

I think you should try to find out during the test for how many pages
it needs to perform the clock sweep (add some new counter like
numBufferBackendClocksweep in BufferStrategyControl to find out the
same).  In theory your patch should reduce the number of times it needs
to perform the clock sweep.

I think in this approach, even if you make some buffers non-replaceable
(buffers for which BM_BUFFER_CACHE_PAGE is set), the clock sweep still
needs to access all the buffers.  We might want to find some way to
reduce that if this idea helps.

Another thing is that this idea looks somewhat similar (although not the
same) to the current ring buffer concept, where buffers for particular
types of scans are taken from a ring.  I think it is okay to prototype
as you have done in the patch, and we can consider doing something along
those lines if this patch's idea helps.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
