Re: Perf Benchmarking and regression.

From: Robert Haas
Subject: Re: Perf Benchmarking and regression.
Date:
Msg-id: CA+TgmoZP7i35Atd9q2AcxkGNuJh0rjT6KiDMXMoGMFA2217mGw@mail.gmail.com
In reply to: Re: Perf Benchmarking and regression.  (Andres Freund <andres@anarazel.de>)
Responses: Re: Perf Benchmarking and regression.  (Andres Freund <andres@anarazel.de>)
           Re: Perf Benchmarking and regression.  (Noah Misch <noah@leadboat.com>)
List: pgsql-hackers
On Fri, Jun 3, 2016 at 2:20 PM, Andres Freund <andres@anarazel.de> wrote:
>> I've always heard that guideline as "roughly 1/4, but not more than
>> about 8GB" - and the number of people with more than 32GB of RAM is
>> going to just keep going up.
>
> I think that upper limit is wrong.  But even disregarding that:

Many people think the upper limit should be even lower, based on good
practical experience; I've seen plenty of people recommend 2-2.5GB.
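
To put rough numbers on that, here is a quick sketch of how the two
rules of thumb play out at different RAM sizes; the 1/4, ~8GB, and
2-2.5GB figures are the ones quoted in this thread, not settings I'm
recommending:

# Rough sketch of the shared_buffers rules of thumb quoted above
# ("roughly 1/4 of RAM, but not more than about 8GB" vs. the more
# conservative 2-2.5GB figure people mention).  Illustrative only.

GB = 1024 ** 3

def quarter_rule(ram_bytes, cap=8 * GB):
    """Roughly 1/4 of RAM, but not more than about 8GB."""
    return min(ram_bytes // 4, cap)

def conservative_rule(ram_bytes, cap=int(2.5 * GB)):
    """The lower 2-2.5GB ceiling some people recommend."""
    return min(ram_bytes // 4, cap)

for ram_gb in (8, 16, 32, 64, 256):
    ram = ram_gb * GB
    print(f"{ram_gb:>3}GB RAM: quarter rule -> {quarter_rule(ram) / GB:.1f}GB, "
          f"conservative -> {conservative_rule(ram) / GB:.1f}GB")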

> To hit the issue in that case you have to access more data than
> shared_buffers (8GB), and very frequently re-dirty already dirtied
> data. So you're basically (on a very rough approximation) going to have
> to write more than 8GB within 30s (256MB/s).  Unless your hardware can
> handle that many mostly random writes, you are likely to hit the worst
> case behaviour of pending writeback piling up and stalls.

I'm not entirely sure that this is true, because my experience is that
the background writing behavior under Linux is not very aggressive.  I
agree you need a working set >8GB, but I think if you have that you
might not actually need to write data this quickly, because if Linux
decides to only do background writing (as opposed to blocking
processes) it may not actually keep up.

Also, 256MB/s is not actually all that crazy a write rate.  I mean,
it's a lot, but even if each random UPDATE touched only one 8kB block,
that would be about 32k TPS.  When you add in index updates and TOAST
traffic, the actual number of block writes per transaction could be
considerably higher, so we might be talking about something <10k TPS.
That's well within the range of what people try to do with PostgreSQL,
at least IME.
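
The back-of-the-envelope arithmetic, for anyone who wants to check it
(the four-blocks-per-transaction figure below is just an illustrative
assumption to stand in for index and TOAST writes, not a number from
this thread):

# Restating the numbers above: re-dirtying 8GB of data every 30s, and what
# a ~256MB/s stream of 8kB block writes means in transactions per second.

MB = 1024 ** 2
BLOCK = 8 * 1024                        # PostgreSQL block size

rate = 8 * 1024 * MB / 30               # 8GB every 30s ~= 273MB/s
blocks_per_sec = 256 * MB // BLOCK      # the rounder 256MB/s figure -> 32768

print(f"8GB / 30s = {rate / MB:.0f}MB/s")
print(f"256MB/s / 8kB = {blocks_per_sec} block writes/s")
print(f"1 block per transaction  -> ~{blocks_per_sec // 1000}k TPS")
print(f"4 blocks per transaction -> ~{blocks_per_sec // 4 // 1000}k TPS, i.e. <10k")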

>> > I'm inclined to give up and disable backend_flush_after (not the rest),
>> > because it's new and by far the "riskiest". But I do think it's a
>> > disservice for the majority of our users.
>>
>> I think that's the right course of action.  I wasn't arguing for
>> disabling either of the other two.
>
> Noah was...

I know, but I'm not Noah.  :-)

We have no evidence of the other settings causing any problems yet, so
I see no reason to second-guess the decision to leave them on by
default at this stage.  Other people may disagree with that analysis,
and that's fine, but my analysis is that the case for
disable-by-default has been made for backend_flush_after but not the
others.  I also agree that backend_flush_after is much more dangerous
on theoretical grounds; the checkpointer is in a good position to sort
the requests to achieve locality, but backends are not.  And in fact I
think what the testing shows so far is that when they can't achieve
locality, backend flush control sucks.  When they can, it's neutral or
positive.  But I really see no reason to believe that that's likely to
be true on general workloads.
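
As a toy illustration of that locality point (not PostgreSQL code, just
a sketch of why a sorted write stream behaves so differently from an
unsorted one, using total jump distance between consecutive block
numbers as a crude stand-in for write randomness):

# Toy model: the checkpointer can sort the whole set of dirty blocks before
# asking the kernel to flush them, while a backend only flushes whatever it
# happens to evict, in whatever order that occurs.

import random

random.seed(0)
dirty_blocks = random.sample(range(1_000_000), 5_000)   # scattered dirty blocks

def seek_distance(blocks):
    """Sum of jumps between consecutive block numbers in write order."""
    return sum(abs(b - a) for a, b in zip(blocks, blocks[1:]))

print("eviction order (backend-style):   ", seek_distance(dirty_blocks))
print("sorted order (checkpointer-style):", seek_distance(sorted(dirty_blocks)))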

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


