Re: per table random-page-cost?

From: Greg Stark
Subject: Re: per table random-page-cost?
Date:
Msg-id: 407d949e0910191639k6bc9d71bu2c5638a260ce13a3@mail.gmail.com
In reply to: Re: per table random-page-cost?  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses: Re: per table random-page-cost?  (Jeff Davis <pgsql@j-davis.com>)
List: pgsql-hackers
On Mon, Oct 19, 2009 at 2:54 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> How about calculating an effective percentage based on other
> information.  effective_cache_size, along with relation and database
> size, come to mind.

I think previous proposals along these lines have fallen down when you
actually try to work out the formula. The problem is that a table can be
much smaller than effective_cache_size and yet never be in cache, because
it is one of many such tables competing for it.
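To put numbers on that (a purely illustrative sketch; the sizes and
table count are invented):

    # Made-up numbers, just to illustrate the failure mode:
    effective_cache_size = 8 * 1024**3     # 8 GB
    table_size = 100 * 1024**2             # 100 MB per table
    tables_in_workload = 200               # hypothetical workload

    # Each table on its own looks fully cacheable ...
    per_table_fit = min(1.0, effective_cache_size / table_size)    # -> 1.0
    # ... but collectively they need ~20 GB, so only about 40% of any
    # given table can actually be resident at a time.
    collective_fit = effective_cache_size / (tables_in_workload * table_size)  # -> ~0.41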

I think it would still be good to have some naive kind of heuristic
here as long as it's fairly predictable for DBAs.
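For what it's worth, a minimal sketch of the kind of naive-but-predictable
heuristic that usually gets proposed (invented function and parameter
names, not actual planner code; cached_page_cost is an assumed constant):

    def per_table_random_cost(table_size, effective_cache_size,
                              random_page_cost=4.0, cached_page_cost=0.1):
        # Assume the table is cached in proportion to how much of
        # effective_cache_size it could occupy, then interpolate the
        # page cost between "cached" and "random read from disk".
        cached_fraction = min(1.0, effective_cache_size / table_size)
        return (cached_fraction * cached_page_cost
                + (1.0 - cached_fraction) * random_page_cost)

The appeal is that a DBA can predict the result from two numbers they
already know; the example above is exactly where it goes wrong.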

But the long-term strategy here, I think, is to actually have some way
to measure the real cache hit rate on a per-table basis. Whether it's
by timing I/O operations, programmatic access to DTrace, or some other
kind of OS interface, if we could know the real cache hit rate it
would be very helpful.

Perhaps we could extrapolate from the shared buffer cache percentage.
If a moderately high percentage of a table is in shared buffers, it
seems reasonable to suppose that the filesystem cache has a similar
distribution.
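A sketch of that extrapolation (illustrative only; the names are invented,
and the "similar distribution" assumption is exactly the supposition above):

    def extrapolated_random_cost(shared_buffer_fraction,
                                 random_page_cost=4.0, cached_page_cost=0.1):
        # shared_buffer_fraction: share of the table's pages currently in
        # shared buffers. Assume the filesystem cache holds a similar
        # share of the remaining pages.
        cached = (shared_buffer_fraction
                  + (1.0 - shared_buffer_fraction) * shared_buffer_fraction)
        return cached * cached_page_cost + (1.0 - cached) * random_page_cost

    # e.g. 50% resident in shared buffers -> assume ~75% cached overall:
    # extrapolated_random_cost(0.5) == 0.75 * 0.1 + 0.25 * 4.0 == 1.075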

--
greg

