Re: per table random-page-cost?

From: Robert Haas
Subject: Re: per table random-page-cost?
Msg-id: 603c8f070910191729x2547ddb5x89218e471c1bd0@mail.gmail.com
In reply to: Re: per table random-page-cost?  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-hackers
On Mon, Oct 19, 2009 at 5:54 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Robert Haas <robertmhaas@gmail.com> wrote:
>
>> I've been wondering if it might make sense to have a
>> "random_page_cost" and "seq_page_cost" setting for each TABLESPACE,
>> to compensate for the fact that different media might be faster or
>> slower, and a percent-cached setting for each table over top of
>> that.
>
> [after recovering from the initial cringing reaction...]
>
> How about calculating an effective percentage based on other
> information?  effective_cache_size, along with relation and database
> size, come to mind.  How about the particular index being considered
> for the plan?  Of course, you might have to be careful about working
> in TOAST table size for a particular query, based on the columns
> retrieved.

I think that a per-tablespace page cost should be set by the DBA, the
same as we do with the global page costs now.
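Something along these lines, say (the syntax is purely illustrative, a
reloptions-style interface being one obvious shape for it, not an
existing or agreed-upon feature):

    -- Hypothetical: override the global costs for relations stored
    -- in a given tablespace, e.g. SSD-backed vs. slow archival disk.
    ALTER TABLESPACE fast_ssd
        SET (seq_page_cost = 0.5, random_page_cost = 1.0);
    ALTER TABLESPACE slow_archive
        SET (seq_page_cost = 2.0, random_page_cost = 10.0);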

OTOH, I think that a per-relation percent-in-cache should be
automatically calculated by the database (somehow), and the DBA should
have an option to override it in case the database gets it wrong.
I gave a lightning talk on this topic at PGCon.
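To make that concrete, the simplest way I can imagine a cached
fraction feeding into costing (my sketch, nothing more) is a linear
blend of the cached and uncached page costs:

    effective_page_cost = cached_fraction * cached_page_cost
                        + (1 - cached_fraction) * random_page_cost

where cached_page_cost would be close to the cost of touching a page
already in shared_buffers, and cached_fraction is the per-relation
estimate (or the DBA's override).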

> I have no doubt that there would be some major performance regressions
> in the first cut of anything like this, for at least *some* queries.
> The toughest part of this might be to get adequate testing to tune it
> for a wide enough variety of real-life situations.

Agreed.

...Robert

