Re: per table random-page-cost?
From | Kevin Grittner
---|---
Subject | Re: per table random-page-cost?
Date |
Msg-id | 4ADC99D7020000250002BB4E@gw.wicourts.gov
In reply to | Re: per table random-page-cost? (Robert Haas <robertmhaas@gmail.com>)
Responses | Re: per table random-page-cost?; Re: per table random-page-cost?
List | pgsql-hackers
Robert Haas <robertmhaas@gmail.com> wrote:

> I've been wondering if it might make sense to have a
> "random_page_cost" and "seq_page_cost" setting for each TABLESPACE,
> to compensate for the fact that different media might be faster or
> slower, and a percent-cached setting for each table over top of
> that.

[after recovering from the initial cringing reaction...]

How about calculating an effective percentage based on other information? effective_cache_size, along with relation and database size, come to mind. How about the particular index being considered for the plan? Of course, you might have to be careful about working in TOAST table size for a particular query, based on the columns retrieved.

I have no doubt that there would be some major performance regressions in the first cut of anything like this, for at least *some* queries. The toughest part of this might be to get adequate testing to tune it for a wide enough variety of real-life situations.

-Kevin