Re: How is random_page_cost=4 ok?
From | Kevin Grittner |
---|---|
Subject | Re: How is random_page_cost=4 ok? |
Date | |
Msg-id | 48EF5F97.EE98.0025.0@wicourts.gov |
In reply to | Re: How is random_page_cost=4 ok? (Greg Smith <gsmith@gregsmith.com>) |
List | pgsql-hackers |
>>> Greg Smith <gsmith@gregsmith.com> wrote:
> I don't think random_page_cost actually corresponds with any real number
> anymore. I just treat it as an uncalibrated knob you can turn and
> benchmark the results at.

Same here. We have always found the best performance in our production environments with this set somewhere between seq_page_cost and twice seq_page_cost, depending on how much of the database is cached. As we move toward more heavily cached databases we also reduce seq_page_cost, so we range from (0.1, 0.1) to (1, 2). These have really become abstractions with legacy names.

If I had to suggest how someone should choose a starting setting, I would say that seq_page_cost should be the proportion of sequential scans likely to need to go to the disk, and random_page_cost should be two times the proportion of heap data which doesn't fit in cache space. Add 0.1 to both numbers and then truncate to one decimal position. This, of course, assumes a battery-backed caching RAID controller, a reasonable RAID for the data set, and one of the more typical types of usage patterns.

-Kevin
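The starting-point arithmetic Kevin describes can be sketched in a few lines of Python. This is only an illustration of the heuristic as stated in the post, not anything from PostgreSQL itself; the function name `suggest_page_costs` and its parameters are made up here.

```python
import math

def suggest_page_costs(seq_disk_fraction, uncached_heap_fraction):
    """Suggest starting (seq_page_cost, random_page_cost) values.

    seq_disk_fraction: proportion (0..1) of sequential scans likely
        to need to go to the disk.
    uncached_heap_fraction: proportion (0..1) of heap data which
        doesn't fit in cache space.

    Per the heuristic in the post: seq_page_cost is the first
    proportion, random_page_cost is two times the second; add 0.1
    to both and truncate to one decimal position.
    """
    def trunc1(x):
        # Truncate (not round) to one decimal place.
        return math.floor(x * 10) / 10

    seq_page_cost = trunc1(seq_disk_fraction + 0.1)
    random_page_cost = trunc1(2 * uncached_heap_fraction + 0.1)
    return seq_page_cost, random_page_cost

# Fully cached database: both knobs land at the low end of the range.
print(suggest_page_costs(0.0, 0.0))   # (0.1, 0.1)
# Nothing cached: roughly the high end of the (0.1, 0.1)..(1, 2) range.
print(suggest_page_costs(1.0, 1.0))   # (1.1, 2.1)
```

The resulting numbers would then go into postgresql.conf (or a per-session SET) as seq_page_cost and random_page_cost, and be benchmarked from there, in line with Greg's "uncalibrated knob" approach.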