Re: random_page_cost = 2.0 on Heroku Postgres
From | Peter van Hardenberg |
---|---|
Subject | Re: random_page_cost = 2.0 on Heroku Postgres |
Date | |
Msg-id | CAAcg=kV+dKXQWbkcMT-pmYHVNjR1NWXCN86+UL=AWbTenHfzoA@mail.gmail.com |
In reply to | Re: random_page_cost = 2.0 on Heroku Postgres (Joshua Berkus <josh@agliodbs.com>) |
Responses | Re: random_page_cost = 2.0 on Heroku Postgres |
List | pgsql-performance |
On Sun, Feb 12, 2012 at 12:01 PM, Joshua Berkus <josh@agliodbs.com> wrote:
> You'd pretty much need to do large-scale log harvesting combined with samples of query concurrency taken several times per minute. Even that won't "normalize" things the way you want, though, since all queries are not equal in terms of the amount of data they hit.
>
> Given that, I'd personally take a statistical approach. Sample query execution times across a large population of servers and over a moderate amount of time. Then apply common tests of statistical significance. This is why Heroku has the opportunity to do this in a way that smaller sites could not; they have enough servers to (probably) cancel out any random activity effects.
>

Yes, I think if we could normalize, anonymize, and randomly EXPLAIN ANALYZE 0.1% of all queries that run on our platform we could look for bad choices by the planner.

I think the potential here could be quite remarkable.

--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
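
[Editor's note: for readers who want to try the sampling idea described above, here is a minimal configuration sketch. It assumes a PostgreSQL version new enough to ship auto_explain.sample_rate (9.6 or later, well after this 2012 thread) and borrows the 0.1% figure from the message as the sample rate; the specific threshold values are illustrative assumptions, not part of the original discussion.]

    # postgresql.conf (sketch)
    shared_preload_libraries = 'auto_explain'

    auto_explain.log_min_duration = 0     # consider every statement, regardless of duration
    auto_explain.log_analyze = on         # log actual EXPLAIN ANALYZE output, not just the plan
    auto_explain.log_timing = on          # include per-node timings (adds measurement overhead)
    auto_explain.sample_rate = 0.001      # explain roughly 0.1% of statements

    random_page_cost = 2.0                # the planner setting under discussion in this thread

With these settings, sampled plans land in the regular server log, from which they can be harvested, anonymized, and compared across a fleet of servers as suggested above.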