Re: random_page_cost = 2.0 on Heroku Postgres
From | Peter van Hardenberg
Subject | Re: random_page_cost = 2.0 on Heroku Postgres
Date |
Msg-id | CAAcg=kUDs62oSpkra1xc=T_GGL1prKXEg2Lwz5xZA9ej0KUj7A@mail.gmail.com
In reply to | Re: random_page_cost = 2.0 on Heroku Postgres (Scott Marlowe <scott.marlowe@gmail.com>)
Responses | Re: random_page_cost = 2.0 on Heroku Postgres
List | pgsql-performance
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <pvh@pvh.ca> wrote:
>> That said, I have access to a very large fleet in which I can collect
>> data, so I'm all ears for suggestions about how to measure and would
>> gladly share the results with the list.
>
> I wonder if some kind of script that grabbed random queries and ran
> them with explain analyze under various random_page_cost settings to
> see when they switched plans, and which plans are faster, would work?

We aren't exactly in a position where we can adjust random_page_cost on
our users' databases arbitrarily to see what breaks. That would be...
irresponsible of us.

How would one design a meta-analyzer that we could run across many
databases to collect data? Could we perhaps collect useful information
from pg_stat_user_indexes, for example?

-p

--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
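P.S. To make Scott's idea concrete, here is an untested sketch of the
per-query measurement; the table and predicate are placeholders standing
in for a real captured query. SET LOCAL scopes the change to the
transaction, so nothing persists in the session:

    BEGIN;
    SET LOCAL random_page_cost = 4.0;  -- stock default
    EXPLAIN ANALYZE SELECT * FROM app_events WHERE user_id = 42;
    SET LOCAL random_page_cost = 2.0;  -- candidate value
    EXPLAIN ANALYZE SELECT * FROM app_events WHERE user_id = 42;
    ROLLBACK;

The catch, of course, is that EXPLAIN ANALYZE actually executes the
query, so even this is more intrusive on a customer database than I'd
like to be.

As for pg_stat_user_indexes, one passive signal might be indexes that
are never scanned, which could mean the planner never prices an index
scan as a win there (or simply that the application never touches those
columns):

    SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read
      FROM pg_stat_user_indexes
     WHERE idx_scan = 0
     ORDER BY relname, indexrelname;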