The science of optimization in practical terms?
From | Joshua D. Drake |
---|---|
Subject | The science of optimization in practical terms? |
Date | |
Msg-id | 1234483591.9467.188.camel@jd-laptop.pragmaticzealot.org |
Responses |
Re: The science of optimization in practical terms?
Re: The science of optimization in practical terms? |
List | pgsql-hackers |
Hello,

I was helping a customer today with what is becoming a common theme in a lot of the work we do. Basically, "It was working fine until recently." Now, 90% of the time it is as simple as running ANALYZE VERBOSE, picking apart relations that aren't being maintained properly, and adjusting autovacuum or vacuum appropriately. If it isn't that, it is usually something like increasing effective_cache_size or default_statistics_target.

However, in recent times I have found that increasing cpu_tuple_cost, cpu_operator_cost, and cpu_index_tuple_cost can be very useful. This is always in the scenario of "queries were running fine for months and then, all of a sudden, they are not". It is also always on systems that we are already maintaining and that are therefore (in theory) in good shape.

So my question is: what is the science, in practical terms, behind those parameters? Normally I would just accept it as another PostgreSQL idiosyncrasy, but the performance differences I am talking about are large. After changing cpu_tuple_cost and cpu_operator_cost to 0.5 today, two queries dropped from 10 seconds and 15 seconds to 2 seconds and ~900 ms respectively.

Sincerely,

Joshua D. Drake

--
PostgreSQL - XMPP: jdrake@jabber.postgresql.org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997
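[For context, a minimal sketch of the kind of experiment described above. The table and query are hypothetical, not the customer's actual workload; the point is that these planner cost settings can be changed per-session with SET, so a candidate value can be evaluated with EXPLAIN ANALYZE before anything is written to postgresql.conf.]

```sql
-- Inspect the current planner cost settings. PostgreSQL's defaults are
-- cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025,
-- cpu_index_tuple_cost = 0.005.
SHOW cpu_tuple_cost;
SHOW cpu_operator_cost;
SHOW cpu_index_tuple_cost;

-- Raise the CPU cost factors for this session only.
SET cpu_tuple_cost = 0.5;
SET cpu_operator_cost = 0.5;

-- Compare plans and timings before and after on the affected query
-- (table and predicate here are placeholders).
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

-- Revert to the configured values when done experimenting.
RESET cpu_tuple_cost;
RESET cpu_operator_cost;
```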