Re: Variable (degrading) performance
From        | Heikki Linnakangas
Subject     | Re: Variable (degrading) performance
Date        |
Msg-id      | 466EE403.5080200@enterprisedb.com
In reply to | Re: Variable (degrading) performance (Vladimir Stankovic <V.Stankovic@city.ac.uk>)
List        | pgsql-performance
Vladimir Stankovic wrote:
> What I am hoping to see is NOT the same value for all the executions of
> the same type of transaction (after some transient period). Instead, I'd
> like to see that, if I take an appropriately-sized set of transactions, I
> will see at least steady growth in transaction average times, if not
> exactly the same average. Each chunk would possibly include a sudden
> performance drop due to the necessary vacuum and checkpoints. The
> performance might be influenced by the change in the data set too.
> I am unhappy about the fact that the durations of experiments can differ
> by as much as 30% (bearing in mind that they are not exactly the same, due
> to the non-determinism on the client side). I would like to eliminate this
> variability. Are my expectations reasonable? What could be the cause(s)
> of this variability?

You should see that if you define your "chunk" to be long enough. Long
enough is probably hours, not minutes or seconds. As I said earlier,
checkpoints and vacuum are a major source of variability.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
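To check how much of the variability lines up with checkpoints and vacuum, one option (a minimal sketch, not from this thread; the parameters and catalog columns below exist in recent PostgreSQL releases but not necessarily in older ones) is to make both activities visible and compare their timestamps against the slow transaction chunks:

    # postgresql.conf -- requires a configuration reload
    log_checkpoints = on               # log the start and end of every checkpoint
    log_autovacuum_min_duration = 0    # log every autovacuum run, however short

    -- SQL: when was each table last vacuumed, and how many dead tuples remain?
    SELECT relname, last_vacuum, last_autovacuum, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;

If checkpoints turn out to be the dominant spikes, spreading them out (a longer checkpoint_timeout, or checkpoint smoothing where the server version supports it) usually flattens the per-chunk averages.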