Re: Super Optimizing Postgres
From | Tom Lane
---|---
Subject | Re: Super Optimizing Postgres
Date |
Msg-id | 18056.1005954344@sss.pgh.pa.us
In reply to | Re: Super Optimizing Postgres (mlw <markw@mohawksoft.com>)
List | pgsql-hackers
mlw <markw@mohawksoft.com> writes:
> Also, you don't go into the COST variables. If what is documented
> about them is correct, they are woefully incorrect with a modern
> machine.

The numbers seemed in the right ballpark when I experimented with them
a year or two ago.  Keep in mind that all these things are quite fuzzy,
given that we never know for sure whether a read() request to the
kernel is going to cause actual I/O or be satisfied from kernel cache.
One should not mistake "operator" for "addition instruction", either
--- at the very least, there are several levels of function call
overhead involved.  And using one cost number for all Postgres
operators is obviously a simplification of reality anyhow.

> Would a 1.3 ghz Athlon really have a cpu_operator_cost of 0.0025? That
> would imply that that computer could process 2500 conditionals in the
> time it would take to make a sequential read. If Postgres is run on a
> 10K RPM disk vs a 5.4K RPM disk on two different machines with the
> same processor and speed, these numbers can't hope to be right, one
> should be about twice as high as the other.

We've talked in the past about autoconfiguring these numbers, but I
have not seen any proposals for automatically deriving trustworthy
numbers in a reasonable period of time.  There's too much uncertainty
and noise in any simple test.  (I spent literally weeks convincing
myself that the current numbers were reasonable.)

But having said all that, it's true that CPU speed has been increasing
much faster than disk speed over the last few years.  If you feel like
reducing the CPU cost numbers, try it and see what happens.

			regards, tom lane
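The "try it and see" experiment above can be done per-session without touching postgresql.conf. A minimal sketch, assuming a hypothetical table `mytable` with an integer column `val`; the lowered value is illustrative only, not a recommended setting:

```sql
-- Planner CPU cost parameters are expressed relative to a sequential
-- page fetch, which is defined as 1.0 cost unit.
SHOW cpu_operator_cost;                 -- default at the time: 0.0025

-- Baseline plan with the default cost settings:
EXPLAIN SELECT * FROM mytable WHERE val > 100;

-- Lower the per-operator CPU cost for this session only and compare
-- the plan and its cost estimates against the baseline:
SET cpu_operator_cost = 0.001;          -- illustrative value
EXPLAIN SELECT * FROM mytable WHERE val > 100;

-- Restore the default when done:
RESET cpu_operator_cost;
```

`SET` affects only the current session, so a mistaken value cannot disturb other backends; comparing the two `EXPLAIN` outputs shows whether the planner's plan choice or cost estimates shift.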