Re: Initial 9.2 pgbench write results
From | Jeff Janes |
---|---|
Subject | Re: Initial 9.2 pgbench write results |
Date | |
Msg-id | CAMkU=1wKz5LVrB8z0CGrbdPB2J-agFg_GtL+d3HGCN_KGnw8SA@mail.gmail.com |
In response to | Re: Initial 9.2 pgbench write results (Greg Smith <greg@2ndQuadrant.com>) |
List | pgsql-hackers |
On Tue, Feb 14, 2012 at 12:25 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> On 02/14/2012 01:45 PM, Greg Smith wrote:
>>
>> scale=1000, db is 94% of RAM; clients=4
>> Version  TPS
>> 9.0      535
>> 9.1      491   (-8.4% relative to 9.0)
>> 9.2      338   (-31.2% relative to 9.1)
>
> A second pass through this data noted that the maximum number of buffers
> cleaned by the background writer is <=2785 in 9.0/9.1, while it goes as high
> as 17345 times in 9.2.

There is something strange about the data for Set 4 (9.1) at scale 1000.
The number of buf_alloc varies a lot from run to run in that series (by a
factor of 60 from max to min), but the TPS doesn't vary by very much. How
can that be?

If a transaction needs a page that is not in the cache, it needs to
allocate a buffer. So the only thing that could lower the allocation count
would be a higher cache hit rate, right? How could there be so much
variation in the cache hit rate from run to run at the same scale?

Cheers,

Jeff
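[Editor's note: to make the buf_alloc / hit-rate relationship discussed above concrete, here is a minimal sketch of how the two counters could be compared around each pgbench run, using the standard pg_stat_bgwriter and pg_stat_database views that exist in 9.0-9.2. The snapshot-before-and-after procedure and the database name 'pgbench' are assumptions for illustration, not something taken from the thread.]

    -- Sketch only: run once before and once after each pgbench run,
    -- then compare the deltas across runs.

    -- Cluster-wide buffer allocations and background-writer activity:
    SELECT buffers_alloc, buffers_clean, maxwritten_clean
    FROM pg_stat_bgwriter;

    -- Shared-buffer hit ratio for the benchmark database
    -- ('pgbench' is an assumed database name):
    SELECT blks_read, blks_hit,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
               AS hit_ratio
    FROM pg_stat_database
    WHERE datname = 'pgbench';

If the delta of buffers_alloc differs by a factor of 60 between runs while TPS stays flat, the hit_ratio deltas should show a correspondingly large swing, which is the puzzle being raised here.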