Re: Any risk in increasing BLCKSZ to get larger tuples?
| From | Joseph Shraibman |
|---|---|
| Subject | Re: Any risk in increasing BLCKSZ to get larger tuples? |
| Date | |
| Msg-id | 39EF867B.2EF67B8@selectacast.net |
| In reply to | Any risk in increasing BLCKSZ to get larger tuples? (Philip Hallstrom <philip@adhesivemedia.com>) |
| List | pgsql-general |
Steve Wolfe wrote:
> > In some cases yes, in some no. Simple text should compress/decompress
> > quickly and the cpu time wasted is made up for by less hardware access
> > time and smaller db files. If you have a huge database the smaller db
> > files could be critical.
>
> Hmm... that doesn't seem quite right to me. Whether it is compressed or
> not, the same amount of final data has to move across the system bus to
> the CPU for processing. It's the difference of (A) moving a large amount
> of data to the CPU and processing it, or (B) moving a small amount of
> data to the CPU, use the CPU cycles to turn it into the large set (as
> large as in (A)), then processing it. I could be wrong, though.

It isn't the system bus, it's the hardware of the hard disk. In general, disk access costs much more than a few CPU cycles (especially since CPU speeds keep increasing with Moore's law while disk access times don't), but that isn't always the case (DriveSpace on Windows being one counterexample).

Recently I was doing performance tuning on my application, where I was adding a bunch of users to the system. I was making 6 db calls per user added. I assumed the CPU cost of what I was doing was the limiting factor, but CPU usage was only at about 20%. Reducing the db calls to 4 gave a big increase in performance, while streamlining the code made a negligible difference. That's why I said automatic compression makes sense in some cases and not in others.

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://www.targabot.com
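[Editor's note: to make the disk-versus-CPU trade-off above concrete, here is a rough back-of-envelope sketch. Every figure in it is an illustrative assumption (throughput numbers roughly plausible for circa-2000 hardware), not a measurement from the thread.]

```python
# Back-of-envelope: read one 8 KB page uncompressed vs. compressed.
# All throughput figures below are assumptions for illustration only.

PAGE_BYTES = 8192                      # one 8 KB database page
DISK_BYTES_PER_S = 20 * 1024**2        # assumed sustained disk read, ~20 MB/s
DECOMP_BYTES_PER_S = 100 * 1024**2     # assumed decompression speed, ~100 MB/s
RATIO = 0.5                            # assumed 2:1 compression on plain text

t_plain = PAGE_BYTES / DISK_BYTES_PER_S
t_comp = (PAGE_BYTES * RATIO) / DISK_BYTES_PER_S + PAGE_BYTES / DECOMP_BYTES_PER_S

print(f"uncompressed read:            {t_plain * 1e6:.0f} us")
print(f"compressed read + decompress: {t_comp * 1e6:.0f} us")

# With these numbers the compressed path wins (~273 us vs. ~391 us)
# because the disk is the bottleneck. A fast disk or a slow CPU can
# flip the result, which is the DriveSpace situation mentioned above.
```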
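[Editor's note: the round-trip effect described above is easy to demonstrate. A minimal sketch follows; the `users` table, its columns, the connection string, and the use of the psycopg2 driver are all hypothetical, since the post doesn't say what the six calls actually were.]

```python
import psycopg2
from psycopg2.extras import execute_values

# Hypothetical connection; adjust dbname/user for your setup.
conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

new_users = [("alice", "a@example.com"), ("bob", "b@example.com")]

# Unbatched: one INSERT per user means one server round trip per user.
# for name, email in new_users:
#     cur.execute("INSERT INTO users (name, email) VALUES (%s, %s)",
#                 (name, email))

# Batched: execute_values folds all rows into a single statement, so
# the whole batch costs one round trip instead of N. The speedup comes
# from eliminating round trips, not from saving CPU cycles.
execute_values(cur,
               "INSERT INTO users (name, email) VALUES %s",
               new_users)

conn.commit()
```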