Re: PostgreSQL and HugePage
| From | Robert Haas |
|---|---|
| Subject | Re: PostgreSQL and HugePage |
| Date | |
| Msg-id | AANLkTinwzLqxE0Q57cU3UE+jV5Ei-YgCk6SFUNg6Zwbn@mail.gmail.com |
| In reply to | PostgreSQL and HugePage (Hsien-Wen Chu <chu.hsien.wen@gmail.com>) |
| Responses | Re: PostgreSQL and HugePage |
| List | pgsql-hackers |
On Wed, Oct 20, 2010 at 3:47 PM, daveg <daveg@sonic.net> wrote:
> On Wed, Oct 20, 2010 at 12:28:25PM -0700, Greg Stark wrote:
>> On Wed, Oct 20, 2010 at 12:17 PM, Greg Stark <gsstark@mit.edu> wrote:
>> > I don't think it's a big cost once all the processes
>> > have been forked if you're reusing them beyond perhaps slightly more
>> > efficient cache usage.
>>
>> Hm, this site claims to get a 13% win just from the reduced tlb misses
>> using a preload hack with Pg 8.2. That would be pretty substantial.
>>
>> http://oss.linbit.com/hugetlb/
>
> That was my motivation in trying a patch. TLB misses can be a substantial
> overhead. I'm not current on the state of play, but working at Sun's
> benchmark lab on a DB TPC-B benchmark for the first generation
> of MP systems, something like 30% of all bus traffic was TLB misses. The
> next iteration of the hardware had a much larger TLB.
>
> I have a client with 512GB memory systems, currently with 128GB configured
> as postgresql buffer cache. Which is 32M TLB entries trying to fit in the
> few dozen cpu TLB slots. I suspect there may be some contention.
>
> I'll benchmark of course.

Do you mean 128GB shared buffers, or shared buffers + OS cache? I think
that the general wisdom is that performance tails off beyond 8-10GB of
shared buffers anyway, so a performance improvement on 128GB shared
buffers might not mean much unless you can also show that 128GB shared
buffers actually performs better than some smaller amount.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
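daveg's "32M TLB entries" figure follows directly from dividing the mapped region by the page size. A quick illustrative sketch of that arithmetic (Python here purely for the numbers; the `pages_needed` helper is hypothetical, not anything from PostgreSQL), comparing x86-64 4 KiB base pages against 2 MiB huge pages:

```python
# Count the pages (hence TLB entries) needed to map a memory region,
# assuming x86-64 page sizes: 4 KiB base pages vs. 2 MiB huge pages.
GiB = 2**30

def pages_needed(region_bytes, page_bytes):
    """Pages required to cover region_bytes with page_bytes-sized pages."""
    return region_bytes // page_bytes

buffer_cache = 128 * GiB
small_pages = pages_needed(buffer_cache, 4 * 1024)        # ~32M entries
huge_pages = pages_needed(buffer_cache, 2 * 1024 * 1024)  # 65,536 entries

print(f"4 KiB pages: {small_pages:,}")   # 33,554,432
print(f"2 MiB huge pages: {huge_pages:,}")  # 65,536
```

Either count still dwarfs the few dozen entries in a typical data TLB of that era, but huge pages cut the working set of translations by a factor of 512, which is where the claimed miss-rate reduction comes from.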