Re: Hugetables question
From | Radosław Smogura
---|---
Subject | Re: Hugetables question
Date |
Msg-id | 0d9a4e407a15bc34fd8737542e802d75@mail.softperience.eu
In reply to | Re: Hugetables question (Marti Raudsepp <marti@juffo.org>)
Responses | Re: Hugetables question
List | pgsql-hackers
On Wed, 22 Jun 2011 14:24:17 +0300, Marti Raudsepp wrote:
> On Sun, Jun 19, 2011 at 12:56, Radosław Smogura
> <rsmogura@softperience.eu> wrote:
>> I want to implement hugepages for shared memory
>
> Hi,
>
> Have you read this post by Tom Lane about the performance estimation
> and a proof-of-concept patch with hugepages?
> http://archives.postgresql.org/pgsql-hackers/2010-11/msg01842.php
>
> It's possible that there was a flaw in his analysis, but his
> conclusion is that it's not worth it:
>
>> And the bottom line is: if there's any performance benefit at all,
>> it's on the order of 1%. The best result I got was about 3200 TPS
>> with hugepages, and about 3160 without. The noise in these numbers
>> is more than 1% though.
>
> Regards,
> Marti

Actually, when I tried to implement hugepages for palloc (I was unable to write a fast and effective allocator), my result was that with normal pages I got a small performance degradation, but with huge pages it was faster than the normal build (even with the ineffective allocator). I know there are some problems with addressing larger amounts of memory (when the server has more than 8GB), and hugepages may help with this.

I strongly disagree with the opinion that a 1% gain is worthless. 1% here, 1% there, and eventually you get 10%. Of course, hugepages will work best in code that makes many random "jumps" through memory. I think this can be reproduced, and some uncommon case could be found that gains about 10% (for example, load a whole table into shared buffers and randomly "peek" at records one by one).

Regards,
Radek
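(For context, a minimal sketch of what "hugepages for shared memory" usually means on Linux, assuming an mmap-based approach rather than the actual patch discussed in this thread: request an anonymous shared mapping with MAP_HUGETLB and fall back to regular pages if the kernel has no huge pages reserved via vm.nr_hugepages. The function name and sizes below are illustrative only.)

```c
/*
 * Illustrative sketch only, not the patch from this thread: map anonymous
 * shared memory backed by huge pages, falling back to normal 4 kB pages
 * if MAP_HUGETLB cannot be satisfied (e.g. vm.nr_hugepages is 0).
 */
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

static void *
alloc_shared(size_t size)
{
    void   *ptr;

    /* First try: huge-page-backed mapping (size must be a multiple of the
     * huge page size, typically 2 MB on x86-64). */
    ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (ptr != MAP_FAILED)
        return ptr;

    /* Fallback: ordinary pages. */
    ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    return (ptr == MAP_FAILED) ? NULL : ptr;
}

int
main(void)
{
    void *shm = alloc_shared(128UL * 1024 * 1024);   /* 128 MB */

    printf("shared segment mapped at %p\n", shm);
    return 0;
}
```

(PostgreSQL at the time used System V shared memory rather than mmap; the analogous knob there is the SHM_HUGETLB flag to shmget(). The same try-then-fall-back pattern applies.)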