Memory Allocation

From: Ryan Hansen
Subject: Memory Allocation
Date:
Msg-id: 011101c95013$b6fca0a0$24f5e1e0$@hansen@brightbuilders.com
Responses: Re: Memory Allocation  (Alan Hodgson <ahodgson@simkin.ca>)
           Re: Memory Allocation  (Carlos Moreno <morenopg@mochima.com>)
           Re: Memory Allocation  (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: Memory Allocation  (Scott Carey <scott@richrelevance.com>)
List: pgsql-performance

Hey all,


This may be more of a Linux question than a PG question, but I’m wondering if any of you have successfully allocated more than 8 GB of memory to PG before.


I have a fairly robust server running Ubuntu Hardy Heron with 24 GB of memory, and I've tried to commit half of it to PG's shared buffers (shared_buffers). I'm raising the kernel shared memory limits accordingly with sysctl, which seems to work fine, but when I set shared_buffers in PG and restart the service, it fails at anything above about 8 GB. I actually have it currently set at 6 GB.
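
For reference, here's roughly what I'm doing; the values below are illustrative rather than copied straight from my config. One thing worth double-checking: kernel.shmmax is specified in bytes, but kernel.shmall is in pages (usually 4 kB each), so the two have to be computed consistently:

    # /etc/sysctl.conf -- raise shared memory limits to ~13 GB
    kernel.shmmax = 13958643712    # largest single segment, in BYTES (13 GB)
    kernel.shmall = 3407872        # total shared memory, in 4 kB PAGES

    $ sudo sysctl -p               # apply without rebooting

    # postgresql.conf
    shared_buffers = 12GB          # fails to start at this size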


I don't have the exact failure message handy, but I can certainly get it if that helps. Mostly I'm just looking to find out whether there's any general reason it would fail, some inherent kernel or DB limitation that I'm unaware of.


If it matters, this DB is going to be hosting and processing hundreds of GB, and eventually TB, of data. It's a heavy read-write system, but not transactional processing: mostly data file parsing (Python/bash) and bulk loading. Obviously the disks already get hit pretty hard, so I want to make the most of the large amount of available memory wherever possible, and I'm trying to tune in that direction.
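
In case it's useful context, these are the kinds of memory-related settings I've been experimenting with; the numbers are just what I'm currently trying, not recommendations:

    # postgresql.conf -- current experimental values
    shared_buffers = 6GB           # highest value that starts reliably so far
    effective_cache_size = 16GB    # planner hint: how much the OS may cache
    work_mem = 256MB               # per sort/hash operation, per session
    maintenance_work_mem = 1GB     # bulk loads, index builds, VACUUM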


Any info is appreciated.


Thanks!
