Re: Help tuning a large table off disk and into RAM
From | Greg Smith
---|---
Subject | Re: Help tuning a large table off disk and into RAM
Date | Wed, 26 Sep 2007
Msg-id | Pine.GSO.4.64.0709261302150.16566@westnet.com
In reply to | Help tuning a large table off disk and into RAM ("James Williams" <james.wlms@googlemail.com>)
List | pgsql-general
On Wed, 26 Sep 2007, James Williams wrote:

> The box has 4 x Opterons, 4Gb RAM & five 15k rpm disks, RAID 5. We
> wanted fast query/lookup. We know we can get fast disk IO.

You might want to benchmark to prove that if you haven't already. You would not be the first person to presume you have fast disk I/O on RAID 5, only to discover that's not actually true when tested. http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm gives some details here.

> shared_buffers = 128MB
> temp_buffers = 160MB
> work_mem = 200MB
> max_stack_depth = 7MB

The one you're missing is effective_cache_size, and I'd expect you'd need to more than double shared_buffers for that to impact things, given what you've described of your tasks. Take a look at http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm to get a better idea of the right range you should be considering for those two settings; 128MB for shared_buffers is way low for your system, something >1GB is probably right, and effective_cache_size should probably be in the multiple-GB range.

If you actually want to see what's inside the shared_buffers memory, take a look at the contrib/pg_buffercache module. Installing that for your database will let you see how the memory is being used, to get a better idea of how much of your indexes is staying in that part of memory. The hint you already got from Bill Moran about using pg_relation_size() will give you some basis for figuring out what % of the index is being held there; see the sketches below.
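To make those ranges concrete, a postgresql.conf sketch for a 4GB box might look like the following; the exact numbers are illustrative guesses, not settings from this thread, and the pg-5minute guide above is the better basis for picking them:

    shared_buffers = 1GB            # up from 128MB; ">1GB is probably right" here
    effective_cache_size = 3GB      # multiple-GB range: roughly RAM minus OS/app overhead

And once pg_buffercache is installed, a query along these lines (a sketch, assuming the standard contrib view, 8KB pages, and ordinary relations in the current database) combines it with pg_relation_size() to estimate what % of each relation or index is currently cached:

    -- Top 10 relations by pages held in shared_buffers, with % cached
    SELECT c.relname,
           count(*) AS buffers,                             -- 8KB pages in cache
           pg_relation_size(c.oid) / 8192 AS total_pages,   -- 8KB pages on disk
           round(100.0 * count(*) /
                 NULLIF(pg_relation_size(c.oid) / 8192, 0), 1) AS pct_cached
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = c.relfilenode
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname, c.oid
    ORDER BY buffers DESC
    LIMIT 10;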
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD