Re: Enough RAM for entire Database.. cost aside, is this
From | Mike Rylander |
---|---|
Subject | Re: Enough RAM for entire Database.. cost aside, is this |
Date | |
Msg-id | cc3rt0$22kv$1@news.hub.org |
In response to | Enough RAM for entire Database.. cost aside, is this going to be fastest? ("Andy B" <abhousehuntRE-M--O--V-E@blueyonder.co.uk>) |
List | pgsql-general |
<posted & mailed>

Andy B wrote:
> Hello Shridhar,
>
> Thanks for the reply.
>
>> There is no reason why you should not do it. How remains to be a point of
>> disagreement though. You don't allocate 16GB of shared buffers to
>> postgresql. That won't give you the performance you need.
>
> I think in the other thread, Tom was alluding to this too. What is it
> about the shared buffer cache behaviour that makes it inefficient when it
> is very large? (assuming that the address space it occupies is allocated
> to RAM pages)

It's not that making the cache bigger is inefficient; it's that the cache is
not used the way you are thinking. Postgres does not try to create its own
large persistent cache of recently used data, because the OS (especially
Linux, and especially on an Opteron and compiled for 64 bit) is really much
better at caching. In fact, other than herding programs and implementing
security, optimizing the use of resources is what the OS is for.

> Is there a good place I could look for some in depth details of its
> behaviour?

There's a good bit of depth in the archives of this list. I would start
searching back for discussions of effective_cache_size, as that is involved
in *costing* the caching job that the OS is doing.

> Many thanks,
> Andy

--miker
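[Editor's note: as a rough illustration of the setup described above (a modest shared_buffers pool, with effective_cache_size telling the planner how much the OS filesystem cache is likely to hold), here is a minimal postgresql.conf sketch. The values assume a dedicated machine with 16GB of RAM and are not taken from this thread; recent PostgreSQL versions accept memory units as shown, while older releases express these settings as counts of 8kB pages.]

    # Sketch only -- illustrative values for a dedicated 16GB box, not a recommendation.
    shared_buffers = 512MB          # keep PostgreSQL's own buffer pool relatively small
    effective_cache_size = 12GB     # hint to the planner: roughly how much data the
                                    # OS filesystem cache can be expected to hold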