Re: How to keep a table in memory?
From | Christopher Browne
---|---
Subject | Re: How to keep a table in memory?
Date |
Msg-id | 87wssmaf0d.fsf@wolfe.cbbrowne.com
In reply to | How to keep a table in memory? (adrobj <adrobj@yahoo.com>)
Responses | Re: How to keep a table in memory?
List | pgsql-hackers
Quoth tgl@sss.pgh.pa.us (Tom Lane):
> Devrim GÜNDÜZ <devrim@CommandPrompt.com> writes:
>> So, IMHO, saying "trust your OS + PostgreSQL" is not a 100% perfect
>> approach for the people who are asking to keep their objects on RAM,
>> even though I know that there is nothing we can say right now.
>
> Well, nothing is a 100% solution. But my opinion is that people who
> think they are smarter than an LRU caching algorithm are typically
> mistaken. If the table is all that heavily used, it will stay in memory
> just fine. If it's not sufficiently heavily used to stay in memory
> according to an LRU algorithm, maybe the memory space really should be
> spent on something else.
>
> Now there are certainly cases where a standard caching algorithm falls
> down --- the main one I can think of offhand is where you would like to
> give one class of queries higher priority than another, and so memory
> space should preferentially go to tables that are needed by the first
> class. But if that's your problem, "pin these tables in memory" is
> still an awfully crude solution to the problem. I'd be inclined to
> think instead about a scheme that lets references made by
> higher-priority queries bump buffers' use-counts by more than 1,
> or some other way of making the priority considerations visible to an
> automatic cache management algorithm.

Something I found *really* interesting was that whenever we pushed any "high traffic" systems onto PostgreSQL 8.1, I kept seeing measurable performance improvements taking place every day for a week. Evidently, it took that long for the cache to *truly* settle down.

Given that, and given that we've gotten a couple of good steps *more* sophisticated than mere LRU, I'm fairly willing to go pretty far down the "trust the shared memory cache" road.

The scenario described certainly warrants doing some benchmarking; it warrants analyzing the state of the internal buffers over a period of time to see what is actually in them.

If, after a reasonable period of time (one that includes some variations in system load), a reasonable portion (or perhaps the entirety) of the Essential Table has consistently resided in buffers, then that should be pretty decent evidence that caching is working the way it should.

--
output = ("cbbrowne" "@" "gmail.com")
http://linuxdatabases.info/info/slony.html
A Plateau is the highest form of flattery.
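To make that last check concrete: the following is a sketch only, assuming the contrib pg_buffercache module is installed, and using 'essential_table' as a hypothetical stand-in for the table in question. It counts how many shared buffers currently hold pages of that table:

  -- Sketch: requires contrib/pg_buffercache; 'essential_table' is a
  -- hypothetical name standing in for the table to be watched.
  SELECT c.relname,
         count(*) AS buffers_holding_pages
    FROM pg_buffercache b
         JOIN pg_class c
           ON b.relfilenode = c.relfilenode
          AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                     WHERE datname = current_database()))
   WHERE c.relname = 'essential_table'
   GROUP BY c.relname;

Comparing that count (times the block size, 8kB by default) against pg_relation_size('essential_table'), and sampling it every so often over a day or two of varying load, shows what fraction of the table actually stays resident in shared buffers.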