Re: Questions regarding contrib/tsearch
| From | Tom Lane |
|---|---|
| Subject | Re: Questions regarding contrib/tsearch |
| Date | |
| Msg-id | 29370.1028295594@sss.pgh.pa.us |
| In reply to | Questions regarding contrib/tsearch ("Markus Wollny" <Markus.Wollny@computec.de>) |
| Responses | Re: Questions regarding contrib/tsearch, Re: Questions regarding contrib/tsearch |
| List | pgsql-general |
"Markus Wollny" <Markus.Wollny@computec.de> writes:
> ... I suspect that the high running time for the first call of that
> query is due to the database having to do harddisk-access in order to
> get the needed parts of the table into memory. This would explain the
> acceptably low running time of the second call - the information needed
> is already in memory, so there's no slow harddisk-access involved and
> the query is completed quite quickly. Is this correct?

Yup, that's my interpretation as well.

> If so, what can I do to have all of the database in memory?

Buy enough RAM to hold it ;-)

If the database is being accessed heavily then it will tend to remain
swapped in; you don't have to (and really can't) do anything to tweak
the kernel-level and Postgres-level algorithms that determine this.
What you want is to ensure there's enough RAM to hold not only all the
database hotspots, but also all the other programs and working data
that the server machine will be running.

Check the actual size-on-disk of the tables and indexes you would like
to be resident. (Do a vacuum, then look at pg_class.relpages for these
items. See http://developer.postgresql.org/docs/postgres/diskusage.html
for more info.) I would allow about 10MB of RAM per server process,
plus a healthy chunk for the kernel and other programs.

Also, it's probably best not to go overboard on shared_buffers in this
scenario. You want the tables to stay resident in kernel disk cache,
not necessarily in Postgres shared buffers.

			regards, tom lane
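[The vacuum-then-check-relpages step above can be sketched as the following queries. This is an illustrative fragment, not part of the original mail; the relation names are placeholders, and the size arithmetic assumes the default 8KB block size.]

```sql
-- Update relpages statistics first (VACUUM refreshes them).
VACUUM ANALYZE;

-- Size-on-disk of the relations you want resident, in 8KB pages.
-- 'my_table' and 'my_table_idx' are placeholder names; substitute
-- the tables and indexes you actually care about.
SELECT relname,
       relkind,                          -- 'r' = table, 'i' = index
       relpages,
       relpages * 8 / 1024 AS approx_mb  -- assumes default 8KB block size
FROM pg_class
WHERE relname IN ('my_table', 'my_table_idx')
ORDER BY relpages DESC;
```

Summing approx_mb over the hot relations gives a rough lower bound on the RAM needed to keep them cached, before adding the per-process and kernel allowances mentioned above.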