Re: speeding up a query on a large table
From:        Mike Rylander
Subject:     Re: speeding up a query on a large table
Date:        
Msg-id:      b918cf3d05081714557259d7eb@mail.gmail.com
In reply to: Re: speeding up a query on a large table (Manfred Koizar <mkoi-pg@aon.at>)
Responses:   Re: speeding up a query on a large table
List:        pgsql-general
On 8/17/05, Manfred Koizar <mkoi-pg@aon.at> wrote:
> On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy <murphy@genome.chop.edu> wrote:
> > and because the number of possible search terms is so large, it
> > would be nice if the entire index could somehow be preloaded into memory
> > and encouraged to stay there.
>
> Postgres does not have such a feature and I wouldn't recommend to mess
> around inside Postgres. You could try to copy the relevant index
> file(s) to /dev/null to populate the OS cache ...

That actually works fine. When I had big problems with a large GiST index, I
just used cat to dump it to /dev/null and the OS cache grabbed it. Of course,
that was on Linux, so YMMV.

--
Mike Rylander
mrylander@gmail.com
GPLS -- PINES Development
Database Developer
http://open-ils.org
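
[Editor's sketch, not part of the original thread: one concrete way to do the warm-up described above on Linux. It assumes a newer PostgreSQL release (9.0 or later), where pg_relation_filepath() is available; the index name 'my_big_index' and the data-directory path are placeholders.]

    -- in psql: find the index's on-disk file, relative to the data directory
    SELECT pg_relation_filepath('my_big_index');
    -- returns something like: base/16384/16723

    # in a shell on the database server, as a user able to read the data directory;
    # the trailing * also picks up any 1 GB segment files (.1, .2, ...)
    cat /var/lib/postgresql/data/base/16384/16723* > /dev/null

[On the 8.x servers current when this message was written, pg_relation_filepath() did not exist; the relfilenode would instead be looked up in pg_class and the path assembled by hand. Either way, the effect is only to pull the file into the OS page cache, and the kernel may evict it again under memory pressure.]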