Re: speeding up a query on a large table
From | Kevin Murphy
---|---
Subject | Re: speeding up a query on a large table
Date |
Msg-id | 4303EBC8.2090503@genome.chop.edu
In reply to | Re: speeding up a query on a large table (Mike Rylander <mrylander@gmail.com>)
List | pgsql-general
Mike Rylander wrote:
> On 8/17/05, Manfred Koizar <mkoi-pg@aon.at> wrote:
>> On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
>> <murphy@genome.chop.edu> wrote:
>>> and because the number of possible search terms is so large, it
>>> would be nice if the entire index could somehow be preloaded into
>>> memory and encouraged to stay there.
>>
>> You could try to copy the relevant index file(s) to /dev/null to
>> populate the OS cache ...
>
> That actually works fine. When I had big problems with a large GiST
> index I just used cat to dump it to /dev/null and the OS grabbed it.
> Of course, that was on Linux, so YMMV.

Thanks, Manfred & Mike. That is a very nice solution. And just for the
sake of the archive: I can find the file name(s) of the relevant index
or table by looking up pg_class.relfilenode where pg_class.relname is
the name of the relation, then doing, e.g.:

    sudo -u postgres find /usr/local/pgsql/data -name "somerelfilenode*"

-Kevin Murphy
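P.S. For the archive, here is a minimal sketch of the whole sequence.
The index name my_big_index and the relfilenode value 16422 are made-up
examples, and the data directory is the one from the post; adjust all
three for your own setup.

    # 1. Look up the on-disk base file name (relfilenode) of the index:
    psql -U postgres -t -c \
        "SELECT relfilenode FROM pg_class WHERE relname = 'my_big_index';"

    # 2. Suppose it printed 16422 (hypothetical). Relations larger than
    #    1 GB are split into segments (16422, 16422.1, ...), so match
    #    them all and cat them to /dev/null to pull the pages into the
    #    OS cache:
    sudo -u postgres find /usr/local/pgsql/data -name "16422*" \
        -exec cat {} + > /dev/null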