Re: [SQL] quest:optimization with indexes
From:        Tom Lane
Subject:     Re: [SQL] quest:optimization with indexes
Msg-id:      18216.934468461@sss.pgh.pa.us
In reply to: Re: [SQL] quest:optimization with indexes (Clayton Cottingham <drfrog@smartt.com>)
List:        pgsql-sql
Clayton Cottingham <drfrog@smartt.com> writes:
> what about helper software like glimpse, as per my other thread?
> i was told to use glimpse, because full text index {fti} isnt robust
> enough to handle large db's

AFAIK there isn't any real good way to use glimpse in combination with
Postgres. Glimpse is a file-granularity indexer --- that is, if you are
searching for a word, say "foobar", then glimpse's index tells it which
files within a large directory tree contain that word, and then it goes
and runs "grep foobar" on each such file to find the actual occurrences
of that word.

Glimpse works great for large, slowly changing collections of files
(I say "slowly changing" because updating the index is fairly
expensive). For instance, I use it all the time to search the Postgres
source files. But it'd be quite useless for a Postgres database where
all the data is in one table (= one file). Also, there's no natural
equivalent of file granularity in a database; if you did adapt glimpse
to know about the inner structure of a Postgres database, you'd end up
indexing at the record level, which'd be a mighty inefficient way to
use glimpse.

If you do have a moderately large collection of slowly-changing
documents, I'd suggest storing them as a plain tree of files, and using
glimpse and Postgres separately as index-maintaining tools. (That is,
your database would just contain references to the files, plus whatever
information you wanted to manipulate with SQL operations.)

			regards, tom lane
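A minimal sketch of the split-duties approach described in the last
paragraph, assuming a hypothetical "documents" table (the table,
columns, and file paths below are illustrative, not from the original
message): the database keeps a path into the glimpse-indexed file tree
plus whatever metadata you want to query with SQL, while glimpse is run
against the files themselves.

    -- hypothetical schema: one row per document file in the
    -- glimpse-indexed directory tree
    CREATE TABLE documents (
        id       serial PRIMARY KEY,
        filepath text NOT NULL,   -- path to the file on disk
        title    text,
        added_on date DEFAULT current_date
    );

    -- after running "glimpse foobar" and collecting the matching file
    -- names, pull the SQL-side metadata for just those documents
    SELECT id, title, added_on
      FROM documents
     WHERE filepath IN ('/docs/report.txt', '/docs/notes.txt');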