Re: TSearch2 / Get all unique lexems
From | Teodor Sigaev
---|---
Subject | Re: TSearch2 / Get all unique lexems
Date | |
Msg-id | 43981267.6090802@sigaev.ru
In reply to | Re: TSearch2 / Get all unique lexems (Hannes Dorbath <light@theendofthetunnel.de>)
List | pgsql-general
> Thanks. I hoped for something possible inside a pl/pgsql proc. I'm
> trying to integrate pg_trgm with Tsearch2. I'm still on my UTF-8
> database. Yes, I know there is _NO_ UTF-8 support of any kind in
> Tsearch2 yet, but I got it working to a degree that is OK for my
> application (created my own stemmer variant, ispell dict, affix file,
> etc). The last missing bit is to get a source for pg_trgm. I cannot use
> the stat() function, because it breaks as soon as it sees a UTF-8 char.

I suppose a wordparser that is not UTF-8 compatible can produce illegal lexemes (containing only part of a multibyte character) and store them in a tsvector. tsvector does not check lexemes for brokenness (e.g. with a pg_verifymbstr() call), but stat() builds a text field, which PostgreSQL then verifies, and there it finds the incomplete multibyte characters.

The options I see (apart from waiting for the UTF-8 support in tsearch2, which we are developing now):

1. Modify the stat() function to check each text field and, if the check fails, remove that lexeme from the output.
2. Take the wordparser from CVS HEAD (ts_locale.[ch], wparser_def.c, wordparser/parser.[ch]). to_tsvector will work fine; to_tsquery will work correctly only with quoted strings (for example, 'foo' & 'bar' is good, foo & bar is bad). But casting 'asasas'::tsvector and dump/reload will not work correctly.

--
Teodor Sigaev                                   E-mail: teodor@sigaev.ru
                                                   WWW: http://www.sigaev.ru/
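As a side note for readers of the archive: the failure mode described above can be sketched outside PostgreSQL. This is a minimal illustration (in Python, not part of the original mail) of why a byte-oriented parser yields lexemes that a strict multibyte check such as pg_verifymbstr() later rejects: splitting a UTF-8 string on an arbitrary byte boundary leaves an incomplete character.

```python
# Sketch of the failure mode: a tokenizer that works on raw bytes can
# cut a multibyte UTF-8 character in half, producing an "illegal lexeme".
word = "привет"                 # 6 Cyrillic letters, 2 bytes each in UTF-8
raw = word.encode("utf-8")      # 12 bytes total
fragment = raw[:5]              # cut inside the third character

try:
    fragment.decode("utf-8")    # strict validation, like pg_verifymbstr()
except UnicodeDecodeError as err:
    print("incomplete multibyte char rejected:", err)
```

Storing such a fragment in a tsvector goes unnoticed, but as soon as stat() materializes it as a text value, the server's encoding verification fails, which matches the behavior reported above.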