Re: integrated tsearch doesn't work with non utf8 database
From: Heikki Linnakangas
Subject: Re: integrated tsearch doesn't work with non utf8 database
Date:
Msg-id: 46E55B1F.3090207@enterprisedb.com
In reply to: Re: integrated tsearch doesn't work with non utf8 database (Tom Lane <tgl@sss.pgh.pa.us>)
Replies: Re: integrated tsearch doesn't work with non utf8 database
List: pgsql-hackers
Tom Lane wrote:
> Teodor Sigaev <teodor@sigaev.ru> writes:
>>> Note the Seq Scan on pg_ts_config_map, with filter on ts_lexize(mapdict,
>>> $1). That means that it will call ts_lexize on every dictionary, which
>>> will try to load every dictionary. And loading danish_stem dictionary
>>> fails in latin2 encoding, because of the problem with the stopword file.
>
>> Attached patch should fix it, I hope.
>
> Uh, how will that help? AFAICS it still has to call ts_lexize with
> every dictionary.

No, ts_lexize is no longer in the seq scan filter, but in the sort key,
which is calculated only for those rows that match the filter
'mapcfg=? AND maptokentype=?'. It is pretty kludgey, though: the planner
could choose another plan that fails if the statistics were different.
Rewriting the function in C would be a more robust fix.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
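For illustration, a minimal sketch of the two query shapes under discussion. The column names are those of pg_ts_config_map, but the exact query text and parameter numbering are assumptions, not the actual patch:

    -- Sketch only: illustrative queries, not the patch itself.

    -- Before: ts_lexize() sits in the WHERE clause, so the Seq Scan
    -- filter evaluates it for every row, loading every dictionary
    -- (and failing on danish_stem in a latin2 database):
    SELECT m.mapdict::pg_catalog.regdictionary
    FROM pg_catalog.pg_ts_config_map AS m
    WHERE m.mapcfg = $1
      AND m.maptokentype = $2
      AND pg_catalog.ts_lexize(m.mapdict, $3) IS NOT NULL;

    -- After: ts_lexize() is moved into the sort key, which the
    -- executor computes only for rows that already passed the
    -- 'mapcfg = ? AND maptokentype = ?' filter:
    SELECT m.mapdict::pg_catalog.regdictionary
    FROM pg_catalog.pg_ts_config_map AS m
    WHERE m.mapcfg = $1
      AND m.maptokentype = $2
    ORDER BY pg_catalog.ts_lexize(m.mapdict, $3) IS NULL, m.mapseqno;

This helps only as long as the planner evaluates the sort key after the filter qualifications, which is the fragility the message points out: a different plan could still end up loading every dictionary.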