Re: tsvector pg_stats seems quite a bit off.
From:        Jan Urbański
Subject:     Re: tsvector pg_stats seems quite a bit off.
Date:
Msg-id:      4C02A81D.30209@wulczer.org
In reply to: Re: tsvector pg_stats seems quite a bit off. (Jesper Krogh <jesper@krogh.cc>)
Responses:   Re: tsvector pg_stats seems quite a bit off.
             Re: tsvector pg_stats seems quite a bit off.
List:        pgsql-hackers
On 30/05/10 09:08, Jesper Krogh wrote:
> On 2010-05-29 15:56, Jan Urbański wrote:
>> On 29/05/10 12:34, Jesper Krogh wrote:
>>> I can "fairly easy" try out patches or do other kind of testing.
>>
>> I'll try to come up with a patch for you to try and fiddle with these
>> values before Monday.

Here's a patch against recent git, but it should apply to 8.4 sources as
well. It would be interesting to measure the memory and time needed to
analyse the table after applying it, because we will now be using a much
bigger bucket size and I haven't done any performance impact testing on it.

I updated the initial comment block in compute_tsvector_stats, but the
prose could probably be improved.

> testdb=# explain select id from testdb.reference where document_tsvector
> @@ plainto_tsquery('where') order by id limit 50;
> NOTICE:  text-search query contains only stop words or doesn't contain
> lexemes, ignored

That's orthogonal to the issue with the statistics collection; you just
need to modify your stopwords list (for instance, make it empty).

Cheers,
Jan
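[Editor's note: two hedged sketches related to the message above. First, a rough
way to time the ANALYZE run and look at the collected lexeme statistics. The
table and column names are taken from Jesper's earlier example, and it assumes
the 8.4/9.0-era pg_stats view, where the MCELEM lexeme statistics show up in
most_common_vals / most_common_freqs; memory use of the backend during ANALYZE
is not reported by SQL and would have to be watched with OS tools such as top.]

    -- time a fresh ANALYZE of the table after applying the patch
    \timing on
    ANALYZE VERBOSE testdb.reference;

    -- inspect the lexemes and frequencies the new bucket size produced
    SELECT most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE schemaname = 'testdb'
       AND tablename  = 'reference'
       AND attname    = 'document_tsvector';

[For the stopwords remark, one possible way to get a text search configuration
that keeps words like 'where' is to copy the english configuration but map its
word token types to a snowball dictionary created without a StopWords file.
The name english_nostop is made up for this sketch.]

    -- snowball dictionary with no StopWords parameter, so nothing is discarded
    CREATE TEXT SEARCH DICTIONARY english_nostop (
        TEMPLATE = snowball,
        Language = english
    );

    CREATE TEXT SEARCH CONFIGURATION english_nostop (COPY = pg_catalog.english);

    ALTER TEXT SEARCH CONFIGURATION english_nostop
        ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                          word, hword, hword_part
        WITH english_nostop;

    -- with this configuration the query above no longer collapses to nothing
    SELECT plainto_tsquery('english_nostop', 'where');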
Attachments