Re: easy way of copying regex_t
From: Artur Zakirov
Subject: Re: easy way of copying regex_t
Date:
Msg-id: 56A607F1.2040303@postgrespro.ru
In reply to: Re: easy way of copying regex_t (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On 25.01.2016 13:07, Tomas Vondra wrote:
> Right, it's definitely not thread-safe so there'd need to be some lock
> protecting the regex_t copy. I was thinking about either using a group
> of locks, each protecting a small subset of the affixes (thus making it
> possible to work in parallel to some extent), or simply using a single
> lock and each process would make a private copy at the beginning.
>
> In the end, I've decided to do it differently, and simply parse the
> affix list from scratch in each process. The affix list is tiny and
> takes less than a millisecond to parse in most cases, and I don't have
> to care about the regex stuff at all. The main benefit is from sharing
> parsed wordlist anyway.

This is a nice decision, since the affix list is small. For our task I
will change shared_ispell to use this solution.

> It's an old-school shared segment created by the extension at init time.
> You're right the size is fixed so it's possible to run out of space by
> loading too many dictionaries, but that was not a big deal for the type
> of setups it was designed for - in those cases the list of dictionaries
> is stable, so it's possible to size the segment accordingly in advance.
>
> But I guess we could do better now that we have dynamic shared memory,
> possibly allocating one segment per dictionary as needed, or something
> like that.
>
> regards

Yes, it would be better, as we would not need to define the maximum size
of the shared segment in postgresql.conf. A rough sketch of that idea is
below.

-- 
Artur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company
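
For illustration only, here is a minimal sketch of the "one dynamic shared
memory segment per dictionary" idea, using PostgreSQL's dsm API on a recent
server version (dsm_create, dsm_segment_address, dsm_pin_segment,
dsm_pin_mapping, dsm_segment_handle). The SharedDictHeader struct and the
publish_dictionary() function are hypothetical names, not code from
shared_ispell, and error handling plus the reader side are omitted.

/*
 * Hypothetical sketch: publish one dictionary in its own dynamic shared
 * memory segment, sized for exactly that dictionary.  Struct and function
 * names are made up for illustration.
 */
#include "postgres.h"

#include "storage/dsm.h"

typedef struct SharedDictHeader
{
	Size		dict_size;		/* size of the serialized dictionary */
	/* serialized dictionary data follows the header */
} SharedDictHeader;

static dsm_handle
publish_dictionary(const char *serialized, Size len)
{
	dsm_segment *seg;
	SharedDictHeader *hdr;

	/* One segment per dictionary, so no fixed-size setting is needed. */
	seg = dsm_create(sizeof(SharedDictHeader) + len, 0);

	hdr = (SharedDictHeader *) dsm_segment_address(seg);
	hdr->dict_size = len;
	memcpy(hdr + 1, serialized, len);

	/* Keep the segment (and this backend's mapping) alive after we return. */
	dsm_pin_segment(seg);
	dsm_pin_mapping(seg);

	/* Other backends would attach via dsm_attach() using this handle. */
	return dsm_segment_handle(seg);
}

A dropped or reloaded dictionary could then release just its own segment,
rather than requiring the whole fixed area to be resized in advance.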