Re: [PROPOSAL] Shared Ispell dictionaries
From | Andres Freund
---|---
Subject | Re: [PROPOSAL] Shared Ispell dictionaries
Date | 2018-03-02 04:31:49
Msg-id | 20180302043149.tn2xjgt2vcigknhe@alap3.anarazel.de
In reply to | Re: [PROPOSAL] Shared Ispell dictionaries (Arthur Zakirov <a.zakirov@postgrespro.ru>)
Responses | Re: [PROPOSAL] Shared Ispell dictionaries, Re: [PROPOSAL] Shared Ispell dictionaries
List | pgsql-hackers
Hi,

On 2018-02-07 19:28:29 +0300, Arthur Zakirov wrote:

> +	{
> +		{"max_shared_dictionaries_size", PGC_POSTMASTER, RESOURCES_MEM,
> +			gettext_noop("Sets the maximum size of all text search dictionaries loaded into shared memory."),
> +			gettext_noop("Currently controls only loading of Ispell dictionaries. "
> +						 "If total size of simultaneously loaded dictionaries "
> +						 "reaches the maximum allowed size then a new dictionary "
> +						 "will be loaded into local memory of a backend."),
> +			GUC_UNIT_KB,
> +		},
> +		&max_shared_dictionaries_size,
> +		100 * 1024, 0, MAX_KILOBYTES,
> +		NULL, NULL, NULL
> +	},

So this uses shared memory, allocated at server start? That doesn't seem right. Wouldn't it make more sense to have a 'num_shared_dictionaries' GUC, and then allocate them with dsm? Or even better, not have any such limit and use a dshash table to point to the individual loaded dictionaries?

Is there any chance we can instead convert dictionaries into a form we can just mmap() into memory? That'd scale a lot higher and more dynamically.

Regards,

Andres
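[Editor's note: to make the dshash suggestion concrete, here is a minimal sketch, not code from the patch, of a registry keyed by dictionary file name. The entry layout, DICT_KEY_LEN, and load_ispell_into_dsa() are hypothetical names; the dshash and dsa calls are the PG11-era APIs from lib/dshash.h and utils/dsa.h. Because a dsa_area grows on demand, no max_shared_dictionaries_size-style reservation is made at postmaster start.]

```c
/*
 * Sketch only: a dshash-backed registry mapping a dictionary file name to
 * its loaded image in a dsa_area.  DictEntry, DICT_KEY_LEN, and
 * load_ispell_into_dsa() are hypothetical; the dshash/dsa calls are real.
 */
#include "postgres.h"

#include "lib/dshash.h"
#include "storage/lwlock.h"
#include "utils/dsa.h"

#define DICT_KEY_LEN 64			/* hypothetical fixed-size key */

typedef struct DictEntry
{
	char		dictfile[DICT_KEY_LEN]; /* key: dictionary file name */
	dsa_pointer body;			/* loaded, offset-based dictionary image */
	size_t		size;			/* size of that image */
} DictEntry;

static const dshash_parameters dict_params = {
	DICT_KEY_LEN,				/* key size */
	sizeof(DictEntry),			/* entry size */
	dshash_memcmp,				/* compare keys with memcmp */
	dshash_memhash,				/* hash the raw key bytes */
	LWTRANCHE_FIRST_USER_DEFINED	/* a real patch would register a tranche */
};

/* Hypothetical: parse Ispell files into DSA memory, return its pointer. */
extern dsa_pointer load_ispell_into_dsa(dsa_area *area, const char *dictfile,
										size_t *size);

/*
 * Find a dictionary, loading it on first use.  The registry itself would be
 * built once with dshash_create(area, &dict_params, NULL) and attached from
 * other backends via its handle.
 */
static dsa_pointer
dict_registry_lookup(dsa_area *area, dshash_table *registry,
					 const char *dictfile)
{
	char		key[DICT_KEY_LEN] = {0};
	DictEntry  *entry;
	dsa_pointer result;
	bool		found;

	strlcpy(key, dictfile, DICT_KEY_LEN);

	/* Returns the entry exclusively locked; concurrent loaders wait here. */
	entry = dshash_find_or_insert(registry, key, &found);
	if (!found)
		entry->body = load_ispell_into_dsa(area, dictfile, &entry->size);
	result = entry->body;
	dshash_release_lock(registry, entry);

	return result;
}
```

The first backend to request a dictionary holds the entry lock while parsing, so concurrent backends block in dshash_find_or_insert() rather than loading the same dictionary twice.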
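[Editor's note: the mmap() route assumes each dictionary is first compiled into a flat, pointer-free on-disk image (offsets instead of pointers), after which any backend can map it read-only and the kernel page cache shares the physical pages across processes. A sketch under that assumption, in plain POSIX C; map_dictionary is a hypothetical name.]

```c
/*
 * Sketch only: map a precompiled, read-only dictionary image.  Nothing is
 * reserved at server start; the OS pages the file in and out on demand.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns the mapped base address and length, or NULL on failure. */
static void *
map_dictionary(const char *path, size_t *len)
{
	int			fd;
	struct stat st;
	void	   *base;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return NULL;

	if (fstat(fd, &st) < 0 || st.st_size == 0)
	{
		close(fd);
		return NULL;
	}

	/*
	 * PROT_READ + MAP_SHARED: all backends mapping the same file share the
	 * same physical pages via the page cache.
	 */
	base = mmap(NULL, (size_t) st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);					/* the mapping survives the close */
	if (base == MAP_FAILED)
		return NULL;

	*len = (size_t) st.st_size;
	return base;
}
```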