Re: Caching Python modules
From | Jan Urbański
---|---
Subject | Re: Caching Python modules
Date |
Msg-id | 4E4BB227.1000108@wulczer.org
In reply to | Re: Caching Python modules (Jan Urbański <wulczer@wulczer.org>)
List | pgsql-hackers
On 17/08/11 14:19, Jan Urbański wrote:
> On 17/08/11 14:09, PostgreSQL - Hans-Jürgen Schönig wrote:
>> CREATE OR REPLACE FUNCTION textprocess.add_to_corpus(lang text, t text) RETURNS float4 AS $$
>>
>> from SecondCorpus import SecondCorpus
>> from SecondDocument import SecondDocument
>>
>> i am doing some intense text mining here.
>> the problem is: is it possible to cache those imported modules from function call to function call?
>> GD works nicely for variables but can this actually be done with imported modules as well?
>> the import takes around 95% of the total time so it is definitely something which should go away somehow.
>> i have checked the docs but i am not more clever now.
>
> After a module is imported in a backend, it stays in the interpreter's
> sys.modules dictionary and importing it again will not cause the module's
> Python code to be executed.
>
> As long as you are using the same backend you should be able to call
> add_to_corpus repeatedly and the import statements should take a long
> time only the first time you call them.
>
> This simple test demonstrates it:
>
> [example missing the slow() function code]

Oops, forgot to show the CREATE statement of the test function:

postgres=# create or replace function slow() returns void language plpythonu as $$
import slow
$$;

Jan
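[Editor's note: the sys.modules caching behaviour described above can be demonstrated outside PostgreSQL as well. A minimal sketch in plain Python; the module name `slowmod` and the temp-directory setup are invented for illustration:]

```python
import sys
import tempfile
import textwrap
import time

# Create a throwaway module whose top-level code records when it ran.
tmpdir = tempfile.mkdtemp()
with open(f"{tmpdir}/slowmod.py", "w") as f:
    f.write(textwrap.dedent("""
        import time
        EXECUTED_AT = time.time()  # set only when the module body executes
    """))

sys.path.insert(0, tmpdir)

import slowmod                       # first import: module body runs
first = slowmod.EXECUTED_AT

time.sleep(0.01)
import slowmod                       # second import: served from sys.modules

assert slowmod.EXECUTED_AT == first  # body did not run again
assert "slowmod" in sys.modules      # the interpreter-wide module cache
```

This is why, inside a single PostgreSQL backend, only the first call of a PL/Python function pays the import cost: the interpreter (and its sys.modules cache) lives as long as the backend does.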