Re: Sphinx indexing problem
From | Joshua Tolley
---|---
Subject | Re: Sphinx indexing problem
Date |
Msg-id | AANLkTinR6Fo2M3gAuduV78sqYqPEOw0RX_5-BdbeMdZS@mail.gmail.com
In reply to | Sphinx indexing problem (Mladen Gogala <mladen.gogala@vmsinfo.com>)
Responses | Re: Sphinx indexing problem
List | pgsql-novice
On Sun, May 23, 2010 at 4:36 PM, Mladen Gogala <mladen.gogala@vmsinfo.com> wrote:
> I am trying to create a Sphinx index on a fairly large Postgres table. My
> problem is the fact that the Postgres API is trying to put the entire
> result set into the memory:
>
> [root@medo etc]# ../bin/indexer --all
> Sphinx 0.9.9-release (r2117)
> Copyright (c) 2001-2009, Andrew Aksyonoff
>
> using config file '/usr/local/etc/sphinx.conf'...
> indexing index 'test1'...
> ERROR: index 'test1': sql_query: out of memory for query result
> (DSN=pgsql://news:***@medo:5432/news).
> total 0 docs, 0 bytes
> total 712.593 sec, 0 bytes/sec, 0.00 docs/sec
> total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
> total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
>
> Is there anything I can do to prevent the API from attempting to put the
> entire query result in memory?

Use a cursor, and fetch chunks of the result set one at a time.

http://www.postgresql.org/docs/current/interactive/sql-declare.html

--
Joshua Tolley / eggyknap
End Point Corporation
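For readers unfamiliar with the DECLARE page linked above, the chunked-fetch pattern looks roughly like the sketch below. It must run inside a transaction, and the table and column names (news_docs, id, title, body) and the batch size are placeholders, not anything from the original report:

```
-- Hypothetical sketch: stream a large result set in batches via a cursor.
-- Table/column names and the batch size of 10000 are assumptions.
BEGIN;

DECLARE doc_cur CURSOR FOR
    SELECT id, title, body FROM news_docs;

FETCH 10000 FROM doc_cur;  -- repeat this FETCH until it returns no rows

CLOSE doc_cur;
COMMIT;
```

Only one batch of rows is held in client memory at a time, which is what avoids the out-of-memory failure seen in the indexer output.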