Re: large xml database
From: Lutz Steinborn
Subject: Re: large xml database
Date:
Msg-id: 20101031070857.a9298792.l.steinborn@4c-ag.de
In reply to: large xml database (Viktor Bojović <viktor.bojovic@gmail.com>)
Responses: Re: large xml database
List: pgsql-sql
On Sat, 30 Oct 2010 23:49:29 +0200 Viktor Bojović <viktor.bojovic@gmail.com> wrote:

> many tries have failed because 8GB of ram and 10gb of swap were not enough.
> also sometimes i get that more than 2^32 operations were performed, and
> functions stopped to work.

We have a similar problem, and we use the Amara XML toolkit for Python. To avoid the big memory consumption, use pushbind. A 30 GB BMEcat catalog file takes at most about 20 minutes to import. It could be faster: we build complex objects with an ORM, so the time consumption depends on how complex the catalog is. If you use Amara only to convert the XML to CSV, the final import can be done much faster.

regards
--
Lutz
http://www.4c-gmbh.de
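For reference, here is a minimal sketch of the pushbind approach described above. It assumes Amara 1.x on Python 2; the file name, the ARTICLE element, and the field names are illustrative placeholders, not details from this thread:

    # Stream a large XML catalog and write one CSV row per record.
    # amara.pushbind() binds one matching element at a time and lets it
    # be discarded after each loop iteration, so memory use stays flat
    # no matter how large the input file is.
    import csv
    import amara

    out = open('articles.csv', 'wb')
    writer = csv.writer(out)

    # 'ARTICLE', 'SUPPLIER_AID', and 'DESCRIPTION_SHORT' are hypothetical
    # names; substitute the record element and child elements of your
    # own catalog.
    for article in amara.pushbind('catalog.xml', u'ARTICLE'):
        writer.writerow([
            unicode(article.SUPPLIER_AID).encode('utf-8'),
            unicode(article.DESCRIPTION_SHORT).encode('utf-8'),
        ])

    out.close()

The resulting CSV can then be bulk-loaded into PostgreSQL with COPY (for example: COPY articles FROM '/path/to/articles.csv' WITH CSV;), which is much faster than inserting row by row.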