plsql gets "out of memory"
| From | Rural Hunter |
|---|---|
| Subject | plsql gets "out of memory" |
| Date | |
| Msg-id | 4E5B8FF7.4040007@gmail.com |
| Replies | Re: plsql gets "out of memory" |
| List | pgsql-admin |
Hi all, I'm a newbie here. I'm testing pgsql with my mysql data; if the performance is good, I will migrate from mysql to pgsql. I installed pgsql 9.1rc on my Ubuntu server and am trying to import a large SQL file dumped from mysql using 'psql -f'. The file is around 30G and contains bulk INSERT commands. It ran for several hours and then aborted with an "out of memory" error. This is the tail of the log I got:

INSERT 0 280
INSERT 0 248
INSERT 0 210
INSERT 0 199
invalid command \n
out of memory

On the server side, I only found these errors about invalid UTF-8 characters, which relate to escape characters in the mysql export:

2011-08-29 19:19:29 CST ERROR: invalid byte sequence for encoding "UTF8": 0x00
2011-08-29 19:55:35 CST LOG: unexpected EOF on client connection

My understanding is that this is a client-side issue, not related to any server memory setting. But how can I adjust the memory setting of the psql program?

To handle the escape character '\', which is the default in mysql but not in pgsql, I have already made some rough modifications to the exported SQL dump file: sed "s/,'/,E'/g" | sed 's/\\0/ /g'. I guess there may still be some characters I'm not handling, and that might cause an INSERT command to be split into several invalid pgsql commands. Would that be the cause of the "out of memory" error?
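A minimal sketch of the sed preprocessing described above, run on a single hypothetical dump line (the table name and values are made up for illustration):

```shell
# A MySQL-style dump line containing a \0 escape; the table and values
# here are hypothetical examples, not from the actual 30G dump.
line="INSERT INTO t VALUES (1,'ab\\0cd');"

# Apply the two sed steps from the post:
#  1) ,'  ->  ,E'   so PostgreSQL treats the literal as an escape string
#  2) \0  ->  space since byte 0x00 is not valid inside a PostgreSQL text
#     value (this matches the "invalid byte sequence ... 0x00" server error)
fixed=$(printf '%s\n' "$line" | sed "s/,'/,E'/g" | sed 's/\\0/ /g')
echo "$fixed"
```

Any escape sequence these steps miss would leave a quote unbalanced, so the following text could be merged into one oversized or malformed command, which would be consistent with the `invalid command \n` message in the log above.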