Re: Optimize for insertions?
From | Gregory Wood
---|---
Subject | Re: Optimize for insertions?
Date |
Msg-id | 03df01c176c7$060317c0$7889ffcc@comstock.com
In reply to | Optimize for insertions? ("Gurupartap Davis" <partap@yahoo.com>)
List | pgsql-general
1. Look into using the COPY command. This allows fast bulk imports.

2. Consider tweaking your transactions. If you are not using transactions (possible considering you are migrating from MySQL), understand that PostgreSQL creates an implicit transaction for each statement unless an explicit transaction is created. That means you are dealing with the additional overhead of 200 million transactions on top of the INSERTs themselves. Of course, doing everything inside a single transaction can be cumbersome too. Try stuffing a thousand or so INSERTs into a single transaction and see if that speeds things up.

3. If you created indexes, drop them during your import and re-add them after you've finished. Maintaining those slows things down a bit as well.

Hope this helps!

Greg

----- Original Message -----
From: Gurupartap Davis
To: pgsql-general@postgresql.org
Sent: Monday, November 26, 2001 4:17 PM
Subject: [GENERAL] Optimize for insertions?

Hi,

I'm trying to migrate a MySQL database to PostgreSQL. I've got the tables all set up, but inserting the data from the MySQL dumps is taking forever. I've got about 200 million rows in the big table (growing by about 1.5 million per day), but at the 100 inserts/second I'm getting, this will take over 3 weeks. MySQL on the same machine averages about 1100-1200 inserts/second (no, I'm not running them both at the same time ;-)

Any thoughts on how to tweak the postmaster for quick inserts? I've got fsync turned off, but that's about all I've tried so far.

I'm running PostgreSQL 7.1.3 on Linux (Red Hat 7.2 with ext3) with a 700MHz processor, 256MB RAM, and an 80GB IDE HD.

Thanks
-partap
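The three tips above can be sketched in SQL. This is a rough illustration, not from the original mail; the table name `readings`, its columns, and the index name `readings_ts_idx` are all made up for the example:

```sql
-- Tip 3 (first half): drop secondary indexes before the bulk load,
-- so PostgreSQL doesn't maintain them row by row during the import.
DROP INDEX readings_ts_idx;

-- Tip 1: COPY pulls the whole data file in with a single command,
-- which is far faster than issuing one INSERT per row.
COPY readings FROM '/tmp/readings.dat';

-- Tip 2: if you must use plain INSERTs (e.g. replaying a dump),
-- wrap batches of ~1000 in an explicit transaction so each row
-- doesn't pay for its own implicit transaction.
BEGIN;
INSERT INTO readings VALUES ('2001-11-26 16:17:00', 42);
INSERT INTO readings VALUES ('2001-11-26 16:17:01', 43);
-- ...continue to roughly a thousand rows, then commit...
COMMIT;

-- Tip 3 (second half): rebuild the index once the data is loaded.
CREATE INDEX readings_ts_idx ON readings (ts);
```

The idea in the batching step is simply to amortize per-transaction overhead across many rows; the "thousand or so" figure is a starting point to experiment with, not a hard rule.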