Re: Slowdown problem when writing 1.7million records
From:        Gregory Wood
Subject:     Re: Slowdown problem when writing 1.7million records
Date:
Msg-id:      001101c0a0ce$b46c56c0$7889ffcc@comstock.com
In reply to: Slowdown problem when writing 1.7million records ("Stephen Livesey" <ste@exact3ex.co.uk>)
List:        pgsql-general
43 hours? Ouch, that is quite a long time! I'm no expert by any means, but
here are a few tips that I've picked up on this list that might help out
(rough sketches of each are tacked onto the end of this message, below the
quoted original):

1. The COPY command is blazing fast for importing. If you are certain your
input is clean, this is the way to go. Read more about that here:
http://www.postgresql.org/users-lounge/docs/7.0/postgres/sql-copy.htm

2. If you have any indexes, drop them, do your import, then re-add them
once the import is complete. Indexes slow inserts down quite a bit.

3. If you are using INSERTs (I can't see a COPY taking more than a few
minutes), make sure that you are wrapping them in an explicit transaction.
Otherwise, each INSERT becomes its own transaction, with all the overhead
that entails.

4. If you *are* using transactions, break them up into chunks. Trying to
maintain a single transaction over 1.7 million INSERTs will slow things
down. Personally, I'd probably go with about 500 at a time.

I'm sure someone else will have another suggestion or three...

Greg

----- Original Message -----
From: "Stephen Livesey" <ste@exact3ex.co.uk>
To: <pgsql-general@postgresql.org>
Sent: Tuesday, February 27, 2001 4:44 AM
Subject: Slowdown problem when writing 1.7million records


> I am very new to PostgreSQL and have installed v7.03 on a Red Hat Linux
> Server (v6.2). I am accessing the files using JDBC from a Windows 2000 PC.
>
> I have created a small file as follows:
>     CREATE TABLE expafh (
>         postcode CHAR(8) NOT NULL,
>         postcode_record_no INT,
>         street_name CHAR(30),
>         town CHAR(31),
>         PRIMARY KEY(postcode) )
>
> I am now writing 1.7million records to this file.
>
> The first 100,000 records took 15mins.
> The next 100,000 records took 30mins.
> The last 100,000 records took 4hours.
>
> In total, it took 43 hours to write 1.7million records.
>
> Is this sort of degradation normal using a PostgreSQL database?
>
> I have never experienced this sort of degradation with any other database,
> and I have done exactly the same test (using the same hardware) on the
> following databases:
>     DB2 v7        in total took 10hours 6mins
>     Oracle 8i     in total took 3hours 20mins
>     Interbase v6  in total took 1hr 41min
>     MySQL v3.23   in total took 54mins
>
> Any help or advice would be appreciated.
>
> Thanks
> Stephen Livesey
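Sketch for tip 1, the COPY route. The file path below is just a placeholder;
the file has to sit where the backend can read it, with columns in the same
order as the table definition:

    -- hypothetical server-side data file, tab-delimited by default
    COPY expafh FROM '/tmp/expafh.dat';

    -- or with an explicit delimiter
    COPY expafh FROM '/tmp/expafh.csv' USING DELIMITERS ',';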
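Sketch for tip 2, loading without the index. Since the only index on expafh
is the one the PRIMARY KEY creates, the simplest version is to create the
table without it and build a unique index after the load:

    -- table created without the primary key, just for the bulk load
    CREATE TABLE expafh (
        postcode            CHAR(8) NOT NULL,
        postcode_record_no  INT,
        street_name         CHAR(30),
        town                CHAR(31)
    );

    COPY expafh FROM '/tmp/expafh.dat';   -- same hypothetical path as above

    -- build the unique index once the data is in place
    CREATE UNIQUE INDEX expafh_postcode_key ON expafh (postcode);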
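Sketch for tips 3 and 4, batched transactions, written as the SQL the driver
ends up sending (the row values are invented). Over JDBC the same idea is
setAutoCommit(false) plus a commit() every few hundred rows:

    BEGIN;
    INSERT INTO expafh VALUES ('AB1 0AA', 1, 'High Street', 'Anytown');
    INSERT INTO expafh VALUES ('AB1 0AB', 2, 'Low Street',  'Anytown');
    -- ... roughly 500 rows per transaction ...
    COMMIT;

    BEGIN;
    -- ... next batch of ~500 rows ...
    COMMIT;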