Re: Slowdown problem when writing 1.7million records
| From | Tom Lane |
|---|---|
| Subject | Re: Slowdown problem when writing 1.7million records |
| Date | |
| Msg-id | 7749.983301941@sss.pgh.pa.us |
| In reply to | Slowdown problem when writing 1.7million records ("Stephen Livesey" <ste@exact3ex.co.uk>) |
| Responses | Re: Slowdown problem when writing 1.7million records, RE: Slowdown problem when writing 1.7million records |
| List | pgsql-general |
"Stephen Livesey" <ste@exact3ex.co.uk> writes:
> I have created a small file as follows:
> CREATE TABLE expafh (
>     postcode CHAR(8) NOT NULL,
>     postcode_record_no INT,
>     street_name CHAR(30),
>     town CHAR(31),
>     PRIMARY KEY(postcode) )
> I am now writing 1.7million records to this file.
> The first 100,000 records took 15mins.
> The next 100,000 records took 30mins
> The last 100,000 records took 4hours.
> In total, it took 43 hours to write 1.7million records.
> Is this sort of degradation normal using a PostgreSQL database?

No, it's not.  Do you have any triggers or rules on this table that you
haven't shown us?  How about other tables referencing this one as
foreign keys?  (Probably not, if you're running an identical test on
MySQL, but I just want to be sure that I'm not missing something.)

How exactly are you writing the records?  I have a suspicion that the
slowdown must be on the client side (perhaps some inefficiency in the
JDBC code?) but that's only a guess at this point.

			regards, tom lane
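[Editor's note: a common client-side cause of the kind of degradation described above is committing one transaction per inserted row instead of batching many rows per transaction. The sketch below illustrates the batched pattern. It is a hypothetical illustration, not code from this thread: it uses Python's built-in `sqlite3` module as a stand-in database (rather than PostgreSQL over JDBC), and the `expafh` column types are approximated with SQLite's type names.]

```python
import sqlite3

# Stand-in for the expafh table from the thread; sqlite3 is used only to
# make the example self-contained -- the batching principle is the same
# for PostgreSQL via JDBC or any other client interface.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE expafh (
        postcode TEXT PRIMARY KEY,
        postcode_record_no INTEGER,
        street_name TEXT,
        town TEXT
    )
""")

# Synthetic rows; the real workload in the thread was 1.7 million records.
rows = [("PC%06d" % i, i, "Some Street", "Some Town") for i in range(10_000)]

# Instead of committing after every single INSERT (the slow pattern),
# insert many rows per transaction and commit once per batch.
BATCH = 1_000
for start in range(0, len(rows), BATCH):
    conn.executemany(
        "INSERT INTO expafh VALUES (?, ?, ?, ?)",
        rows[start:start + BATCH],
    )
    conn.commit()

count = conn.execute("SELECT count(*) FROM expafh").fetchone()[0]
print(count)  # 10000
```

For PostgreSQL specifically, the equivalent moves are disabling autocommit and using batched prepared statements on the client, or bypassing row-by-row INSERTs entirely with the server's bulk-load path (`COPY`).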