besides dbfdump........ HOW TO DUMP!
From | harry.yau@grandford.com
---|---
Subject | besides dbfdump........ HOW TO DUMP!
Date |
Msg-id | 54792679.0312152302.17e783d1@posting.google.com
List | pgsql-general
Hi all,

I am working on a project to convert the data from dozens of .dbf files to a PostgreSQL database. I wrote some Perl scripts that use dbfdump to extract the data from .dbf files sitting on another remote server into a source text file. (Actually, I just mount the remote directory on the local server.) Then I modify the data in the source text file and write it out to a target text file, since the structure of the data differs between the .dbf files and the PostgreSQL database. Finally, I use PostgreSQL's COPY command to load the target text file into PostgreSQL.

It works fine! I have about 40 ~ 50 tables, each with about 1 ~ 2 million records, and everything is done in 2 hours. I think that is amazing (it used to take up to 8 hours with simple SELECT and INSERT statements). I run the same set of Perl scripts every morning, and it works great most of the time.

However, I have found that dbfdump gives me wrong output once or twice a week. It mangles the data in one field of a single record (usually a date field or a numeric field). Since the mangled data violates the format expected by the COPY statement, I can spot such a little mistake among millions of records. This problem doesn't happen every day, but it does happen.

I am wondering what causes this kind of problem. Is there any way to correct it, or to keep it from happening again?

Oh, I forgot to mention that everything runs on a Red Hat Linux 8.1 server, with PostgreSQL 7.3 and dbfdump 0.220.

Any suggestions will be welcome. Thank you very much!

Best regards,
Harry Yau
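For illustration, here is a minimal sketch of the kind of pipeline described above: dbfdump into a transform step, then COPY. The file names, table name, and column positions are hypothetical, and it assumes dbfdump accepts a --fs option to set the field separator (check `dbfdump --help` for your version); it is not the poster's actual script.

```perl
#!/usr/bin/perl
# Sketch only: hypothetical paths, table, and column layout.
use strict;
use warnings;

my $dbf    = '/mnt/remote/customers.dbf';   # hypothetical mounted .dbf
my $target = '/tmp/customers.copy';         # tab-separated file for COPY

# Dump the .dbf with a tab field separator so it matches COPY's default
# text format (assumes dbfdump supports --fs).
open my $in,  '-|', 'dbfdump', '--fs', "\t", $dbf  or die "dbfdump: $!";
open my $out, '>',  $target                        or die "$target: $!";

while (my $line = <$in>) {
    chomp $line;
    my @f = split /\t/, $line, -1;

    # ... reshape fields here to match the PostgreSQL table layout ...

    # Cheap sanity check: flag rows whose date column (hypothetically
    # field 2) looks mangled, instead of letting COPY abort on them later.
    warn "suspicious date in line $.: $line\n"
        if defined $f[2] && $f[2] !~ /^(\d{4}-\d{2}-\d{2})?$/;

    print {$out} join("\t", @f), "\n";
}
close $in  or die "dbfdump exited abnormally: $?";
close $out or die "close $target: $!";

# Load the result, e.g.:
#   psql -c "COPY customers FROM '/tmp/customers.copy'" mydb
```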