Thread: COPY optimization issue


COPY optimization issue

From:
terry@greatgulfhomes.com
Date:
My Postgres database does a nightly sync with a legacy database by
clobbering a Postgres table with data from a CSV file.

I currently just use the COPY command, but with over 800,000 records, this
takes quite some time.

Is there a faster way?

E.g., I notice that a validation failure of ANY record causes the entire
copy to roll back.  Is the begin/commit action wrapped around the copy
costing me CPU cycles?  And if so, can I turn it off, or is there a better
way than using COPY?

Note:  I do nightly vacuums, so deleted tuples are not the issue, I don't
think.
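[For concreteness, the nightly job described above might look like the sketch below; the table name, file path, and delimiter are hypothetical, and the syntax assumes the 7.2-era COPY form:]

```sql
BEGIN;
DELETE FROM legacy_sync;  -- clobber the existing rows
COPY legacy_sync FROM '/path/to/nightly.csv' USING DELIMITERS ',';
COMMIT;
```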

Thanks

Terry Fielder
Network Engineer
Great Gulf Homes / Ashton Woods Homes
terry@greatgulfhomes.com


Re: COPY optimization issue

From:
Tom Lane
Date:
terry@greatgulfhomes.com writes:
> I currently just use the COPY command, but with over 800,000 records, this
> takes quite some time.
> Is there a faster way?

Dropping and recreating the indexes might help.  See
http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/populate.html
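[A minimal sketch of the drop-and-recreate approach from the linked populate guide; the table, index, and column names are hypothetical, and COPY syntax assumes the 7.2-era form.  Loading into an unindexed table avoids maintaining the index row-by-row; one bulk CREATE INDEX afterwards is much cheaper:]

```sql
BEGIN;
DELETE FROM legacy_sync;
DROP INDEX legacy_sync_key_idx;  -- hypothetical index name
COPY legacy_sync FROM '/path/to/nightly.csv' USING DELIMITERS ',';
CREATE INDEX legacy_sync_key_idx ON legacy_sync (legacy_key);
COMMIT;
ANALYZE legacy_sync;  -- refresh planner statistics after the bulk load
```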

            regards, tom lane