Re: Bulk Insert into PostgreSQL
From | Pavel Stehule
Subject | Re: Bulk Insert into PostgreSQL
Date |
Msg-id | CAFj8pRA+3xfHkTWnEr89xXWhC9dBMMF9a1z5ZdaQMmEemghDkg@mail.gmail.com
In reply to | Bulk Insert into PostgreSQL (Srinivas Karthik V <skarthikv.iitb@gmail.com>)
Responses | Re: Bulk Insert into PostgreSQL
List | pgsql-hackers
2018-06-27 13:18 GMT+02:00 Srinivas Karthik V <skarthikv.iitb@gmail.com>:
Hi,
I am performing a bulk insert of 1TB TPC-DS benchmark data into PostgreSQL 9.4. It's taking around two days to insert 100 GB of data. Please let me know your suggestions to improve the performance. Below are the configuration parameters I am using:

shared_buffers = 12GB
maintenance_work_mem = 8GB
work_mem = 1GB
fsync = off
synchronous_commit = off
checkpoint_segments = 256
checkpoint_timeout = 1h
checkpoint_completion_target = 0.9
checkpoint_warning = 0
autovacuum = off

Other parameters are set to their default values. Moreover, I have specified the primary key constraint during table creation. This is the only index created before data loading, and I am sure there are no other indexes apart from the primary key column(s).
The main factor is using COPY instead of INSERTs.
Loading a 100GB database should take a few hours, not two days.
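A single COPY per table replaces millions of individual INSERT statements and avoids the per-statement parse and transaction overhead. A minimal sketch, assuming the dsdgen flat files are pipe-delimited and the path shown is illustrative (dsdgen also emits a trailing '|' on every line, which usually has to be stripped before COPY will accept the rows):

    -- server-side load; the file must be readable by the PostgreSQL server process
    COPY store_sales FROM '/data/tpcds/store_sales.dat'
        WITH (FORMAT text, DELIMITER '|');

If the files sit on the client machine, the psql \copy meta-command streams them over the connection instead:

    \copy store_sales from '/data/tpcds/store_sales.dat' with (format text, delimiter '|')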
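Because the primary key is the only index, another standard bulk-load technique is to drop the constraint before the load and recreate it afterwards, so the btree is built once with a single sort instead of being maintained row by row. A sketch, assuming the default constraint name and the store_sales key columns from the TPC-DS schema:

    -- drop the PK before loading (default constraint name assumed)
    ALTER TABLE store_sales DROP CONSTRAINT store_sales_pkey;
    -- ... bulk load with COPY as above ...
    -- rebuild the index in one pass; a large maintenance_work_mem helps this sort
    ALTER TABLE store_sales ADD PRIMARY KEY (ss_item_sk, ss_ticket_number);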
Regards
Pavel
Regards,
Srinivas Karthik