Re: [PERFORM] performance problem on big tables

From:        Mariel Cherkassky
Subject:     Re: [PERFORM] performance problem on big tables
Date:
Msg-id:      CA+t6e1nG8bF4-hrvjijhJ_nC5OXmw32eXNYdxuYMkPSk2QLrag@mail.gmail.com
In reply to: [PERFORM] performance problem on big tables (Mariel Cherkassky <mariel.cherkassky@gmail.com>)
Responses:   Re: [PERFORM] performance problem on big tables
             Re: [PERFORM] performance problem on big tables
List:        pgsql-performance
I have performance issues with two big tables that are located on a remote Oracle database. I'm running the query:
insert into local_postgresql_table select * from oracle_remote_table.
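For context, a minimal sketch of how such a setup typically looks, assuming the remote table is attached through oracle_fdw (all object names here are hypothetical; the prefetch option, if your oracle_fdw version supports it, raises the number of rows pulled from Oracle per round trip, and the default is fairly small):

create extension if not exists oracle_fdw;

create server oracle_srv foreign data wrapper oracle_fdw
    options (dbserver '//oracle-host:1521/ORCL');

create user mapping for current_user server oracle_srv
    options (user 'scott', password 'tiger');

-- a larger prefetch reduces the number of network round trips
create foreign table oracle_remote_table (
    id   integer,
    data text
) server oracle_srv
  options (schema 'SCOTT', table 'BIG_TABLE', prefetch '10240');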
The first table has 45M records and its size is 23GB. The import of the data from the remote Oracle database takes 1 hour and 38 minutes. After that I create 13 regular indexes on the table, at about 10 minutes per index, 2 hours and 10 minutes in total.
The second table has 29M records and its size is 26GB. The import of the data from the remote Oracle database takes 2 hours and 30 minutes. The creation of the indexes takes 1 hour and 30 minutes (the single-column indexes take about 5 minutes each, the multi-column indexes about 11 minutes each).
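Since the index builds dominate the second phase, one thing worth checking is that each session building an index gets a large sort buffer, and that independent indexes are built from separate connections so they run concurrently. A sketch, with a hypothetical index and column name (maintenance_work_mem is already 1500MB globally in my setup; the set here just makes the per-session value explicit):

-- in each session that builds one of the indexes:
set maintenance_work_mem = '1500MB';
create index idx_local_col1 on local_postgresql_table (col1);

Independent indexes on the same table can be built from separate connections at the same time; each build scans the table on its own, so concurrency is limited mainly by I/O bandwidth and available memory.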
These operations are very problematic for me and I'm searching for a way to improve their performance. The configuration parameters I have set:

min_parallel_relation_size = 200MB
max_parallel_workers_per_gather = 5
max_worker_processes = 8
effective_cache_size = 2500MB
work_mem = 16MB
maintenance_work_mem = 1500MB
shared_buffers = 2000MB

The machine has 5GB of RAM and 8 CPU cores.

-I tried running select count(*) on the table in both Oracle and PostgreSQL; the running times are almost equal.
-Before importing the data I drop the indexes and the constraints (a sketch of the drop/recreate cycle follows this list).
-I tried copying a 23GB file from the Oracle server to the PostgreSQL server; it took 12 minutes.
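For completeness, the drop-before-load / recreate-after-load cycle I mean is roughly the following (the constraint and index names are hypothetical; yours will differ):

-- before the bulk insert
alter table local_postgresql_table drop constraint if exists local_postgresql_table_pkey;
drop index if exists idx_local_col1;

insert into local_postgresql_table select * from oracle_remote_table;

-- after the bulk insert
alter table local_postgresql_table add primary key (id);
create index idx_local_col1 on local_postgresql_table (col1);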
Please advise how I can proceed. How can I improve the performance of these operations?