Regarding Postgres - Insertion Time Getting Increased As Data Volume is getting increased
From | Rajnish Vishwakarma
Subject | Regarding Postgres - Insertion Time Getting Increased As Data Volume is getting increased
Date | |
Msg-id | CADH9T5OtrQD3cr17j8miFzEX=brueOH2qC_=TP8oZAROG+sd2g@mail.gmail.com
Responses |
Re: Regarding Postgres - Insertion Time Getting Increased As Data Volume is getting increased
Re: Regarding Postgres - Insertion Time Getting Increased As Data Volume is getting increased |
List | pgsql-general
Hi Postgres Team,
Below are the scenarios we are dealing with:
1) There are 20 tables, each with about 150 columns on average.
2) There are 20 threads handled by a thread pool executor (we are using Python's psycopg2 module/library to fetch the data).
3) I am using the statement below to insert the data with psycopg2's execute(...) command:
sql_stmt = "INSERT INTO " + name_Table + final_col_string + " VALUES " + str(tuple(array_of_curly_values))
print('Sql statement', sql_stmt)
col_cursor_db = db_conn.cursor()
v = col_cursor_db.execute(sql_stmt)
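For context, this is roughly what I mean by the statement-building step. The sketch below shows the same insert built with %s placeholders instead of concatenating the values into the SQL text, so psycopg2 can bind the values itself (the table name "sensor_data" and its columns here are made up for illustration; `execute_values` is from `psycopg2.extras`):

```python
# Sketch: build one parameterized INSERT template per table, then let
# psycopg2 bind the values, instead of concatenating them into the SQL text.
def build_insert(table, columns):
    # One %s placeholder per column, e.g. "(%s, %s, %s)"
    placeholders = ", ".join(["%s"] * len(columns))
    return "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(columns), placeholders
    )

stmt = build_insert("sensor_data", ["ts", "temp", "pressure"])
print(stmt)  # INSERT INTO sensor_data (ts, temp, pressure) VALUES (%s, %s, %s)

# With a live connection (omitted here), the rows could then be batched:
#   from psycopg2.extras import execute_values
#   execute_values(cur, "INSERT INTO sensor_data (ts, temp, pressure) VALUES %s", rows)
```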
Earlier, with the same 22 threads running, the insertion time gradually increased from 1 second to 30-35 seconds.
Requesting the pgsql-general list to help me out on this.
How can I increase the INSERT speed to minimize the insertion time taken by each thread in the thread pool?
Are there any Python libraries other than psycopg2 that would help?
Are there other functions in psycopg2 I should be using?
Or what performance tuning has to be done to increase the insertion speed?