Performance of batch COMMIT

From: Benjamin Arai
Subject: Performance of batch COMMIT
Date:
Msg-id: 007801c604d4$9a965880$d7cc178a@uni
Responses: Re: Performance of batch COMMIT  ("Jim C. Nasby" <jnasby@pervasive.com>)
List: pgsql-general
Each week I have to update a very large database.  Currently I run a COMMIT about every 1000 queries.  This vastly increased performance, but I am wondering whether it can be increased further.  I could send all of the queries to a file, but COPY doesn't support plain queries such as UPDATE, so I don't think that is going to help.  The only time I have to run a COMMIT is when I need to make a new table.  The server has 4GB of memory and fast everything else.  The only postgresql.conf variable I have changed is for the shared_memory.
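For reference, the commit-every-1000-queries pattern described above can be sketched as follows.  This is a minimal illustration, not the poster's actual script: `conn` and `update_statements` are hypothetical placeholders for a DB-API connection and the weekly UPDATE workload.

```python
from itertools import islice

def batches(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage with a DB-API connection (names are illustrative):
#
#   cur = conn.cursor()
#   for batch in batches(update_statements, 1000):
#       for stmt in batch:
#           cur.execute(stmt)
#       conn.commit()   # one COMMIT per batch of up to 1000 statements
```

Grouping statements between commits like this amortizes the per-transaction overhead, which is why the poster saw a large speedup over committing after every statement.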
 
Would sending all of the queries in a single query string increase performance?
 
What is the optimal batch size for commits?
 
Are there any postgresql.conf variables that should be tweaked?
 
Anybody have any suggestions?
