Transactions vs speed.
From    | mlw
Subject | Transactions vs speed.
Date    |
Msg-id  | 3A60FF35.EDE3516C@mohawksoft.com
Replies | Re: Transactions vs speed.
        | Re: Transactions vs speed.
List    | pgsql-hackers
I have a question about Postgres. Take this update:

    update table set field = 'X';

This is a very expensive operation when the table has millions of rows; it takes over an hour. If I dump the database, process the data with perl, and then reload the data, it takes minutes. Most of the time is spent creating indexes.

I am not asking for a feature, I am just musing. I have a database update procedure which has to merge our data with that of more than one third party. It takes 6 hours to run.

Do you guys know of any tricks that would allow Postgres to operate really fast, under the assumption that it is operating on tables which are not otherwise being used? LOCK does not seem to make much difference.

Any bit of info would be helpful.

--
http://www.mohawksoft.com
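(A rough sketch of the two paths described above, for comparison. Table, column, index, and file names are hypothetical placeholders, not anything from the original message, and the unload/reload variant is only the workaround the poster alludes to, not a recommended or tested recipe.)

    -- In-place bulk update (reported above as taking over an hour on millions of rows):
    UPDATE big_table SET field = 'X';

    -- Unload, transform outside the database, then bulk-load and rebuild indexes last,
    -- so index maintenance is not paid once per updated row:
    DROP INDEX big_table_field_idx;
    COPY big_table TO '/tmp/big_table.dat';
    -- ... transform /tmp/big_table.dat with perl ...
    TRUNCATE big_table;
    COPY big_table FROM '/tmp/big_table.dat';
    CREATE INDEX big_table_field_idx ON big_table (field);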