Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?
From: Alvaro Herrera
Subject: Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?
Date:
Msg-id: 20141205115919.GC1768@alvh.no-ip.org
In reply to: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ? (Nathanael Terrien <nte@mustinformatique.fr>)
Responses:
  Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?
  Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?
List: pgsql-odbc
Nathanael Terrien wrote:

> Hi List.
>
> Our application does something like this, through psqlodbc :
> ------------------------------------------------------------------------------
> Open transaction (« BEGIN »)
> FOR x TO y STEP 1
>     Do Stuff
>     « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
>     Do Stuff
>     « UPDATE table1 SET col1=z WHERE condition1 ; »
>     Do Stuff
> NEXT x
> End transaction (« COMMIT »)
> ------------------------------------------------------------------------------
>
> Against PostgreSQL 8.4 : no problem.
> Against PostgreSQL 9.x : starting at about a few hundred loops (locks),
> the process slows down, and continues to slow down exponentially, until
> the COMMIT happens.

Exactly which version is 9.x?  We solved a number of issues in FOR UPDATE locking in early 9.3 minor releases; these should all be fixed in 9.3.5.

You might be running into the problem supposedly fixed by the commit below, but that would imply you're on 9.3.2 or earlier, which is inadvisable because of other data-eating bugs:

commit 0bc00363b9b1d5ee44a0b25ed2dfc83f81e68258
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Fri Dec 13 17:16:25 2013 -0300

    Rework MultiXactId cache code

    The original performs too poorly; in some scenarios it shows up way
    too high while profiling.  Try to make it a bit smarter to avoid
    excessive costs.  In particular, make it have a maximum size, and have
    entries be sorted in LRU order; once the max size is reached, evict
    the oldest entry to keep it from growing too large.

    Per complaint from Andres Freund in connection with new tuple freezing
    code.

Now that I think about it, maybe in your case the cache is not being useful for some reason or other, and is instead causing a slowdown.

Is this plpgsql?  If so, do you have EXCEPTION blocks in the plpgsql code?  Maybe SAVEPOINTs somewhere?  (Does the ODBC driver create SAVEPOINTs automatically?)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
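For concreteness, the quoted loop maps to something like the following plpgsql DO block. This is a minimal sketch: the table layout, the "id = 1" condition, and the loop bound are hypothetical stand-ins, and the real application issues the equivalent statements one at a time through psqlodbc rather than server-side.

    BEGIN;
    DO $$
    DECLARE
        v integer;
    BEGIN
        FOR x IN 1..1000 LOOP
            -- lock the row, then update it, as in the quoted pseudocode
            SELECT col1 INTO v FROM table1 WHERE id = 1 FOR UPDATE;
            UPDATE table1 SET col1 = v + 1 WHERE id = 1;
        END LOOP;
    END;
    $$;
    COMMIT;

A DO block runs inside the surrounding transaction, and with no savepoints involved every re-lock here is taken by the same transaction ID, so this version by itself should not create MultiXactIds.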
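The EXCEPTION/SAVEPOINT question matters because each plpgsql EXCEPTION clause starts a subtransaction (an implicit savepoint). When the same row keeps being re-locked from different subtransactions of one top-level transaction, each re-lock records the lockers in a new, larger MultiXactId, which is the code path the cache rework above is concerned with. A hypothetical variant of the sketch that would exercise that path:

    DO $$
    DECLARE
        v integer;
    BEGIN
        FOR x IN 1..1000 LOOP
            BEGIN
                -- the EXCEPTION clause makes this inner block run in its
                -- own subtransaction on every iteration
                SELECT col1 INTO v FROM table1 WHERE id = 1 FOR UPDATE;
                UPDATE table1 SET col1 = v + 1 WHERE id = 1;
            EXCEPTION WHEN OTHERS THEN
                RAISE;
            END;
        END LOOP;
    END;
    $$;

The same pattern can also come from the driver rather than the application: psqlodbc's statement-level rollback-on-error mode wraps each statement in its own SAVEPOINT, which bears on the last question above.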