Re: pgsql functions and transactions?
From | Tom Lane
---|---
Subject | Re: pgsql functions and transactions?
Date |
Msg-id | 2045.1093635253@sss.pgh.pa.us
In reply to | Re: pgsql functions and transactions? (Betsy Barker <betsy.barker@supportservicesinc.com>)
Responses | Re: pgsql functions and transactions?
List | pgsql-novice
Betsy Barker <betsy.barker@supportservicesinc.com> writes:
> I'm not touching a lot of different tables per se, but I have about 10
> functions that each process one or more cursors that all combined end
> up creating about 45,000 records. The functions cascade.

That doesn't in itself seem like it would require locking a lot of
different tables.

> And like I said, I get the error on my development box with 512 M of
> RAM. Production has 3 G of RAM.

Available RAM has nothing to do with this --- you are overflowing the
lock table in PG shared memory, which is sized according to
max_locks_per_transaction (times max_connections). So kicking up that
parameter should fix it. I'm just curious as to why you're overflowing
the default setting --- we don't see that happen all that often.

> Can I ask you what you mean by "are you touching a whole lot of
> different tables in one transaction?" Do I have a transaction?

Yes, you do --- if you're using JDBC then the driver's autocommit
setting determines how long the transaction lasts, but in any case it
will last at least as long as one SQL statement sent to the backend.
So a pile of nested functions will necessarily all execute in one
transaction. If that whole process involves accessing more than a few
hundred tables, you'll need to do something with
max_locks_per_transaction. But if you're only accessing a few tables
(say tens) then there's something else going on here.

			regards, tom lane
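For readers hitting the same lock-table overflow, a minimal sketch of how to inspect and raise the setting Tom mentions follows. The parameter names are real PostgreSQL GUCs; the value 256 is only an illustrative choice, not a recommendation from this thread.

```sql
-- Inspect the current lock-table sizing. Shared memory holds roughly
-- max_locks_per_transaction * max_connections lock entries in total.
SHOW max_locks_per_transaction;   -- default is 64
SHOW max_connections;

-- See how many lock entries the cluster is holding right now:
SELECT count(*) FROM pg_locks;

-- Raising the limit means editing postgresql.conf and restarting the
-- server, e.g. (illustrative value):
--   max_locks_per_transaction = 256
```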
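To make the last point concrete, here is a hedged sketch (the tables and functions are invented for illustration, using modern PL/pgSQL syntax) showing that a cascade of function calls runs inside a single transaction, and that every relation touched anywhere in the cascade holds a lock-table entry until commit:

```sql
CREATE TABLE t1 (x integer);
CREATE TABLE t2 (x integer);

-- The inner function touches a second table.
CREATE FUNCTION inner_fn() RETURNS void AS $$
BEGIN
    INSERT INTO t2 VALUES (1);   -- acquires a lock on t2
END;
$$ LANGUAGE plpgsql;

-- The outer function touches one table and cascades into inner_fn().
CREATE FUNCTION outer_fn() RETURNS void AS $$
BEGIN
    INSERT INTO t1 VALUES (1);   -- acquires a lock on t1
    PERFORM inner_fn();          -- runs in the same transaction
END;
$$ LANGUAGE plpgsql;

BEGIN;
SELECT outer_fn();
-- Both t1 and t2 now show up as relation locks held by this backend;
-- the entries are released only at COMMIT/ROLLBACK.
SELECT relation::regclass AS table_name, mode
  FROM pg_locks
 WHERE locktype = 'relation' AND pid = pg_backend_pid();
COMMIT;
```

If each of ten cascading functions opens cursors over many distinct tables, the per-transaction lock count can climb into the hundreds, which is exactly the overflow described above.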