Re: Trigger function in a multi-threaded environment behavior
From: Robert Haas
Subject: Re: Trigger function in a multi-threaded environment behavior
Date:
Msg-id: AANLkTimQsgv0aiO2ivVQ6r1Pte6v7gXgGDMRP7i3C2MQ@mail.gmail.com
In reply to: Trigger function in a multi-threaded environment behavior (Hardik Belani <hardikbelani@gmail.com>)
List: pgsql-hackers
On Tue, Jun 1, 2010 at 7:10 AM, Hardik Belani <hardikbelani@gmail.com> wrote:
> If we introduce postgres triggers (trigger functions) on some of the
> tables to track insert/update/delete operations (this is done by keeping
> and updating a counter field for every insert, update and delete
> operation performed on a set of tables inside the trigger function), one
> or more threads get stuck on a lock while executing the query, to the
> extent that sometimes the database tables cannot be updated even from
> pgadmin.

Well, if you have a lot of concurrent backends trying to update the same
counters, it is pretty reasonable to think that you're going to have some
lock contention.  There are a variety of ways to work around this: insert
rows (instead of updating) and summarize them later, write the data to a
flat file somewhere and summarize it later, use the built-in statistics
mechanisms, etc.

Fundamentally, the problem is that if transaction A is adding 1 to a
counter and transaction B is adding 1 to the same counter, one must wait
for the other to commit.  If A gets there first and updates the counter
from, say, 37 to 38, B has to wait to see whether A commits or aborts.
If A aborts, B must add 1 to 37; if A commits, B must add 1 to 38.  Until
A commits or aborts, B doesn't know what value it's adding 1 *to*.  So
every transaction updating that counter serializes on the counter itself.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
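A minimal sketch of the "insert rows and summarize them later" approach described above, assuming PL/pgSQL; every table, column, and function name here is made up for illustration and does not come from the thread:

    -- Summary table updated only by the background summarization job.
    CREATE TABLE change_counts (
        table_name text PRIMARY KEY,
        n          bigint NOT NULL DEFAULT 0
    );

    -- One delta row per tracked change; concurrent writers only ever
    -- insert here, so they never block each other on a shared counter row.
    CREATE TABLE change_deltas (
        table_name text NOT NULL,
        delta      integer NOT NULL
    );

    CREATE OR REPLACE FUNCTION record_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO change_deltas (table_name, delta)
        VALUES (TG_TABLE_NAME, 1);
        RETURN NULL;  -- return value of an AFTER ... FOR EACH ROW trigger is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER track_changes
        AFTER INSERT OR UPDATE OR DELETE ON tracked_table
        FOR EACH ROW EXECUTE PROCEDURE record_change();

    -- Periodic summarization, run by a single background job.  A single
    -- snapshot for the whole transaction means the rows summed and the
    -- rows deleted are the same set.
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    UPDATE change_counts c
       SET n = c.n + d.total
      FROM (SELECT table_name, sum(delta) AS total
              FROM change_deltas
             GROUP BY table_name) d
     WHERE c.table_name = d.table_name;
    DELETE FROM change_deltas;
    COMMIT;

In this sketch the only remaining row-level contention is on change_counts, and only the summarization job ever touches that table (it assumes a change_counts row already exists for each tracked table); the application's writers pay just the cost of one extra insert per modified row.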