Discussion: deadlock problem in Ad serving
Hi,

I use PostgreSQL for serving ads and counting impressions of the ad banners.
Whenever an ad is served I do an update:

  update banner_stats set imp=imp+1 where uniqid=4330

inside a transaction. In a busy web environment like ours there is a high
chance that the same page is displayed concurrently to two different viewers.
I have been able to verify, using two psql sessions, that unless one
transaction commits the update, the update from the second client (also in a
transaction block) keeps waiting. As a result such updates get queued up and
I finally get a "deadlock detected" error:

~~~~~~~~~~~~~~~~~~~~~~
Error: DBD::Pg::st execute failed: ERROR: deadlock detected at
/usr/local/perlapache/lib/perl/Banner.pm line 71, <GEN1> line 7.
~~~~~~~~~~~~~~~~~~~~~~

Can anyone tell me how I should go about solving this problem?

regards,
mallah.

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com, India's Leading B2B eMarketplace.
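For reference, a minimal Perl/DBI sketch of the kind of serving code that
produces this behaviour: a transaction that stays open while several banner
updates are issued, so each row lock is held until the final commit. The
table and column names come from the post above; the connection string and
the list of banner ids are assumptions made purely for illustration.

    #!/usr/bin/perl
    # Sketch only: one transaction held open across several banner updates.
    # Two such requests that touch the same rows in different orders will
    # block on each other's row locks, and one of them is eventually killed
    # with "deadlock detected".
    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=ads", "aduser", "secret",
                           { AutoCommit => 0, RaiseError => 1 });

    my @banner_ids_on_page = (4330, 10);   # hypothetical banners on one page

    for my $uniqid (@banner_ids_on_page) {
        $dbh->do("UPDATE banner_stats SET imp = imp + 1 WHERE uniqid = ?",
                 undef, $uniqid);          # row lock held until commit
    }

    # ... other page work happens here while the row locks are still held ...

    $dbh->commit;
    $dbh->disconnect;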
> ~~~~~~~~~~~~~~~~~~~~~~
> Error: DBD::Pg::st execute failed: ERROR: deadlock detected at
> /usr/local/perlapache/lib/perl/Banner.pm line 71, <GEN1> line 7.
> ~~~~~~~~~~~~~~~~~~~~~~

It is a genuine error; it occurs when two or more transactions try to
update/delete the same record simultaneously. You can work around it by
locking the table in SHARE ROW EXCLUSIVE mode. More details can be found in
the LOCK command documentation.

regards,
bhuvaneswaran
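A minimal sketch of the table-lock workaround Bhuvan describes, assuming
Perl/DBI with AutoCommit turned off (connection details are hypothetical).
As noted later in the thread, this serializes every writer on banner_stats,
so it avoids the deadlock by moving the waiting to the table lock:

    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=ads", "aduser", "secret",
                           { AutoCommit => 0, RaiseError => 1 });

    # Take the table lock up front; other writers now queue here instead of
    # deadlocking on individual row locks.
    $dbh->do("LOCK TABLE banner_stats IN SHARE ROW EXCLUSIVE MODE");
    $dbh->do("UPDATE banner_stats SET imp = imp + 1 WHERE uniqid = ?",
             undef, 4330);
    $dbh->commit;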
Thanks for your response, Bhuvan. I will read that documentation and try it
again.

regards,
mallah.

On Monday 20 January 2003 03:32 pm, Bhuvan A wrote:
> It is a genuine error; it occurs when two or more transactions try to
> update/delete the same record simultaneously. You can work around it by
> locking the table in SHARE ROW EXCLUSIVE mode. More details can be found
> in the LOCK command documentation.

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com, India's Leading B2B eMarketplace.
Bhuvan A <bhuvansql@myrealbox.com> writes:
> It is a genuine error; it occurs when two or more transactions try to
> update/delete the same record simultaneously. You can work around it by
> locking the table in SHARE ROW EXCLUSIVE mode.

That will just move his problem somewhere else. I think the only real
answer is to use shorter transactions (one per page, not one per several
pages).

			regards, tom lane
On Mon, 20 Jan 2003, Tom Lane wrote:
>
> Bhuvan A <bhuvansql@myrealbox.com> writes:
> >> Error:...deadlock detected...
> > ... You can overcome this by locking the table in share row
> > exclusive mode also...
>
> ...use shorter transactions (one per page, not one per several pages).

Hmm... with his query:

  "update banner_stats set imp=imp+1 where uniqid=4330"

Is it true that the problem happens when the updates are done in a different
order by two transactions, like this?

  trans.1: "update banner_stats set imp=imp+1 where uniqid=4330"
  trans.2: "update banner_stats set imp=imp+1 where uniqid=10"
  trans.1: "update banner_stats set imp=imp+1 where uniqid=10"
  trans.2: "update banner_stats set imp=imp+1 where uniqid=4330"

If so, then could the problem be avoided if his application logic always did
the updates in the same order? I.e., make sure each transaction does the
updates in the same order by sorting the updates on uniqid in the client?

Ron
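A small Perl/DBI sketch of the ordering fix Ron suggests: if a transaction
must update several banner rows, sorting the ids before issuing the updates
means every transaction acquires the row locks in the same sequence, so they
can block but not deadlock. Everything except the table and column names is
a hypothetical illustration:

    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=ads", "aduser", "secret",
                           { AutoCommit => 0, RaiseError => 1 });

    # Update the page's banners in a fixed (ascending uniqid) order so that
    # concurrent transactions take the row locks in the same sequence.
    my @banner_ids_on_page = (4330, 10);

    for my $uniqid (sort { $a <=> $b } @banner_ids_on_page) {
        $dbh->do("UPDATE banner_stats SET imp = imp + 1 WHERE uniqid = ?",
                 undef, $uniqid);
    }
    $dbh->commit;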
> It is a genuine error; it occurs when two or more transactions try to
> update/delete the same record simultaneously. You can work around it by
> locking the table in SHARE ROW EXCLUSIVE mode. More details can be found
> in the LOCK command documentation.

Since you're using DBI, all you need to do is turn AutoCommit off when you
connect and call commit after you verify your transaction succeeded.
Turning off AutoCommit wraps a 'begin transaction' around everything.

More info: http://www.perldoc.com/cpan/DBI.html#Transactions
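A sketch of the DBI pattern being described: connect with AutoCommit
disabled, do the work inside eval, and commit only if it succeeded, rolling
back otherwise. This follows the Transactions section of the DBI
documentation linked above; the connection details and the banner id are
placeholders:

    use strict;
    use DBI;

    # AutoCommit => 0 makes DBI start a transaction implicitly; nothing is
    # visible to other sessions until commit is called.
    my $dbh = DBI->connect("dbi:Pg:dbname=ads", "aduser", "secret",
                           { AutoCommit => 0, RaiseError => 1 });

    eval {
        $dbh->do("UPDATE banner_stats SET imp = imp + 1 WHERE uniqid = ?",
                 undef, 4330);
        $dbh->commit;                      # keep the transaction short
    };
    if ($@) {
        warn "impression update failed: $@";
        eval { $dbh->rollback };           # clean up the aborted transaction
    }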
Ron Mayer <ron@intervideo.com> writes:
> Is it true that the problem happens when the updates are done
> in a different order by two transactions, like this?
>   trans.1: "update banner_stats set imp=imp+1 where uniqid=4330"
>   trans.2: "update banner_stats set imp=imp+1 where uniqid=10"
>   trans.1: "update banner_stats set imp=imp+1 where uniqid=10"
>   trans.2: "update banner_stats set imp=imp+1 where uniqid=4330"

Yeah, I would assume that that's the problem.

> If so, then could the problem be avoided if his application logic
> always did the updates in the same order? I.e., make sure each
> transaction does the updates in the same order by sorting the
> updates on uniqid in the client?

If I understood correctly, he's tracking webpage hits; so the updates
are going to correspond to the sequence in which visitors move to
different webpages. I don't think he can force a particular order (and
if he could, there'd be little need for the per-page counts anyway).
I suppose he could hold all the updates in a temp table and apply them
in a sorted order at end of transaction, but that seems like a pain.

I would suggest using a separate transaction for each webpage visited.
Holding a transaction open across multiple page traversals is widely
considered bad news for a number of reasons, not only this one.

			regards, tom lane
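A hedged sketch of the temp-table idea Tom mentions (and then recommends
against in favour of one transaction per page): record the banners shown
during the transaction in a temporary table, then fold the counts into
banner_stats in sorted uniqid order just before commit, so the row locks are
always taken in the same sequence. Every name other than banner_stats is
hypothetical:

    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=ads", "aduser", "secret",
                           { AutoCommit => 0, RaiseError => 1 });

    # Temporary staging table, private to this session.
    $dbh->do("CREATE TEMP TABLE pending_imp (uniqid integer)");

    # During the transaction, just record which banners were served.
    $dbh->do("INSERT INTO pending_imp VALUES (?)", undef, $_) for (4330, 10);

    # At the end, apply the pending counts in sorted (deterministic) order.
    my $pending = $dbh->selectall_arrayref(
        "SELECT uniqid, count(*) FROM pending_imp
         GROUP BY uniqid ORDER BY uniqid");
    for my $row (@$pending) {
        my ($uniqid, $hits) = @$row;
        $dbh->do("UPDATE banner_stats SET imp = imp + ? WHERE uniqid = ?",
                 undef, $hits, $uniqid);
    }
    $dbh->commit;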
On Mon, 20 Jan 2003, Tom Lane wrote:
>
> If I understood correctly, he's tracking webpage hits; so the updates
> are going to correspond to the sequence in which visitors move to
> different webpages.

Ah... I was thinking he was counting banners served within a single page
(perhaps a banner on top and a banner on the bottom), and doing accounting
of which banners were shown. In that case it might have been interesting to
keep some of his information in a transaction:

  start transaction...
  insert a new cookie record if it didn't exist...
  record the top_banner...
  record the bottom_banner...
  end transaction...

I've done something like that to count how many distinct users saw
particular ads. In this case, sorting the small number (2) of banners in
his application logic would be easy.

> I would suggest using a separate transaction for each webpage visited.
> Holding a transaction open across multiple page traversals is widely
> considered bad news for a number of reasons, not only this one.

I understand this part.

Ron
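For the single-page case Ron describes, a short sketch combining the pieces:
one brief transaction per request that records the cookie if it has not been
seen before and then bumps both banners in sorted order. The cookie_log
table, the connection details, and the variable names are hypothetical;
banner_stats comes from the thread:

    use strict;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=ads", "aduser", "secret",
                           { AutoCommit => 0, RaiseError => 1 });

    my ($cookie, $top_banner, $bottom_banner) = ('abc123', 4330, 10);

    eval {
        # Insert the cookie record only if it does not exist yet.
        # (Check-then-insert is not race-free, but concurrent requests
        # carrying the same cookie are unlikely; this is a sketch only.)
        my ($seen) = $dbh->selectrow_array(
            "SELECT 1 FROM cookie_log WHERE cookie = ?", undef, $cookie);
        $dbh->do("INSERT INTO cookie_log (cookie) VALUES (?)", undef, $cookie)
            unless $seen;

        # Two banners per page: update them in a fixed order to avoid deadlock.
        for my $uniqid (sort { $a <=> $b } ($top_banner, $bottom_banner)) {
            $dbh->do("UPDATE banner_stats SET imp = imp + 1 WHERE uniqid = ?",
                     undef, $uniqid);
        }
        $dbh->commit;
    };
    eval { $dbh->rollback } if $@;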
Hi guys,

Thanks for the responses. I am using Perl DBI. In fact I have tried doing the
commit immediately after the update in the script; then I saw pg_stat_activity
reporting lots of queued update transactions. I would like to discuss this
problem further but am not able to do so at the moment. I have temporarily
disabled impression counting.

I am very thankful to you guys for the interest and responses, and will keep
you posted.

regards,
mallah.

On Monday 20 January 2003 08:48 pm, Tom Lane wrote:
> That will just move his problem somewhere else. I think the only real
> answer is to use shorter transactions (one per page, not one per several
> pages).
>
> 			regards, tom lane

--
Rajesh Kumar Mallah, Project Manager (Development)
Infocom Network Limited, New Delhi
phone: +91(11)6152172 (221) (L), 9811255597 (M)
Visit http://www.trade-india.com, India's Leading B2B eMarketplace.