Re: Slow concurrent update of same row in a given table
From | Rajesh Kumar Mallah
---|---
Subject | Re: Slow concurrent update of same row in a given table
Date |
Msg-id | a97c77030509281929ca316a1@mail.gmail.com
In response to | Re: Slow concurrent update of same row in a given table (Gavin Sherry <swm@alcove.com.au>)
Responses | Re: Slow concurrent update of same row in a given table (Gavin Sherry <swm@alcove.com.au>)
 | Re: Slow concurrent update of same row in a given table ("Jim C. Nasby" <jnasby@pervasive.com>)
List | pgsql-performance
On 9/29/05, Gavin Sherry <swm@alcove.com.au> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > > > Number of Copies | Updates per Sec
> > > >
> > > > 1  --> 119
> > > > 2  --> 59
> > > > 3  --> 38
> > > > 4  --> 28
> > > > 5  --> 22
> > > > 6  --> 19
> > > > 7  --> 16
> > > > 8  --> 14
> > > > 9  --> 11
> > > > 10 --> 11
> > > > 11 --> 10
> > >
> > > So, 11 instances result in 10 updated rows per second, database wide or
> > > per instance? If it is per instance, then 11 * 10 is close to the
> > > performance for one connection.
> >
> > Sorry, I do not understand the difference between "database wide"
> > and "per instance".
>
> Per instance.
>
> > > That being said, when you've got 10 connections fighting over one row, I
> > > wouldn't be surprised if you had bad performance.
> > >
> > > Also, at 119 updates a second, you're more than doubling the table's
> > > initial size (dead tuples) each second. How often are you vacuuming, and
> > > are you using vacuum or vacuum full?
> >
> > Yes, I realize the obvious phenomenon now (and the uselessness of the
> > script); we should not consider it a performance degradation.
> >
> > I am having a performance issue in my live database; that's why I tried
> > to simulate the situation (maybe the script over-stressed it).
> >
> > My original problem is that I send 100,000s of emails carrying a
> > beacon for tracking readership every Tuesday, and on Wednesday I see
> > a lot of the said query in pg_stat_activity. Each of these queries updates
> > the SAME row, the one corresponding to the previous day's dispatch, and it
> > is then that I face the performance problem.
> >
> > I think I can only post further details next Wednesday; please let me
> > know how I should deal with the situation if each of the updates takes
> > 100 times the normal update duration.
>
> I see. These problems regularly come up in database design.
> The best thing you can do is modify your database design/application such
> that instead of incrementing a count in a single row, you insert a row into
> a table, recording the 'dispatch_id'. Counting the number of rows for a
> given dispatch_id will give you your count.

Sorry, but I will be accumulating a huge number of rows in a separate table
with no extra info, when I really want just the count. Do you have a better
database design in mind?

Also, I encounter the same problem when implementing read counts of articles
on sites and when counting banner impressions, where the same row gets
updated frequently by multiple processes.

Thanks & Regds
mallah.

> Thanks,
>
> Gavin
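Gavin's suggestion above, one row per beacon hit instead of one hot counter row, can be sketched roughly as follows. This is a minimal illustration using Python's sqlite3 as a stand-in for PostgreSQL; the table and column names (`dispatch_hits`, `dispatch_id`) are hypothetical, not from the original thread.

```python
import sqlite3

# In-memory SQLite as a stand-in for PostgreSQL; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dispatch_hits (dispatch_id INTEGER, hit_at TEXT)")

def record_hit(dispatch_id):
    # One INSERT per beacon hit: no two requests ever touch the same
    # row, so there is no row-level lock contention, and no dead-tuple
    # bloat accumulates on a single hot counter row.
    conn.execute(
        "INSERT INTO dispatch_hits (dispatch_id, hit_at) "
        "VALUES (?, datetime('now'))",
        (dispatch_id,))

def hit_count(dispatch_id):
    # The readership count is derived by counting rows on demand.
    return conn.execute(
        "SELECT count(*) FROM dispatch_hits WHERE dispatch_id = ?",
        (dispatch_id,)).fetchone()[0]

for _ in range(5):
    record_hit(42)
print(hit_count(42))  # 5
```

Concurrent inserts into the same table do not block each other the way concurrent updates of the same row do, which is the crux of the advice.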
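Mallah's objection, that the detail table accumulates huge numbers of rows carrying no information beyond the count, is commonly handled with a periodic rollup: a batch job folds the detail rows into a summary table and deletes them, so the single-row UPDATE happens once per rollup run instead of once per hit. A rough sketch under the same hypothetical schema (sqlite3 standing in for PostgreSQL; `ON CONFLICT ... DO UPDATE` needs SQLite 3.24+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dispatch_hits (dispatch_id INTEGER);
CREATE TABLE dispatch_counts (dispatch_id INTEGER PRIMARY KEY,
                              hits INTEGER);
""")

def rollup():
    # Fold accumulated detail rows into the summary table, then delete
    # them. Run this from cron: the summary row is updated once per
    # rollup, not once per beacon hit, so contention stays negligible.
    with conn:
        for dispatch_id, n in conn.execute(
                "SELECT dispatch_id, count(*) FROM dispatch_hits "
                "GROUP BY dispatch_id").fetchall():
            conn.execute(
                "INSERT INTO dispatch_counts (dispatch_id, hits) "
                "VALUES (?, ?) "
                "ON CONFLICT(dispatch_id) "
                "DO UPDATE SET hits = hits + excluded.hits",
                (dispatch_id, n))
        conn.execute("DELETE FROM dispatch_hits")

def total_hits(dispatch_id):
    # Total = already-summarized hits + detail rows not yet rolled up.
    row = conn.execute(
        "SELECT hits FROM dispatch_counts WHERE dispatch_id = ?",
        (dispatch_id,)).fetchone()
    summarized = row[0] if row else 0
    live = conn.execute(
        "SELECT count(*) FROM dispatch_hits WHERE dispatch_id = ?",
        (dispatch_id,)).fetchone()[0]
    return summarized + live

for _ in range(3):
    conn.execute("INSERT INTO dispatch_hits (dispatch_id) VALUES (7)")
rollup()
print(total_hits(7))  # 3
```

The same shape works for the article read counts and banner impressions mentioned above: the writers only ever insert, and the contended update is deferred to a single batch process.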