Re: Hard problem with concurrency
From | Manfred Koizar
Subject | Re: Hard problem with concurrency
Date | |
Msg-id | ar3l5vglsuimq6vlkn5pg1qtcvahf8ltjb@4ax.com
In reply to | Hard problem with concurrency ("Christopher Kings-Lynne" <chriskl@familyhealth.com.au>)
List | pgsql-hackers
On Mon, 17 Feb 2003 09:51:54 +0800, "Christopher Kings-Lynne" <chriskl@familyhealth.com.au> wrote:

>Strategy three:
>
>begin;
>lock table in exclusive mode;
>update row;
>if (no rows affected) insert row;
>commit;
>
>Problem - Works, but this table needs high concurrency.

Chris, distributing the contention might improve that. Instead of locking the whole table, lock just one row in a dummy table:

    CREATE TABLE dummylock (id INT PRIMARY KEY);
    INSERT INTO dummylock VALUES (0);
    INSERT INTO dummylock VALUES (1);
    ...
    INSERT INTO dummylock VALUES (999);

Create an immutable function lockhash(<type of PK>) returning a value between 0 and 999. Then:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    UPDATE t SET c = 'newval' WHERE pk = 'pk';
    IF (no rows affected) THEN
        SELECT * FROM dummylock WHERE id = lockhash('pk') FOR UPDATE;
        -- try again
        UPDATE t SET c = 'newval' WHERE pk = 'pk';
        IF (no rows affected) THEN
            INSERT INTO t ...;
        END IF;
    END IF;
    COMMIT;

This is just an idea. Completely untested ...

Servus
 Manfred
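[Editor's note: the message leaves lockhash unspecified. One possible sketch, in modern PostgreSQL syntax and assuming a TEXT primary key, uses the built-in hashtext() function; the name lockhash and the 1000 buckets come from the message above, while the use of hashtext() and generate_series() is an assumption, not something Manfred specified:]

```sql
-- Hypothetical lockhash sketch: map any key deterministically to one
-- of the 1000 dummylock rows (0..999). hashtext() can return negative
-- values, so normalize the modulus into the 0..999 range.
CREATE FUNCTION lockhash(key TEXT) RETURNS INT AS $$
    SELECT ((hashtext($1) % 1000) + 1000) % 1000;
$$ LANGUAGE sql IMMUTABLE;

-- Populate the dummy lock table in one statement instead of 1000 INSERTs.
INSERT INTO dummylock SELECT generate_series(0, 999);
```

Because lockhash is IMMUTABLE and deterministic, two concurrent transactions upserting the same primary key always contend on the same dummylock row, while transactions touching different keys usually proceed in parallel.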