[GENERAL] Concurrency-safe Replacing a Set of Rows (Without SERIALIZABLE)
From: Gavin Wahl
Subject: [GENERAL] Concurrency-safe Replacing a Set of Rows (Without SERIALIZABLE)
Date:
Msg-id: CAAAf5g-amPWT9MpTJCUZ61rnGNUmt8OQNPx2cT9=Xas00KsTkg@mail.gmail.com
Replies: Re: [GENERAL] Concurrency-safe Replacing a Set of Rows (Without SERIALIZABLE)
List: pgsql-general
I have a table that stores user notifications:

    CREATE TABLE notifications (
        user_id INT,
        type CHAR(1),
        PRIMARY KEY (user_id, type)
    );

When a user edits their notifications, I need to atomically replace the old set with the new set. My first instinct is to do this:

    BEGIN;
    DELETE FROM notifications WHERE user_id = 1;
    INSERT INTO notifications (user_id, type) VALUES (1, 'a'), (1, 'b');
    COMMIT;

This of course doesn't work when two transactions run concurrently, though -- one of them will get a unique constraint violation. My next thought was to use upsert:

    BEGIN;
    DELETE FROM notifications WHERE user_id = 1;
    INSERT INTO notifications (user_id, type) VALUES (1, 'a'), (1, 'b') ON CONFLICT DO NOTHING;
    COMMIT;

This doesn't give an error for concurrent transactions, but it doesn't do the right thing either. Consider what happens if one transaction runs to replace the set with {'a', 'b'} and another runs with {'b', 'c'}. The result should be either {'a', 'b'} or {'b', 'c'}, but the two sets actually get merged together and the user ends up with notifications {'a', 'b', 'c'}.

Is there any way to do this correctly without SERIALIZABLE transactions? It would be nice to avoid having to retry transactions. Ideally I'd like to avoid explicit locking as well.
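To make the merge concrete, here is a rough trace of one interleaving I believe produces it, assuming the default READ COMMITTED isolation level and that user 1 already has at least one row (so the second DELETE blocks on the first transaction's row locks):

    -- session 1:
    BEGIN;
    DELETE FROM notifications WHERE user_id = 1;

    -- session 2, concurrently:
    BEGIN;
    DELETE FROM notifications WHERE user_id = 1;   -- blocks on session 1's row locks

    -- session 1:
    INSERT INTO notifications (user_id, type) VALUES (1, 'a'), (1, 'b') ON CONFLICT DO NOTHING;
    COMMIT;

    -- session 2: its DELETE resumes, but session 1's newly inserted rows
    -- were not in its snapshot, so nothing more is removed
    INSERT INTO notifications (user_id, type) VALUES (1, 'b'), (1, 'c') ON CONFLICT DO NOTHING;  -- 'b' is skipped
    COMMIT;

    -- the table now holds (1,'a'), (1,'b'), (1,'c') for user 1: the merged set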