Re: User-facing aspects of serializable transactions
From | Markus Wanner
---|---
Subject | Re: User-facing aspects of serializable transactions
Date | |
Msg-id | 20090602091700.541050vnsk1wx33g@mail.bluegap.ch
In response to | Re: User-facing aspects of serializable transactions (Greg Stark <stark@enterprisedb.com>)
Responses | Re: User-facing aspects of serializable transactions
List | pgsql-hackers
Hi,

Quoting "Greg Stark" <stark@enterprisedb.com>:

> No, I'm not. I'm questioning whether a serializable transaction
> isolation level that makes no guarantee that it won't fire spuriously
> is useful.

It would certainly be an improvement over our status quo, where truly serializable transactions aren't supported at all. And it seems more promising than aiming for a perfect *and* scalable implementation.

> Heikki proposed a list of requirements which included a requirement
> that you not get spurious serialization failures

That requirement is questionable. If we get truly serializable transactions (i.e. no false negatives) with reasonably good performance, that's more than enough and a good step ahead. Why care about a few false positives (which don't seem to matter performance-wise)? We can probably reduce or eliminate them later on. But eliminating false negatives is certainly the more important goal to start with.

What concerns me more is the proposed algorithm's requirement to keep track of the set of tuples read by each transaction, and to keep that set until sometime well after the transaction has committed (as questioned by Neil [1]). That doesn't sound like negligible overhead. Maybe the proposed algorithm has to be applied to pages instead of tuples, as was done in the paper for Berkeley DB, just to keep that overhead reasonably low.

Regards

Markus Wanner

[1]: Neil Conway's blog, Serializable Snapshot Isolation:
http://everythingisdata.wordpress.com/2009/02/25/february-25-2009/
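[Editor's note: the point that false positives merely cost an extra retry can be sketched as follows. This is a hypothetical, database-free illustration: `SerializationFailure`, `run_serializable`, and `transfer` are invented names standing in for a driver's SQLSTATE 40001 error and an application's transaction function; it is not code from the proposed patch.]

```python
class SerializationFailure(Exception):
    """Stand-in for the driver error raised when the database aborts a
    SERIALIZABLE transaction as a (possibly spurious) conflict."""

def run_serializable(txn_fn, max_retries=5):
    """Run txn_fn, retrying whenever the database aborts it.

    Applications using SERIALIZABLE must handle real serialization
    failures this way anyway, so an occasional false positive only
    costs one extra retry of the same loop."""
    for attempt in range(max_retries):
        try:
            return txn_fn()
        except SerializationFailure:
            continue  # spurious or real conflict: just run it again
    raise RuntimeError("transaction kept failing serializability checks")

# Simulated transaction that is spuriously aborted twice before committing.
attempts = {"n": 0}
def transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure()
    return "committed"
```

Under this model the client code is identical whether an abort was a true conflict or a false positive; only the retry count differs.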