Re: Latches with weak memory ordering (Re: max_wal_senders must die)
From | Tom Lane |
---|---|
Subject | Re: Latches with weak memory ordering (Re: max_wal_senders must die) |
Date | |
Msg-id | 4162.1290287244@sss.pgh.pa.us |
In reply to | Re: Latches with weak memory ordering (Re: max_wal_senders must die) (Robert Haas <robertmhaas@gmail.com>) |
Responses | Re: Latches with weak memory ordering (Re: max_wal_senders must die); Re: Latches with weak memory ordering (Re: max_wal_senders must die) |
List | pgsql-hackers |
Robert Haas <robertmhaas@gmail.com> writes:
> So what DO we need to guard against here?

I think the general problem can be stated as "process A changes two or more values in shared memory in a fairly short span of time, and process B, which is concurrently examining the same variables, sees those changes occur in a different order than A thought it made them in".

In practice we do not need to worry about changes made with a kernel call in between, as any sort of context swap will cause the kernel to force cache synchronization.

Also, the intention is that the locking primitives will take care of this for any shared structures that are protected by a lock. (There were some comments upthread suggesting maybe our lock code is not bulletproof; but if so that's something to fix in the lock code, not a logic error in code using the locks.)

So what this boils down to is being an issue for shared data structures that we access without using locks. As, for example, the latch structures. The other case that I can think of offhand is the signal multiplexing flags. I think we're all right there so far as the flags themselves are concerned, because only one atomic update is involved on each side: there's no possibility of inconsistency due to cache visibility skew. But we'd be at some risk if we were using any such flag as a cue to go look at some other shared-memory state.

regards, tom lane
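[Editorial illustration, not part of the original message.] To make the "flag as a cue to go look at other shared-memory state" hazard concrete, here is a minimal sketch written with C11 atomics rather than PostgreSQL's own latch or barrier primitives; the `SharedCue` struct and the `writer`/`reader` functions are hypothetical names used only for illustration.

```c
/*
 * Sketch of the hazard described above, using C11 atomics.
 *
 * Writer (process A): fills in the payload, then sets the flag.
 * Reader (process B): sees the flag set, then goes to look at the payload.
 *
 * If both operations used relaxed ordering, a weakly ordered CPU could make
 * the flag store visible before the payload store, so B might read stale
 * payload data even though it saw the flag.  Release/acquire ordering (or an
 * explicit memory barrier between the two operations) prevents that.
 */
#include <stdatomic.h>

typedef struct
{
    int         payload;    /* some other shared-memory state */
    atomic_int  flag;       /* "go look at payload" cue */
} SharedCue;

/* Writer side: store the payload, then publish it by setting the flag. */
static void
writer(SharedCue *cue, int value)
{
    cue->payload = value;
    /* release ordering: the payload store cannot drift past this store */
    atomic_store_explicit(&cue->flag, 1, memory_order_release);
}

/* Reader side: once the flag is seen, the payload is safe to read. */
static int
reader(SharedCue *cue)
{
    /* acquire ordering: the payload load cannot be hoisted above this */
    while (atomic_load_explicit(&cue->flag, memory_order_acquire) == 0)
        ;                   /* spin; a real latch would sleep instead */
    return cue->payload;
}
```

The single-flag case Tom describes as safe corresponds to the writer and reader touching only `flag`; the risk appears exactly when the flag is used as a cue to inspect `payload`, which is where the release/acquire pairing (or an equivalent barrier) becomes necessary.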