Re: sinval synchronization considered harmful
From | Robert Haas
---|---
Subject | Re: sinval synchronization considered harmful
Date |
Msg-id | CA+TgmoarXRH7uqP-uBcfCwWWOVtukko3cMh+28UQ1r2q7m4w7Q@mail.gmail.com
In reply to | Re: sinval synchronization considered harmful (Simon Riggs <simon@2ndQuadrant.com>)
Responses | Re: sinval synchronization considered harmful (Simon Riggs <simon@2ndQuadrant.com>)
List | pgsql-hackers
On Tue, Jul 26, 2011 at 2:56 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Tue, Jul 26, 2011 at 7:24 PM, Alvaro Herrera
> <alvherre@commandprompt.com> wrote:
>> Excerpts from Simon Riggs's message of mar jul 26 14:11:19 -0400 2011:
>>
>>> Let me ask a few questions to stimulate a different solution
>>>
>>> * Can we do this using an active technique (e.g. signals) rather than
>>> a passive one (reading a counter?)
>>
>> Signals are already in use for special cases (queue is full), and I
>> think going through the kernel to achieve much more will lower
>> performance significantly.
>
> If there are no invalidations, there would be no signals. How would
> zero signals decrease performance?

It wouldn't, although it might be bad in the case where there are lots of
temp tables being created and dropped.  I think the biggest problem with
signals is that they don't provide any meaningful synchronization
guarantees.  When you send somebody a signal, you don't really know how
long it's going to take for them to receive it.

>>> * Can we partition the sinval lock, so we have multiple copies? That
>>> increases the task for those who trigger an invalidation, but will
>>> relieve the pressure for most readers.
>>
>> Not sure there's a way to meaningfully partition the queue.  In any
>> case, I think the problem being dealt with here is how to update the
>> read heads of the queue, not its contents.
>
> I agree there's no meaningful way to partition the queue, but we can
> store the information in more than one place to reduce the contention
> of people requesting it.

I thought about that.  Basically, that saves you a factor of N in
contention on the read side (where N is the number of copies) and costs
you a factor of N on the write side (you have to update N copies, taking
a spinlock or lwlock for each).  In the limit, you could do one copy of
the counter per backend.
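The read/write trade-off of keeping N copies of the counter can be sketched roughly as follows. This is an illustration only, not PostgreSQL code: the names (`CounterCopy`, `bump_all_copies`, `read_copy`, `N_COPIES`) are all hypothetical, and ordinary pthread mutexes stand in for the spinlocks/lwlocks discussed above.

```c
#include <pthread.h>
#include <stdint.h>

#define N_COPIES 4          /* hypothetical number of replicated counters */

typedef struct {
    pthread_mutex_t lock;   /* stand-in for a per-copy spinlock/lwlock */
    uint64_t value;
} CounterCopy;

static CounterCopy copies[N_COPIES];

void init_copies(void) {
    for (int i = 0; i < N_COPIES; i++) {
        pthread_mutex_init(&copies[i].lock, NULL);
        copies[i].value = 0;
    }
}

/* Writer pays a factor of N: every copy must be bumped under its lock. */
void bump_all_copies(void) {
    for (int i = 0; i < N_COPIES; i++) {
        pthread_mutex_lock(&copies[i].lock);
        copies[i].value++;
        pthread_mutex_unlock(&copies[i].lock);
    }
}

/* Reader contends only on its assigned copy, cutting read-side
 * contention by roughly a factor of N. */
uint64_t read_copy(int which) {
    pthread_mutex_lock(&copies[which].lock);
    uint64_t v = copies[which].value;
    pthread_mutex_unlock(&copies[which].lock);
    return v;
}
```

In the one-copy-per-backend limit, `N_COPIES` equals the number of backends, which is exactly where the write-side cost becomes hardest to justify.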
I think, though, that a lock-free implementation using memory barriers is
going to end up being the only real solution.  We might possibly convince
ourselves that we're OK with increasing the cost of SIInsertDataEntries(),
but any solution that involves replicating the data is still based on
doing at least some locking.  And I am pretty well convinced that even one
spinlock acquisition in SIInsertDataEntries() is too many.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
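The lock-free reading described above might look roughly like this minimal C11 sketch (not the actual sinval implementation; `maxMsgNum`, `publish_msgnum`, and `read_msgnum` are assumed names for illustration): the writer publishes the counter with a release store, and readers poll it with an acquire load, taking no lock at all on the fast path where nothing has changed.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Shared message counter, read without any lock.  Hypothetical sketch. */
static _Atomic uint64_t maxMsgNum;

/* Writer: store with release semantics so that message data written
 * before this store is visible to any reader that observes the new
 * counter value. */
void publish_msgnum(uint64_t n) {
    atomic_store_explicit(&maxMsgNum, n, memory_order_release);
}

/* Reader: acquire load pairs with the writer's release store.  If the
 * value hasn't advanced past what this backend has already processed,
 * the fast path completes with no lock acquisition at all. */
uint64_t read_msgnum(void) {
    return atomic_load_explicit(&maxMsgNum, memory_order_acquire);
}
```

The point of the barrier pairing is that a reader which sees the new counter value is guaranteed to also see the queue entries written before it, which is the property a spinlock otherwise provides.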