Re: extending relations more efficiently
| From | Robert Haas |
|---|---|
| Subject | Re: extending relations more efficiently |
| Date | |
| Msg-id | CA+TgmobH1zPWQe4LzX+A7kKf887oyJNOjw+=Tuu0uC3=_kZOXA@mail.gmail.com |
| In reply to | Re: extending relations more efficiently (Simon Riggs <simon@2ndQuadrant.com>) |
| List | pgsql-hackers |
On Tue, May 1, 2012 at 10:22 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> Fair enough, but my understanding was that tests showed that the
> extension lock was a bottleneck, so doing extensions in larger chunks
> should reduce the time we spend waiting for a lock and thus improve
> performance. So while your results here show no gain, there is gain to
> be had elsewhere as a result.

Maybe, but I'm skeptical. There are a few cases - such as the problem
I fixed with SInvalReadLock - where the actual overhead of taking and
releasing the lock is the problem. ProcArrayLock has some
as-yet-unfixed issues in this area as well. But most of our locking
bottlenecks are caused by doing too much work while holding the lock,
not by acquiring and releasing it too frequently. A single lock
manager partition can cope with upwards of 30k acquire/release cycles
per second, which would amount to >240MB/sec of file extension. It
seems very unlikely that we're hitting our head against that ceiling,
although maybe you'd like to suggest a test case.

Rather, I suspect that it's just plain taking too long to perform the
actual extension. If we could extend by 8 blocks in only 4 times the
time it takes to extend by 1 block, then obviously there would be a
win available, but the test results suggest that isn't the case.

<...thinks...>

Maybe the solution here isn't extending in larger chunks, but allowing
several extensions to proceed in parallel. It seems likely that a big
chunk of the system call time is being used up waiting for write() to
copy data from user space to kernel space, and there's no reason
several backends couldn't do that in parallel. I think it would be
sufficient to ensure that nobody starts using block N until the
initial writes of all blocks < N have completed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company