Re: Speed up transaction completion faster after many relations are accessed in a transaction

From        | David Rowley
Subject     | Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date        |
Msg-id      | CAKJS1f_c4GY3dB+FGh6UbF5XYQhc6N1Bxv6sY=efUdqNODqbGQ@mail.gmail.com
In reply to | RE: Speed up transaction completion faster after many relations are accessed in a transaction ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
Responses   | RE: Speed up transaction completion faster after many relations are accessed in a transaction
List        | pgsql-hackers
On Mon, 22 Jul 2019 at 14:21, Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com> wrote:
> > From: David Rowley [mailto:david.rowley@2ndquadrant.com]
> > I personally don't think that's true.  The only way you'll notice the
> > LockReleaseAll() overhead is to execute very fast queries with a
> > bloated lock table.  It's pretty hard to notice that a single 0.1ms
> > query is slow.  You'll need to execute thousands of them before you'll
> > be able to measure it, and once you've done that, the lock shrink code
> > will have run and the query will be performing optimally again.
>
> Maybe so.  Will the difference be noticeable between plan_cache_mode=auto (default) and plan_cache_mode=custom?

For the use case we've been measuring, with partitioned tables and the generic plan generation causing a sudden spike in the number of obtained locks, setting plan_cache_mode = force_custom_plan will keep the lock table from becoming bloated.  I'm not sure there's anything interesting to measure there.

The only additional code that gets executed is the hash_get_num_entries() call and possibly hash_get_max_bucket().  Maybe it's worth swapping the order of those calls, since most of the time the entry count will be 0 and the max bucket count threshold won't be exceeded.

-- 
David Rowley                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
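The call-ordering idea above can be sketched roughly as follows. This is an illustrative stand-in, not PostgreSQL's actual code: the struct, the `fake_*` accessors, and the threshold constant are all invented here to mimic the dynahash accessors `hash_get_num_entries()` / `hash_get_max_bucket()` being discussed. The point is only that putting the rarely-true bucket-count test first lets `&&` short-circuit past the other call in the common case.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified stand-in for a dynahash table; the real
 * accessors are hash_get_num_entries() and hash_get_max_bucket(). */
typedef struct
{
	long		num_entries;	/* live entries in the table */
	unsigned	max_bucket;		/* high-water mark of bucket count */
} FakeHtab;

static long
fake_num_entries(const FakeHtab *h)
{
	return h->num_entries;
}

static unsigned
fake_max_bucket(const FakeHtab *h)
{
	return h->max_bucket;
}

/* Assumed threshold for illustration only. */
#define LOCALLOCK_SHRINK_THRESHOLD 64

/*
 * Decide whether the (now empty) local lock table has grown past the
 * threshold and should be rebuilt at its default size.  Testing the
 * bucket count first means the usual case -- a table that never grew --
 * short-circuits without the second call.
 */
static bool
should_shrink(const FakeHtab *h)
{
	return fake_max_bucket(h) > LOCALLOCK_SHRINK_THRESHOLD &&
		   fake_num_entries(h) == 0;
}
```

Either ordering is cheap; the gain is just skipping one function call on the hot end-of-transaction path when the table stayed small.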