Re: Caching of Queries
From | Tom Lane
---|---
Subject | Re: Caching of Queries
Date | |
Msg-id | 9617.1096341460@sss.pgh.pa.us
In reply to | Re: Caching of Queries ("Iain" <iain@mst.co.jp>)
List | pgsql-performance
"Iain" <iain@mst.co.jp> writes:
> I can only tell you (roughly) how it works with Oracle,

Which unfortunately has little to do with how it works with Postgres.
This "latches" stuff is irrelevant to us.

In practice, any repetitive planning in PG is going to be consulting
catalog rows that it draws from the backend's local catalog caches.
After the first read of a given catalog row, the backend won't need to
re-read it unless the associated table has a schema update.  (There are
some other cases, like a VACUUM FULL of the catalog the rows came from,
but in practice catalog cache entries don't change often in most
scenarios.)  We need to place only one lock per table referenced in
order to interlock against schema updates; not one per catalog row
used.

The upshot of all this is that any sort of shared plan cache is going
to create substantially more contention than exists now --- and that's
not even counting the costs of managing the cache, i.e., deciding when
to throw away entries.

A backend-local plan cache would avoid the contention issues, but would
of course not allow amortizing planning costs across multiple backends.

I'm personally dubious that sharing planning costs is a big deal.
Simple queries generally don't take that long to plan.  Complicated
queries do, but I think the reusability odds go down with increasing
query complexity.

			regards, tom lane