Re: Reducing Catalog Locking
| From | Simon Riggs |
|---|---|
| Subject | Re: Reducing Catalog Locking |
| Date | |
| Msg-id | CA+U5nMJUJ2pXs-UG824qkK3yxXvJJiuT=g_c1Pv9MVi-irQByQ@mail.gmail.com |
| In reply to | Re: Reducing Catalog Locking (Andres Freund <andres@2ndquadrant.com>) |
| List | pgsql-hackers |
On 31 October 2014 14:49, Andres Freund <andres@2ndquadrant.com> wrote:
> On 2014-10-31 10:02:28 -0400, Tom Lane wrote:
>> Andres Freund <andres@2ndquadrant.com> writes:
>> > On 2014-10-31 09:48:52 -0400, Tom Lane wrote:
>> >> But more to the point, this seems like optimizing pg_dump startup by
>> >> adding overhead everywhere else, which doesn't really sound like a
>> >> great tradeoff to me.
>>
>> > Well, it'd finally make pg_dump "correct" under concurrent DDL. That's
>> > quite a worthwhile thing.
>>
>> I lack adequate caffeine at the moment, so explain to me how this adds
>> any guarantees whatsoever? It sounded like only a performance
>> optimization from here.
>
> A performance optimization might be what Simon intended, but it isn't
> primarily what I (and presumably Robert) thought it to be useful for.
>
> Consider the example in
> http://archives.postgresql.org/message-id/20130507141526.GA6117%40awork2.anarazel.de
>
> If pg_dump were to take the 'ddl lock' *before* acquiring the snapshot
> to lock all tables, that scenario couldn't happen anymore. As soon as
> pg_dump has acquired the actual locks the ddl lock could be released
> again.
>
> Taking the ddl lock from SQL would probably require some 'backup' or
> superuser permission, but luckily there seems to be movement around
> that.

Good idea. But it is a different idea. I can do that as well...

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
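The ordering Andres proposes can be illustrated with a toy model: a global "ddl lock" that pg_dump takes before its snapshot and releases once all per-table locks are held, which any DDL must also take. This is a minimal Python sketch using `threading` locks, not PostgreSQL internals; the names `ddl_lock`, `pg_dump`, and `concurrent_ddl` are illustrative assumptions, not real API.

```python
import threading

# Toy model (plain Python, *not* PostgreSQL code) of the proposed ordering:
# pg_dump takes a global "ddl lock" BEFORE acquiring its snapshot, and
# releases it as soon as every per-table lock is held. Concurrent DDL must
# also take the ddl lock, so it cannot slip in between the snapshot and
# the per-table locks. All names here are illustrative.

ddl_lock = threading.Lock()                      # the proposed global DDL lock
table_locks = {"a": threading.Lock(), "b": threading.Lock()}
events = []                                      # observed ordering of actions
snapshot_taken = threading.Event()               # demo-only synchronization

def pg_dump():
    with ddl_lock:                               # step 1: ddl lock FIRST
        events.append("dump: snapshot taken")    # step 2: take the snapshot
        snapshot_taken.set()
        for name, lock in table_locks.items():   # step 3: lock every table
            lock.acquire()
            events.append(f"dump: locked {name}")
    events.append("dump: ddl lock released")     # step 4: ddl lock released early
    for lock in table_locks.values():            # table locks held until dump ends
        lock.release()

def concurrent_ddl(name):
    # DDL needs the ddl lock too, so it waits until pg_dump has
    # finished locking all tables, then waits for the table lock itself.
    with ddl_lock:
        with table_locks[name]:
            events.append(f"ddl: altered {name}")

dump = threading.Thread(target=pg_dump)
dump.start()
snapshot_taken.wait()                            # DDL arrives mid-dump
ddl = threading.Thread(target=concurrent_ddl, args=("a",))
ddl.start()
dump.join()
ddl.join()
# The ALTER is ordered strictly after pg_dump's snapshot and table locks.
```

Running the sketch shows the ALTER always landing after pg_dump has taken its snapshot and per-table locks, which is exactly the scenario the ddl lock is meant to rule out.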