Re: Refactor pg_dump as a library?
| From | David Steele |
|---|---|
| Subject | Re: Refactor pg_dump as a library? |
| Date | |
| Msg-id | 570FD605.5030501@pgmasters.net |
| In response to | Re: Refactor pg_dump as a library? (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: Refactor pg_dump as a library? |
| List | pgsql-hackers |
On 4/14/16 1:33 PM, Tom Lane wrote:
> David Steele <david@pgmasters.net> writes:
>> On 4/14/16 7:16 AM, Andreas Karlsson wrote:
>>> I am personally not a fan of the pg_get_Xdef() functions due to their
>>> heavy reliance on the syscache which feels rather unsafe in combination
>>> with concurrent DDL.
>
>> As far as I know pg_dump share locks everything before it starts so
>> there shouldn't be issues with concurrent DDL.  Try creating a new
>> inherited table with FKs, etc. during a pg_dump and you'll see lots of
>> fun lock waits.
>
> I think pg_dump is reasonably proof against DDL on tables.  It is not
> at all proof against DDL on other sorts of objects, such as functions,
> because of the fact that the syscache will follow catalog updates that
> occur after pg_dump's transaction snapshot.

Hmm, OK.  I'll need to go look at that.  I thought that the backend running the pg_dump would fill its syscache when it took all the locks and then not update it during the actual dump.

If that's not the case then it's a bit scary, yes.  It seems to make a good case for physical backups vs. logical.

-- 
-David
david@pgmasters.net
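For illustration, a minimal two-session sketch of the behavior Tom describes, assuming a pre-existing function add_one() (a made-up name); it shows pg_get_functiondef() picking up a concurrent redefinition even inside a repeatable-read transaction, because it reads the syscache rather than the transaction snapshot:

```sql
-- Setup (before either session), a hypothetical function:
--   CREATE FUNCTION add_one(i int) RETURNS int LANGUAGE sql AS 'SELECT i + 1';

-- Session 1: open a snapshot roughly the way pg_dump does.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT 1;   -- first query actually takes the snapshot

-- Session 2: concurrent DDL on the function; no lock held by session 1 blocks this.
CREATE OR REPLACE FUNCTION add_one(i int) RETURNS int
    LANGUAGE sql AS 'SELECT i + 2';   -- body silently changed

-- Session 1, still inside its repeatable-read transaction:
SELECT pg_get_functiondef('add_one(int)'::regprocedure);
-- Per the explanation above, this should return the NEW body ('SELECT i + 2'),
-- because pg_get_functiondef() goes through the syscache, which follows
-- catalog updates committed after the snapshot was taken.
ROLLBACK;
```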