Re: Improve pg_dump dumping publication tables
From | Cary Huang
---|---
Subject | Re: Improve pg_dump dumping publication tables
Date |
Msg-id | 160261701028.1154.3238486867767993908.pgcf@coridan.postgresql.org
In reply to | Improve pg_dump dumping publication tables ("Hsu, John" <hsuchen@amazon.com>)
Responses | Re: Improve pg_dump dumping publication tables
List | pgsql-hackers
The following review has been posted through the commitfest application:

make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: not tested

Hi,

I applied the patch to the PG master branch successfully, and the patch seems to do as described. If there are a lot of tables created in Postgres and only a few of them have publications, the getPublicationTables() function will loop through all the tables and check each one to see if it is the desired relation type and whether it contains one or more publications. So potentially a lot of looping and checking may happen. In John's patch, all this looping and checking is replaced with one query that returns all the tables that have publications.

One problem, though, is that if there are a million tables created in the server, as you say, and all of them have publications, the query can return a million rows. That would require a lot of memory on the client side, and the pg_dump client may need to handle this extreme case with proper pagination, which adds complexity to the function's logic. The existing logic does not have this problem, because it simply queries one table's publications at a time and does so a million times.

thank you

Cary Huang
HighGo Software
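To make the trade-off above concrete: the client-side memory concern could in principle be mitigated by consuming the single query's result in fixed-size batches (e.g. via a server-side cursor with DECLARE ... FETCH, or libpq's single-row mode) rather than materializing all rows at once. Here is a minimal, self-contained Python sketch of the two strategies; it does not touch a real database, and every name in it (per_table_lookups, bulk_batched, the toy data) is hypothetical, illustrating the idea rather than the patch's actual code.

```python
def per_table_lookups(tables, publications_of):
    """Existing approach: one round trip per table.

    Many queries, but each result is tiny, so client memory stays low.
    """
    queries = 0
    result = []
    for t in tables:
        queries += 1                      # one query per table
        for pub in publications_of(t):
            result.append((pub, t))
    return queries, result


def bulk_batched(all_rows, batch_size):
    """Patched approach, with pagination: a single query whose rows are
    consumed in fixed-size batches, bounding peak client-side memory
    (as a server-side cursor or single-row mode would allow).
    """
    queries = 1
    peak = 0                              # largest batch held at once
    result = []
    batch = []
    for row in all_rows:                  # rows stream from the one query
        batch.append(row)
        if len(batch) >= batch_size:
            peak = max(peak, len(batch))
            result.extend(batch)
            batch = []
    if batch:
        peak = max(peak, len(batch))
        result.extend(batch)
    return queries, peak, result


# Toy data: 10 tables, of which the first 3 carry a publication.
tables = [f"t{i}" for i in range(10)]
pubs = {t: ["pub1"] for t in tables[:3]}

q1, r1 = per_table_lookups(tables, lambda t: pubs.get(t, []))
print(q1, len(r1))    # 10 queries for 3 rows

rows = [("pub1", t) for t in tables[:3]]
q2, peak, r2 = bulk_batched(rows, batch_size=2)
print(q2, peak)       # 1 query, peak of 2 rows held in memory
```

The point of the sketch is only that pagination trades the extra code complexity Cary mentions for bounded memory, whereas the per-table loop gets bounded memory for free at the cost of a round trip per table.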