Re: pg_dump additional options for performance
From: Pavan Deolasee
Subject: Re: pg_dump additional options for performance
Date:
Msg-id: 2e78013d0802242139i6642bef4y215909808f7c1960@mail.gmail.com
In reply to: Re: pg_dump additional options for performance ("Jochem van Dieten" <jochemd@gmail.com>)
List: pgsql-hackers
On Sun, Feb 24, 2008 at 6:52 PM, Jochem van Dieten <jochemd@gmail.com> wrote:

> Or we could have a switch that specifies a directory and have pg_dump
> split the dump not just in pre-schema, data and post-schema, but also
> split the data in a file for each table. That would greatly facilitate
> a parallel restore of the data through multiple connections.

How about having a single switch like --optimize <level>, with pg_dump behaving differently based on the level? For example, if optimization is turned off (i.e. -O0), pg_dump just dumps the schema and data together. At level 1, it dumps the pre-schema, data and post-schema separately. We can then add more levels and optimize further, for example by postponing the creation of non-constraining indexes, splitting the data into multiple files, etc. I can also think of adding constructs to the dump so that we can identify what can be restored in parallel, with pg_restore using that information during restore.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB
http://www.enterprisedb.com
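[Editor's note: the split discussed above is available in later PostgreSQL releases as pg_dump's --section=pre-data|data|post-data option. As an illustration only, here is a rough shell sketch of the per-table data split Jochem describes, using those later options; the database name "mydb", the output directory, and the restriction to the public schema are assumptions for the example.]

```shell
#!/bin/sh
# Illustrative sketch of a three-way dump split with per-table data files.
# Uses pg_dump's --section option (not available in 2008-era releases).
# Assumes a reachable database "mydb" with tables in the public schema.
DB=mydb
OUT=dumpdir
mkdir -p "$OUT"

# Pre-schema: table definitions, without indexes and constraints.
pg_dump --section=pre-data -f "$OUT/pre-data.sql" "$DB"

# One data file per table, so the files can be loaded in parallel
# over multiple connections.
for t in $(psql -At -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'" "$DB"); do
    pg_dump --section=data -t "public.$t" -f "$OUT/data-$t.sql" "$DB"
done

# Post-schema: indexes, constraints and triggers, created after the data load.
pg_dump --section=post-data -f "$OUT/post-data.sql" "$DB"
```

Later releases also provide this built in: the directory output format (pg_dump -Fd) stores each table's data in its own file, and pg_restore -j N restores with N parallel jobs, which is essentially the parallel restore path discussed in this thread.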