Re: pg_dump additional options for performance
From: Magnus Hagander
Subject: Re: pg_dump additional options for performance
Msg-id: 20080226113138.GM528@svr2.hagander.net
In reply to: Re: pg_dump additional options for performance (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: pg_dump additional options for performance
List: pgsql-hackers
On Tue, Feb 26, 2008 at 12:39:29AM -0500, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > ... So it would be good if we could dump objects in 3 groups
> > 1. all commands required to re-create table
> > 2. data
> > 3. all commands required to complete table after data load
> > [ much subsequent discussion snipped ]
>
> BTW, what exactly was the use-case for this? The recent discussions
> about parallelizing pg_restore make it clear that the all-in-one
> dump file format still has lots to recommend it. So I'm just wondering
> what the actual advantage of splitting the dump into multiple files
> will be. It clearly makes life more complicated; what are we buying?

One use-case would be when you have to make some small change to the
schema while reloading it, that's still compatible with the data format.
Then you'd dump schema-no-indexes-and-stuff, then *edit* that file,
before reloading things. It's a lot easier to edit the file if it's not
hundreds of gigabytes..

//Magnus
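[Editor's sketch: the three-group split discussed above eventually landed in pg_dump as the --section switch (PostgreSQL 9.2 and later), which postdates this thread. The workflow Magnus describes might look roughly like this; database and file names are illustrative.]

```shell
# Dump the three groups separately (pg_dump 9.2+):
pg_dump --section=pre-data  -f schema-pre.sql  mydb  # CREATE TABLE etc.
pg_dump --section=data      -f data.sql        mydb  # COPY data only
pg_dump --section=post-data -f schema-post.sql mydb  # indexes, constraints, triggers

# Edit the small pre-data file by hand, then restore in order:
$EDITOR schema-pre.sql
psql -f schema-pre.sql  newdb
psql -f data.sql        newdb
psql -f schema-post.sql newdb
```

The point of the split is that only the small pre-data file needs editing; the multi-gigabyte data section is never opened in an editor.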