Re: WIP patch for parallel pg_dump
From | Tatsuo Ishii
---|---
Subject | Re: WIP patch for parallel pg_dump
Date | |
Msg-id | 20101207.162754.999254373512590109.t-ishii@sraoss.co.jp
In reply to | Re: WIP patch for parallel pg_dump (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
List | pgsql-hackers
> On 12/07/2010 01:22 AM, Tom Lane wrote:
>> Josh Berkus <josh@agliodbs.com> writes:
>>>> However, if you were doing something like parallel pg_dump you could
>>>> just run the parent and child instances all against the slave, so the
>>>> pg_dump scenario doesn't seem to offer much of a supporting use-case for
>>>> worrying about this. When would you really need to be able to do it?
>>
>>> If you had several standbys, you could distribute the work of the
>>> pg_dump among them. This would be a huge speedup for a large database,
>>> potentially, thanks to parallelization of I/O and network. Imagine
>>> doing a pg_dump of a 300GB database in 10min.
>>
>> That does sound kind of attractive. But to do that I think we'd have to
>> go with the pass-the-snapshot-through-the-client approach. Shipping
>> internal snapshot files through the WAL stream doesn't seem attractive
>> to me.
>
> this kind of functionality would also be very useful/interesting for
> connection poolers/loadbalancers that are trying to distribute load
> across multiple hosts and could use that to at least give some sort of
> consistency guarantee.

In addition to this, it would greatly help query-based replication tools such as pgpool-II. Sounds great.

--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
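[Editor's note: a minimal sketch of what the pass-the-snapshot-through-the-client approach could look like at the SQL level. It assumes a snapshot-export facility along the lines of what later shipped in PostgreSQL 9.2 as pg_export_snapshot() and SET TRANSACTION SNAPSHOT; at the time of this thread no such interface existed, so this is only an illustration of the idea, not the patch under discussion.]

```sql
-- Session 1 (the "parent" dump process): open a repeatable-read transaction
-- and export its snapshot.  The returned identifier is an opaque token that
-- the client passes to the workers out of band.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();          -- e.g. returns '00000003-0000001B-1' (illustrative value)

-- Session 2 (a "child" worker on another connection): adopt the exported
-- snapshot before running any query, so both sessions see exactly the same
-- database state.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
-- ... COPY table_a TO STDOUT, etc., consistent with session 1 ...
```

Note that this only synchronizes sessions on the same server; distributing the work across several standbys, as discussed above, would additionally require getting an equivalent snapshot onto each standby, which is the WAL-stream-versus-client-side question raised in the thread.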