Re: WIP patch for parallel pg_dump
From | Bruce Momjian
---|---
Subject | Re: WIP patch for parallel pg_dump
Date |
Msg-id | 201012022312.oB2NCF119818@momjian.us
In reply to | Re: WIP patch for parallel pg_dump (Dimitri Fontaine <dimitri@2ndQuadrant.fr>)
List | pgsql-hackers
Dimitri Fontaine wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> writes:
> > I don't see the point of the sort-by-relpages code. The order the objects
> > are dumped should be irrelevant, as long as you obey the restrictions
> > dictated by dependencies. Or is it only needed for the multiple-target-dirs
> > feature? Frankly I don't see the point of that, so it would be good to cull
> > it out at least in this first stage.
>
> From the talk at CHAR(10), and provided memory serves, it's an
> optimisation so that you're dumping the largest file in one process and all
> the little files in other processes. In lots of cases the total pg_dump
> duration is then reduced to about the time to dump the biggest files.

Seems there should be a comment in the code explaining why this is being
done.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
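[Editorial illustration] As a rough sketch of the largest-first scheduling idea Dimitri describes (one worker takes the biggest table while the others drain the small ones, so total wall-clock time approaches the time to dump the largest table), the following C fragment shows one way it could look. The names `DumpableTable` and `schedule_largest_first` are hypothetical and not the actual pg_dump code; the only assumption carried over from the thread is that `pg_class.relpages` is used as the size estimate.

```c
/*
 * Hypothetical sketch, not the real pg_dump implementation.
 * Dump the largest tables first so that one worker handles the biggest
 * file while the remaining workers chew through the small ones; with
 * greedy assignment the total runtime stays close to the time needed
 * for the single largest table.
 */
#include <stdlib.h>

typedef struct DumpableTable
{
    const char *name;      /* table name, for illustration only */
    long        relpages;  /* size estimate taken from pg_class.relpages */
} DumpableTable;

/* qsort comparator: largest relpages first */
static int
cmp_relpages_desc(const void *a, const void *b)
{
    const DumpableTable *ta = (const DumpableTable *) a;
    const DumpableTable *tb = (const DumpableTable *) b;

    if (ta->relpages > tb->relpages)
        return -1;
    if (ta->relpages < tb->relpages)
        return 1;
    return 0;
}

/* Sort the work queue before handing tables to idle worker processes. */
static void
schedule_largest_first(DumpableTable *tables, size_t ntables)
{
    qsort(tables, ntables, sizeof(DumpableTable), cmp_relpages_desc);
    /* workers then pull entries from the front of the queue as they go idle */
}
```

A comment along these lines in the sort-by-relpages code would answer the question Heikki raised about why the ordering matters at all.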