Re: An idea for parallelizing COPY within one backend
| From | A.M. |
|---|---|
| Subject | Re: An idea for parallelizing COPY within one backend |
| Date | |
| Msg-id | 16377CE3-7581-4E94-BB3A-5846440D09CC@themactionfaction.com |
| In reply to | Re: An idea for parallelizing COPY within one backend ("Florian G. Pflug" <fgp@phlo.org>) |
| Responses | Re: An idea for parallelizing COPY within one backend; Re: An idea for parallelizing COPY within one backend |
| List | pgsql-hackers |
On Feb 27, 2008, at 9:11 AM, Florian G. Pflug wrote:

> Dimitri Fontaine wrote:
>> Of course, the backends still have to parse the input given by
>> pgloader, which only pre-processes data. I'm not sure having the
>> client prepare the data some more (binary format or whatever) is a
>> wise idea, as you mentioned and wrt Tom's follow-up. But maybe I'm
>> all wrong, so I'm all ears!
>
> As far as I understand, pgloader starts N threads or processes that
> open up N individual connections to the server. In that case, moving
> the text->binary conversion from the backend into the loader won't
> give any additional performance, I'd say.
>
> The reason that I'd love some within-one-backend solution is that
> it'd allow you to utilize more than one CPU for a restore within a
> *single* transaction. This is something that a client-side solution
> won't be able to deliver, unless major changes to the architecture
> of postgres happen first...

It seems like multiple backends should be able to take advantage of
2PC for transaction safety.

Cheers,
M
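As a rough sketch of the 2PC idea: each loader connection could COPY its slice of the data and then PREPARE TRANSACTION instead of committing, with a coordinator issuing COMMIT PREPARED once every slice has prepared successfully. The table, file paths, and transaction GIDs below are illustrative only, and this assumes `max_prepared_transactions` is set above zero on the server.

```sql
-- Connection 1 (hypothetical slice of the input):
BEGIN;
COPY mytable FROM '/tmp/slice1.csv' WITH CSV;
PREPARE TRANSACTION 'load_slice_1';

-- Connection 2:
BEGIN;
COPY mytable FROM '/tmp/slice2.csv' WITH CSV;
PREPARE TRANSACTION 'load_slice_2';

-- Coordinator, after all slices have prepared:
COMMIT PREPARED 'load_slice_1';
COMMIT PREPARED 'load_slice_2';

-- On any failure before the commit phase, abort every prepared slice:
-- ROLLBACK PREPARED 'load_slice_1'; etc.
```

Note this gives all-or-nothing durability (prepared transactions survive a crash, so the coordinator can finish committing on restart), but not a single atomic snapshot: other sessions may briefly see slice 1 committed before slice 2, which is exactly the gap the within-one-backend approach would close.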
In the pgsql-hackers list, by date: