Re: [WIP] speeding up GIN build with parallel workers

From: Amit Kapila
Subject: Re: [WIP] speeding up GIN build with parallel workers
Date:
Msg-id CAA4eK1+wt_hVSSOP1nnqPtUfB5m0beTtBC0so--eSGN-O8i0xA@mail.gmail.com
In response to: Re: [WIP] speeding up GIN build with parallel workers ("Constantin S. Pan" <kvapen@gmail.com>)
Responses: Re: [WIP] speeding up GIN build with parallel workers (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Thu, Mar 17, 2016 at 2:56 PM, Constantin S. Pan <kvapen@gmail.com> wrote:
>
> On Thu, 17 Mar 2016 13:21:32 +0530
> Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> > On Wed, Mar 16, 2016 at 7:50 PM, Constantin S. Pan <kvapen@gmail.com>
> > wrote:
> > >
> > > On Wed, 16 Mar 2016 18:08:38 +0530
> > > Amit Kapila <amit.kapila16@gmail.com> wrote:
> > >
> > > >
> > > > Why does the backend just wait?  Why can't it do the same work as any
> > > > worker does?  In general, for other parallelism features the
> > > > backend also behaves the same way as a worker in producing the
> > > > results if the results from the workers are not available.
> > >
> > > We can make the backend do the same work as any worker, but that
> > > will complicate the code for less than a 2% performance boost.
> >
> > Why do you think it will be just 2%?  I think for the single-worker case,
> > it should be much more, as the master backend will be less busy
> > consuming tuples from the tuple queue.  I can't say much about
> > code complexity, as I haven't yet looked carefully at the logic of the
> > patch, but we didn't find much difficulty while doing it for parallel
> > scans.  One of the commits that might help you in understanding how
> > heap scans are currently parallelised is
> > ee7ca559fcf404f9a3bd99da85c8f4ea9fbc2e92; you can see whether it
> > helps you in any way for writing a generic API for GIN parallel builds.
>
> I looked at the timing details some time ago, which showed
> that the backend spent about 1% of total time on data
> transfer from 1 worker, and 3% on transfer and merging from
> 2 workers. So if we use (active backend + 1 worker) instead
> of (passive backend + 2 workers), we still have to spend
> 1.5% on transfer and merging.
>

I think the comparison here should be between (active backend + 1 worker) and (passive backend + 1 worker), or between (active backend + 2 workers) and (passive backend + 2 workers).  I don't think it is a good assumption that workers are always freely available and can be used as and when required for any operation.
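
To make the comparison concrete, below is a minimal sketch of what an active (participating) leader could look like, loosely following the leader-participates pattern used for parallel heap scans.  GinBuildShared, setup_gin_shared_state, gin_build_partial, gin_merge_partial_results and gin_build_worker_main are hypothetical placeholder names, not the patch's API, and the exact CreateParallelContext signature has changed across server versions:

/*
 * Hypothetical sketch only -- not the actual patch.  The helper names
 * (GinBuildShared, setup_gin_shared_state, gin_build_partial,
 * gin_merge_partial_results, gin_build_worker_main) are placeholders,
 * and the CreateParallelContext signature differs between versions.
 */
static void
gin_parallel_build(Relation heap, Relation index, int nworkers)
{
    ParallelContext *pcxt;
    GinBuildShared  *shared;    /* partial results in dynamic shared memory */

    EnterParallelMode();
    pcxt = CreateParallelContext("postgres", "gin_build_worker_main",
                                 nworkers);
    InitializeParallelDSM(pcxt);
    shared = setup_gin_shared_state(pcxt);      /* placeholder */

    LaunchParallelWorkers(pcxt);

    /*
     * The leader does not sit idle waiting on the queues: it processes
     * its own share of the heap exactly as a worker would.
     */
    gin_build_partial(heap, index, shared);     /* placeholder */

    WaitForParallelWorkersToFinish(pcxt);

    /* Merge the per-participant partial results into the index. */
    gin_merge_partial_results(index, shared);   /* placeholder */

    DestroyParallelContext(pcxt);
    ExitParallelMode();
}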

>
> Or we can look at these measurements (from yesterday's
> message):
>
> workers  mem(MB)  time(s)
>       0       16      247
>       1       16      256
>       2       16      126
>
> If 2 workers didn't have to transfer and merge their
> results, they would have finished in 247 / 2 = 123.5
> seconds. But the transfer and merging took another 2.5
> seconds. The merging takes a little longer than the
> transfer. If we now use backend+worker we get rid of 1
> transfer, but still have to do 1 transfer and then merge, so
> we will save less than a quarter of those 2.5 seconds.
>

If I understand the above data correctly, it seems to indicate that the majority of the work is done in processing the data, so I think it would be better if the master and the workers both work together.
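
Spelling out the arithmetic behind the quoted estimate, and assuming (as stated above) that the merge portion is slightly more than half of the overhead:

\[
  t_{\mathrm{ideal}} = \tfrac{247\,\mathrm{s}}{2} = 123.5\,\mathrm{s},
  \qquad
  t_{\mathrm{overhead}} = 126\,\mathrm{s} - 123.5\,\mathrm{s} = 2.5\,\mathrm{s}
\]

If merging takes a little more than half of that 2.5 s, the two transfers together take under about 1.25 s, so eliminating one of them saves under roughly 0.6 s, which matches the "less than a quarter" estimate.  Even so, that only covers the overhead; the bulk of the 126 s is data processing, which is where a participating leader helps.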


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
