Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE
From:        Amit Kapila
Subject:     Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE
Date:
Msg-id:      CAA4eK1+mTd0qjH6zb6tsy9O54tshZBd2t1DFaN4wmb=Dmbn2VA@mail.gmail.com
In reply to: Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE (Robert Haas <robertmhaas@gmail.com>)
Responses:   Re: [HACKERS] Effect of changing the value for PARALLEL_TUPLE_QUEUE_SIZE
List:        pgsql-hackers
On Fri, Jun 2, 2017 at 6:38 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Jun 2, 2017 at 9:01 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Your reasoning sounds sensible to me.  I think the other way to attack
>> this problem is to maintain some local queue in each of the workers
>> for use when the shared memory queue becomes full.  Basically, we can
>> extend your "Faster processing at Gather node" patch [1] such that
>> instead of a fixed-size local queue, we extend it when the shm queue
>> becomes full.  I think that way we can handle both problems (workers
>> won't stall if the shm queues are full, and workers can do batched
>> writes to the shm queue to avoid the shm queue communication
>> overhead) in a similar way.
>
> We still have to bound the amount of memory that we use for queueing
> data in some way.
>

Yeah, probably up to work_mem (or some percentage of work_mem).  If we
want a solution that extends beyond that, we might want to back it up
with a file; however, we might not need to go that far.  I think we can
run some experiments to see how much additional memory is sufficient to
give us the maximum benefit.  A rough sketch of the local-queue idea is
below.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
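To make that concrete, here is a minimal standalone sketch in C of the
idea, assuming a non-blocking send on the shared queue; all names here
(LOCAL_CAP, shm_try_send, local_put, worker_send) are hypothetical
illustrations, not the actual patch.  The worker tries the shared queue
first and, when it is full, spills into a local queue that grows
geometrically up to a work_mem-style cap:

/*
 * Sketch only: a per-worker local queue that absorbs tuples while the
 * shared memory queue is full, growing on demand instead of staying at
 * a fixed size, but never beyond a work_mem-like cap.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LOCAL_CAP (4 * 1024)	/* cap on local growth, think work_mem */

typedef struct LocalQueue
{
	char	   *buf;
	size_t		used;
	size_t		allocated;
} LocalQueue;

/* Stand-in for a non-blocking shm_mq send; returns false when full. */
static bool
shm_try_send(const void *data, size_t len)
{
	(void) data;
	(void) len;
	return false;				/* pretend the shared queue is full */
}

/*
 * Append one tuple to the local queue, doubling the buffer as needed.
 * Returns false once the cap would be exceeded; the caller must then
 * block until the leader drains the shared queue.
 */
static bool
local_put(LocalQueue *lq, const void *tuple, size_t len)
{
	if (lq->used + len > LOCAL_CAP)
		return false;
	while (lq->used + len > lq->allocated)
	{
		size_t		newsize = lq->allocated ? lq->allocated * 2 : 1024;
		char	   *newbuf = realloc(lq->buf, newsize);

		if (newbuf == NULL)
			return false;
		lq->buf = newbuf;
		lq->allocated = newsize;
	}
	memcpy(lq->buf + lq->used, tuple, len);
	lq->used += len;
	return true;
}

/*
 * Worker-side send: try the shared queue first; if it is full, spill
 * locally so the worker can keep producing instead of stalling.
 */
static void
worker_send(LocalQueue *lq, const void *tuple, size_t len)
{
	/* Fast path only when nothing is spilled, to preserve tuple order. */
	if (lq->used == 0 && shm_try_send(tuple, len))
		return;
	if (!local_put(lq, tuple, len))
	{
		/* Local budget exhausted: block on the shared queue (omitted). */
	}
}

int
main(void)
{
	LocalQueue	lq = {NULL, 0, 0};
	const char	tuple[] = "example tuple";

	worker_send(&lq, tuple, sizeof(tuple));
	printf("locally queued bytes: %zu\n", lq.used);
	free(lq.buf);
	return 0;
}

The lq->used == 0 check on the fast path matters: once anything has
been spilled locally, new tuples must queue behind it rather than jump
ahead into the shared queue, otherwise the leader would see tuples out
of order.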