Re: [HACKERS] why not parallel seq scan for slow functions
From | Amit Kapila
---|---
Subject | Re: [HACKERS] why not parallel seq scan for slow functions
Date |
Msg-id | CAA4eK1KUYk8XbYwnK3CE9VAm_w_oJmX-x-3+_FPrRV0BQYhr7g@mail.gmail.com
In reply to | Re: [HACKERS] why not parallel seq scan for slow functions (Dilip Kumar <dilipbalaut@gmail.com>)
Responses | Re: [HACKERS] why not parallel seq scan for slow functions
List | pgsql-hackers
On Thu, Aug 17, 2017 at 2:45 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>
>> Either we can pass "num_gene" to merge_clump, or we can store num_gene
>> in the root and check it inside merge_clump. Do you see any further
>> complexity?
>>

I think something like that should work.

> After putting some more thought into it, I see one more problem, but I am
> not sure whether we can solve it easily. If we skip generating the gather
> path at the top-level node, then our cost comparison while adding the
> element to the pool will not be correct, because we are skipping some of
> the paths (the gather paths). It is very much possible that path1 is
> cheaper than path2 without a gather on top, but that with a gather added,
> path2 becomes cheaper.
>

I think that should not matter, because the costing of gather is mainly
based on the number of rows, and that should be the same for both path1
and path2 in this case.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com