Re: [v9.5] Custom Plan API
From | Simon Riggs
---|---
Subject | Re: [v9.5] Custom Plan API
Date | 
Msg-id | CA+U5nM+5W3xwqQsaJV=Q25RtdPu3LcBuJp_1B7riUF9==TPVcQ@mail.gmail.com
In reply to | Re: [v9.5] Custom Plan API (Kouhei Kaigai <kaigai@ak.jp.nec.com>)
List | pgsql-hackers
On 8 May 2014 04:33, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:

>> From your description, my understanding is that you would like to stream
>> data from 2 standard tables to the GPU, then perform a join on the GPU itself.
>>
>> I have been told that is not likely to be useful because of the data transfer
>> overheads.
>>
> Here are two solutions. One is what I'm currently working on: in case the
> numbers of rows in the left and right tables are not well balanced, we can
> keep a hash table in the GPU DRAM, then transfer the data stream
> chunk-by-chunk from the other side. Kernel execution and data transfer can
> run asynchronously, which allows us to hide the data transfer cost as long
> as we have enough chunks, like processor pipelining.

Makes sense to me, thanks for explaining. The hardware-enhanced hash join
sounds like a great idea.

My understanding is we would need

* a custom cost-model
* a custom execution node

The main question seems to be whether doing that would be allowable, because
it's certainly doable.

I'm still looking for a way to avoid adding planning time for all queries
though.

> The other solution is an "integrated" GPU that removes the need for data
> transfer, like Intel's Haswell, AMD's Kaveri or Nvidia's Tegra K1; all the
> major vendors are moving in the same direction.

Sounds useful, but very non-specific, as yet.

-- 
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services