Re: [v9.5] Custom Plan API
From | Peter Geoghegan
---|---
Subject | Re: [v9.5] Custom Plan API
Date |
Msg-id | CAM3SWZR705rqmbcTZSc4R2aeswKWFnkyy=1pdSW5mFXc9OWXvw@mail.gmail.com
In reply to | Re: [v9.5] Custom Plan API (Kouhei Kaigai <kaigai@ak.jp.nec.com>)
Responses | Re: [v9.5] Custom Plan API, Re: [v9.5] Custom Plan API
List | pgsql-hackers
On Thu, May 8, 2014 at 6:34 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> Umm... I'm now missing the direction towards my goal.
> What approach is the best way to glue PostgreSQL and PGStrom?

I haven't really paid any attention to PGStrom. Perhaps it's just that I missed it, but I would find it useful if you could direct me towards a benchmark or something like that, that demonstrates a representative scenario in which the facilities that PGStrom offers are compelling compared to traditional strategies already implemented in Postgres and other systems.

If I wanted to make joins faster, personally, I would look at opportunities to optimize our existing hash joins to take better advantage of modern CPU characteristics. A lot of the research suggests that it may be useful to implement techniques that take better advantage of available memory bandwidth through techniques like prefetching and partitioning, perhaps even (counter-intuitively) at the expense of compute bandwidth.

It's possible that it just needs to be explained to me, but, with respect, intuitively I have a hard time imagining that offloading joins to the GPU will help much in the general case. Every paper on joins from the last decade talks a lot about memory bandwidth and memory latency. Are you concerned with some specific case that I may have missed? In what scenario might a cost-based optimizer reasonably prefer a custom join node implemented by PgStrom, over any of the existing join node types?

It's entirely possible that I simply missed relevant discussions here.

--
Peter Geoghegan
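[Editor's note: the prefetching idea mentioned above — hiding hash-table cache misses during the probe phase — can be sketched as follows. This is a minimal, hypothetical illustration using a toy open-addressing table and GCC/Clang's `__builtin_prefetch`; it is not PostgreSQL's hash-join code, and all names here (`probe_batch`, `ht_probe`, etc.) are invented for the example. The point is that issuing prefetches for a whole batch of bucket addresses before touching any of them lets the memory system overlap the misses, rather than the probe loop stalling on each one in turn.]

```c
#include <assert.h>
#include <stdint.h>

#define NBUCKETS 1024   /* power of two, so we can mask instead of mod */
#define BATCH    16

typedef struct { uint32_t key; uint32_t payload; int used; } Bucket;

static Bucket table[NBUCKETS];

/* Knuth-style multiplicative hash, reduced to a bucket index. */
static uint32_t hash_u32(uint32_t k)
{
    k *= 2654435761u;
    return k & (NBUCKETS - 1);
}

/* Insert with linear probing; assumes the table never fills up. */
static void ht_insert(uint32_t key, uint32_t payload)
{
    uint32_t h = hash_u32(key);
    while (table[h].used)
        h = (h + 1) & (NBUCKETS - 1);
    table[h].key = key;
    table[h].payload = payload;
    table[h].used = 1;
}

/* Look up one key; returns 1 and sets *payload on a match, else 0. */
static int ht_probe(uint32_t key, uint32_t *payload)
{
    uint32_t h = hash_u32(key);
    while (table[h].used)
    {
        if (table[h].key == key)
        {
            *payload = table[h].payload;
            return 1;
        }
        h = (h + 1) & (NBUCKETS - 1);
    }
    return 0;
}

/*
 * Probe a batch of keys.  First issue a prefetch for the home bucket of
 * every key in the batch, then perform the actual probes.  With one key
 * at a time, each cache miss is paid serially; batched prefetching lets
 * the misses overlap, trading a little extra compute for latency hiding.
 */
static int probe_batch(const uint32_t *keys, int n, uint32_t *out)
{
    int matches = 0;

    for (int i = 0; i < n; i++)
        __builtin_prefetch(&table[hash_u32(keys[i])], 0, 1);

    for (int i = 0; i < n; i++)
        matches += ht_probe(keys[i], &out[i]);

    return matches;
}
```

The same structure shows where partitioning fits: if the build side is first split into cache-sized partitions, each probe batch touches only one partition's table, so the prefetched lines are far more likely to still be resident when the probe loop reaches them.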