Discussion: Our trial to TPC-DS but optimizer made unreasonable plan


Our trial to TPC-DS but optimizer made unreasonable plan

From
Kouhei Kaigai
Date:
(Please read this message on a wide display)

Our team recently tried running the TPC-DS benchmark to see how well PostgreSQL
handles typical analytic queries.
TPC-DS defines about 100 complex queries, and we noticed that the optimizer
produced unreasonable execution plans for some of them.

Here is an example (query no. 04); the query text is at the bottom of this message.

Its execution plan from EXPLAIN is shown below. The most time-consuming portion
is the multilevel nested loop at the bottom of the EXPLAIN output, because it
repeatedly performs sequential scans over the CTE result.
I could not complete the query within 30 minutes at SF=1 on a Xeon E5-2670v3
with 384GB of RAM.

----------------------------------------------------------------------
 Limit  (cost=1248808.70..1248808.71 rows=1 width=132)
   CTE year_total
     ->  Append  (cost=193769.66..496076.44 rows=4778919 width=220)
           ->  HashAggregate  (cost=193769.66..226692.26 rows=2633808 width=178)
                 Group Key: customer.c_customer_id, customer.c_first_name, customer.c_last_name, customer.c_preferred_cust_flag, customer.c_birth_country, customer.c_login, customer.c_email_address, date_dim.d_year
                 ->  Custom Scan (GpuJoin)  (cost=14554.84..108170.90 rows=2633808 width=178)
                       Bulkload: On (density: 100.00%)
                       Depth 1: Logic: GpuHashJoin, HashKeys: (ss_sold_date_sk), JoinQual: (ss_sold_date_sk = d_date_sk), nrows_ratio: 0.95623338
                       Depth 2: Logic: GpuHashJoin, HashKeys: (ss_customer_sk), JoinQual: (ss_customer_sk = c_customer_sk), nrows_ratio: 0.91441411
                       ->  Custom Scan (BulkScan) on store_sales  (cost=0.00..96501.23 rows=2880323 width=38)
                       ->  Seq Scan on date_dim  (cost=0.00..2705.49 rows=73049 width=16)
                       ->  Seq Scan on customer  (cost=0.00..4358.00 rows=100000 width=156)
           ->  HashAggregate  (cost=125474.72..143301.10 rows=1426111 width=181)
                 Group Key: customer_1.c_customer_id, customer_1.c_first_name, customer_1.c_last_name, customer_1.c_preferred_cust_flag, customer_1.c_birth_country, customer_1.c_login, customer_1.c_email_address, date_dim_1.d_year
                 ->  Custom Scan (GpuJoin)  (cost=14610.07..79126.11 rows=1426111 width=181)
                       Bulkload: On (density: 100.00%)
                       Depth 1: Logic: GpuHashJoin, HashKeys: (cs_bill_customer_sk), JoinQual: (c_customer_sk = cs_bill_customer_sk), nrows_ratio: 0.99446636
                       Depth 2: Logic: GpuHashJoin, HashKeys: (cs_sold_date_sk), JoinQual: (cs_sold_date_sk = d_date_sk), nrows_ratio: 0.98929483
                       ->  Custom Scan (BulkScan) on catalog_sales  (cost=0.00..65628.43 rows=1441543 width=41)
                       ->  Seq Scan on customer customer_1  (cost=0.00..4358.00 rows=100000 width=156)
                       ->  Seq Scan on date_dim date_dim_1  (cost=0.00..2705.49 rows=73049 width=16)
           ->  HashAggregate  (cost=69306.38..78293.88 rows=719000 width=181)
                 Group Key: customer_2.c_customer_id, customer_2.c_first_name, customer_2.c_last_name, customer_2.c_preferred_cust_flag, customer_2.c_birth_country, customer_2.c_login, customer_2.c_email_address, date_dim_2.d_year
                 ->  Custom Scan (GpuJoin)  (cost=14702.18..45938.88 rows=719000 width=181)
                       Bulkload: On (density: 100.00%)
                       Depth 1: Logic: GpuHashJoin, HashKeys: (ws_bill_customer_sk), JoinQual: (c_customer_sk = ws_bill_customer_sk), nrows_ratio: 0.99973309
                       Depth 2: Logic: GpuHashJoin, HashKeys: (ws_sold_date_sk), JoinQual: (ws_sold_date_sk = d_date_sk), nrows_ratio: 0.99946618
                       ->  Custom Scan (BulkScan) on web_sales  (cost=0.00..32877.84 rows=719384 width=41)
                       ->  Seq Scan on customer customer_2  (cost=0.00..4358.00 rows=100000 width=156)
                       ->  Seq Scan on date_dim date_dim_2  (cost=0.00..2705.49 rows=73049 width=16)
   ->  Sort  (cost=752732.27..752732.27 rows=1 width=132)
         Sort Key: t_s_secyear.customer_id, t_s_secyear.customer_first_name, t_s_secyear.customer_last_name, t_s_secyear.customer_email_address
         ->  Nested Loop  (cost=0.00..752732.26 rows=1 width=132)
               Join Filter: ((t_s_secyear.customer_id = t_w_secyear.customer_id) AND (CASE WHEN (t_c_firstyear.year_total > '0'::numeric) THEN (t_c_secyear.year_total / t_c_firstyear.year_total) ELSE NULL::numeric END > CASE WHEN (t_w_firstyear.year_total > '0'::numeric) THEN (t_w_secyear.year_total / t_w_firstyear.year_total) ELSE NULL::numeric END))
               ->  Nested Loop  (cost=0.00..633256.31 rows=1 width=308)
                     Join Filter: ((t_s_secyear.customer_id = t_c_secyear.customer_id) AND (CASE WHEN (t_c_firstyear.year_total > '0'::numeric) THEN (t_c_secyear.year_total / t_c_firstyear.year_total) ELSE NULL::numeric END > CASE WHEN (t_s_firstyear.year_total > '0'::numeric) THEN (t_s_secyear.year_total / t_s_firstyear.year_total) ELSE NULL::numeric END))
                     ->  Nested Loop  (cost=0.00..513780.36 rows=1 width=320)
                           Join Filter: (t_s_firstyear.customer_id = t_s_secyear.customer_id)
                           ->  Nested Loop  (cost=0.00..394303.22 rows=2 width=156)
                                 Join Filter: (t_s_firstyear.customer_id = t_w_firstyear.customer_id)
                                 ->  Nested Loop  (cost=0.00..262876.15 rows=8 width=104)
                                       Join Filter: (t_s_firstyear.customer_id = t_c_firstyear.customer_id)
                                       ->  CTE Scan on year_total t_s_firstyear  (cost=0.00..131420.27 rows=40 width=52)
                                             Filter: ((year_total > '0'::numeric) AND (sale_type = 's'::text) AND (dyear = 2001))
                                       ->  CTE Scan on year_total t_c_firstyear  (cost=0.00..131420.27 rows=40 width=52)
                                             Filter: ((year_total > '0'::numeric) AND (sale_type = 'c'::text) AND (dyear = 2001))
                                 ->  CTE Scan on year_total t_w_firstyear  (cost=0.00..131420.27 rows=40 width=52)
                                       Filter: ((year_total > '0'::numeric) AND (sale_type = 'w'::text) AND (dyear = 2001))
                           ->  CTE Scan on year_total t_s_secyear  (cost=0.00..119472.98 rows=119 width=164)
                                 Filter: ((sale_type = 's'::text) AND (dyear = 2002))
                     ->  CTE Scan on year_total t_c_secyear  (cost=0.00..119472.98 rows=119 width=52)
                           Filter: ((sale_type = 'c'::text) AND (dyear = 2002))
               ->  CTE Scan on year_total t_w_secyear  (cost=0.00..119472.98 rows=119 width=52)
                     Filter: ((sale_type = 'w'::text) AND (dyear = 2002))
(54 rows)
----------------------------------------------------------------------

On the other hand, once I disabled nested loops with SET enable_nestloop=off,
EXPLAIN produced the following output, and the query completed in 11.2 seconds.
This plan replaces the stack of Nested Loops over CTE Scans with Hash Joins, so
the join logic no longer needs to full-scan the CTE multiple times.

-----------------------------------------------------------------------
 Limit  (cost=1248761.93..1248761.93 rows=1 width=132)
   CTE year_total
     ->  Append  (cost=193769.66..496076.44 rows=4778919 width=220)
           ->  HashAggregate  (cost=193769.66..226692.26 rows=2633808 width=178)
                 Group Key: customer.c_customer_id, customer.c_first_name, customer.c_last_name, customer.c_preferred_cust_flag, customer.c_birth_country, customer.c_login, customer.c_email_address, date_dim.d_year
                 ->  Custom Scan (GpuJoin)  (cost=14554.84..108170.90 rows=2633808 width=178)
                       Bulkload: On (density: 100.00%)
                       Depth 1: Logic: GpuHashJoin, HashKeys: (ss_sold_date_sk), JoinQual: (ss_sold_date_sk = d_date_sk), nrows_ratio: 0.95623338
                       Depth 2: Logic: GpuHashJoin, HashKeys: (ss_customer_sk), JoinQual: (ss_customer_sk = c_customer_sk), nrows_ratio: 0.91441411
                       ->  Custom Scan (BulkScan) on store_sales  (cost=0.00..96501.23 rows=2880323 width=38)
                       ->  Seq Scan on date_dim  (cost=0.00..2705.49 rows=73049 width=16)
                       ->  Seq Scan on customer  (cost=0.00..4358.00 rows=100000 width=156)
           ->  HashAggregate  (cost=125474.72..143301.10 rows=1426111 width=181)
                 Group Key: customer_1.c_customer_id, customer_1.c_first_name, customer_1.c_last_name, customer_1.c_preferred_cust_flag, customer_1.c_birth_country, customer_1.c_login, customer_1.c_email_address, date_dim_1.d_year
                 ->  Custom Scan (GpuJoin)  (cost=14610.07..79126.11 rows=1426111 width=181)
                       Bulkload: On (density: 100.00%)
                       Depth 1: Logic: GpuHashJoin, HashKeys: (cs_bill_customer_sk), JoinQual: (c_customer_sk = cs_bill_customer_sk), nrows_ratio: 0.99446636
                       Depth 2: Logic: GpuHashJoin, HashKeys: (cs_sold_date_sk), JoinQual: (cs_sold_date_sk = d_date_sk), nrows_ratio: 0.98929483
                       ->  Custom Scan (BulkScan) on catalog_sales  (cost=0.00..65628.43 rows=1441543 width=41)
                       ->  Seq Scan on customer customer_1  (cost=0.00..4358.00 rows=100000 width=156)
                       ->  Seq Scan on date_dim date_dim_1  (cost=0.00..2705.49 rows=73049 width=16)
           ->  HashAggregate  (cost=69306.38..78293.88 rows=719000 width=181)
                 Group Key: customer_2.c_customer_id, customer_2.c_first_name, customer_2.c_last_name, customer_2.c_preferred_cust_flag, customer_2.c_birth_country, customer_2.c_login, customer_2.c_email_address, date_dim_2.d_year
                 ->  Custom Scan (GpuJoin)  (cost=14702.18..45938.88 rows=719000 width=181)
                       Bulkload: On (density: 100.00%)
                       Depth 1: Logic: GpuHashJoin, HashKeys: (ws_bill_customer_sk), JoinQual: (c_customer_sk = ws_bill_customer_sk), nrows_ratio: 0.99973309
                       Depth 2: Logic: GpuHashJoin, HashKeys: (ws_sold_date_sk), JoinQual: (ws_sold_date_sk = d_date_sk), nrows_ratio: 0.99946618
                       ->  Custom Scan (BulkScan) on web_sales  (cost=0.00..32877.84 rows=719384 width=41)
                       ->  Seq Scan on customer customer_2  (cost=0.00..4358.00 rows=100000 width=156)
                       ->  Seq Scan on date_dim date_dim_2  (cost=0.00..2705.49 rows=73049 width=16)
   ->  Sort  (cost=752685.49..752685.50 rows=1 width=132)
         Sort Key: t_s_secyear.customer_id, t_s_secyear.customer_first_name, t_s_secyear.customer_last_name, t_s_secyear.customer_email_address
         ->  Hash Join  (cost=621264.91..752685.48 rows=1 width=132)
               Hash Cond: (t_s_secyear.customer_id = t_w_secyear.customer_id)
               Join Filter: (CASE WHEN (t_c_firstyear.year_total > '0'::numeric) THEN (t_c_secyear.year_total / t_c_firstyear.year_total) ELSE NULL::numeric END > CASE WHEN (t_w_firstyear.year_total > '0'::numeric) THEN (t_w_secyear.year_total / t_w_firstyear.year_total) ELSE NULL::numeric END)
               ->  Hash Join  (cost=501790.45..633210.98 rows=1 width=308)
                     Hash Cond: (t_s_secyear.customer_id = t_c_secyear.customer_id)
                     Join Filter: (CASE WHEN (t_c_firstyear.year_total > '0'::numeric) THEN (t_c_secyear.year_total / t_c_firstyear.year_total) ELSE NULL::numeric END > CASE WHEN (t_s_firstyear.year_total > '0'::numeric) THEN (t_s_secyear.year_total / t_s_firstyear.year_total) ELSE NULL::numeric END)
                     ->  Hash Join  (cost=382315.99..513736.47 rows=1 width=320)
                           Hash Cond: (t_s_firstyear.customer_id = t_s_secyear.customer_id)
                           ->  Hash Join  (cost=262841.53..394261.97 rows=2 width=156)
                                 Hash Cond: (t_w_firstyear.customer_id = t_s_firstyear.customer_id)
                                 ->  CTE Scan on year_total t_w_firstyear  (cost=0.00..131420.27 rows=40 width=52)
                                       Filter: ((year_total > '0'::numeric) AND (sale_type = 'w'::text) AND (dyear = 2001))
                                 ->  Hash  (cost=262841.43..262841.43 rows=8 width=104)
                                       ->  Hash Join  (cost=131420.77..262841.43 rows=8 width=104)
                                             Hash Cond: (t_s_firstyear.customer_id = t_c_firstyear.customer_id)
                                             ->  CTE Scan on year_total t_s_firstyear  (cost=0.00..131420.27 rows=40 width=52)
                                                   Filter: ((year_total > '0'::numeric) AND (sale_type = 's'::text) AND (dyear = 2001))
                                             ->  Hash  (cost=131420.27..131420.27 rows=40 width=52)
                                                   ->  CTE Scan on year_total t_c_firstyear  (cost=0.00..131420.27 rows=40 width=52)
                                                         Filter: ((year_total > '0'::numeric) AND (sale_type = 'c'::text) AND (dyear = 2001))
                           ->  Hash  (cost=119472.98..119472.98 rows=119 width=164)
                                 ->  CTE Scan on year_total t_s_secyear  (cost=0.00..119472.98 rows=119 width=164)
                                       Filter: ((sale_type = 's'::text) AND (dyear = 2002))
                     ->  Hash  (cost=119472.98..119472.98 rows=119 width=52)
                           ->  CTE Scan on year_total t_c_secyear  (cost=0.00..119472.98 rows=119 width=52)
                                 Filter: ((sale_type = 'c'::text) AND (dyear = 2002))
               ->  Hash  (cost=119472.98..119472.98 rows=119 width=52)
                     ->  CTE Scan on year_total t_w_secyear  (cost=0.00..119472.98 rows=119 width=52)
                           Filter: ((sale_type = 'w'::text) AND (dyear = 2002))
(61 rows)
------------------------------------------------------------------

SET enable_nestloop=off performed fine in this case. However, this restriction
seems too heavy-handed to me.

In fact, the cost of the Hash Join underlying the Sort node is:
    ->  Hash Join  (cost=621264.91..752685.48 rows=1 width=132)

On the other hand, the Nested Loop in the same place is:
    ->  Nested Loop  (cost=0.00..752732.26 rows=1 width=132)

Probably a small GUC adjustment could make the optimizer prefer a Hash Join for
these kinds of queries.
Do you have a good idea?
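
In the meantime, a narrowly scoped workaround (just a sketch, assuming only this
query suffers from the nested-loop plan) is to disable nested loops for a single
transaction instead of session-wide:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK
    -- run the problematic query here, e.g. the TPC-DS query no. 04 below
    COMMIT;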


====== QUERY: No.04 ========================

EXPLAIN ANALYZE
with year_total as (
 select c_customer_id customer_id
       ,c_first_name customer_first_name
       ,c_last_name customer_last_name
       ,c_preferred_cust_flag customer_preferred_cust_flag
       ,c_birth_country customer_birth_country
       ,c_login customer_login
       ,c_email_address customer_email_address
       ,d_year dyear
       ,sum(((ss_ext_list_price-ss_ext_wholesale_cost-ss_ext_discount_amt)+ss_ext_sales_price)/2) year_total
       ,'s' sale_type
 from customer
     ,store_sales
     ,date_dim
 where c_customer_sk = ss_customer_sk
   and ss_sold_date_sk = d_date_sk
 group by c_customer_id
         ,c_first_name
         ,c_last_name
         ,c_preferred_cust_flag
         ,c_birth_country
         ,c_login
         ,c_email_address
         ,d_year
 union all
 select c_customer_id customer_id
       ,c_first_name customer_first_name
       ,c_last_name customer_last_name
       ,c_preferred_cust_flag customer_preferred_cust_flag
       ,c_birth_country customer_birth_country
       ,c_login customer_login
       ,c_email_address customer_email_address
       ,d_year dyear
       ,sum((((cs_ext_list_price-cs_ext_wholesale_cost-cs_ext_discount_amt)+cs_ext_sales_price)/2)) year_total
       ,'c' sale_type
 from customer
     ,catalog_sales
     ,date_dim
 where c_customer_sk = cs_bill_customer_sk
   and cs_sold_date_sk = d_date_sk
 group by c_customer_id
         ,c_first_name
         ,c_last_name
         ,c_preferred_cust_flag
         ,c_birth_country
         ,c_login
         ,c_email_address
         ,d_year
 union all
 select c_customer_id customer_id
       ,c_first_name customer_first_name
       ,c_last_name customer_last_name
       ,c_preferred_cust_flag customer_preferred_cust_flag
       ,c_birth_country customer_birth_country
       ,c_login customer_login
       ,c_email_address customer_email_address
       ,d_year dyear
       ,sum((((ws_ext_list_price-ws_ext_wholesale_cost-ws_ext_discount_amt)+ws_ext_sales_price)/2)) year_total
       ,'w' sale_type
 from customer
     ,web_sales
     ,date_dim
 where c_customer_sk = ws_bill_customer_sk
   and ws_sold_date_sk = d_date_sk
 group by c_customer_id
         ,c_first_name
         ,c_last_name
         ,c_preferred_cust_flag
         ,c_birth_country
         ,c_login
         ,c_email_address
         ,d_year
)
select t_s_secyear.customer_id
      ,t_s_secyear.customer_first_name
      ,t_s_secyear.customer_last_name
      ,t_s_secyear.customer_email_address
from year_total t_s_firstyear
    ,year_total t_s_secyear
    ,year_total t_c_firstyear
    ,year_total t_c_secyear
    ,year_total t_w_firstyear
    ,year_total t_w_secyear
where t_s_secyear.customer_id = t_s_firstyear.customer_id
  and t_s_firstyear.customer_id = t_c_secyear.customer_id
  and t_s_firstyear.customer_id = t_c_firstyear.customer_id
  and t_s_firstyear.customer_id = t_w_firstyear.customer_id
  and t_s_firstyear.customer_id = t_w_secyear.customer_id
  and t_s_firstyear.sale_type = 's'
  and t_c_firstyear.sale_type = 'c'
  and t_w_firstyear.sale_type = 'w'
  and t_s_secyear.sale_type = 's'
  and t_c_secyear.sale_type = 'c'
  and t_w_secyear.sale_type = 'w'
  and t_s_firstyear.dyear = 2001
  and t_s_secyear.dyear = 2001+1
  and t_c_firstyear.dyear = 2001
  and t_c_secyear.dyear = 2001+1
  and t_w_firstyear.dyear = 2001
  and t_w_secyear.dyear = 2001+1
  and t_s_firstyear.year_total > 0
  and t_c_firstyear.year_total > 0
  and t_w_firstyear.year_total > 0
  and case when t_c_firstyear.year_total > 0 then t_c_secyear.year_total / t_c_firstyear.year_total else null end
        > case when t_s_firstyear.year_total > 0 then t_s_secyear.year_total / t_s_firstyear.year_total else null end
  and case when t_c_firstyear.year_total > 0 then t_c_secyear.year_total / t_c_firstyear.year_total else null end
        > case when t_w_firstyear.year_total > 0 then t_w_secyear.year_total / t_w_firstyear.year_total else null end
order by t_s_secyear.customer_id
        ,t_s_secyear.customer_first_name
        ,t_s_secyear.customer_last_name
        ,t_s_secyear.customer_email_address
limit 100;


--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>




Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Greg Stark
Date:
On Thu, Aug 13, 2015 at 2:49 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> In fact, the cost of the Hash Join underlying the Sort node is:
>     ->  Hash Join  (cost=621264.91..752685.48 rows=1 width=132)
>
> On the other hand, the Nested Loop in the same place is:
>     ->  Nested Loop  (cost=0.00..752732.26 rows=1 width=132)
>
> Probably a small GUC adjustment could make the optimizer prefer a Hash Join for
> these kinds of queries.

With that kind of discrepancy I doubt adjusting GUCs will be sufficient

> Do you have a good idea?

Do you have EXPLAIN ANALYZE from the plan that finishes? Are there any
row estimates that are way off?


-- 
greg



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Kouhei Kaigai
Date:
> On Thu, Aug 13, 2015 at 2:49 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> > In fact, the cost of the Hash Join underlying the Sort node is:
> >     ->  Hash Join  (cost=621264.91..752685.48 rows=1 width=132)
> >
> > On the other hand, the Nested Loop in the same place is:
> >     ->  Nested Loop  (cost=0.00..752732.26 rows=1 width=132)
> >
> > Probably a small GUC adjustment could make the optimizer prefer a Hash Join for
> > these kinds of queries.
> 
> With that kind of discrepancy I doubt adjusting GUCs will be sufficient
> 
> > Do you have a good idea?
> 
> Do you have EXPLAIN ANALYZE from the plan that finishes? Are there any
> row estimates that are way off?
>
Yes, the EXPLAIN ANALYZE output is attached.

According to it, the CTE year_total generates 384,208 rows, which is much smaller
than the estimate (4.78M rows); however, the selectivity of the CTE Scan filters
was not as high as expected.
For example, the deepest CTE Scans return 37,923 and 26,314 rows, even though
40 rows were expected for each. At the next level, the join combines 11,324 rows
with 9,952 rows, against an estimate of 40 rows x 8 rows.
If a Nested Loop is placed there instead of a Hash Join, the number of inner
loops explodes.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>


Attachments

Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Kouhei Kaigai
Date:
Here is one other thing I learned from the TPC-DS benchmark.

The attached query is Q4 of TPC-DS, and this result is for SF=100.
It took a long time to complete (about 30 minutes); please see the attached
EXPLAIN ANALYZE output.

Its core workload is in the CTE year_total. Its Append node has three
underlying HashAggregate nodes, each of which also joins several tables.

Below is the first HashAggregate node. It consumes 268M rows and generates
9M rows. The underlying GpuJoin takes 74 seconds to produce the 268M rows,
so we can estimate that the HashAggregate itself consumed about 400 seconds.

  HashAggregate  (cost=18194948.40..21477516.00 rows=262605408 width=178) (actual time=480652.856..488055.918 rows=9142442 loops=1)
    Group Key: customer.c_customer_id, customer.c_first_name, customer.c_last_name, customer.c_preferred_cust_flag, customer.c_birth_country, customer.c_login, customer.c_email_address, date_dim.d_year
    ->  Custom Scan (GpuJoin)  (cost=101342.54..9660272.64 rows=262605408 width=178) (actual time=2472.174..73908.894 rows=268562375 loops=1)
          Bulkload: On (density: 100.00%)
          Depth 1: Logic: GpuHashJoin, HashKeys: (ss_sold_date_sk), JoinQual: (ss_sold_date_sk = d_date_sk), nrows (287997024 -> 275041999, 95.50% expected 95.47%)
          Depth 2: Logic: GpuHashJoin, HashKeys: (ss_customer_sk), JoinQual: (ss_customer_sk = c_customer_sk), nrows (275041999 -> 268562375, 93.25% expected 91.18%)
          ->  Custom Scan (BulkScan) on store_sales  (cost=0.00..9649559.60 rows=287996960 width=38) (actual time=18.372..54522.803 rows=287997024 loops=1)
          ->  Seq Scan on date_dim  (cost=0.00..2705.49 rows=73049 width=16) (actual time=0.015..15.533 rows=73049 loops=1)
          ->  Seq Scan on customer  (cost=0.00..87141.74 rows=2000074 width=156) (actual time=0.018..582.373 rows=2000000 loops=1)

The other two HashAggregate nodes behave similarly. The second one consumed
281 seconds, including 30 seconds in its underlying GpuJoin; the third one
consumed 138 seconds, including 25 seconds in its underlying GpuJoin.

Setting aside my custom join implementation, it seems to me the HashAggregate
nodes consume much more time than expected.

One characteristic of this workload is that the aggregation uses eight
grouping keys. I suspect the cost of function invocations for hash-value
computation and equality checks is the main culprit.


Let's dive into nodeAgg.c.
ExecAgg() calls agg_fill_hash_table() to fill the hash table with rows fetched
from the underlying plan. While building the hash table, it repeatedly calls
hash functions (in TupleHashTableHash) and equality-check functions (in
execTuplesMatch).
Both go through the FunctionCallX() interface, which passes arguments via an
FmgrInfo structure; that is usually not the best path from a performance
perspective. In particular, this workload has 268M input rows and 8 grouping
keys, so on the order of 268M (rows) x 8 (grouping keys) x 2 (hash + equality),
i.e. roughly 4.3 billion function calls, go through FmgrInfo.

I think SortSupport logic provides a reasonable way to solve this
kind of problem. For example, btint4sortsupport() informs a function
pointer of the fast version of comparator (btint4fastcmp) which takes
two Datum argument without indirect memory reference.
This mechanism will also make sense for HashAggregate logic, to reduce
the cost of function invocations.

Please comment on the idea I noticed here.


And, as an aside, if HashAggregate could pick up a precomputed hash value of
the grouping keys from the target list of the underlying plan node (GpuJoin in
this case), it might be possible to off-load the hash-value calculation to the
GPU device. :-)

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>


> -----Original Message-----
> From: pgsql-hackers-owner@postgresql.org
> [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Kouhei Kaigai
> Sent: Thursday, August 13, 2015 8:23 PM
> To: Greg Stark
> Cc: PostgreSQL-development
> Subject: Re: [HACKERS] Our trial to TPC-DS but optimizer made unreasonable plan
> 
> > On Thu, Aug 13, 2015 at 2:49 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> > > In fact, the cost of the Hash Join underlying the Sort node is:
> > >     ->  Hash Join  (cost=621264.91..752685.48 rows=1 width=132)
> > >
> > > On the other hand, the Nested Loop in the same place is:
> > >     ->  Nested Loop  (cost=0.00..752732.26 rows=1 width=132)
> > >
> > > Probably a small GUC adjustment could make the optimizer prefer a Hash Join for
> > > these kinds of queries.
> >
> > With that kind of discrepancy I doubt adjusting GUCs will be sufficient
> >
> > > Do you have a good idea?
> >
> > Do you have EXPLAIN ANALYZE from the plan that finishes? Are there any
> > row estimates that are way off?
> >
> Yes, the EXPLAIN ANALYZE output is attached.
> 
> According to it, the CTE year_total generates 384,208 rows, which is much smaller
> than the estimate (4.78M rows); however, the selectivity of the CTE Scan filters
> was not as high as expected.
> For example, the deepest CTE Scans return 37,923 and 26,314 rows, even though
> 40 rows were expected for each. At the next level, the join combines 11,324 rows
> with 9,952 rows, against an estimate of 40 rows x 8 rows.
> If a Nested Loop is placed there instead of a Hash Join, the number of inner
> loops explodes.
> 
> Thanks,
> --
> NEC Business Creation Division / PG-Strom Project
> KaiGai Kohei <kaigai@ak.jp.nec.com>


Attachments

Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Robert Haas
Date:
On Mon, Aug 17, 2015 at 9:40 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> I think SortSupport logic provides a reasonable way to solve this
> kind of problem. For example, btint4sortsupport() informs a function
> pointer of the fast version of comparator (btint4fastcmp) which takes
> two Datum argument without indirect memory reference.
> This mechanism will also make sense for HashAggregate logic, to reduce
> the cost of function invocations.
>
> Please comment on the idea I noticed here.

It's possible that this can work, but it might be a good idea to run
'perf' on this query and find out where the CPU time is actually
going.  That might give you a clearer picture of why the HashAggregate
is slow.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Qingqing Zhou
Date:
On Mon, Aug 17, 2015 at 6:40 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> Here is one other thing I could learn from TPC-DS benchmark.
>
> The attached query is Q4 of TPC-DS, and its result was towards SF=100.
> It took long time to compete (about 30min), please see the attached
> EXPLAIN ANALYZE output.
>
Look at this:

 ->  CTE Scan on year_total t_s_firstyear  (cost=0.00..13120715.27 rows=3976 width=52) (actual time=0.020..5425.980 rows=1816438 loops=1)
       Filter: ((year_total > '0'::numeric) AND (sale_type = 's'::text) AND (dyear = 2001))
       Rows Removed by Filter: 19879897
 ->  CTE Scan on year_total t_s_secyear  (cost=0.00..11927922.98 rows=11928 width=164) (actual time=0.007..45.249 rows=46636 loops=1)
       Filter: ((sale_type = 's'::text) AND (dyear = 2002))
       Rows Removed by Filter: 185596

Expanding the CTE should help here, because we can then push the filter down.
I did a quick patch to demonstrate the idea, following Tom's proposal
(38448.1430519406@sss.pgh.pa.us), and I see an obvious performance boost:

Turn off NLJ:
  original: Planning time: 4.391 ms
            Execution time: 77113.721 ms
  patched:  Planning time: 8.429 ms
            Execution time: 18572.663 ms

+ work_mem to 1G:
  original: Planning time: 4.487 ms
            Execution time: 29249.466 ms
  patched:  Planning time: 11.148 ms
            Execution time: 7309.586 ms
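
For readers who want to try the effect without the patch, the manual equivalent of
this inlining is to reference the aggregate as a plain subquery instead of a CTE, so
the planner can push the filter below the aggregation. A toy sketch with a
hypothetical "sales" table (not the TPC-DS schema):

    -- CTE form: with today's planner the CTE is an optimization fence,
    -- so it is computed in full and only then filtered.
    WITH year_total AS (
        SELECT customer_id, date_part('year', sold_on) AS dyear, sum(amount) AS total
        FROM sales
        GROUP BY customer_id, date_part('year', sold_on)
    )
    SELECT * FROM year_total WHERE dyear = 2001;

    -- Subquery form: the dyear filter references only a grouping column,
    -- so the planner can push it down into the subquery before aggregation.
    SELECT *
    FROM (SELECT customer_id, date_part('year', sold_on) AS dyear, sum(amount) AS total
          FROM sales
          GROUP BY customer_id, date_part('year', sold_on)) AS year_total
    WHERE dyear = 2001;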

Attached please find the WIP patch and also the ANALYZE results.
Notes: the patch may not directly apply to head as some network issue
here so my Linux box can't talk to git server.

Regards,
Qingqing

Attachments

Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Josh Berkus
Date:
On 08/18/2015 04:40 PM, Qingqing Zhou wrote:
> Attached please find the WIP patch and also the ANALYZE results.
> Notes: the patch may not directly apply to head as some network issue
> here so my Linux box can't talk to git server.

So, one of the things we previously mentioned is that currently many
users deliberately use CTEs as an optimization barrier in order to force
the planner.  Given that, we need some kind of option to force the old
behavior; either SQL syntax or a GUC option.  Otherwise this will cause
a bunch of backwards-compatibility breakage.

Ideas?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Tom Lane
Date:
Josh Berkus <josh@agliodbs.com> writes:
> On 08/18/2015 04:40 PM, Qingqing Zhou wrote:
>> Attached please find the WIP patch and also the ANALYZE results.
>> Notes: the patch may not directly apply to head as some network issue
>> here so my Linux box can't talk to git server.

> So, one of the things we previously mentioned is that currently many
> users deliberately use CTEs as an optimization barrier in order to force
> the planner.  Given that, we need some kind of option to force the old
> behavior; either SQL syntax or a GUC option.

I think we already agreed what the syntax would be: ye good olde OFFSET 0
in the subquery.
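
To illustrate, a sketch with a hypothetical "sales" table: assuming CTEs become
inlined by default as discussed here, putting OFFSET 0 inside the CTE body would
preserve today's fence, since a subquery carrying LIMIT/OFFSET is never flattened:

    WITH totals AS (
        SELECT customer_id, sum(amount) AS total
        FROM sales
        GROUP BY customer_id
        OFFSET 0   -- explicit request to keep this CTE as an optimization barrier
    )
    SELECT * FROM totals WHERE total > 0;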

We could have a GUC option too if people are sufficiently worried about
it, but I think that the need for one hasn't really been proven.
        regards, tom lane



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Josh Berkus
Date:
On 08/19/2015 01:32 PM, Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
>> On 08/18/2015 04:40 PM, Qingqing Zhou wrote:
>>> Attached please find the WIP patch and also the ANALYZE results.
>>> Notes: the patch may not directly apply to head as some network issue
>>> here so my Linux box can't talk to git server.
> 
>> So, one of the things we previously mentioned is that currently many
>> users deliberately use CTEs as an optimization barrier in order to force
>> the planner.  Given that, we need some kind of option to force the old
>> behavior; either SQL syntax or a GUC option.
> 
> I think we already agreed what the syntax would be: ye good olde OFFSET 0
> in the subquery.
> 
> We could have a GUC option too if people are sufficiently worried about
> it, but I think that the need for one hasn't really been proven.

Asking users to refactor their applications to add OFFSET 0 is a bit
painful, if we could take care of it via a backwards-compatibility GUC.
We have many users who are specifically using the CTE optimization
barrier to work around planner failures.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Peter Geoghegan
Date:
On Mon, Aug 17, 2015 at 6:40 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> I think SortSupport logic provides a reasonable way to solve this
> kind of problem. For example, btint4sortsupport() informs a function
> pointer of the fast version of comparator (btint4fastcmp) which takes
> two Datum argument without indirect memory reference.
> This mechanism will also make sense for HashAggregate logic, to reduce
> the cost of function invocations.
>
> Please comment on the idea I noticed here.

Is this a 9.5-based system? If so, then you'd benefit from the
memcmp() pre-check within varstr_cmp() by being on 9.5, since the
pre-check is not limited to cases that use text/varchar SortSupport --
this could make a big difference here. If not, then it might be
somewhat helpful to add a pre-check that considers total binary
equality only before bcTruelen() is ever called. Not so sure about the
latter idea, though.

I'm not sure if it would help with hash aggregates to use something
like SortSupport to avoid fmgr overhead. It might make enough of a
difference to matter, but maybe the easier win would come from
considering simple binary equality first, and only then using an
equality operator (think HOT style checks). That would have the
advantage of requiring no per-type/operator class support at all,
since it's safe to assume that binary equality is a proxy for
"equivalence" of sort order (or whatever we call the case where
5.00::numeric and 5.000::numeric are considered equal).

-- 
Peter Geoghegan



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Kouhei Kaigai
Date:
> -----Original Message-----
> From: pgsql-hackers-owner@postgresql.org
> [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Peter Geoghegan
> Sent: Thursday, August 27, 2015 8:31 AM
> To: Kaigai Kouhei(海外 浩平)
> Cc: Greg Stark; PostgreSQL-development
> Subject: Re: [HACKERS] Our trial to TPC-DS but optimizer made unreasonable plan
> 
> On Mon, Aug 17, 2015 at 6:40 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> > I think SortSupport logic provides a reasonable way to solve this
> > kind of problem. For example, btint4sortsupport() informs a function
> > pointer of the fast version of comparator (btint4fastcmp) which takes
> > two Datum argument without indirect memory reference.
> > This mechanism will also make sense for HashAggregate logic, to reduce
> > the cost of function invocations.
> >
> > Please comment on the idea I noticed here.
> 
> Is this a 9.5-based system? If so, then you'd benefit from the
> memcmp() pre-check within varstr_cmp() by being on 9.5, since the
> pre-check is not limited to cases that use text/varchar SortSupport --
> this could make a big difference here. If not, then it might be
> somewhat helpful to add a pre-check that considers total binary
> equality only before bcTruelen() is ever called. Not so sure about the
> latter idea, though.
> 
My measurements were done on a v9.5-based system. So it also seems to me that
replacing CHAR(n) with VARCHAR(n) would make sense.
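
For instance, a sketch of that conversion on the TPC-DS customer table (the column
names and widths here are illustrative; match them to the actual DDL):

    -- Convert fixed-width CHAR(n) grouping keys so equality checks avoid
    -- bpchar's trailing-blank handling.
    ALTER TABLE customer
        ALTER COLUMN c_customer_id TYPE varchar(16),
        ALTER COLUMN c_login       TYPE varchar(13);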

> I'm not sure if it would help with hash aggregates to use something
> like SortSupport to avoid fmgr overhead. It might make enough of a
> difference to matter, but maybe the easier win would come from
> considering simple binary equality first, and only then using an
> equality operator (think HOT style checks). That would have the
> advantage of requiring no per-type/operator class support at all,
> since it's safe to assume that binary equality is a proxy for
> "equivalence" of sort order (or whatever we call the case where
> 5.00::numeric and 5.000::numeric are considered equal).
>
According to the perf result, my presumption was wrong, at least for the major
portion, so I don't think eliminating the fmgr overhead is the first priority.
However, a shortcut path for the equality checks looks like a great leap to me,
because it avoids the strict per-data-type equality checks, which often involve
complicated logic.
It is probably smarter to apply this binary-equality proxy only to problematic
data types such as bpchar(n); it would be less effective on simple data types
such as int4.

On the other hand, another big portion of HashAggregate is the calculation of
the hash value over all the grouping keys.
It may be beneficial to have an option to reference a result attribute of the
underlying plan, which would potentially allow a co-processor to compute the
hash value instead of the CPU.

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>


Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Gavin Flower
Date:
On 27/08/15 13:36, Kouhei Kaigai wrote:
[...]
> My measurements were done on a v9.5-based system. So it also seems to me that
> replacing CHAR(n) with VARCHAR(n) would make sense.

Is there any reason to not simply use text instead of CHAR(n) or VARCHAR(n)?

[...]


-Gavin



Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Kouhei Kaigai
Date:
> On 27/08/15 13:36, Kouhei Kaigai wrote:
> [...]
> > My measurements were done on a v9.5-based system. So it also seems to me that
> > replacing CHAR(n) with VARCHAR(n) would make sense.
> 
> Is there any reason to not simply use text instead of CHAR(n) or VARCHAR(n)?
>
Text is also welcome, of course.
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kaigai@ak.jp.nec.com>


Re: Our trial to TPC-DS but optimizer made unreasonable plan

From
Andres Freund
Date:
On 2015-08-19 15:14:03 -0700, Josh Berkus wrote:
> Asking users to refactor their applications to add OFFSET 0 is a bit
> painful, if we could take care of it via a backwards-compatibility GUC.
>  We have many users who are specifically using the CTE optimization
> barrier to work around planner failures.

Agreed. I think we'll cause a lot of problems in migrations if we do
this unconditionally. I also think CTEs are a much cleaner optimization
barrier than OFFSET 0.

Some are probably going to hate me for this, but I think it'd be better
to change the grammar to something like

    name opt_name_list AS '(' PreparableStmt ')' OPTIONS '(' cte_option_list ')'

and allow specifying 'inline' 'off'/'on'. The GUC would simply change
the default value.
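
To make that concrete, a purely hypothetical example following the production
above (not valid syntax in any released PostgreSQL; the option name and placement
are just the proposal):

    WITH year_total AS (
        SELECT customer_id, sum(amount) AS total
        FROM sales
        GROUP BY customer_id
    ) OPTIONS (inline off)   -- keep this CTE as an optimization barrier
    SELECT * FROM year_total WHERE total > 0;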

Greetings,

Andres Freund