Re: [HACKERS] advanced partition matching algorithm for partition-wise join

From: Mark Dilger
Subject: Re: [HACKERS] advanced partition matching algorithm for partition-wise join
Date:
Msg-id: 768B0ED5-F2AA-4CB8-8DA3-41E2148CF42A@enterprisedb.com
In reply to: Re: [HACKERS] advanced partition matching algorithm for partition-wise join  (Etsuro Fujita <etsuro.fujita@gmail.com>)
Responses: Re: [HACKERS] advanced partition matching algorithm for partition-wise join  (Etsuro Fujita <etsuro.fujita@gmail.com>)
List: pgsql-hackers

> On Feb 5, 2020, at 4:51 AM, Etsuro Fujita <etsuro.fujita@gmail.com> wrote:
>
> <v31-0001-Applying-Etsuro-Fujita-s-patches.patch><v31-0004-Consider-pruned-partitions.patch>


The patches apply and pass all tests.  A review of the patch vs. master looks reasonable.

The partition_join.sql test has multiple levels of partitioning, but when your patch extends that test with "advanced
partition-wise join", none of the tables for the new section have multiple levels.  I spent a little while reviewing the
code and inventing multiple-level partitioning tests for advanced partition-wise join and did not encounter any
problems.  I don't care whether you use this particular example, but do you want to have multiple-level partitioning in
the new test section?

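-- Two-level range partitioning: alpha is partitioned on (a), and its alpha_neg and alpha_pos
-- children are subpartitioned on (b); '-Infinity', 'Infinity', and 'NaN' are used as range bounds.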
CREATE TABLE alpha (a double precision, b double precision) PARTITION BY RANGE (a);
CREATE TABLE alpha_neg PARTITION OF alpha FOR VALUES FROM ('-Infinity') TO (0) PARTITION BY RANGE (b);
CREATE TABLE alpha_pos PARTITION OF alpha FOR VALUES FROM (0) TO ('Infinity') PARTITION BY RANGE (b);
CREATE TABLE alpha_nan PARTITION OF alpha FOR VALUES FROM ('Infinity') TO ('NaN');
CREATE TABLE alpha_neg_neg PARTITION OF alpha_neg FOR VALUES FROM ('-Infinity') TO (0);
CREATE TABLE alpha_neg_pos PARTITION OF alpha_neg FOR VALUES FROM (0) TO ('Infinity');
CREATE TABLE alpha_neg_nan PARTITION OF alpha_neg FOR VALUES FROM ('Infinity') TO ('NaN');
CREATE TABLE alpha_pos_neg PARTITION OF alpha_pos FOR VALUES FROM ('-Infinity') TO (0);
CREATE TABLE alpha_pos_pos PARTITION OF alpha_pos FOR VALUES FROM (0) TO ('Infinity');
CREATE TABLE alpha_pos_nan PARTITION OF alpha_pos FOR VALUES FROM ('Infinity') TO ('NaN');
INSERT INTO alpha (a, b)
    (SELECT * FROM
        (VALUES (-1.0::float8), (0.0::float8), (1.0::float8), ('Infinity'::float8)) a,
        (VALUES (-1.0::float8), (0.0::float8), (1.0::float8), ('Infinity'::float8)) b
    );
ANALYZE alpha;
ANALYZE alpha_neg;
ANALYZE alpha_pos;
ANALYZE alpha_nan;
ANALYZE alpha_neg_neg;
ANALYZE alpha_neg_pos;
ANALYZE alpha_neg_nan;
ANALYZE alpha_pos_neg;
ANALYZE alpha_pos_pos;
ANALYZE alpha_pos_nan;
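-- beta uses a single level of multi-column range partitioning on (a, b).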
CREATE TABLE beta (a double precision, b double precision) PARTITION BY RANGE (a, b);
CREATE TABLE beta_lo PARTITION OF beta FOR VALUES FROM (-5, -5) TO (0, 0);
CREATE TABLE beta_me PARTITION OF beta FOR VALUES FROM (0, 0) TO (0, 5);
CREATE TABLE beta_hi PARTITION OF beta FOR VALUES FROM (0, 5) TO (5, 5);
INSERT INTO beta (a, b)
    (SELECT * FROM
        (VALUES (-1.0::float8), (0.0::float8), (1.0::float8)) a,
        (VALUES (-1.0::float8), (0.0::float8), (1.0::float8)) b
    );
ANALYZE beta;
ANALYZE beta_lo;
ANALYZE beta_me;
ANALYZE beta_hi;
EXPLAIN SELECT * FROM alpha INNER JOIN beta ON (alpha.a = beta.a AND alpha.b = beta.b)
    WHERE alpha.a = 1 AND beta.b = 1;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..2.11 rows=1 width=32)
   ->  Seq Scan on alpha_pos_pos alpha  (cost=0.00..1.06 rows=1 width=16)
         Filter: ((b = '1'::double precision) AND (a = '1'::double precision))
   ->  Seq Scan on beta_hi beta  (cost=0.00..1.04 rows=1 width=16)
         Filter: ((b = '1'::double precision) AND (a = '1'::double precision))
(5 rows)
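
Since the constants prune each side down to a single leaf partition, the plan above is really just a join of
two leaves.  If the multi-level tables go into the new test section, it might also be worth an unpruned
self-join that has to be matched level by level; a rough sketch (plan output omitted here, and assuming
enable_partitionwise_join is turned on, as it is elsewhere in partition_join.sql):

SET enable_partitionwise_join = on;
EXPLAIN (COSTS OFF)
SELECT * FROM alpha t1 INNER JOIN alpha t2 ON (t1.a = t2.a AND t1.b = t2.b);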




—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company





