Discussion: BUG #15733: An insert destined at partition created after a column has been dropped from the parent table fails

BUG #15733: An insert destined at partition created after a column has been dropped from the parent table fails

From: PG Bug reporting form
Date: Wed, 03 Apr 2019 18:22:34 +0000
The following bug has been logged on the website:

Bug reference:      15733
Logged by:          Petr Fedorov
Email address:      petr.fedorov@phystech.edu
PostgreSQL version: 11.2
Operating system:   Centos7
Description:

Steps to reproduce:

1. Create a table to be partitioned:

create table public.partitioned (
    id integer not null,
    level1 integer not null,
    level2 integer not null,
    level3 integer not null,
    data1 integer not null,
    data2 integer not null
) partition by list (level1);


2. Create partitions for some values:

create table public.partitioned_1 partition of public.partitioned
    for values in (1) partition by list (level2);
create table public.partitioned_1_1 partition of public.partitioned_1
    for values in (1) partition by list (level3);
create table public.partitioned_1_1_1 partition of public.partitioned_1_1
    for values in (1);

3. Drop a column from the partitioned table:

alter table partitioned drop column data1;

4. Create partitions for another value:

create table public.partitioned_2 partition of public.partitioned
    for values in (2) partition by list (level2);
create table public.partitioned_2_1 partition of public.partitioned_2
    for values in (1) partition by list (level3);
create table public.partitioned_2_1_1 partition of public.partitioned_2_1
    for values in (1);

5. Insert a row that would land in partitioned_1_1_1 (works):

insert into partitioned values (1, 1, 1, 1, 1);

6. FAIL: Insert a row that would land in partitioned_2_1_1:

insert into partitioned values (2, 2, 1, 1, 1);

ERROR: cannot extract attribute from empty tuple slot
SQL state: XX000

7. Still, you can insert via partitioned_2 directly (works):

insert into partitioned_2 values (2, 2, 1, 1, 1);
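
For illustration (not part of the original report), the standard tableoid
system column can be used to confirm which leaf partition each successfully
inserted row ended up in:

-- Rows from steps 5 and 7 should show up in partitioned_1_1_1 and
-- partitioned_2_1_1 respectively.
select tableoid::regclass as leaf_partition, *
from public.partitioned
order by level1;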


On Wed, Apr 03, 2019 at 06:22:34PM +0000, PG Bug reporting form wrote:
> 6. FAIL: Insert the row which would land at partitioned_2_1_1:
>
> insert into partitioned values(2,2,1, 1,1);
>
> ERROR: cannot extract attribute from empty tuple slot SQL state: XX000

Indeed.  I can see that for v11 but not for HEAD.  Let's keep track of
that.  (Not looking at that now though)
--
Michael

Attachments
Hi Petr, Michael,

On 2019/04/04 10:35, Michael Paquier wrote:
> On Wed, Apr 03, 2019 at 06:22:34PM +0000, PG Bug reporting form wrote:
>> 6. FAIL: Insert the row which would land at partitioned_2_1_1:
>>
>> insert into partitioned values(2,2,1, 1,1); 
>>
>> ERROR: cannot extract attribute from empty tuple slot SQL state: XX000

Thanks for the report.

> Indeed.  I can see that for v11 but not for HEAD.  Let's keep track of
> that.  (Not looking at that now though)

The problem seems to be present only in PG 11; not in PG 10 and HEAD, as
Michael already said.

It seems to have been introduced by 34295b87f in PG 11, wherein the
decision of when to add a tuple to an intermediate parent's dedicated
tuple routing slot is mistakenly getting mixed with the decision of
whether tuple conversion is required between the parent of the previous
level and the current parent.  We didn't catch that error back then,
because we only tried up to 2 levels of partitioning, whereas the test
case presented here uses 3 levels of partitioning.  In the reported test
case, the 1st level needs tuple conversion (between partitioned and
partitioned_2), but the next level does not (between partitioned_2 and
partitioned_2_1), so the slot used when routing through partitioned_2_1 to
the last level doesn't contain a tuple to extract the partition key from,
hence the error.
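
To illustrate the layout mismatch described above, here is a diagnostic query
(added for this write-up, not part of the original message) over the standard
pg_attribute catalog.  The parent keeps a placeholder entry for the dropped
data1 column (attisdropped = true), while partitioned_2 and partitioned_2_1,
created after the drop, have no such entry; their descriptors therefore match
each other but not the parent's, so conversion is needed at the first routing
step but not at the second:

select attrelid::regclass as rel, attnum, attname, attisdropped
from pg_attribute
where attrelid in ('public.partitioned'::regclass,
                   'public.partitioned_2'::regclass,
                   'public.partitioned_2_1'::regclass)
  and attnum > 0
order by attrelid, attnum;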

Attached patch fixes this.

Thanks,
Amit

Attachments
On Thu, Apr 04, 2019 at 07:31:09PM +0900, Amit Langote wrote:
> The problem seems to be present only in PG 11; not in PG 10 and HEAD, as
> Michael already said.
>
> It seems to have been introduced by 34295b87f in PG 11, wherein the
> decision of when to add a tuple to an intermediate parent's dedicated
> tuple routing slot is mistakenly getting mixed with the decision of
> whether tuple conversion is required between the parent of the previous
> level and the current parent.  We didn't catch that error back then,
> because we only tried up to 2 levels of partitioning, whereas the test
> case presented here uses 3 levels of partitioning.  In the reported test
> case, the 1st level needs tuple conversion (between partitioned and
> partitioned_2), but the next level does not (between partitioned_2 and
> partitioned_2_1), so the slot used when routing through partitioned_2_1 to
> the last level doesn't contain a tuple to extract the partition key from,
> hence the error.

Perhaps it would make sense to have this test case on HEAD as well?
It seems to me that there is little point in having a completely new
set of relations to test this issue knowing that multi-level
partitioning is already tested in the same file.  Why not extend
the part close to "more tests for certain multi-level partitioning
scenarios" in insert.sql with an mlparted111 or some such?  This
way, we have three-layer partitioning and the error can be triggered
all the same.
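
A hypothetical standalone sketch of the scenario such a test needs to cover
(table names and DDL are assumptions here, not the committed regression
test): three levels of partitioning, a column dropped at the root, and a
branch created after the drop, so that routing a row through it requires
tuple conversion at the first level but not at the next one.

create table mlparted_bug (a int, b int, c int, dropme int)
  partition by list (a);
alter table mlparted_bug drop column dropme;
create table mlparted_bug_1 partition of mlparted_bug
  for values in (1) partition by list (b);
create table mlparted_bug_1_1 partition of mlparted_bug_1
  for values in (1) partition by list (c);
create table mlparted_bug_1_1_1 partition of mlparted_bug_1_1
  for values in (1);
-- Expected to reproduce the reported ERROR on v11 before the fix.
insert into mlparted_bug values (1, 1, 1);
drop table mlparted_bug;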
--
Michael

Attachments
Thanks for taking a look.

On 2019/04/04 19:48, Michael Paquier wrote:
> On Thu, Apr 04, 2019 at 07:31:09PM +0900, Amit Langote wrote:
>> The problem seems to be present only in PG 11; not in PG 10 and HEAD, as
>> Michael already said.
>>
>> It seems to have been introduced by 34295b87f in PG 11, wherein the
>> decision of when to add a tuple to an intermediate parent's dedicated
>> tuple routing slot is mistakenly getting mixed with the decision of
>> whether tuple conversion is required between the parent of the previous
>> level and the current parent.  We didn't catch that error back then,
>> because we only tried up to 2 levels of partitioning, whereas the test
>> case presented here uses 3 levels of partitioning.  In the reported test
>> case, the 1st level needs tuple conversion (between partitioned and
>> partitioned_2), but the next level does not (between partitioned_2 and
>> partitioned_2_1), so the slot used when routing through partitioned_2_1 to
>> the last level doesn't contain a tuple to extract the partition key from,
>> hence the error.
> 
> Perhaps it would make sense to have this test case on HEAD as well?

OK, I've split the tests out into their own patch.

> It seems to me that there is little point in having a completely new
> set of relations to test this issue knowing that multi-level
> partitioning is already tested in the same file.  Why not extending
> the part close to "more tests for certain multi-level partitioning
> scenarios" in insert.sql with a mlparted111 or such a thing?  This
> way, we have a three-layer partitioning and the error can be equally
> triggered.

Done that way.

v2-0001_*, the test patch, can be applied to the HEAD and PG 11 branches.  Of
course, one would want to commit 0001 and 0002 together in PG 11, because
you'll see the ERROR in the output of patch 0001, which is the same as the
reported error.

I've also modified 0001 so that it can be applied to PG 10, attached as
pg10-0001*.

Both HEAD and PG 10 don't need any code changes.

Thanks,
Amit

Attachments
On 2019/04/05 12:22, Amit Langote wrote:
> On 2019/04/04 19:48, Michael Paquier wrote:
>> Perhaps it would make sense to have this test case on HEAD as well?
> 
> OK, I've divided the patch that adds tests into its own.
> 
>> It seems to me that there is little point in having a completely new
>> set of relations to test this issue knowing that multi-level
>> partitioning is already tested in the same file.  Why not extending
>> the part close to "more tests for certain multi-level partitioning
>> scenarios" in insert.sql with a mlparted111 or such a thing?  This
>> way, we have a three-layer partitioning and the error can be equally
>> triggered.
> 
> Done that way.
> 
> v2-0001_*, the test patch, can be applied to HEAD and PG 11 branches.  Of
> course, one would want to commit 0001 and 0002 together in PG 11, because
> you'll see the ERROR in the output in patch 0001, which is same as the
> reported error.
> 
> I've also modified 0001 so that it can be applied to PG 10, attached as
> pg10-0001*.
> 
> Both HEAD and PG 10 don't need any code changes.

I said v2-0001 could be applied unchanged to both HEAD and PG 11, but I
was wrong.  Without the code changes, the tests fail on PG 11, but they pass
on both PG 10 and HEAD, as there's no bug in those branches.

Here are revised patches.

pg10-v3-0001 adds tests in the PG 10 branch
pg11-v3-0001 adds tests in the PG 11 branch (the tests fail until 0002 is applied)
pg11-v3-0002 fixes the bug in the PG 11 branch
HEAD-v3-0001 adds tests in the HEAD branch

I also rewrote the tests a bit, expanding the comment and finding an even
more suitable place in insert.sql to add this test than in v2.

Thanks,
Amit

Attachments
On Fri, Apr 05, 2019 at 02:07:22PM +0900, Amit Langote wrote:
> I also rewrote tests a bit too, expanding the comment, and finding even
> more suitable place in insert.sql to add this test than v2.

Finally done.  My apologies for the time it took.  I have expanded the
tests a bit more, tweaked a couple of comments and the logic a bit,
then committed the fix to v11, with the extra set of regression tests
added on HEAD.  Thanks for the patch, Amit, and thanks for the report,
Petr!
--
Michael

Attachments
Hi Michael,

On 2019/04/08 13:50, Michael Paquier wrote:
> On Fri, Apr 05, 2019 at 02:07:22PM +0900, Amit Langote wrote:
>> I also rewrote tests a bit too, expanding the comment, and finding even
>> more suitable place in insert.sql to add this test than v2.
> 
> Finally done.  My apologies for the time it took.  I have expanded the
> tests a bit more, tweaked a couple of comments and a bit the logic,
> then committed the fix to v11, with the extra set of regression tests
> added on HEAD.  Thanks for the patch, Amit, and thanks for the report,
> Petr!

Thanks a lot for taking care of this.

Regards,
Amit