Discussion: Strange INSERT phenomenon with key constraints

Strange INSERT phenomenon with key constraints

From:
Lukas Ertl
Date:
Hi,

I wanted to do a little benchmark on PostgreSQL 7.1 and thought about a
database filled with random stuff.

The tables look like this:

CREATE TABLE foo (
        id serial not null primary key,
        foo varchar(255) not null
);

CREATE TABLE baz (
        id serial not null primary key,
        baz varchar(255)
);

CREATE TABLE bar (
        id serial not null primary key,
        date timestamp not null,
        foo_id integer not null references foo on update cascade
               on delete cascade,
        baz_id integer not null references baz on update cascade
               on delete cascade,
        bar varchar(255) not null,
        parent integer null references bar on update cascade
               on delete cascade
);

So far, so good.

Then I wrote a Perl script that first fills 1000 entries into foo and 500
entries into baz, and then should fill 20000 entries into bar.

The problem is that after about 1000 rows of bar, I get a "violation of
referential integrity" error, because the database apparently thinks a value
that I wanted to insert into foo_id is not a valid key in foo. (The error shows
up randomly, not exactly at the 1000th row, but always above row 1000.)

I have checked the script about fifty times by now, and I printed the inserted
values to STDOUT, so I'm quite sure the bug isn't in my script. The strange
thing is that the same foo_id value had already been used several times in the
bar table, but once the row count passes 1000, it isn't accepted anymore.

Is this a bug, or am I just blind stupid?

lg,
le

--
Lukas Ertl                          eMail: l.ertl@univie.ac.at
WWW-Redaktion                       Tel.:  (+43 1) 4277-14073
Zentraler Informatikdienst (ZID)    Fax.:  (+43 1) 4277-9140
der Universität Wien


Re: Strange INSERT phenomenon with key constraints

From:
"D. Duccini"
Date:
PERL buffers its output -- you'll need to flush it after each line


On Wed, 2 May 2001, Lukas Ertl wrote:

> Hi,
>
> I wanted to do a little benchmark on PostgreSQL 7.1 and thought about a
> database filled with random stuff.
>
> [...]
>
> The problem is that after about 1000 rows of bar, I get a "violation of
> referential integrity" error, because the database apparently thinks a value
> that I wanted to insert into foo_id is not a valid key in foo. (The error
> shows up randomly, not exactly at the 1000th row, but always above row 1000.)
>
> Is this a bug, or am I just blind stupid?
>
> lg,
> le


-----------------------------------------------------------------------------
david@backpack.com            BackPack Software, Inc.        www.backpack.com
+1 651.645.7550 voice       "Life is an Adventure.
+1 651.645.9798 fax            Don't forget your BackPack!"
-----------------------------------------------------------------------------


Re: Strange INSERT phenomenon with key constraints

From:
Lukas Ertl
Date:
On Wed, 2 May 2001, D. Duccini wrote:

>
> PERL buffers its output -- you'll need to flush it after each line

I have a

$|++;

at the beginning of my script, so it shouldn't be a problem.

regards,
le

--
Lukas Ertl                          eMail: l.ertl@univie.ac.at
WWW-Redaktion                       Tel.:  (+43 1) 4277-14073
Zentraler Informatikdienst (ZID)    Fax.:  (+43 1) 4277-9140
der Universität Wien


RE: Strange INSERT phenomenon with key constraints

From:
"J. Patrick Lanigan"
Date:
I am having a similar issue. How do you flush the buffer after each line in
perl?

> D. Duccini wrote:
>
> PERL buffers its output -- you'll need to flush it after each line


Re: Strange INSERT phenomenon with key constraints

From:
ghaverla@freenet.edmonton.ab.ca
Date:
On Wed, 2 May 2001, Lukas Ertl wrote:

> On Wed, 2 May 2001, D. Duccini wrote:
>
> >
> > PERL buffers its output -- you'll need to flush it after each line
>
> I have a
>
> $|++;
>
> at the beginning of my script, so it shouldn't be a problem.

$| is a per-filehandle magic variable.  You have unbuffered
stdout (probably), but is that the filehandle you want unbuffered?
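
For illustration, a minimal sketch of the difference (the separate log file
here is made up for the example and is not part of Lukas's script):

    use strict;
    use FileHandle;

    # $|++ (or $| = 1) only unbuffers the currently selected filehandle,
    # which is normally STDOUT.
    $|++;

    # To unbuffer a different handle, turn on autoflush for that handle itself.
    my $log = FileHandle->new("> insert_debug.log")
        or die "cannot open log: $!";
    $log->autoflush(1);

    print $log "this line is written out immediately\n";
    print "and so is this one, because STDOUT has autoflush turned on\n";

    $log->close;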

Gord

Matter Realisations     http://www.materialisations.com/
Gordon Haverland, B.Sc. M.Eng. President
101  9504 182 St. NW    Edmonton, AB, CA  T5T 3A7
780/481-8019            ghaverla @ freenet.edmonton.ab.ca
780/993-1274 (cell)


Re: Strange INSERT phenomenon with key constraints

From:
Joel Burton
Date:
On Wed, 2 May 2001, Lukas Ertl wrote:

> Is this a bug, or am I just blind stupid?

I don't know.

Why don't you post your script?

--
Joel Burton   <jburton@scw.org>
Director of Information Systems, Support Center of Washington


flushing output in PERL

From:
"D. Duccini"
Date:
here ya go

   select((select($hdl), $| = 1)[0]);      # turn on autoflush for the filehandle in $hdl
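
(With IO::Handle or FileHandle loaded, $hdl->autoflush(1) does the same thing
and is a bit easier to read.)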

-duck

On Wed, 2 May 2001, J. Patrick Lanigan wrote:

> I am having a similar issue. How do you flush the buffer after each line in
> perl?
>
> > D. Duccini wrote:
> >
> > PERL buffers its output -- you'll need to flush it after each line


-----------------------------------------------------------------------------
david@backpack.com            BackPack Software, Inc.        www.backpack.com
+1 651.645.7550 voice       "Life is an Adventure.
+1 651.645.9798 fax            Don't forget your BackPack!"
-----------------------------------------------------------------------------


Re: Strange INSERT phenomenon with key constraints

From:
Lukas Ertl
Date:
On Wed, 2 May 2001, Joel Burton wrote:

> On Wed, 2 May 2001, Lukas Ertl wrote:
>
> > Is this a bug, or am I just blind stupid?
>
> I don't know.
>
> Why don't you post your script?

Ok, here it is:

#!/usr/bin/perl -w

use DBI;
use strict;

$|++;

print "Script started: ", scalar localtime, "\n";

my %config = (
    max_foo => 1000,
    max_bar => 20000,
    max_baz => 500,
);

my $dbh = DBI->connect(
    "dbi:Pg:dbname=foobarbaz",
    "XXX",
    "XXX",
    { RaiseError => 1 }
);

&populate_foo();
&populate_baz();
&populate_bar();

$dbh->disconnect;

print "Script finished: ", scalar localtime, "\n";

sub populate_foo {
    my $sth = $dbh->prepare("INSERT INTO foo (foo) VALUES(?)");

    for (1 .. $config{max_foo}) {
        my $foo = '';
        for (1 .. 5) {
            $foo = $foo . join '', (' ', 0..9, 'A'..'Z', 'a'..'z')
            [rand 63, rand 63, rand 63, rand 63, rand 63];
        }
        $foo .= $foo;
        $sth->execute($foo);
    }
}

sub populate_baz {
    my $sth = $dbh->prepare("INSERT INTO baz (baz) VALUES(?)");

    for (1 .. $config{max_baz}) {
        my $baz = '';
        for (1 .. 5) {
            $baz = $baz . join '', (' ', 0..9, 'A'..'Z', 'a'..'z')
            [rand 63, rand 63, rand 63, rand 63, rand 63];
        }
        $baz .= $baz;
        $sth->execute($baz);
    }
}

sub populate_bar {

    for (1 .. $config{max_bar}) {

        my $foo_id = int(rand $config{max_foo});
        my $baz_id = int(rand $config{max_baz});
        my $parent = int(rand $_);

        $foo_id = ($foo_id == 0) ? 1 : $foo_id;
        $baz_id = ($baz_id == 0) ? 1 : $baz_id;
        $parent = ($parent == 0) ? 1 : $parent;

        my $bar = '';
        for (1 .. 5) {
            $bar = $bar . join '', (' ', 0..9, 'A'..'Z', 'a'..'z')
            [rand 63, rand 63, rand 63, rand 63, rand 63];
        }
        $bar .= $bar;

        if (rand > 0.3) {

            print qq(
foo_id: $foo_id
baz_id: $baz_id
parent: $parent
bar: $bar
Current: $_

            );
            $dbh->do(
                "INSERT INTO bar (date, foo_id, baz_id, bar, parent)
                VALUES(NOW(),?,?,?,?)", undef,
                $foo_id, $baz_id, $bar, $parent
            );

        } else {

            print qq(
foo_id: $foo_id
baz_id: $baz_id
bar: $bar
Current: $_

            );
            $dbh->do(
                "INSERT INTO bar (date, foo_id, baz_id, bar)
                VALUES (NOW(),?,?,?)", undef,
                $foo_id, $baz_id, $bar
            );

        }
    }
}

regards,
le

--
Lukas Ertl                          eMail: l.ertl@univie.ac.at
WWW-Redaktion                       Tel.:  (+43 1) 4277-14073
Zentraler Informatikdienst (ZID)    Fax.:  (+43 1) 4277-9140
der Universität Wien


input from a file

From:
"Stephen M. Ford"
Date:
I have a dump from an mSQL database in a file and would like Postgres to build
the tables and insert the data into those tables.  How do I go about getting
Postgres to read commands from a file?



Re: input from a file

From:
Joel Burton
Date:
On Thu, 3 May 2001, Stephen M. Ford wrote:

> I have a dump from an mSQL database in a file and would like Postgres to
> build the tables and insert the data into those tables.  How do I go about
> getting Postgres to read commands from a file?

  psql dbname < yourfile.sql

is always a likely way to pump some data into a Unix program.
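
(psql -f yourfile.sql dbname does much the same thing, and should have the
added benefit of reporting the file name and line number when a command fails.)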


--
Joel Burton   <jburton@scw.org>
Director of Information Systems, Support Center of Washington


Re: input from a file

From:
"Peter J. Schoenster"
Date:
> I have a dump from an mSQL database in a file and would like Postgres
> to build the tables and insert the data into those tables.  How do I
> go about getting Postgres to read commands from a file?

Here is a simple example (the following is in a file I would call
career_positions.psql):

CREATE TABLE "career_positions" (
"cpid" smallint,
"f_ecid" smallint REFERENCES employment_categories (ecid),
"career_positions_name" character varying(60),
"orderby" smallint
);

COPY "career_positions"  FROM stdin;
32    8    Accountant and Auditor    32
33    8    Advertising/Marketing    33
596    20    Writers and Authors     596
\.

create index cpid_index on career_positions (cpid);
create index cp_name_index on career_positions
(career_positions_name);

_end file
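
(Note that COPY ... FROM stdin expects tab-separated columns by default, so the
whitespace between the fields above presumably stands in for literal tab
characters in the real file.)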

and then just

psql database < career_positions.psql


Peter
datapanik  - Toronto Canada
"The user interface should be a clear and simple path through
information and tasks, not a series of gates and doors."

Re: Strange INSERT phenomenon with key constraints

From:
Tom Lane
Date:
Lukas Ertl <l.ertl@univie.ac.at> writes:
> The problem is that after about 1000 rows of bar, I get a "violation of
> referential integrity" error, because the database apparently thinks a value
> that I wanted to insert into foo_id is not a valid key in foo. (The error
> shows up randomly, not exactly at the 1000th row, but always above row 1000.)

I tried to duplicate this problem, using your script, but I saw no
failures in half a dozen tries.

I wonder whether you could have been running into a bug on the Perl
side.  We've seen reports of various bugs in recent versions of DBD-Pg;
could it be that the query sent is not what you intended it to be?
You might try turning on query logging to see what queries are actually
received by the server.
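
(If memory serves, 7.1 exposes a debug_print_query setting in postgresql.conf
for exactly that.)  On the client side, DBI's built-in trace facility shows
what DBD-Pg actually sends; here is a minimal sketch, reusing the connection
parameters from the script above, with an arbitrary trace file name:

    use strict;
    use DBI;

    my $dbh = DBI->connect(
        "dbi:Pg:dbname=foobarbaz", "XXX", "XXX",
        { RaiseError => 1 }
    );

    # Trace level 2 logs each DBI method call together with its bound
    # parameters, so the file shows the exact statements and values sent.
    $dbh->trace(2, "dbd_pg_trace.log");

    $dbh->do(
        "INSERT INTO bar (date, foo_id, baz_id, bar) VALUES (NOW(),?,?,?)",
        undef, 1, 1, "test"
    );

    $dbh->disconnect;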

For reference, I'm using
    Postgres 7.1.1
    Perl 5.005_03
    DBI 1.15
    DBD-Pg 0.95 + patch for bug with query parameters longer than 32k

The aforementioned patch shouldn't have anything to do with this;
I believe it's present in DBD-Pg 0.96 and up anyway.

            regards, tom lane