Thread: Does PostgreSQL have an UPDATE Function like UNIFY?


Does PostgreSQL have an UPDATE Function like UNIFY?

From:
"John J. Boris, Sr."
Date:
I was wondering if PostgreSQL had a function that allowed you to update a
table from a delimited file that would update matching records and insert
new records. Under Unify you could do an update with a switch (I think it
was -n) that would take input from a pipe delimited file (in the order the
fields of the table) and since the table had which field was unique it
would key on that and if it didn't find the key it would add the record. 
I have a table that has approximately 2500 records. The data for this table
(and other tables in the db) comes out of a system I manage that will
sometime in the future be transferred fully to this one. So the data is updated
(dumped from the COBOL indexed file) and then loaded into the table (about
once a week). I can delete the records and then add them but I think that
might be a bit time consuming.

Will I have to write a C program or Perl script to do this? I am hoping
not. One can only dream. Thanks in advance.

John J. Boris, Sr.   Boris Computer Services/ONLine Services
email: john.boris@onlinesvc.com
bbs: The Bleeding Edge 609-858-9221




Re: [SQL] Does PostgreSQL have an UPDATE Function like UNIFY?

From:
D'Arcy J.M. Cain
Date:
Thus spake John J. Boris, Sr.
> I was wondering if PostgreSQL had a function that allowed you to update a
> table from a delimited file that would update matching records and insert
> new records. Under Unify you could do an update with a switch (I think it
> was -n) that would take input from a pipe delimited file (in the order the
> fields of the table) and since the table had which field was unique it
> would key on that and if it didn't find the key it would add the record. 

How about a python solution? (untested)

#! /usr/bin/env python
from pg import DB   # http://www.druid.net/pygresql/
db = DB()           # or specify other than the default database

for l in open('path_to_input_filename').readlines():
    dict = convert_line_to_dictionary(l)    # this could be automated somewhat
    try: db.insert(dict)                    # fails if duplicate
    except: db.update(dict)                 # do this if insert fails

The insert and update can be tried in the reverse order if you expect
more lines in the input to be updating existing records than new records.
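
For example, with the two calls swapped (the same untested call shapes as in
the sketch above, assuming the update raises when the key is not present, just
as the insert is assumed to raise on a duplicate):

for l in open('path_to_input_filename').readlines():
    dict = convert_line_to_dictionary(l)
    try: db.update(dict)                    # fails if the key isn't there yet
    except: db.insert(dict)                 # do this if the update fails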

The convert_line_to_dictionary() function could be automated somewhat but
I really hate depending on the order of fields.  However, if I know the
order of fields in the input file (easier to guarantee) then I would
do something like this.

    l = string.split(l, '|')
    dict = {
        'field0': l[0],
        'field1': l[1],
        'field2': l[2]
    }
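
Spelled out as a small helper in the same old-style Python as the sketch
above (the field names here are only placeholders for the real columns), that
idea might look like:

import string                               # old-style string functions

def convert_line_to_dictionary(l):
    # Drop the trailing newline, split the pipe-delimited line, and map the
    # pieces to column names.  'field0'..'field2' are made-up names.
    parts = string.split(l[:-1], '|')
    return {
        'field0': parts[0],
        'field1': parts[1],
        'field2': parts[2]
    }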

There is also a module to handle comma-delimited, quoted-string files.

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 424 2871     (DoD#0082)    (eNTP)   |  what's for dinner.


Re: [SQL] Does PostgreSQL have an UPDATE Function like UNIFY?

From:
Tom Lane
Date:
"John J. Boris, Sr." <john.boris@onlinesvc.com> writes:
> I was wondering if PostgreSQL had a function that allowed you to update a
> table from a delimited file that would update matching records and insert
> new records. Under Unify you could do an update with a switch (I think it
> was -n) that would take input from a pipe delimited file (in the order the
> fields of the table) and since the table had which field was unique it
> would key on that and if it didn't find the key it would add the
> record. 

Not directly, but I think you could solve it with a temporary table and
two or three SQL statements:
CREATE TABLE temp (just like the master)
COPY temp FROM delimited file
UPDATE master SET field1 = temp.field1, etc
    WHERE master.key = temp.key
INSERT INTO master SELECT * FROM temp
    WHERE NOT EXISTS (SELECT * FROM master WHERE key = temp.key)
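
Filling in that outline with made-up names (a table "master" with key column
"key" and a single data column "field1", plus a pipe-delimited dump readable
by the backend), the whole thing could be driven from PyGreSQL roughly like
this:

from pg import DB                     # PyGreSQL, as in the sketch earlier in the thread
db = DB()

# scratch table with the same columns as master (names are made up)
db.query("CREATE TABLE temp_load (key int, field1 text)")
# bulk-load the delimited dump; old-style COPY delimiter syntax
db.query("COPY temp_load FROM '/tmp/dump.unl' USING DELIMITERS '|'")
# update the rows that already exist in master
db.query("""UPDATE master SET field1 = temp_load.field1
            FROM temp_load WHERE master.key = temp_load.key""")
# insert the rows that are not in master yet
db.query("""INSERT INTO master SELECT * FROM temp_load
            WHERE NOT EXISTS (SELECT 1 FROM master m WHERE m.key = temp_load.key)""")
db.query("DROP TABLE temp_load")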

There's probably a cleaner/more efficient way to do the last step...
but as long as you have an index on the key field it shouldn't be
too bad.
        regards, tom lane


Re: [SQL] Does PostgreSQL have an UPDATE Function like UNIFY?

From:
Herouth Maoz
Date:
At 03:34 +0300 on 29/05/1999, John J. Boris, Sr. wrote:


> I can delete the records and then add them but I think that
> might be a bit time consuming.

Frankly, I don't think it will be more time consuming. Supposing a COPY
command had to do it itself, it would have to look up each record in the
table, and replace it. So it would probably mark the old one as deleted and
insert the new record. It shouldn't be much faster than the following steps
(sketched in code after the list):

1) Creating a temp table with the same schema as the main one
2) Copying the records into the temp table
3) Deleting from the main table all the records which exist in the temp table
   (DELETE FROM main WHERE main.field = temp.field);
4) Inserting all records from the temp table using INSERT INTO... SELECT.
5) Dropping the temp table
6) Vacuuming
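
Spelled out with made-up names (main table "main", key column "field", a
scratch table "temp_load") and pushed through PyGreSQL as in the earlier
sketch, the six steps might look like:

from pg import DB
db = DB()

db.query("CREATE TABLE temp_load (field int, other text)")            # 1) same schema as main (columns made up)
db.query("COPY temp_load FROM '/tmp/dump.unl' USING DELIMITERS '|'")  # 2) load the delimited dump
db.query("DELETE FROM main WHERE field IN (SELECT field FROM temp_load)")  # 3) one way to write the delete
db.query("INSERT INTO main SELECT * FROM temp_load")                  # 4) re-insert everything from the dump
db.query("DROP TABLE temp_load")                                      # 5) done with the scratch table
db.query("VACUUM main")                                               # 6) reclaim the space left by the deletes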

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma




Re: [SQL] Does PostgreSQL have an UPDATE Function like UNIFY?

From:
"John J. Boris, Sr."
Date:
D'Arcy
Thanks for the quick reply. I'll look into that. I don't think I had that
interface compiled when I installed PostgreSQL so I'll check. 
Thanks again.
At 08:29 AM 5/29/99 -0400, you wrote:
>Thus spake John J. Boris, Sr.
>> I was wondering if PostgreSQL had a function that allowed you to update a
>> table from a delimited file that would update matching records and insert
>> new records. Under Unify you could do an update with a switch (I think it
>> was -n) that would take input from a pipe delimited file (in the order the
>> fields of the table) and since the table had which field was unique it
>> would key on that and if it didn't find the key it would add the record. 
>
>How about a python solution? (untested)
>
>#! /usr/bin/env python
>from pg import DB   # http://www.druid.net/pygresql/
>db = DB()           # or specify other than the default database
>
>for l in open('path_to_input_filename').readlines():
>    dict = convert_line_to_dictionary(l)    # this could be automated somewhat
>    try: db.insert(dict)                    # fails if duplicate
>    except: db.update(dict)                 # do this if insert fails
>
>The insert and update can be tried in the reverse order if you expect
>more lines in the input to be updating existing records than new records.
>
>The convert_line_to_dictionary() function could be automated somewhat but
>I really hate depending on the order of fields.  However, if I know the
>order of fields in the input file (easier to guarantee) then I would
>do something like this.
>
>    l = string.split(l, '|')
>    dict = {
>        'field0': l[0],
>        'field1': l[1],
>        'field2': l[2]
> }
>
>There is also a module to handle comma-delimited, quoted-string files.
>
>-- 
>D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
>http://www.druid.net/darcy/                |  and a sheep voting on
>+1 416 424 2871     (DoD#0082)    (eNTP)   |  what's for dinner.

John J. Boris, Sr.   Boris Computer Services/ONLine Services
email: john.boris@onlinesvc.com
bbs: The Bleeding Edge 609-858-9221