Discussion: slow cursor
Hi
I have a cursor that fetches 150K rows and updates or inserts a table with 150K rows.
It takes about 15 minutes for the process to complete. The select by itself (without the cursor) gets all rows in 15 seconds.
Is there a way to optimize the cursor to fetch all records and speed up the process? I still need to do the record-by-record processing.
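The actual code is not shown in the thread, but the row-by-row pass being described might look roughly like the PL/pgSQL sketch below; every table and column name in it (src_events, evt_summary, and so on) is invented for illustration.

    CREATE OR REPLACE FUNCTION process_batch(p_min_id bigint, p_max_id bigint)
    RETURNS void AS $$
    DECLARE
        r record;
    BEGIN
        -- Implicit cursor: walk the window of new rows one at a time.
        FOR r IN
            SELECT src_ip, dst_ip, bytes
              FROM src_events
             WHERE id BETWEEN p_min_id AND p_max_id
        LOOP
            -- Try to update an existing summary row ...
            UPDATE evt_summary
               SET hits  = hits + 1,
                   bytes = bytes + r.bytes
             WHERE src_ip = r.src_ip
               AND dst_ip = r.dst_ip;
            -- ... and insert one if it was not there yet.
            IF NOT FOUND THEN
                INSERT INTO evt_summary (src_ip, dst_ip, hits, bytes)
                VALUES (r.src_ip, r.dst_ip, 1, r.bytes);
            END IF;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Each iteration costs at least one index lookup and one row write, which is why 150K iterations can take minutes even when the underlying select finishes in seconds.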
On Mon, Apr 17, 2006 at 07:07:54AM -0700, Sriram Dandapani wrote:
> I have a cursor that fetches 150K rows and updates or inserts a table
> with 150K rows.
>
> It takes several minutes for the process to complete (about 15 minutes).
> The select by itself (without cursor) gets all rows in 15 seconds.
>
> Is there a way to optimize the cursor to fetch all records and speed up
> the process. I still need to do the record by record processing

Not likely. Are you at least doing all this inside a transaction?
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
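The reason for the question: if each row's update or insert is committed separately, per-transaction overhead alone can dominate the run time. A PL/pgSQL function call already runs inside a single transaction, and an explicit batch built from the hypothetical sketch above would simply be wrapped like this:

    BEGIN;
    SELECT process_batch(1000000, 1150000);  -- the whole 150K-row window in one transaction
    COMMIT;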
Yes, all of it is in one transaction, as there is a window of record ids that need to be processed in one transaction. Data inflow is very voluminous, approximately 1 million rows every 15 minutes, and the goal is to create aggregate tables on the fly (the alternative is to use nightly aggregates).

-----Original Message-----
From: Jim C. Nasby [mailto:jnasby@pervasive.com]
Sent: Thursday, April 20, 2006 7:36 PM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] slow cursor

Not likely. Are you at least doing all this inside a transaction?
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Thanks Benjamin
The issue I have with my situation is that the target table typically has millions of records (it is not unusual to have tens of millions), and the update is based on 80% of the columns. The update is a killer w.r.t. performance. I plan to create an index on all columns and test the individual update.
Regards
Sriram
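If "based on 80% of the columns" means those columns form the WHERE clause that locates the row to update, then what usually helps is a single multicolumn index on that lookup key rather than an index on every column, since each additional index has to be maintained on every update. A sketch, reusing the invented summary table from earlier (with a wider matching key, all of the lookup columns would go into the index):

    -- One composite index on the columns used to locate the row.
    CREATE INDEX evt_summary_key_idx ON evt_summary (src_ip, dst_ip);

    -- A single-row update that can use that index.
    UPDATE evt_summary
       SET hits  = hits + 1,
           bytes = bytes + 1500
     WHERE src_ip = '10.1.1.5'
       AND dst_ip = '10.1.1.9';

EXPLAIN ANALYZE on one such update should show an index scan on the key rather than a sequential scan over the multi-million-row table.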
From: Benjamin Krajmalnik [mailto:kraj@illumen.com]
Sent: Thursday, April 20, 2006 9:10 PM
To: Sriram Dandapani; Jim C. Nasby
Cc: pgsql-admin@postgresql.org
Subject: RE: [ADMIN] slow cursor
Sriram,

I have no clue what it is you are trying to do, but I have a similar situation. I have a monitoring system which sends data to our PostgreSQL database. The system calls a single stored procedure which performs on-the-fly data aggregation for the particular test for which the data is being sent. You may want to look at some numerical methods books and see if you can do what I am doing: I keep a record with computed values representing the data up to the previous transaction, and recompute accordingly. This is done extremely quickly.

Then, twice a day, I run a stored procedure which runs some other aggregations based on the raw data which was sent over. It was taking about 30 seconds against a table with about 500K records, aggregating to a table with 2K records. Since not all of the tests need this particular data aggregation, an additional field was added to the test descriptor to flag those tests which need the particular aggregation (essentially, computing statistical process control data). After doing this, our 30 seconds went down to about 2 seconds.

I do not have access to our aggregation function, but I am using one cursor and 2 record types.
First, I perform a loop on a selection into the record structure for the candidate tests. Within this loop, I create a cursor which fetches the row data into another record structure. This record structure now holds the data we need in order to update the table on which the primary loop is running.
I do not know if this is the optimum way of doing this within PostgreSQL, but the performance appears to be fine. I have not gotten into the fine tuning for speed just yet.
Just an idea - hope it helps.
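A rough PL/pgSQL sketch of the structure described above, with FOR loops standing in for the explicit cursor; every table, column, and function name is invented, since the real aggregation function is not shown in the thread:

    CREATE OR REPLACE FUNCTION rollup_flagged_tests() RETURNS void AS $$
    DECLARE
        t record;   -- one candidate test from the outer selection
        d record;   -- one raw-data row fetched for that test
    BEGIN
        -- Outer loop: only the tests flagged for this aggregation.
        FOR t IN
            SELECT test_id, last_rollup_ts
              FROM test_descriptor
             WHERE needs_spc
        LOOP
            -- Inner loop: raw rows that arrived since the last rollup.
            FOR d IN
                SELECT val
                  FROM raw_data
                 WHERE test_id = t.test_id
                   AND ts > t.last_rollup_ts
            LOOP
                -- Fold each row into stored running totals so the statistics
                -- can later be derived without rescanning the raw data.
                UPDATE spc_totals
                   SET n      = n + 1,
                       sum_x  = sum_x + d.val,
                       sum_x2 = sum_x2 + d.val * d.val
                 WHERE test_id = t.test_id;
            END LOOP;

            -- A real version would record the highest timestamp actually
            -- processed rather than now().
            UPDATE test_descriptor
               SET last_rollup_ts = now()
             WHERE test_id = t.test_id;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

Keeping running totals such as a count, a sum, and a sum of squares is what makes the statistical process control values cheap to recompute per insert, which matches the speedup described above.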
Ok, so why does that require row-by-row processing?

On Thu, Apr 20, 2006 at 08:45:57PM -0700, Sriram Dandapani wrote:
> Yes, all of it is in one transaction, as there is a window of record ids
> that need to be processed in one transaction. Data inflow is very
> voluminous, approximately 1 million rows every 15 minutes, and the goal
> is to create aggregate tables on the fly (the alternative is to use
> nightly aggregates).
--
Jim C. Nasby, Sr. Engineering Consultant jnasby@pervasive.com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
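The point behind the question: when the per-row work is just "add these values to an existing summary row, or create it", the whole id window can usually be handled set-wise in two statements and no cursor at all. Continuing with the invented tables from the first sketch, and assuming only one such batch runs at a time (PostgreSQL of that era has no built-in upsert):

    BEGIN;

    -- Fold the whole id window into existing summary rows in one pass.
    UPDATE evt_summary s
       SET hits  = s.hits + d.cnt,
           bytes = s.bytes + d.total_bytes
      FROM (SELECT src_ip, dst_ip,
                   count(*)   AS cnt,
                   sum(bytes) AS total_bytes
              FROM src_events
             WHERE id BETWEEN 1000000 AND 1150000
             GROUP BY src_ip, dst_ip) d
     WHERE s.src_ip = d.src_ip
       AND s.dst_ip = d.dst_ip;

    -- Create summary rows for keys that did not exist yet.
    INSERT INTO evt_summary (src_ip, dst_ip, hits, bytes)
    SELECT d.src_ip, d.dst_ip, d.cnt, d.total_bytes
      FROM (SELECT src_ip, dst_ip,
                   count(*)   AS cnt,
                   sum(bytes) AS total_bytes
              FROM src_events
             WHERE id BETWEEN 1000000 AND 1150000
             GROUP BY src_ip, dst_ip) d
     WHERE NOT EXISTS (SELECT 1
                         FROM evt_summary s
                        WHERE s.src_ip = d.src_ip
                          AND s.dst_ip = d.dst_ip);

    COMMIT;

On PostgreSQL 9.5 and later the same thing can be written as a single INSERT ... ON CONFLICT ... DO UPDATE, but that option did not exist at the time of this thread.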