BIG Data and Perl
From | Andy Lewis
---|---
Subject | BIG Data and Perl
Date |
Msg-id | Pine.LNX.4.05.9910150935320.22435-100000@rns.roundnoon.com
Responses | Re: [GENERAL] BIG Data and Perl
List | pgsql-general
I've got a fairly good size database that has around 50,000 records in one table. I'm using a perl script located on **another** machine in the same network to access the DB. Once a week I have to do a "SELECT * ...." from one of the tables, get the data, open another file from disk, read in some of that data, and finally write it all together back to disk in small files that can be emailed out.

The query looks like:

SELECT * from mytable order by member_id

-- cut --
$result = $conn->exec("$query");
$ntuples = $result->ntuples;
print STDOUT "Total: $ntuples \n\n";

while ( @row = $result->fetchrow ) {
    # do some stuff here... i.e., open file and read
}
-- cut --

Here's the strange part, and this could very well be another part of this script (I've inherited it): it starts off and processes the first 300-400 rows fast, then gets slower over time and eventually just quits. It'll run for about 4-6 hours before it quits. Any idea what may be going on here?

Thanks

Andy
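[Editor's note: for context, here is a minimal self-contained sketch of the loop the post describes, using the same old Pg module interface as the fragment above. The connection string, the column position of member_id, and the per-member output files are assumptions for illustration, not details from the original post.]

-- cut --
#!/usr/bin/perl
use strict;
use Pg;

# Hypothetical connection parameters -- adjust for your environment.
my $conn = Pg::connectdb("dbname=mydb host=dbhost");
die $conn->errorMessage unless $conn->status == PGRES_CONNECTION_OK;

my $query  = "SELECT * FROM mytable ORDER BY member_id";
my $result = $conn->exec($query);
die $conn->errorMessage unless $result->resultStatus == PGRES_TUPLES_OK;

my $ntuples = $result->ntuples;
print STDOUT "Total: $ntuples\n\n";

# Fetch one row at a time from the result set, write each member's
# data to its own small file (assumes member_id is column 0).
while ( my @row = $result->fetchrow ) {
    my $member_id = $row[0];
    open(my $out, ">", "member_$member_id.txt") or die "open: $!";
    print $out join("\t", @row), "\n";
    close $out;
}
-- cut --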