Re: [GENERAL] BIG Data and Perl
| From | Andy Lewis |
|---|---|
| Subject | Re: [GENERAL] BIG Data and Perl |
| Date | |
| Msg-id | Pine.LNX.4.05.9910180840590.17163-100000@rns.roundnoon.com |
| In response to | Re: [GENERAL] BIG Data and Perl (Lincoln Yeoh <lylyeoh@mecomb.com>) |
| List | pgsql-general |
I've identified the problem. It's actually with a regex that I wrote. I'm in the process of re-writing that.

Thanks.

Andy

On Mon, 18 Oct 1999, Lincoln Yeoh wrote:

> At 09:52 AM 15-10-1999 -0500, Andy Lewis wrote:
> >I've got a fairly good size database that has in one table around 50,000
> >records in it.
> >
> >It starts off and processes the first 300-400 rows fast and then gets
> >slower in time and eventually just quits. It'll run for about 4-6 hours
> >before it quits.
> >
> >Any idea what may be going on here?
>
> Maybe you're running out of memory. Your perl script may be reading too
> much into memory.
>
> When using the perl DBI module, I get the impression that the perl script
> reads in all the results when you do
> $cursor->execute
>
> I don't know if there are any ways around this. It can be a bit
> inconvenient if the result is large ;).
>
> Cheerio,
>
> Link.
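For context on the memory point Lincoln raises: with DBD::Pg the driver buffers the whole result set on the client when execute() runs, so a very large SELECT can exhaust memory before any rows are processed. One common way to keep memory roughly constant is to declare a server-side cursor and FETCH in batches. The sketch below is illustrative only and not from the thread; the connection string, table name, cursor name, and batch size are placeholders.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connection details; cursors require a transaction,
# so AutoCommit is turned off.
my $dbh = DBI->connect("dbi:Pg:dbname=mydb", "user", "password",
                       { RaiseError => 1, AutoCommit => 0 });

# The backend holds the result set; the client never sees all rows at once.
$dbh->do("DECLARE bigscan CURSOR FOR SELECT * FROM big_table");

while (1) {
    # Pull a modest batch at a time instead of the full 50,000 rows.
    my $sth = $dbh->prepare("FETCH 1000 FROM bigscan");
    $sth->execute;
    last if $sth->rows == 0;          # no more rows to fetch

    while (my @row = $sth->fetchrow_array) {
        # ... process one row here ...
    }
}

$dbh->do("CLOSE bigscan");
$dbh->commit;
$dbh->disconnect;
```

With this pattern the script's memory use is bounded by the batch size rather than the total number of rows, at the cost of one extra round trip per batch.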