Re: inserting 4800 records at a time
From: Brent Wood
Subject: Re: inserting 4800 records at a time
Date:
Msg-id: 460B38D1.6080603@niwa.co.nz
In reply to: Re: inserting 4800 records at a time ("Martin Gainty" <mgainty@hotmail.com>)
List: pgsql-general
>> We use an application that generates 4800 points for the graph of a
>> waveform. We capture this data and display it to the user. Now we want
>> to save all this information to a database. I have tried to create a
>> record for each point, but insertion/retrieval is slow. I thought that
>> maybe I could save one record per graph and save all the points as a
>> large string, but there would be 148k characters in the string. Then
>> I'm still not sure what the performance would be like. Would the use of
>> BLOBs be a better way to go here? Any ideas on what the best approach
>> would be for us?

I strongly recommend PostGIS for storing (and managing/querying) point
geometries in PostgreSQL. If you do take this approach there are several
advantages, not least the large number of supporting applications.

For example, OGR now supports GMT (in SVN right now), so you can plot your
spatial & timeseries data from the command line with data-driven scripts.
A simplistic example:

    LIST=`psql $DB -A -t -c "select distinct species from table;"`
    for SPP in $LIST ; do
      ogr2ogr -f "GMT" -nln data.gmt data PG:dbname=db \
        -sql "select point, catch from table where species = '$SPP';"
      psxy data.gmt -R -JM ... > ${SPP}.ps
    done

This approach allows maps/plots to be generated automagically from the
data, as GMT is a command-line package for plotting data, and ogr2ogr can
generate GMT-format data from a PostGIS table.

As far as loading is concerned, are you loading as separate inserts or
using COPY? A bulk load via COPY is generally much faster.

Cheers,

Brent Wood
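To make the PostGIS suggestion above concrete, here is a minimal sketch of
a per-point layout. The table and column names (waveform_point, graph_id,
seq, pt) are invented for illustration, and it assumes a PostgreSQL server
with the PostGIS extension available:

    -- Enable PostGIS (assumes the extension is installed on the server).
    CREATE EXTENSION IF NOT EXISTS postgis;

    -- One row per sample; graph_id groups the 4800 points of one waveform.
    CREATE TABLE waveform_point (
        graph_id integer NOT NULL,
        seq      integer NOT NULL,          -- sample index within the graph
        pt       geometry(Point) NOT NULL,  -- (x, y) of the sample
        PRIMARY KEY (graph_id, seq)
    );

    -- A GiST index speeds up spatial (bounding-box) queries on the points.
    CREATE INDEX waveform_point_pt_idx ON waveform_point USING gist (pt);

    -- Retrieve one graph in drawing order.
    SELECT ST_X(pt) AS x, ST_Y(pt) AS y
      FROM waveform_point
     WHERE graph_id = 42
     ORDER BY seq;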
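Similarly, a hedged sketch of the COPY route: bulk-load the raw points into
a staging table, then convert them to geometries in one statement. The file
path, the CSV layout (graph_id, seq, x, y), and the table names are all
assumptions for illustration:

    -- Staging table matching the assumed CSV columns.
    CREATE TABLE waveform_raw (
        graph_id integer,
        seq      integer,
        x        float8,
        y        float8
    );

    -- One COPY instead of 4800 INSERTs. Use psql's \copy instead if the
    -- file lives on the client rather than on the server.
    COPY waveform_raw FROM '/tmp/points.csv' WITH CSV;

    -- Turn the raw coordinates into point geometries in a single pass.
    INSERT INTO waveform_point (graph_id, seq, pt)
    SELECT graph_id, seq, ST_MakePoint(x, y)
      FROM waveform_raw;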