Re: Inserting streamed data
From | Csaba Nagy |
---|---|
Subject | Re: Inserting streamed data |
Date | |
Msg-id | 96D568DD7FAAAD428581F8B3BFD9B0F604DE58@goldmine.ecircle.de |
In reply to | Inserting streamed data (Kevin Old <kold@carolina.rr.com>) |
List | pgsql-general |
Why don't you pull out the fields with the Perl script, write them to a temporary file, and use COPY to import from that one? Perl should be fast with the files, Postgres with the COPY...

Regards,
Csaba.

-----Original Message-----
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Greg Patnude
Sent: Saturday, 2 November 2002 18:08
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Inserting streamed data

Does your table have an index? You can probably speed it up significantly by:

Preparing the data file...
Beginning a transaction...
Dropping the index...
Doing the 160,000 insert(s)...
Rebuilding the index...
Committing the transaction...

"Kevin Old" <kold@carolina.rr.com> wrote in message news:1036087909.3123.54.camel@oc...
> Hello everyone,
>
> I have data that is streamed to my server and stored in a text file. I
> need to get that data into my database as fast as possible. There are
> approximately 160,000 rows in this text file. I understand I can use
> the COPY command to insert large chunks of data from a text file, but I
> can't use it in this situation. Each record in the text file has 502
> "fields", and I pull out 50 of those. I haven't found a way to make
> the COPY command pull out only the values I need, so that solution is
> out.
>
> I have a Perl script that goes through the file, pulls out the 50
> fields, and inserts them into the database, but it seems to be very
> slow. I think I just need some minor performance tuning, but I don't
> know which variables to set in the postgresql.conf file to help with
> the speed of the inserts.
>
> Here's my postgresql.conf file now:
>
> max_connections = 10
> shared_buffers = 20
>
> I'm running Solaris 2.7 with 2GB RAM.
>
> Also, saw this at
> http://developer.postgresql.org/docs/postgres/kernel-resources.html
>
> [snip...]
>
> Solaris
>
> At least in version 2.6, the default maximum size of a shared
> memory segment is too low for PostgreSQL. The relevant settings
> can be changed in /etc/system, for example:
>
> set shmsys:shminfo_shmmax=0x2000000
> set shmsys:shminfo_shmmin=1
> set shmsys:shminfo_shmmni=256
> set shmsys:shminfo_shmseg=256
>
> set semsys:seminfo_semmap=256
> set semsys:seminfo_semmni=512
> set semsys:seminfo_semmns=512
> set semsys:seminfo_semmsl=32
>
> [snip...]
>
> Should I do this?
>
> Thanks,
> Kevin
>
> --
> Kevin Old <kold@carolina.rr.com>

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
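[Editor's note: Csaba's extract-then-COPY idea can even skip the temporary file by streaming rows straight into COPY from Perl. The sketch below assumes a modern DBD::Pg (pg_putcopydata/pg_putcopyend), tab-delimited input, and a hypothetical table stream_data; the file name, connection details, and field positions in @wanted are made up for illustration, not taken from the thread.]

```perl
#!/usr/bin/perl
# Sketch: extract 50 of the 502 fields per record and feed them
# directly into COPY ... FROM STDIN, avoiding 160,000 single INSERTs.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

# Hypothetical 0-based positions of the 50 wanted fields.
my @wanted = (0, 3, 7 .. 54);

# Put the connection into COPY mode; rows are streamed from Perl,
# so no intermediate file is written to disk.
$dbh->do('COPY stream_data FROM STDIN');

open my $in, '<', 'stream.txt' or die "stream.txt: $!";
while (my $line = <$in>) {
    chomp $line;
    my @fields = split /\t/, $line;   # assumes tab-delimited records;
                                      # backslashes in raw data would
                                      # need escaping for COPY text format
    $dbh->pg_putcopydata(join("\t", @fields[@wanted]) . "\n");
}
close $in;

$dbh->pg_putcopyend();   # finish the COPY stream
$dbh->commit;
$dbh->disconnect;
```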
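[Editor's note: and a minimal sketch of Greg's drop-the-index trick, with hypothetical table and index names. Maintaining a btree row by row is what makes per-row INSERTs expensive; since DDL is transactional in PostgreSQL, a failure rolls the index back along with the rows.]

```perl
#!/usr/bin/perl
# Sketch: drop the index, bulk-insert inside one transaction,
# rebuild the index once, then commit.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

$dbh->do('DROP INDEX stream_data_idx');

my $sth = $dbh->prepare('INSERT INTO stream_data VALUES (?, ?)');
# ... loop over the 160,000 extracted rows here ...
$sth->execute('val1', 'val2');

# One bulk index build instead of 160,000 incremental updates.
$dbh->do('CREATE INDEX stream_data_idx ON stream_data (field1)');
$dbh->commit;    # committing ends the transaction
$dbh->disconnect;
```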