Performance
From | Steven Bradley |
---|---|
Subject | Performance |
Date | |
Msg-id | 3.0.5.32.19990623150509.0092e990@poptop.llnl.gov |
Replies | Re: [INTERFACES] Performance |
List | pgsql-interfaces |
I'm having some problems achieving adequate performance from Postgres for a real-time event logging application. The way I'm interfacing to the database may be the problem: I have simplified the problem down to a single (non-indexed) table with about a half-dozen columns (int4, timestamp, varchar, etc.). I wrote a quick and dirty C program which uses the libpq interface to INSERT records into the table in real time. The best performance I could achieve was on the order of 15 inserts per second. What I need is something much closer to 100 inserts per second.

I wanted to use a prepared SQL statement, but it turns out that Postgres runs the query through the parser-planner-executor cycle on each iteration; there is no way to prevent this. The next thing I thought of doing was to "bulk load" several records in one INSERT through the use of array processing. Do any of the Postgres interfaces support this? (By arrays, I don't mean array columns in the table.)

I'm currently running Postgres 6.4.2. I've heard that 6.5 has improved performance; does anyone have any idea what the performance improvement is like? Is it unrealistic to expect Postgres to insert on the order of 100 records per second on a Pentium 400 MHz/SCSI class machine running Linux? (Solaris on a comparable platform has about half the performance.)

Thanks in advance...

Steven Bradley
Lawrence Livermore National Laboratory
sbradley@llnl.gov