Re: improving write performance for logging application
From | Steve Eckmann
---|---
Subject | Re: improving write performance for logging application
Date |
Msg-id | 43BC656D.2090103@computer.org
In reply to | Re: improving write performance for logging application  (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-performance
Tom Lane wrote:
> Steve Eckmann <eckmann@computer.org> writes:
>> Thanks for the suggestion, Tom. Yes, I think I could do that. But I
>> thought what I was doing now was effectively the same, because the
>> PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowed to
>> include multiple SQL commands (separated by semicolons) in the command
>> string. Multiple queries sent in a single PQexec call are processed in a
>> single transaction...." Our simulation application has nearly 400 event
>> types, each of which is a C++ class for which we have a corresponding
>> database table. So every thousand events or so I issue one PQexec() call
>> for each event type that has unlogged instances, sending INSERT commands
>> for all instances. For example,
>>
>> PQexec(dbConn, "INSERT INTO FlyingObjectState VALUES (...); INSERT INTO FlyingObjectState VALUES (...); ...");
>
> Hmm. I'm not sure if that's a good idea or not. You're causing the
> server to take 1000 times the normal amount of memory to hold the
> command parsetrees, and if there are any O(N^2) behaviors in parsing
> you could be getting hurt badly by that. (I'd like to think there are
> not, but would definitely not swear to it.) OTOH you're reducing the
> number of network round trips, which is a good thing. Have you actually
> measured to see what effect this approach has? It might be worth
> building a test server with profiling enabled to see if the use of such
> long command strings creates any hot spots in the profile.
>
> regards, tom lane

No, I haven't measured it. I will compare this approach with others that have been suggested. Thanks.

-steve