Re: [GENERAL] Performance
From: Jim Richards
Subject: Re: [GENERAL] Performance
Msg-id: 199910290816.EAA62562@hub.org
In reply to: Performance ("Jason C. Leach" <jcl@mail.ocis.net>)
List: pgsql-general
I don't know about DBI specifically, but it should support this: do the inserts as

BEGIN WORK; INSERT ...; INSERT ...; INSERT ...; COMMIT WORK;

This wraps all the inserts in a single transaction, so other processes won't see the changes until the commit completes. It also means that if an error occurs partway through the insert sequence, the whole thing can be rolled back without a problem.

>I've been playing with pgsql for a few days now and am getting the hang
>of it. I just did a loop that inserts a few thousand records into a
>table. I did a statement, prepare, execute; it worked fine, although pg
>seemed to access the hd for every insert. Is there a way to cache
>inserts and then write them all at once later? I'm using Perl with
>DBD::Pg/DBI and see with DBI there is a prepare_cached, and a commit.
>Not much in the way of docs for the modules though.
>
>Perhaps I should be doing statement, prepare, statement, prepare,
>commit?

--
Subvert the dominant paradigm
http://www.cyber4.org/members/grumpy/index.html
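A minimal sketch of the batching idea above. The poster's actual stack is Perl DBI against PostgreSQL (there the equivalent is setting `AutoCommit => 0`, or calling `$dbh->begin_work`, then `$dbh->commit` after the loop); for a self-contained illustration, this uses Python's stdlib sqlite3 with an in-memory database, since the transaction pattern is the same regardless of driver:

```python
import sqlite3

# In-memory database standing in for the poster's PostgreSQL setup;
# the point is the transaction pattern, not the specific driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

# Equivalent of BEGIN WORK ... COMMIT WORK: group every INSERT in
# one transaction, so changes are flushed once at COMMIT rather
# than once per row. `with conn:` opens a transaction, commits on
# success, and rolls back automatically if an exception is raised.
with conn:
    conn.executemany(
        "INSERT INTO t (n) VALUES (?)",
        [(i,) for i in range(5000)],
    )

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 5000
```

If any insert fails, the `with` block rolls the whole batch back, matching the all-or-nothing behavior described above.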