disabling autocommit
From | Matt Van Mater |
---|---|
Subject | disabling autocommit |
Date | |
Msg-id | BAY9-F85GhsXRhBPnAF000199ef@hotmail.com |
List | pgsql-general |
I'm looking to get a little more performance out of my database, and saw in the docs a section about disabling autocommit by using the BEGIN and COMMIT keywords.

My problem is this: I enforce unique rows for all data, and occasionally there is an error where I try to insert a duplicate entry. I expect to see these duplicate entries and depend on the DB to enforce the row uniqueness. When I just run the insert statements without the BEGIN and COMMIT keywords, the insert fails only for that single statement, but if I disable autocommit then all the inserts fail because of one error.

As a test I ran about 1000 identical inserts with autocommit on and also with it off. I get roughly a 33% speed increase with autocommit off, so it's definitely a good thing. The problem is that parsing the insert statements myself to ensure there are no duplicates would lose the advantage that disabling autocommit gives me; I'd simply be spending the CPU cycles somewhere else.

Is there a way for me to say "only commit the successful commands and ignore the unsuccessful ones"? I know that's the point behind using this type of transaction/rollback statement, but I was curious if there was a way I could fix it.

Matt
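[A sketch of one common answer to this question: keep the single surrounding transaction for speed, but set a SAVEPOINT before each INSERT so a unique-constraint violation rolls back only that one statement rather than aborting the whole batch. The demo below uses Python's sqlite3 module purely so it is self-contained; the table name, column, and sample rows are made up for illustration, and in PostgreSQL the same SAVEPOINT / ROLLBACK TO SAVEPOINT statements apply.]

```python
import sqlite3

# Self-contained illustration: batch inserts inside one transaction,
# with a savepoint around each statement so a duplicate-key error
# skips only the offending row instead of failing the entire batch.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction manually
cur = conn.cursor()
cur.execute("CREATE TABLE t (val TEXT UNIQUE)")  # hypothetical table

rows = ["a", "b", "a", "c"]  # "a" appears twice: a deliberate duplicate
cur.execute("BEGIN")
inserted = 0
for v in rows:
    cur.execute("SAVEPOINT sp")
    try:
        cur.execute("INSERT INTO t (val) VALUES (?)", (v,))
        inserted += 1
    except sqlite3.IntegrityError:
        cur.execute("ROLLBACK TO sp")  # undo just the failed insert
    cur.execute("RELEASE sp")
cur.execute("COMMIT")

print(inserted)  # 3 -- the duplicate was skipped, the rest committed
```

Note that savepoints are not free (in PostgreSQL each one has bookkeeping overhead), so some of the batching speedup is traded away; whether the net result still beats per-statement autocommit is worth measuring on the actual workload.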