Re: [GENERAL] 7.4Beta

From: Andreas Pflug
Subject: Re: [GENERAL] 7.4Beta
Msg-id: 3F3D1D01.2080904@pse-consulting.de
In reply to: Re: [GENERAL] 7.4Beta (Stephan Szabo <sszabo@megazone.bigpanda.com>)
List: pgsql-hackers
Stephan Szabo wrote:
> I don't know if there will be or not, but in one case it's a single table
> select with constant values, in the other it's probably some kind of scan
> and subselect. I'm just not going to rule out the possibility, so we
> should profile it in large transactions with say 100k single inserts and
> see.

You're talking about bulk operations, which should be handled carefully as well. Usually, loading all the data into a temporary table and then doing an INSERT INTO xxx SELECT FROM tmptable gives better performance where indexes and constraints are concerned.

PostgreSQL shouldn't be expected to accept the most abusive ways of operation, but it should offer a reasonable set of tools that enable these jobs in a convenient way. The best situation is one where many small random transactions perform well, for TPC-like loads, as well as bulk operations. Nobody should expect a database to smoothly convert a bunch of single transactions into an optimized bulk one. That's the programmer's job.

> Yeah, the 5 above are pretty easy to show that it's safe, but other cases
> and referential action cases won't necessarily be so easy.

So it's the programmer's responsibility to offer mass data to the backend, not separate inserts that by chance might be handled in a similar way. An RDBMS is not clairvoyant.

Regards,
Andreas
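The staging pattern described above can be sketched roughly as follows. This is a minimal illustration using SQLite through Python's sqlite3 module (the table and column names are hypothetical); in PostgreSQL one would typically COPY into the temporary table instead, but the shape of the technique is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Target table, as it might exist with constraints and indexes attached.
cur.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# Step 1: load the raw data into a temporary staging table first.
cur.execute("CREATE TEMP TABLE tmp_items (id INTEGER, name TEXT)")
rows = [(i, "item-%d" % i) for i in range(1000)]
cur.executemany("INSERT INTO tmp_items VALUES (?, ?)", rows)

# Step 2: move everything into the real table in a single statement, so
# index and constraint maintenance happens as one bulk operation rather
# than per-row across thousands of tiny inserts.
cur.execute("INSERT INTO items SELECT id, name FROM tmp_items")
conn.commit()

count = cur.execute("SELECT count(*) FROM items").fetchone()[0]
print(count)  # 1000
```

The point is that the programmer, not the database, chooses the bulk path: the staging table absorbs the cheap row-at-a-time loading, and the final INSERT ... SELECT presents the data to the backend as one mass operation.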