Re: pessimal trivial-update performance
From        | Jesper Krogh
Subject     | Re: pessimal trivial-update performance
Date        |
Msg-id      | 4C31AC43.80409@krogh.cc
In reply to | Re: pessimal trivial-update performance (Tom Lane <tgl@sss.pgh.pa.us>)
Responses   | Re: pessimal trivial-update performance
List        | pgsql-hackers
On 2010-07-04 06:11, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>
>> CREATE OR REPLACE FUNCTION update_tab() RETURNS void AS $$
>> BEGIN
>>   INSERT INTO tab VALUES (0);
>>   FOR i IN 1..100000 LOOP
>>     UPDATE tab SET x = x + 1;
>>   END LOOP;
>> END
>> $$ LANGUAGE plpgsql;
>
> I believe that none of the dead row versions can be vacuumed during this
> test.  So yes, it sucks, but is it representative of real-world cases?

The problem can generally be described as "tuples seeing multiple updates in the same transaction".

I think that every time PostgreSQL is used with an ORM, a certain amount of these multiple updates takes place. I have actually been reworking the client side to get around multiple updates, since they popped up in one of my profiling runs. Although the time I optimized away ended up being both "round-trip time" and "update time", having the database do half of it transparently might have been sufficient to push the bigger problem elsewhere.

To sum up: yes, I think it is indeed a real-world case.

Jesper

--
Jesper
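[To make the ORM pattern being described concrete, a minimal sketch of the kind of workload meant here might look like the following; the orders/order_lines tables and columns are invented for illustration and do not come from the thread. Each "save" of the parent object re-updates the same row inside one transaction, so every UPDATE leaves a dead row version that cannot be vacuumed until the transaction ends.]

BEGIN;
INSERT INTO orders (id, item_count, total) VALUES (1, 0, 0.00);
-- The ORM re-saves the parent object after each child row is added,
-- updating the same orders row repeatedly within the same transaction:
INSERT INTO order_lines (order_id, price) VALUES (1, 9.99);
UPDATE orders SET item_count = 1, total = 9.99  WHERE id = 1;
INSERT INTO order_lines (order_id, price) VALUES (1, 5.00);
UPDATE orders SET item_count = 2, total = 14.99 WHERE id = 1;
COMMIT;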