[Fwd: Re: how do you write aggregate function]
From | Justin |
---|---|
Subject | [Fwd: Re: how do you write aggregate function] |
Date | |
Msg-id | 47D4364C.5020209@emproshunts.com |
List | pgsql-general |
DMP, you did give me an idea on how to call the array append. The sfunc now looks like this:

    create or replace function wcost_average_sf (numeric[], numeric, numeric)
    returns numeric[] as
    $Body$
    begin
        return array_append(array_append($1, $2), $3);
    end;
    $Body$
    LANGUAGE plpgsql VOLATILE;

This knocked 140,000 ms (about 2.3 minutes) off the run time, a big improvement, but nowhere near where I want it to be. Are there speed improvements in the other languages, such as Tcl?

dmp wrote:
> Array appends are usually a performance hit, as you said. I'm not sure
> though with PostgreSQL. Why not try it with two arrays and see what
> happens. At least you would be reducing the single array and eliminating
> the append.
>
> danap.
>
>> I got the aggregate function for weighted average done. I was finally
>> left alone for more than 10 minutes to actually get it written. It
>> takes two input values, the Weight and the Value. It sums the weighted
>> entries to create the constant, then does the normal formula, but returns
>> an averaged number rather than a percentage. With a simple change on the
>> return line it can do percentages.
>>
>> I did a couple of things a little oddly. Instead of using a
>> multi-dimensional array I used a one-dimensional array where the 1st row
>> is the Weight and the 2nd row is the Value. This made the loop through
>> the array look stupid.
>> I tested it across 50,000 records with a GROUP BY; it took 3.3 seconds
>> to run.
>>
>> Without the GROUP BY clause, performance is terrible, taking several
>> minutes just to do the sfunc part: 371,563 ms.
>>
>> The array seems to have a performance hit. Any advice? It could be
>> the way I'm appending to the array, which takes a performance hit as the
>> array gets bigger and bigger?
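[Editor's sketch, not from the thread] For context, a minimal example of how an sfunc like the one above could be wired into a complete aggregate. It assumes the accumulated array alternates weight and value in the order they are appended ($2 then $3), and that the weighted average is sum(weight * value) / sum(weight); the names wcost_average_ff and wcost_average are made up for illustration.

    -- final function: walk the flat array two elements at a time
    -- (assumed layout: weight at odd positions, value at even positions)
    create or replace function wcost_average_ff (numeric[])
    returns numeric as
    $Body$
    declare
        total_weight numeric := 0;
        weighted_sum numeric := 0;
        i integer;
    begin
        i := array_lower($1, 1);
        while i < array_upper($1, 1) loop
            total_weight := total_weight + $1[i];
            weighted_sum := weighted_sum + $1[i] * $1[i + 1];
            i := i + 2;
        end loop;
        if total_weight = 0 then
            return null;  -- no rows accumulated, or all weights zero
        end if;
        return weighted_sum / total_weight;
    end;
    $Body$
    LANGUAGE plpgsql;

    -- tie the state function and final function together
    create aggregate wcost_average (numeric, numeric) (
        sfunc     = wcost_average_sf,
        stype     = numeric[],
        finalfunc = wcost_average_ff,
        initcond  = '{}'
    );

It could then be used like any other aggregate, e.g. SELECT wcost_average(weight, value) FROM some_table GROUP BY part_no; (table and column names here are placeholders).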