Re: how do you write aggregate function
From | dmp
---|---
Subject | Re: how do you write aggregate function
Date |
Msg-id | 47D429CA.8060408@ttc-cmc.net
In reply to | how do you write aggregate function (Justin <justin@emproshunts.com>)
List | pgsql-general
Justin, I'm really not an expert in this area, in how to code this, or in PostgreSQL functions. All I can offer are some ideas you might try; sometimes that is all it takes. Perhaps someone else will respond who can be more helpful. That said, I have read in the documentation about using the other procedural languages, and if you are looking for higher performance, that is probably where it will come from.

danap.

> DMP, you did give me an idea on changing how I call the array append. The sfunc now looks like this:
>
> create or replace function wcost_average_sf (numeric[], numeric, numeric)
> returns numeric[] as
> $Body$
> begin
>     return array_append(array_append($1, $2), $3);
> end;
> $Body$
> LANGUAGE 'plpgsql' VOLATILE;
>
> This took 140,000 ms off the run time, a big improvement but nowhere near where I want it to be.
>
> Are there speed improvements in the other languages, such as Tcl?
>
> dmp wrote:
>
>> Array appends are usually a performance hit, as you said. I'm not sure, though, with PostgreSQL. Why not try it with two arrays and see what happens? At least you would be reducing the single array and eliminating the append.
>>
>> danap.
>>
>>> I got the aggregate function for weighted average done. I was finally left alone for more than 10 minutes to actually get it written. It takes two input values, Weight and Value. It sums the weighted entries to create the constant, then applies the normal formula, returning not a percentage but an averaged number. With a simple change on the return line it can do percentages.
>>>
>>> I did a couple of things a little oddly. Instead of a multidimensional array I used a one-dimensional array where the 1st row is Weight and the 2nd row is Value. This made the loop through the array look stupid.
>>>
>>> I tested it across 50,000 records with a GROUP BY and it took 3.3 seconds to run.
>>>
>>> Without the GROUP BY clause, performance is terrible: the sfunc part alone takes several minutes (371,563 ms).
>>>
>>> The array seems to carry a performance hit. Any advice? Could it be the way I'm appending to the array, with the cost growing as the array gets bigger and bigger?
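For reference, a minimal sketch (not the code from this thread) of the approach the discussion keeps circling: instead of appending every (weight, value) pair to the state array and walking the whole array in the final function, the transition function can carry just two running sums, sum(weight * value) and sum(weight), so each row does a constant amount of work no matter how large the group gets. The names wavg_sf, wavg_ff, and wavg, and the two-element numeric[] state, are illustrative assumptions, not Justin's actual functions.

    create or replace function wavg_sf (state numeric[], weight numeric, val numeric)
    returns numeric[] as
    $Body$
    begin
        -- state[1] accumulates sum(weight * value); state[2] accumulates sum(weight)
        return array[ state[1] + weight * val, state[2] + weight ];
    end;
    $Body$
    LANGUAGE plpgsql STRICT;    -- STRICT: rows where weight or value is NULL are skipped

    create or replace function wavg_ff (state numeric[])
    returns numeric as
    $Body$
    begin
        if state[2] = 0 then
            return null;               -- no rows (or only zero weights) were accumulated
        end if;
        return state[1] / state[2];    -- the weighted average; adjust here for percentages
    end;
    $Body$
    LANGUAGE plpgsql;

    create aggregate wavg (numeric, numeric) (
        sfunc     = wavg_sf,
        stype     = numeric[],
        finalfunc = wavg_ff,
        initcond  = '{0,0}'
    );

Assuming a table with weight and value columns (hypothetical names), it would be called as

    select some_group, wavg(weight, value) from some_table group by some_group;

Because the state never grows, the no-GROUP-BY case should no longer pay for copying an ever-larger array on every row; the choice of PL/pgSQL versus Tcl or C would then mainly change the fixed per-row overhead.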