Optimizing numeric SUM() aggregate
From | Heikki Linnakangas |
---|---|
Subject | Optimizing numeric SUM() aggregate |
Msg-id | c0545351-a467-5b76-6d46-4840d1ea8aa4@iki.fi |
Responses | Re: Optimizing numeric SUM() aggregate |
| Re: Optimizing numeric SUM() aggregate |
List | pgsql-hackers |
Hi,

I spent some time profiling a simple query with a SUM() aggregate. We've made some big improvements in this area in recent years, but it seems there's still some room for improvement. A lot of CPU time is spent in the numeric add_var() and friends, which isn't that surprising, I guess.

I came up with the attached patch that keeps the sum in a specialized accumulator, instead of a NumericVar. The specialized accumulator has a few tricks, compared to the status quo:

1. It uses 32-bit integers to represent each base-10000 "digit". Instead of propagating the carry after each new value, that's done only every 9999 values (or at the end).

2. It accumulates positive and negative values separately. The positive and negative sums are added together only at the end. This avoids the overhead in add_var() of figuring out which value is larger and determining the result sign at each step.

3. It only allocates a new buffer when it needs to be enlarged; add_abs() allocates a new one on every call.

These optimizations give a nice speedup for SUM(), and for other aggregates like AVG() and STDDEV() that use the same aggregate state. For example, using the same test query that Hadi Moshayedi used in previous work on numeric aggregates (https://www.postgresql.org/message-id/CAK%3D1%3DWrmCkWc_xQXs_bpUyswCPr7O9zkLmm8Oa7_nT2vybvBEQ%40mail.gmail.com):

CREATE TABLE avg_test AS
  SELECT (random() * 999)::decimal(5,2) as d
  FROM generate_series(1, 10000000) s;

SELECT avg(d) FROM avg_test;

On my laptop, with max_parallel_workers_per_gather=0, this runs in about 1.5 s without the patch, and 1.2 s with the patch.

- Heikki
Attachments