Re: pgsql-server/ oc/src/sgml/runtime.sgml rc/back ...
From | Tom Lane |
---|---|
Subject | Re: pgsql-server/ oc/src/sgml/runtime.sgml rc/back ... |
Date | |
Msg-id | 6777.1037849853@sss.pgh.pa.us |
In reply to | Re: pgsql-server/ oc/src/sgml/runtime.sgml rc/back ... (Christopher Kings-Lynne <chriskl@familyhealth.com.au>) |
List | pgsql-committers |
Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:
> Out of interest (since I was away while this was proposed I assume),
> what's the idea with hashed aggregation?

Old method: scan rows in order by the GROUP BY columns (requiring a sort, or if you're lucky an indexscan), and execute one aggregation at a time.

New method: scan rows in any old order (typically a seqscan), and run all the per-group aggregates in parallel. It's a hash aggregation because we use an in-memory hashtable indexed by the values of the GROUP BY columns to keep track of the running state of each aggregate.

The hash method avoids a sort before aggregation, at the cost of a sort afterwards if you want the results in non-random order. But the post-sort is only sorting one row per group, which is usually a lot less data than the input rows.

One case where the old method can still win is where you have

	SELECT ... GROUP BY foo ORDER BY foo LIMIT n;

for small n. The hash method does not produce any output till it's read all the input; the old method can produce a few rows very cheaply if foo is indexed. Also, of course, the hash method fails if you have too many groups to permit the hashtable to fit in memory.

			regards, tom lane
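[Editor's note: to make the contrast concrete, here is a minimal sketch in Python of the two grouping strategies described above, using a single SUM aggregate over made-up (group key, value) rows. This is illustrative only, not PostgreSQL's executor code; the function and variable names are invented for the example.]

```python
from collections import defaultdict
from itertools import groupby
from operator import itemgetter

# Hypothetical input rows: (GROUP BY key, value) pairs.
rows = [("b", 3), ("a", 1), ("b", 4), ("a", 2), ("c", 5)]

def sum_by_sort(rows):
    """Old method: sort by the GROUP BY column, then aggregate one group at a time."""
    ordered = sorted(rows, key=itemgetter(0))        # the sort (or indexscan) comes first
    return [(key, sum(v for _, v in grp))            # each group's rows arrive contiguously
            for key, grp in groupby(ordered, key=itemgetter(0))]

def sum_by_hash(rows):
    """New method: scan in any order, keep per-group running state in a hashtable."""
    state = defaultdict(int)                         # keyed by the GROUP BY value
    for key, value in rows:                          # typically a seqscan, no pre-sort
        state[key] += value                          # update that group's running aggregate
    # Output order is arbitrary; an ORDER BY needs a post-sort, but only over
    # one row per group rather than over all the input rows.
    return sorted(state.items())

print(sum_by_sort(rows))   # [('a', 3), ('b', 7), ('c', 5)]
print(sum_by_hash(rows))   # [('a', 3), ('b', 7), ('c', 5)]
```

The sketch also shows the two caveats from the mail: sum_by_hash cannot emit anything until the whole input has been scanned, and its hashtable grows with the number of distinct groups, whereas sum_by_sort streams out finished groups as soon as the sorted input moves past them.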