Aggregate function
In database management, an aggregate function or aggregation function is a function where multiple values are processed together to form a single summary statistic.

Common aggregate functions include:
- Average (i.e., arithmetic mean)
- Count
- Maximum
- Median
- Minimum
- Mode
- Range
- Sum
Others include:
- Nanmean (mean ignoring NaN values, also known as "nil" or "null" values)
- Stddev (standard deviation)
Formally, an aggregate function takes as input a set, a multiset (bag), or a list from some input domain I and outputs an element of an output domain O.[1] The input and output domains may be the same, such as for SUM, or may be different, such as for COUNT.
Aggregate functions occur commonly in numerous programming languages, in spreadsheets, and in relational algebra.
The listagg function, as defined in the SQL:2016 standard,[2] aggregates data from multiple rows into a single concatenated string.
In the entity relationship diagram, aggregation is represented as seen in Figure 1 with a rectangle around the relationship and its entities to indicate that it is being treated as an aggregate entity.[3]
Decomposable aggregate functions
Aggregate functions present a bottleneck, because they potentially require having all input values at once. In distributed computing, it is desirable to divide such computations into smaller pieces, and distribute the work, usually computing in parallel, via a divide and conquer algorithm.
Some aggregate functions can be computed by computing the aggregate for subsets, and then aggregating these aggregates; examples include COUNT, MAX, MIN, and SUM. In other cases the aggregate can be computed by computing auxiliary numbers for subsets, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include AVERAGE (tracking sum and count, dividing at the end) and RANGE (tracking max and min, subtracting at the end). In other cases the aggregate cannot be computed without analyzing the entire set at once, though in some cases approximations can be distributed; examples include DISTINCT COUNT (Count-distinct problem), MEDIAN, and MODE.
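The divide-and-conquer pattern described above can be sketched in Python (an illustrative sketch, not tied to any particular framework): each chunk is aggregated independently, and only the partial results are merged.

```python
from functools import reduce

# Split the input into chunks, as a distributed system would.
data = [3, 1, 4, 1, 5, 9, 2, 6]
chunks = [data[0:4], data[4:8]]

# Self-decomposable aggregates: compute per chunk, then merge.
partial_sums = [sum(c) for c in chunks]
partial_maxes = [max(c) for c in chunks]
total_sum = reduce(lambda a, b: a + b, partial_sums)   # 31
overall_max = reduce(max, partial_maxes)               # 9

# AVERAGE needs auxiliary numbers (sum, count) merged pairwise,
# with the division deferred to the very end.
partials = [(sum(c), len(c)) for c in chunks]
s, n = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), partials)
average = s / n  # 31 / 8 = 3.875
```

Note that MEDIAN admits no such merge: knowing the median of each chunk is not enough to recover the median of the whole.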
Such functions are called decomposable aggregation functions[4] or decomposable aggregate functions. The simplest may be referred to as self-decomposable aggregation functions, which are defined as those functions f such that there is a merge operator ◊ such that

f(X ⊎ Y) = f(X) ◊ f(Y),

where ⊎ is the union of multisets (see monoid homomorphism).
For example, SUM:
- f({x}) = x, for a singleton;
- f(X ⊎ Y) = f(X) + f(Y), meaning that merge ◊ is simply addition.
COUNT:
- f({x}) = 1,
- f(X ⊎ Y) = f(X) + f(Y).
MAX:
- f({x}) = x,
- f(X ⊎ Y) = max(f(X), f(Y)).
MIN:
- f({x}) = x,[2]
- f(X ⊎ Y) = min(f(X), f(Y)).
Note that self-decomposable aggregation functions can be combined (formally, taking the product) by applying them separately, so for instance one can compute both the SUM and COUNT at the same time, by tracking two numbers.
More generally, one can define a decomposable aggregation function f as one that can be expressed as the composition of a final function g and a self-decomposable aggregation function h, f = g ∘ h. For example, AVERAGE = SUM/COUNT and RANGE = MAX − MIN.
In the MapReduce framework, these steps are known as InitialReduce (value on individual record/singleton set), Combine (binary merge on two aggregations), and FinalReduce (final function on auxiliary values),[5] and moving decomposable aggregation before the Shuffle phase is known as an InitialReduce step.[6]
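The three steps can be sketched for AVERAGE (hypothetical helper names; real MapReduce APIs differ by framework):

```python
# InitialReduce: map a single record to an auxiliary (sum, count) pair.
def initial_reduce(value):
    return (value, 1)

# Combine: binary merge of two auxiliary pairs.
def combine(a, b):
    return (a[0] + b[0], a[1] + b[1])

# FinalReduce: apply the final function g to the merged auxiliaries.
def final_reduce(aux):
    s, n = aux
    return s / n

records = [2.0, 4.0, 6.0, 8.0]
aux_values = [initial_reduce(r) for r in records]
merged = aux_values[0]
for a in aux_values[1:]:
    merged = combine(merged, a)
print(final_reduce(merged))  # 5.0
```

Because combine is associative and commutative, the merges can happen in any order and on any node, which is what makes the pre-Shuffle reduction safe.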
Decomposable aggregation functions are important in online analytical processing (OLAP), as they allow aggregation queries to be computed on the pre-computed results in the OLAP cube, rather than on the base data.[7] For example, it is easy to support COUNT, MAX, MIN, and SUM in OLAP, since these can be computed for each cell of the OLAP cube and then summarized ("rolled up"), but it is difficult to support MEDIAN, as that must be computed for every view separately.
Other decomposable aggregate functions
In order to calculate the average and standard deviation from aggregate data, it is necessary to have available for each group: the total of values (Σxᵢ = SUM(x)), the number of values (N = COUNT(x)), and the total of the squares of the values (Σxᵢ² = SUM(x²)) of each group.[8]
AVG:
AVG(X ⊎ Y) = (SUM(X) + SUM(Y)) / (COUNT(X) + COUNT(Y))
or
AVG(X ⊎ Y) = (AVG(X) · COUNT(X) + AVG(Y) · COUNT(Y)) / (COUNT(X) + COUNT(Y))
or, only if COUNT(X) = COUNT(Y),
AVG(X ⊎ Y) = (AVG(X) + AVG(Y)) / 2
SUM(x²):
SUM_{X⊎Y}(x²) = SUM_X(x²) + SUM_Y(x²)
The sum of the squares of the values is important in order to calculate the standard deviation of groups.
STDDEV:
For a finite population with equal probabilities at all points, we have[9][circular reference]

STDDEV(x) = sqrt( SUM(x²)/COUNT(x) − (SUM(x)/COUNT(x))² )

This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value.
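The identity can be checked numerically; a small sketch (population standard deviation, computed only from the per-group aggregates SUM(x), COUNT(x), and SUM(x²)):

```python
import math

group = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# The three aggregates that must be kept per group.
total = sum(group)                    # SUM(x)  = 40.0
count = len(group)                    # COUNT(x) = 8
total_sq = sum(x * x for x in group)  # SUM(x^2) = 232.0

# sqrt(mean of squares minus square of the mean)
stddev = math.sqrt(total_sq / count - (total / count) ** 2)
print(stddev)  # 2.0
```

Because all three inputs are themselves decomposable (sums and a count), standard deviation over merged groups can be derived without revisiting the raw values.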
See also
- Cross-tabulation, a.k.a. Contingency table
- Data drilling
- Data mining
- Data processing
- Extract, transform, load
- Fold (higher-order function)
- Group by (SQL), SQL clause
- OLAP cube
- Online analytical processing
- Pivot table
- Relational algebra
- Utility functions on indivisible goods (aggregates of utility functions)
- XML for Analysis
- AggregateIQ
- MapReduce
References
- ^ Jesus, Baquero & Almeida 2011, 2 Problem Definition, p. 3.
- ^ a b Winand, Markus (2017-05-15). "Big News in Databases: New SQL Standard, Cloud Wars, and ACIDRain (Spring 2017)". DZone. Archived from the original on 2017-05-27. Retrieved 2017-06-10.
In December 2016, ISO released a new version of the SQL standard. It introduces new features such as row pattern matching, listagg, date and time formatting, and JSON support.
- ^ Elmasri, Ramez (2016). Fundamentals of database systems. Sham Navathe (Seventh ed.). Hoboken, NJ. p. 133. ISBN 978-0-13-397077-7. OCLC 913842106.
- ^ Jesus, Baquero & Almeida 2011, 2.1 Decomposable functions, pp. 3–4.
- ^ Yu, Gunda & Isard 2009, 2. Distributed Aggregation, pp. 2–4.
- ^ Yu, Gunda & Isard 2009, 2. Distributed Aggregation, p. 1.
- ^ Zhang 2017, p. 1.
- ^ Ing. Óscar Bonilla, MBA
- ^ Standard deviation#Identities and mathematical properties
Literature
- Grabisch, Michel; Marichal, Jean-Luc; Mesiar, Radko; Pap, Endre (2009). Aggregation functions. Encyclopedia of Mathematics and its Applications. Vol. 127. Cambridge: Cambridge University Press. ISBN 978-0-521-51926-7. Zbl 1196.00002.
- Oracle Aggregate Functions: MAX, MIN, COUNT, SUM, AVG Examples
- Yu, Yuan; Gunda, Pradeep Kumar; Isard, Michael (2009). Distributed aggregation for data-parallel computing: interfaces and implementations. ACM SIGOPS 22nd symposium on Operating systems principles. ACM. pp. 247–260. doi:10.1145/1629575.1629600.
- Jesus, Paulo; Baquero, Carlos; Almeida, Paulo Sérgio (2011). "A Survey of Distributed Data Aggregation Algorithms". arXiv:1110.0725 [cs.DC].
- Zhang, Chao (2017). Symmetric and Asymmetric Aggregate Function in Massively Parallel Computing (Technical report).
Aggregate function

Definition and Fundamentals
Formal Definition
In mathematics, an aggregate function, also known as an aggregation function, is formally defined as a mapping A: ⋃ₙ [0,1]ⁿ → [0,1] that combines multiple input values into a single representative output, where the function is nondecreasing in each argument and satisfies the boundary conditions A(0, …, 0) = 0 and A(1, …, 1) = 1.[8] This definition captures the essence of summarizing a finite collection of numerical elements, typically from an interval like [0, 1] (representing degrees of certainty or utility in applications such as fuzzy logic), into one value that preserves key informational aspects of the inputs. The domain is the union over all finite dimensions n, allowing for variable input sizes, and the function is often extended to act symmetrically on unordered collections via multisets. This formulation assumes familiarity with basic set theory, including the construction of Cartesian products [0,1]ⁿ for finite n and the notion of monotonicity. For handling collections with potential repetitions (multisets), the notation extends naturally, such as evaluating f(X ⊎ Y), where ⊎ denotes multiset union, ensuring the aggregation respects the combined counts without regard to order.[3] Unlike general set functions, which map subsets of a universe to real numbers without imposed structural constraints (e.g., capacities used in Choquet integrals), aggregate functions incorporate monotonicity and boundary preservation to ensure interpretable summarization, facilitating efficient computation in hierarchical or recursive applications.[3] These properties distinguish them as specialized tools for data fusion rather than arbitrary valuations. The concept of aggregate functions traces its origins to ancient Greek mathematics, where means and averaging techniques were first studied, laying the groundwork for modern formalizations in pure mathematics and decision theory.[9]
Basic Examples
To illustrate aggregate functions, consider simple numerical sets where these operations reduce multiple values to a single representative result, aligning with their formal definition as mappings from a collection of data to a scalar output.[10] The arithmetic mean, or average, measures central tendency by dividing the sum by the number of elements, given by AM(x₁, …, xₙ) = (1/n) Σᵢ xᵢ. For the set {0.1, 0.2, 0.3}, it yields 0.2.[11] The minimum function identifies the smallest value in the set, denoted min(x₁, …, xₙ), while the maximum identifies the largest, denoted max(x₁, …, xₙ). For {0.1, 0.2, 0.3}, these are 0.1 and 0.3, respectively.[12][13] These computations can be summarized in the following table for the set {0.1, 0.2, 0.3}:

| Set | Mean | Min | Max |
|---|---|---|---|
| {0.1, 0.2, 0.3} | 0.2 | 0.1 | 0.3 |
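The table's values are easy to reproduce with plain Python built-ins (rounding only to absorb floating-point noise in the sum):

```python
s = [0.1, 0.2, 0.3]
mean = sum(s) / len(s)
print(round(mean, 10), min(s), max(s))  # 0.2 0.1 0.3
```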
Key Properties
Decomposability
Decomposability is a fundamental property of aggregate functions that facilitates their efficient evaluation over large datasets by allowing computation on partitions that can later be combined. An aggregate function f is decomposable if there exists an associative and commutative binary operator ⋄ such that, for any two disjoint finite sets X and Y, f(X ⊎ Y) = f(X) ⋄ f(Y). This structure often forms a commutative monoid, where ⋄ has a neutral (identity) element e satisfying e ⋄ a = a for all a in the codomain, and f(∅) = e; for instance, 0 serves as the neutral element for the sum function, as summing a singleton set containing 0 yields 0. To demonstrate, consider the sum function SUM(X) = Σ_{x∈X} x, a classic decomposable aggregate. For disjoint finite sets X and Y, the sum over the union is SUM(X ⊎ Y) = SUM(X) + SUM(Y), where ⋄ is addition. This equality holds by the linearity of summation, as the total is simply the additive combination of the partial totals without overlap.[14] Such decomposability underpins the design of scalable aggregation in relational systems, as seen in early formulations for multi-dimensional data cubes.[14] The property's importance lies in its support for parallel and distributed processing: data can be partitioned into subsets, aggregated independently on each (e.g., across processors or network nodes), and the results merged via ⋄, enabling stepwise reduction while minimizing data transfer and enabling scalability for massive datasets. Decomposability extends seamlessly to multisets, where elements may have multiplicities greater than one, by incorporating a multiplicity function that accounts for duplicates in the aggregation. Duplicate-sensitive decomposable functions, such as sum, incorporate these multiplicities into the partial results (e.g., repeated values add multiple times), while duplicate-insensitive ones, like minimum, ignore multiplicity and depend only on the support set.
Monotonicity and Idempotence
In aggregate functions, monotonicity refers to the behavior of the function under subset inclusions of the input data. An aggregation function f is monotonically increasing if, for any two multisets X and Y with X ⊆ Y, it holds that f(X) ≤ f(Y); conversely, it is monotonically decreasing if f(X) ≥ f(Y) under the same condition.[15] Common examples of monotonically increasing aggregates include the sum, which accumulates values and thus grows or stays the same with added elements, and the maximum, where including more elements cannot decrease the largest value.[15] However, not all aggregates are monotonic; for instance, variance is non-monotonic, as adding elements to a set can either increase or decrease the spread depending on their deviation from the mean; for example, adding a value close to the current mean may reduce variance relative to adding an outlier.[15] Idempotence describes the stability of an aggregate under repeated application to identical inputs. An aggregation function f is idempotent if f(x, x, …, x) = x for any number of repetitions n.[3] This property holds for the minimum and maximum functions: for the maximum, max(x, x, …, x) = x, since the operation selects the largest element regardless of repetition.[16] In contrast, the sum is not idempotent; sum(x, x, …, x) = n·x ≠ x unless n = 1. The arithmetic mean is idempotent, as mean(x, x, …, x) = (n·x)/n = x. For decomposable aggregates, idempotence corresponds to the combining operator ⋄ satisfying a ⋄ a = a.[16][3] Certain aggregate functions exhibit Schur-convexity or Schur-concavity, which capture sensitivity to the inequality in data distributions via majorization order, where a vector x majorizes y if x is more spread out than y while preserving the sum. Schur-convex functions, such as variance, increase under majorization, reflecting greater dispersion.[17] The Gini coefficient, an inequality measure often treated as an aggregate of income distributions, is Schur-convex, meaning it rises with increased inequality in the majorized direction.[17]

| Function | Monotonic | Idempotent |
|---|---|---|
| Sum | Yes (increasing) | No |
| Max | Yes (increasing) | Yes |
| Mean | No | Yes |
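A quick sketch verifying the table's claims on a repeated value and on subset growth:

```python
x = 5.0
repeated = [x] * 4

# Idempotence: max and mean return x itself; sum does not.
assert max(repeated) == x
assert sum(repeated) / len(repeated) == x
assert sum(repeated) != x  # 20.0, not 5.0

# Monotonicity under subset inclusion: adding elements can only
# grow sum and max, but can move the mean either way.
small = [1.0, 2.0]
large = small + [3.0]
assert sum(large) >= sum(small)
assert max(large) >= max(small)
# mean(small) = 1.5, mean(large) = 2.0; appending 0.0 instead of
# 3.0 would have lowered the mean, so mean is not monotonic.
```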
Types of Aggregate Functions
Decomposable Aggregate Functions
Decomposable aggregate functions are those that permit computation over disjoint subsets of data, with partial results combinable to yield the overall aggregate, enabling efficient parallel or distributed processing. This property, formalized as the existence of a combination function ⋄ such that f(X ⊎ Y) = f(X) ⋄ f(Y) for disjoint sets X and Y, supports optimizations like eager aggregation in query processing.[18] Common examples include the product function, defined as PROD(X) = ∏_{x∈X} x, which decomposes multiplicatively since the product over a union equals the product of partial products. Similarly, logical operations on booleans, such as AND (conjunction) and OR (disjunction), are decomposable: AND over a union is the AND of partial ANDs, and likewise for OR; treating true/false as 1/0, AND behaves like multiplication (or minimum) and OR like maximum. These functions allow incremental updates via simple combination.[18] Extended means, particularly power means, exemplify decomposability through summation-based partial computations. The power mean of order p is given by M_p(x₁, …, xₙ) = ((1/n) Σᵢ xᵢᵖ)^(1/p) for p ≠ 0 and positive xᵢ, which decomposes because partial power sums can be aggregated additively before normalization and rooting. As quasi-arithmetic means satisfying symmetry, continuity, strict monotonicity, idempotency, and decomposability axioms, power means enable replacement of data subsets with their partial aggregates without altering the result.[3] Decomposable functions categorize into additive types like sum (SUM(X) = Σ_{x∈X} x), where partial sums combine via addition; multiplicative types like product, where partial products multiply; and extremal functions like min and max, which serve as limiting cases of power means (as p → −∞ for min and p → +∞ for max) and decompose via selection of partial minima or maxima. These categories facilitate partition-tolerant computation in distributed systems.[18][3] Incremental aggregation algorithms exploit decomposability for efficiency.
For sum, pseudocode initializes a partial accumulator and updates iteratively:

function partial_sum(values):
    partial = 0
    for x in values:
        partial += x
    return partial

function combine_sums(p1, p2):
    return p1 + p2
For product, the accumulator starts at the multiplicative identity 1:

function partial_product(values):
    partial = 1
    for x in values:
        partial *= x
    return partial

function combine_products(p1, p2):
    return p1 * p2
For minimum, the accumulator starts at infinity (the neutral element for min):

function partial_min(values):
    partial = infinity  # or first value
    for x in values:
        if x < partial:
            partial = x
    return partial

function combine_mins(p1, p2):
    return min(p1, p2)
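The power-mean decomposition described earlier follows the same partial/combine/final shape: keep a (power sum, count) pair per subset, merge additively, and normalize and take the root only at the end (a sketch assuming p ≠ 0 and positive inputs):

```python
def partial_power(values, p):
    # Auxiliary pair: (sum of x**p, count)
    return (sum(x ** p for x in values), len(values))

def combine_powers(a, b):
    # Partial power sums and counts merge additively.
    return (a[0] + b[0], a[1] + b[1])

def final_power_mean(aux, p):
    # Final function: normalize, then take the p-th root.
    s, n = aux
    return (s / n) ** (1.0 / p)

left, right = [1.0, 4.0], [16.0, 64.0]
p = 0.5  # power mean of order 1/2
merged = combine_powers(partial_power(left, p), partial_power(right, p))
print(final_power_mean(merged, p))  # same value as aggregating the full list
```

The merge touches only the auxiliary pairs, never the raw values, which is exactly what lets power means be computed over partitioned data.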
