Errors and residuals

from Wikipedia

In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value" (not necessarily observable). The error of an observation is the deviation of the observed value from the true value of a quantity of interest (for example, a population mean). The residual is the difference between the observed value and the estimated value of the quantity of interest (for example, a sample mean). The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals. In econometrics, "errors" are also called disturbances.[1][2][3]

Introduction

Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.

A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.

A residual (or fitting deviation), on the other hand, is an observable estimate of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of n people. The sample mean could serve as a good estimator of the population mean. Then we have:

  • The difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas
  • The difference between the height of each man in the sample and the observable sample mean is a residual.

Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily not independent. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.
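
As a minimal numerical sketch of this distinction (with an assumed population mean of 1.75 m and an assumed spread of 0.07 m, echoing the height example above), the following Python snippet contrasts errors against the population mean with residuals against the sample mean, and confirms that the residuals sum to zero while the errors almost surely do not:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 1.75                                # true population mean (known only in a simulation)
heights = rng.normal(mu, 0.07, size=10)  # hypothetical sample of 10 heights (sd 0.07 m assumed)

errors = heights - mu                 # deviations from the (normally unobservable) population mean
residuals = heights - heights.mean()  # deviations from the observable sample mean

print(errors.sum())     # almost surely nonzero
print(residuals.sum())  # exactly zero up to floating-point rounding
```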

One can standardize statistical errors (especially of a normal distribution) in a z-score (or "standard score"), and standardize residuals in a t-statistic, or more generally studentized residuals.

In univariate distributions

If we assume a normally distributed population with mean μ and standard deviation σ, and choose individuals independently, then we have

X_1, …, X_n ~ N(μ, σ²)

and the sample mean

X̄ = (X_1 + ⋯ + X_n) / n

is a random variable distributed such that:

X̄ ~ N(μ, σ²/n)

The statistical errors are then

e_i = X_i − μ

with expected values of zero,[4] whereas the residuals are

r_i = X_i − X̄

The sum of squares of the statistical errors, divided by σ², has a chi-squared distribution with n degrees of freedom:

(1/σ²) Σ_{i=1}^n e_i² ~ χ²_n

However, this quantity is not observable, as the population mean is unknown. The sum of squares of the residuals, on the other hand, is observable. The quotient of that sum by σ² has a chi-squared distribution with only n − 1 degrees of freedom:

(1/σ²) Σ_{i=1}^n r_i² ~ χ²_{n−1}

This difference between n and n − 1 degrees of freedom results in Bessel's correction for the estimation of sample variance of a population with unknown mean and unknown variance. No correction is necessary if the population mean is known.
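
A short simulation sketch (assuming μ = 0 and σ = 1) makes the lost degree of freedom visible: averaged over many samples, the sum of squared errors divided by σ² is about n, while the sum of squared residuals divided by σ² is about n − 1, which is exactly the gap Bessel's correction compensates for:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 100_000
mu, sigma = 0.0, 1.0  # assumed population parameters for the simulation

x = rng.normal(mu, sigma, size=(reps, n))
sse_errors = ((x - mu) ** 2).sum(axis=1) / sigma**2                            # uses the true mean
sse_resid = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma**2  # uses the sample mean

print(sse_errors.mean())  # ~ n      (chi-squared with n degrees of freedom)
print(sse_resid.mean())   # ~ n - 1  (chi-squared with n - 1 degrees of freedom)
```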

Remark

It is remarkable that the sum of squares of the residuals and the sample mean can be shown to be independent of each other, using, e.g., Basu's theorem. That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the t-statistic:

T = (X̄_n − μ₀) / (S_n / √n)

where X̄_n − μ₀ represents the errors, S_n represents the sample standard deviation for a sample of size n with unknown σ, and the denominator term S_n / √n accounts for the standard deviation of the errors according to:[5]

Var(X̄_n) = σ² / n

The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation σ, but σ appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know σ, we know the probability distribution of this quotient: it has a Student's t-distribution with n − 1 degrees of freedom. We can therefore use this quotient to find a confidence interval for μ. This t-statistic can be interpreted as "the number of standard errors away from the regression line."[6]
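
As an illustrative sketch of using this quotient for interval estimation (the sample values below are made up, and scipy.stats supplies the t quantile):

```python
import numpy as np
from scipy import stats

sample = np.array([1.72, 1.79, 1.83, 1.68, 1.75, 1.81])  # hypothetical heights in meters
n = sample.size

xbar = sample.mean()
s = sample.std(ddof=1)  # sample standard deviation S_n

# 95% confidence interval for mu from the t-distribution with n - 1 degrees of freedom
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / np.sqrt(n)
print(xbar - half_width, xbar + half_width)
```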

Regressions

In regression analysis, the distinction between errors and residuals is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals. If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero with no trend.[5] If the data exhibit a trend, the regression model is likely misspecified; for example, the true function may be a quadratic or higher-order polynomial. If the residuals are random about zero but their spread "fans out" across the range of the independent variable, they exhibit a phenomenon called heteroscedasticity; if their spread is roughly constant, they exhibit homoscedasticity.
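
One practical check is to fit a line and plot or summarize the residuals against the independent variable. In this sketch the data are simulated from an assumed quadratic true function, so the straight-line fit leaves a visible trend in the residuals:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 1, x.size)  # true function is quadratic

slope, intercept = np.polyfit(x, y, deg=1)  # deliberately misspecified linear fit
residuals = y - (intercept + slope * x)

# A trend in the residuals (positive at the ends and negative in the middle, or
# vice versa) signals that the linear model is inadequate.
print(np.corrcoef(x**2, residuals)[0, 1])  # strong correlation with the omitted term
```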

However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, not of the unobservable errors. If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals by df = n − p − 1 instead of n, where df is the number of degrees of freedom: n minus the number of estimated parameters p (excluding the intercept), minus 1. This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.[7]

Another way to calculate the mean square of error arises when analyzing the variance of linear regression using a technique like that used in ANOVA (they are equivalent because ANOVA is a type of regression): the sum of squares of the residuals (also called the sum of squares of the error) is divided by the degrees of freedom, n − p − 1, where p is the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. The F value can then be calculated by dividing the mean square of the model by the mean square of the error, and significance can be determined (which is why the mean squares are wanted in the first place).[8]
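
A sketch of these calculations on simulated data with a single predictor (so p = 1; the true coefficients below are assumptions of the simulation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p = 40, 1
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 2, n)  # assumed true line plus noise

slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x
residuals = y - fitted

ss_error = (residuals ** 2).sum()            # sum of squares of the residuals (error)
ss_model = ((fitted - y.mean()) ** 2).sum()  # sum of squares of the model

mse = ss_error / (n - p - 1)  # mean squared error: unbiased estimate of the error variance
msm = ss_model / p            # mean square of the model
f_value = msm / mse

print(mse, f_value, stats.f.sf(f_value, p, n - p - 1))  # F statistic and its p-value
```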

However, because of the behavior of the process of regression, the distributions of residuals at different data points (of the input variable) may vary even if the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be higher than the variability of residuals at the ends of the domain:[9] linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence.

Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of residuals, which is called studentizing. This is particularly important in the case of detecting outliers, where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
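
For linear regression the expected variability of each residual can be computed from the leverage h_ii (the diagonal of the hat matrix), since the residual variance is σ²(1 − h_ii); endpoints have high leverage and thus small residual variance. A sketch of internally studentized residuals on simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = np.sort(rng.uniform(0, 10, n))
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x])  # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix; h_ii is the leverage of point i
leverage = np.diag(H)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
s2 = (residuals ** 2).sum() / (n - 2)  # estimate of sigma^2 (one predictor plus intercept)

studentized = residuals / np.sqrt(s2 * (1 - leverage))  # internally studentized residuals
print(leverage.min(), leverage.max())  # the endpoints carry the highest leverage
```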

Other uses of the word "error" in statistics

The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors:

The mean squared error (MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). The root mean square error (RMSE) is the square root of MSE. The sum of squares of errors (SSE) is the MSE multiplied by the sample size.

Sum of squares of residuals (SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. This is the basis for the least squares estimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero).

Likewise, the sum of absolute errors (SAE) is the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression.

The mean error (ME) is the bias. The mean residual (MR) is always zero for least-squares estimators.
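
A brief sketch computing these observable prediction-error summaries (the observed and predicted values are made up for illustration):

```python
import numpy as np

y_true = np.array([3.1, 4.8, 6.2, 7.9, 9.5])  # observed values (made up)
y_pred = np.array([3.0, 5.0, 6.0, 8.0, 9.8])  # values predicted by some model

resid = y_true - y_pred
sse = (resid ** 2).sum()   # sum of squares of errors / residuals
mse = sse / resid.size     # mean squared error (here: simple mean of the squares)
rmse = np.sqrt(mse)        # root mean square error
sae = np.abs(resid).sum()  # sum of absolute errors
me = resid.mean()          # mean error: an estimate of the bias

print(sse, mse, rmse, sae, me)
```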

from Grokipedia
In statistics, particularly within regression analysis and statistical modeling, errors represent the true but unobservable deviations between actual observed values and the underlying population regression line, often denoted as ε_i and assumed to follow a normal distribution with mean zero and constant variance under standard model assumptions. In contrast, residuals, denoted as e_i, are the observable estimates of these errors, calculated as the vertical differences between the observed data points and the values predicted by the fitted sample regression model. This distinction is fundamental, as residuals serve as practical proxies for the unknown errors, enabling empirical assessment of model adequacy. Residuals play a central role in the validation and refinement of regression models by facilitating diagnostic checks for key assumptions, including linearity of the relationship between predictors and the response variable, independence of observations, normality of errors, and homoscedasticity (constant variance of residuals across levels of the predictors). For instance, plotting residuals against fitted values or predictors can reveal patterns such as nonlinearity or heteroscedasticity, indicating the need for model adjustments like transformations or additional terms. The sum of squared residuals (SSR), also known as the residual sum of squares, quantifies the unexplained variation in the data and is minimized in ordinary least squares to derive the best-fitting model. Beyond linear regression, the concepts of errors and residuals extend to generalized linear models, time series analysis, and other statistical frameworks, where they inform inference about parameter estimates, hypothesis testing, and prediction intervals. For example, in assessing model fit, standardized residuals or studentized residuals are often examined to identify influential outliers or leverage points that disproportionately affect the regression coefficients. Proper handling of residuals ensures reliable statistical conclusions, as violations of underlying assumptions can lead to biased estimates or invalid inferences.

Fundamental Concepts

Definition of Error

In statistics, an error is fundamentally defined as the difference between an observed value and the true value of the quantity being measured. This discrepancy arises in various contexts, such as experimental measurements or statistical sampling, where the observed value Y deviates from the true or expected value μ, often expressed mathematically as ε = Y − μ. In probabilistic terms, the expected value serves as the reference point when the true value is unknown, reflecting the average outcome under ideal conditions. The concept of error traces its origins to early 19th-century astronomy and geodesy, where precise observations of celestial bodies necessitated methods to account for discrepancies in data. Carl Friedrich Gauss played a pivotal role in formalizing this through his 1809 work on the method of least squares, which treated errors as deviations in astronomical measurements and assumed they followed a normal distribution to minimize their impact on parameter estimation. This approach revolutionized the handling of observational data by providing a mathematical framework for reconciling noisy observations with underlying truths, laying the groundwork for modern statistical inference. Errors are broadly classified into systematic errors, which introduce consistent bias, and random errors, which represent unpredictable variability. Systematic errors stem from flaws in the measurement process, such as an improperly calibrated instrument that consistently overestimates a quantity, leading to deviations in the same direction across repeated trials. In contrast, random errors arise from inherent variability or uncontrollable factors, like fluctuations in environmental conditions during natural observations, causing deviations that average out over many trials. Residuals in predictive models represent a specialized application of errors, focusing on deviations from model predictions rather than true values.

Definition of Residual

In statistical modeling, a residual represents the discrepancy between an observed value and the value predicted by a fitted model. It is formally defined as the difference between the observed response y_i and the predicted response ŷ_i for the i-th observation, expressed by the equation e_i = y_i − ŷ_i. This measure quantifies how well the model captures the underlying patterns in the data, serving as a key diagnostic tool for assessing model adequacy in contexts like regression analysis. The concept of residuals originated in the early development of the least squares method for fitting models to observational data, particularly in astronomy. Adrien-Marie Legendre first published the method in 1805 as an appendix to his work on comet orbits, where he proposed minimizing the sum of squared residuals to obtain optimal parameter estimates from overdetermined systems. Carl Friedrich Gauss later formalized and justified the approach probabilistically in his 1809 treatise Theoria Motus Corporum Coelestium, building on his earlier unpublished work from around 1795, and applied it to predict celestial orbits by reducing residuals between observations and model predictions. These foundational contributions established residuals as essential to model fitting, distinguishing them from broader errors that may involve unknown true values. Residuals come in several types, each scaled differently to facilitate interpretation and outlier detection. Raw residuals are the unadjusted differences e_i = y_i − ŷ_i, retaining the original units of the response variable and directly reflecting model fit without normalization. Standardized residuals, also known as internally studentized residuals, scale the raw residuals by dividing them by the estimated standard deviation of the residuals (typically the square root of the mean squared error from the full model), yielding a dimensionless measure with an approximate standard normal distribution under model assumptions. Studentized residuals further refine this by excluding the i-th observation from the standard deviation estimate, providing a more robust scaling that accounts for the influence of individual points and follows a t-distribution, which aids in identifying influential outliers.
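
A sketch contrasting the three scalings on simulated data, using the standard identity that converts internally studentized residuals into externally (deletion-based) studentized ones; n, p, and the true coefficients are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 25, 1
x = rng.uniform(0, 10, n)
y = 0.5 + 1.2 * x + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x])
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)     # leverages h_ii
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # raw residuals

s = np.sqrt((e ** 2).sum() / (n - p - 1))  # residual standard deviation
r_internal = e / (s * np.sqrt(1 - h))      # standardized (internally studentized)
# Externally studentized: equivalent to re-estimating sigma with observation i deleted.
r_external = r_internal * np.sqrt((n - p - 2) / (n - p - 1 - r_internal ** 2))

print(np.round(r_internal[:3], 3), np.round(r_external[:3], 3))
```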

Errors in Univariate Distributions

Deviation from the Mean

In a univariate probability distribution, errors manifest as random deviations of individual observations from the population mean, formally defined as ε_i = x_i − μ, where x_i represents an observation and μ is the true population mean. This conceptualization treats each observation as the sum of the systematic mean μ and a random error term ε_i, capturing the inherent variability in the data-generating process. From a probabilistic perspective, these errors are typically assumed to be independently and identically distributed (i.i.d.), with an expected value of zero under ideal conditions where the sample is drawn randomly from the population. The i.i.d. assumption implies that each error shares the same distribution, unaffected by the others, and the zero mean ensures that positive and negative deviations balance out on average, centering the distribution around μ. A prominent example occurs in the normal distribution, where the errors follow a Gaussian distribution with mean zero and variance σ², denoted as ε_i ~ N(0, σ²). This Gaussian form implies that approximately 68% of errors lie within one standard deviation of zero, providing a symmetric bell-shaped spread that facilitates analytical tractability in statistical inference. This probabilistic framework for errors traces its roots to Carl Friedrich Gauss's 1809 work, where he first assumed normal errors to justify the method of least squares for estimating parameters from observational data.
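
A quick simulation sketch (with assumed values μ = 5 and σ = 2) checks both claims: the simulated errors average approximately zero, and roughly 68% of them fall within one standard deviation of zero:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 5.0, 2.0  # assumed population parameters
x = rng.normal(mu, sigma, size=1_000_000)

errors = x - mu
print(errors.mean())                    # close to 0
print(np.mean(np.abs(errors) < sigma))  # close to 0.6827
```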

Role in Variance and Standard Deviation

In statistics, the variance of a distribution quantifies the average squared deviation, or error, of observations from the mean, serving as a fundamental measure of dispersion. For a random variable X with mean μ, the population variance σ² is defined as the expected value of the squared error: σ² = E[(X − μ)²]. This formulation captures the second central moment of the distribution, reflecting the typical magnitude of deviations around the mean. In practice, when working with a sample of n observations x_1, x_2, …, x_n drawn from a population, the sample variance s² provides an unbiased estimate of σ², calculated as s² = (1/(n − 1)) Σ_{i=1}^n (x_i − x̄)², where x̄ is the sample mean. The use of n − 1 in the denominator corrects for the bias introduced by estimating the mean from the sample itself, ensuring the estimator's expected value equals the true population variance. The standard deviation, denoted σ = √σ², is the square root of the variance and expresses dispersion in the same units as the observations.
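
NumPy exposes the choice between the n and n − 1 denominators through the ddof argument, which makes the bias correction easy to verify in a quick sketch (true variance assumed to be 4):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma2 = 4.0  # true population variance (assumed for the simulation)
samples = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, 5))

biased = samples.var(axis=1, ddof=0).mean()    # divides by n: underestimates sigma^2
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1: ~ sigma^2

print(biased, unbiased)  # roughly 3.2 and 4.0 for n = 5
```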