Coefficient of determination

from Wikipedia
Ordinary least squares regression of Okun's law. Since the regression line does not miss any of the points by very much, the R2 of the regression is relatively high.

In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s). It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.[1][2][3]

There are several definitions of R2 that are only sometimes equivalent. In simple linear regression (which includes an intercept), r2 is simply the square of the sample correlation coefficient (r), between the observed outcomes and the observed predictor values.[4] If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.

There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept,[5] or when a non-linear function is used to fit the data.[6] In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.

The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on certain test datasets.[7]

When evaluating the goodness-of-fit of simulated (Ypred) versus measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs = m·Ypred + b).[citation needed] The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line).[8][9]

Definitions


The better the linear regression (on the right) fits the data in comparison to the simple average (on the left graph), the closer the value of R2 is to 1. The areas of the blue squares represent the squared residuals with respect to the linear regression. The areas of the red squares represent the squared residuals with respect to the average value.

A data set has n values marked y1, ..., yn (collectively known as yi or as a vector y = [y1, ..., yn]T), each associated with a fitted (or modeled, or predicted) value f1, ..., fn (known as fi, or sometimes ŷi, as a vector f).

Define the residuals as ei = yi − fi (forming a vector e).

If $\bar{y}$ is the mean of the observed data,

$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i,$$

then the variability of the data set can be measured with two sums of squares formulas:

  • The sum of squares of residuals, also called the residual sum of squares: $SS_\text{res} = \sum_i (y_i - f_i)^2 = \sum_i e_i^2$
  • The total sum of squares (proportional to the variance of the data): $SS_\text{tot} = \sum_i (y_i - \bar{y})^2$

The most general definition of the coefficient of determination is

$$R^2 = 1 - \frac{SS_\text{res}}{SS_\text{tot}}.$$

In the best case, the modeled values exactly match the observed values, which results in $SS_\text{res} = 0$ and R2 = 1. A baseline model, which always predicts $\bar{y}$, will have R2 = 0.
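The general definition translates directly into code. The following is a minimal sketch (the function name r_squared and the use of NumPy are illustrative choices, not part of any particular library):

```python
import numpy as np

def r_squared(y, f):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    ss_res = np.sum((y - f) ** 2)            # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y = [2.0, 4.0, 5.0, 4.0]
print(r_squared(y, [np.mean(y)] * 4))  # 0.0: a model that always predicts the mean
print(r_squared(y, y))                 # 1.0: predictions equal to the observations
```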

Relation to unexplained variance


In a general form, R2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data):

$$R^2 = 1 - \text{FVU}.$$

As explained variance


A larger value of R2 implies a more successful regression model.[4]: 463  Suppose R2 = 0.49. This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called the explained sum of squares, is defined as

$$SS_\text{reg} = \sum_i (f_i - \bar{y})^2.$$

In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above:

$$SS_\text{res} + SS_\text{reg} = SS_\text{tot}.$$

See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of R2 is equivalent to

$$R^2 = \frac{SS_\text{reg}}{SS_\text{tot}} = \frac{SS_\text{reg}/n}{SS_\text{tot}/n},$$

where n is the number of observations (cases) on the variables.

In this form R2 is expressed as the ratio of the explained variance (variance of the model's predictions, which is SSreg / n) to the total variance (sample variance of the dependent variable, which is SStot / n).
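As a sketch of this partition (assuming an ordinary least-squares fit with an intercept; np.polyfit and the toy data are used purely for illustration), the explained and residual sums of squares can be checked to add up to the total sum of squares:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# OLS fit with intercept (degree-1 polynomial).
slope, intercept = np.polyfit(x, y, 1)
f = slope * x + intercept

ss_res = np.sum((y - f) ** 2)
ss_reg = np.sum((f - y.mean()) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)

print(np.isclose(ss_reg + ss_res, ss_tot))                # True: the partition holds
print(np.isclose(1 - ss_res / ss_tot, ss_reg / ss_tot))   # True: both R^2 forms agree
```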

This partition of the sum of squares holds for instance when the model values ƒi have been obtained by linear regression. A milder sufficient condition reads as follows: The model has the form

$$f_i = \hat{\alpha} + \hat{\beta} q_i,$$

where the qi are arbitrary values that may or may not depend on i or on other free parameters (the common choice qi = xi is just one special case), and the coefficient estimates $\hat{\alpha}$ and $\hat{\beta}$ are obtained by minimizing the residual sum of squares.

This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions:

$$\bar{f} = \bar{y}.$$

As squared correlation coefficient


In linear least squares multiple regression (with fitted intercept and slope), R2 equals the square of the Pearson correlation coefficient between the observed and modeled (predicted) data values of the dependent variable.

In a linear least squares regression with a single explanator (with fitted intercept and slope), this is also equal to the squared Pearson correlation coefficient between the dependent variable y and the explanatory variable x.

It should not be confused with the correlation coefficient between two coefficient estimates, defined as

$$\rho_{\hat\beta_1, \hat\beta_2} = \frac{\operatorname{cov}(\hat\beta_1, \hat\beta_2)}{\sigma_{\hat\beta_1}\,\sigma_{\hat\beta_2}},$$

where the covariance between the two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates.

Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an R2 value can be calculated as the square of the correlation coefficient between the original and modeled data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values (by creating a revised predictor of the form α + βƒi).[citation needed] According to Everitt,[10] this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two (general) variables.
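As a sketch of this relationship (illustrative code, not tied to any particular statistics package): for an ordinary least-squares fit with an intercept, squaring the sample correlation between observed and fitted values reproduces R2, and the same squared correlation can be computed for predictions coming from any other model.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 2.3, 2.9, 4.4, 4.8, 6.3])

# OLS fit with intercept.
slope, intercept = np.polyfit(x, y, 1)
f = slope * x + intercept

ss_res = np.sum((y - f) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_definition = 1 - ss_res / ss_tot

# Squared Pearson correlation between observed and fitted values.
r2_correlation = np.corrcoef(y, f)[0, 1] ** 2

print(np.isclose(r2_definition, r2_correlation))  # True for OLS with an intercept
```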

Interpretation


R2 is a measure of the goodness of fit of a model.[11] In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data.

Values of R2 outside the range 0 to 1 occur when the model fits the data worse than the worst possible least-squares predictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth[12] is used (this is the equation used most often), R2 can be less than zero. If equation 2 of Kvålseth is used, R2 can be greater than one.

In all instances where R2 is used, the predictors are calculated by ordinary least-squares regression: that is, by minimizing SSres. In this case, R2 increases as the number of variables in the model is increased (R2 is monotone increasing with the number of variables included—it will never decrease). This illustrates a drawback to one possible use of R2, where one might keep adding variables (kitchen sink regression) to increase the R2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car because the R2 will never decrease as variables are added and will likely experience an increase due to chance alone.

This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis.

In a multiple linear model


Consider a linear model with more than a single explanatory variable, of the form

$$Y_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{i,j} + \varepsilon_i,$$

where, for the ith case, $Y_i$ is the response variable, $X_{i,1}, \dots, X_{i,p}$ are p regressors, and $\varepsilon_i$ is a mean zero error term. The quantities $\beta_0, \dots, \beta_p$ are unknown coefficients, whose values are estimated by least squares. The coefficient of determination R2 is a measure of the global fit of the model. Specifically, R2 is an element of [0, 1] and represents the proportion of variability in Yi that may be attributed to some linear combination of the regressors (explanatory variables) in X.[13]

R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in y, while R2 = 0 indicates no 'linear' relationship (for straight line regression, this means that the straight line model is a constant line (slope = 0, intercept = $\bar{y}$) between the response variable and regressors). An interior value such as R2 = 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability."

A caution that applies to R2, as to other statistical descriptions of correlation and association, is that "correlation does not imply causation". In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause").

In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination.

Inflation of R2


In least squares regression using typical data, R2 is at least weakly increasing with an increase in number of regressors in the model. Because increases in the number of regressors increase the value of R2, R2 alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, an F-test can be performed on the residual sum of squares [citation needed], similar to the F-tests in Granger causality, though this is not always appropriate[further explanation needed]. As a reminder of this, some authors denote R2 by Rq2, where q is the number of columns in X (the number of explanators including the constant).

To demonstrate this property, first recall that the objective of least squares linear regression is

$$\min_b \, SS_\text{res}(b) = \min_b \sum_i (y_i - X_i b)^2,$$

where Xi is a row vector of values of explanatory variables for case i and b is a column vector of coefficients of the respective elements of Xi.

The optimal value of the objective is weakly smaller as more explanatory variables are added and hence additional columns of X (the explanatory data matrix whose ith row is Xi) are added, by the fact that less constrained minimization leads to an optimal cost which is weakly smaller than more constrained minimization does. Given the previous conclusion and noting that SStot depends only on y, the non-decreasing property of R2 follows directly from the definition above.

The intuitive reason that using an additional explanatory variable cannot lower the R2 is this: minimizing SSres is equivalent to maximizing R2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the R2 unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the R2.
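A short simulation illustrates this non-decreasing behaviour (a sketch with illustrative variable names, using a randomly generated, irrelevant regressor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # data generated from a single regressor
noise = rng.normal(size=n)               # irrelevant regressor

def ols_r2(X, y):
    """R^2 of an OLS fit of y on the columns of X (intercept added here)."""
    X = np.column_stack([np.ones_like(y), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

r2_small = ols_r2(x.reshape(-1, 1), y)
r2_large = ols_r2(np.column_stack([x, noise]), y)
print(r2_small, r2_large)   # r2_large >= r2_small, even though 'noise' is irrelevant
```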

The above gives an analytical explanation of the inflation of R2. Next, an example based on ordinary least squares from a geometric perspective is shown below.[14]

This is an example of residuals of regression models in smaller and larger spaces based on ordinary least squares regression.

A simple case to be considered first:

$$y = \beta_1 x_1 + \varepsilon.$$

This equation describes the ordinary least squares regression model with one regressor. The prediction is shown as the red vector in the figure on the right. Geometrically, it is the projection of the true value y onto the model space spanned by x1 (without intercept). The residual is shown as the red line.

The model with two regressors is

$$y = \beta_1 x_1 + \beta_2 x_2 + \varepsilon.$$

This equation corresponds to the ordinary least squares regression model with two regressors. The prediction is shown as the blue vector in the figure on the right. Geometrically, it is the projection of the true value y onto a larger model space spanned by x1 and x2 (without intercept). Noticeably, the fitted values of β1 and β2 are not the same as in the equation for the smaller model space as long as x1 and x2 are not zero vectors. Therefore, the equations are expected to yield different predictions (i.e., the blue vector is expected to be different from the red vector). The least squares regression criterion ensures that the residual is minimized. In the figure, the blue line representing the residual is orthogonal to the model space spanned by x1 and x2, giving the minimal distance from the space.

The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be larger. Comparing the red and blue lines in the figure, the blue line is orthogonal to the larger space, and any other line would be longer than the blue one. Considering the calculation for R2, a smaller value of SSres will lead to a larger value of R2, meaning that adding regressors will result in inflation of R2.

Caveats


R2 does not indicate whether:

  • the independent variables are a cause of the changes in the dependent variable;
  • omitted-variable bias exists;
  • the correct regression was used;
  • the most appropriate set of independent variables has been chosen;
  • there is collinearity present in the data on the explanatory variables;
  • the model might be improved by using transformed versions of the existing set of independent variables;
  • there are enough data points to make a solid conclusion;
  • there are a few outliers in an otherwise good sample.
Comparison of the Theil–Sen estimator (black) and simple linear regression (blue) for a set of points with outliers. Because of the many outliers, neither of the regression lines fits the data well, as measured by the fact that neither gives a very high R2.

Extensions


Adjusted R2


The use of an adjusted R2 (one common notation is $\bar{R}^2$, pronounced "R bar squared"; another is $R^2_\text{adj}$ or $R^2_\text{a}$) is an attempt to account for the phenomenon of the R2 automatically increasing when extra explanatory variables are added to the model. There are many different ways of adjusting.[15] By far the most used one, to the point that it is typically just referred to as adjusted R, is the correction proposed by Mordecai Ezekiel.[15][16][17] The adjusted R2 is defined as

$$\bar{R}^2 = 1 - \frac{SS_\text{res}/df_\text{res}}{SS_\text{tot}/df_\text{tot}},$$

where dfres is the degrees of freedom of the estimate of the population variance around the model, and dftot is the degrees of freedom of the estimate of the population variance around the mean. dfres is given in terms of the sample size n and the number of variables p in the model, dfres = np − 1. dftot is given in the same way, but with p being zero for the mean (i.e., dftot = n − 1).

Inserting the degrees of freedom and using the definition of R2, it can be rewritten as:

$$\bar{R}^2 = 1 - (1 - R^2)\frac{n - 1}{n - p - 1},$$

where p is the total number of explanatory variables in the model (excluding the intercept), and n is the sample size.
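A minimal sketch of this adjustment (the function name is illustrative):

```python
def adjusted_r2(r2, n, p):
    """Ezekiel adjusted R^2 for n observations and p explanatory variables
    (intercept excluded from p)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# The same raw R^2 is penalized more heavily in a model with more predictors.
print(adjusted_r2(0.60, n=30, p=1))  # ~0.586
print(adjusted_r2(0.60, n=30, p=5))  # ~0.517
```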

The adjusted R2 can be negative, and its value will always be less than or equal to that of R2. Unlike R2, the adjusted R2 increases only when the increase in R2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjusted R2 computed each time, the level at which adjusted R2 reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms.

Schematic of the bias and variance contribution into the total error

The adjusted R2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which is shown as a u-shape curve on the right. For the adjusted R2 specifically, the model complexity (i.e. the number of parameters) affects both the R2 term and the degrees-of-freedom ratio (n − 1)/(n − p − 1), and the adjusted R2 thereby captures their combined effect on the overall performance of the model.

R2 can be interpreted as the variance of the model, which is influenced by the model complexity. A high R2 indicates a lower bias error because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal line). In the adjusted R2, the term (1 − R2) will be lower with high complexity, resulting in a higher adjusted R2, consistently indicating a better performance.

On the other hand, the degrees-of-freedom ratio (n − 1)/(n − p − 1) is affected by the model complexity in the opposite direction: it will increase when adding regressors (i.e., with increased model complexity) and lead to worse performance. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal line) leads to increasing errors and a worse performance.

Considering the calculation of the adjusted R2, more parameters will increase the R2 and lead to an increase in the adjusted R2. Nevertheless, adding more parameters will also increase the ratio (n − 1)/(n − p − 1) and thus decrease the adjusted R2. These two trends construct a reverse u-shape relationship between model complexity and the adjusted R2, which is consistent with the u-shape trend of model complexity versus overall performance. Unlike R2, which will always increase when model complexity increases, the adjusted R2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using the adjusted R2 instead of R2 could thereby prevent overfitting.

Following the same logic, adjusted R2 can be interpreted as a less biased estimator of the population R2, whereas the observed sample R2 is a positively biased estimate of the population value.[18] Adjusted R2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building.[18]

The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as

$$R^2 = 1 - \frac{\text{VAR}_\text{res}}{\text{VAR}_\text{tot}},$$

where $\text{VAR}_\text{res} = SS_\text{res}/n$ and $\text{VAR}_\text{tot} = SS_\text{tot}/n$ are the sample variances of the estimated residuals and the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. These estimates are replaced by statistically unbiased versions: $\text{VAR}_\text{res} = SS_\text{res}/(n - p - 1)$ and $\text{VAR}_\text{tot} = SS_\text{tot}/(n - 1)$.

Despite using unbiased estimators for the population variances of the error and the dependent variable, adjusted R2 is not an unbiased estimator of the population R2,[18] which results by using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population R2,[19] which is known as Olkin–Pratt estimator. Comparisons of different approaches for adjusting R2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator [18] or the exact Olkin–Pratt estimator [20] should be preferred over (Ezekiel) adjusted R2.

Coefficient of partial determination


The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full model.[21][22][23] This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model.

The calculation for the partial R2 is relatively straightforward after estimating two models and generating the ANOVA tables for them. The calculation for the partial R2 is

$$R^2_\text{partial} = \frac{SS_\text{res, reduced} - SS_\text{res, full}}{SS_\text{res, reduced}},$$

which is analogous to the usual coefficient of determination:

$$R^2 = \frac{SS_\text{tot} - SS_\text{res}}{SS_\text{tot}}.$$
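A sketch of this calculation from two fitted models (illustrative helper names; both design matrices are assumed to include an intercept column):

```python
import numpy as np

def ss_res(X, y):
    """Residual sum of squares of an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.sum(resid**2)

rng = np.random.default_rng(1)
n = 40
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(size=n)

ones = np.ones(n)
reduced = np.column_stack([ones, x1])      # model without x2
full = np.column_stack([ones, x1, x2])     # model with x2 added

ss_reduced = ss_res(reduced, y)
ss_full = ss_res(full, y)

partial_r2 = (ss_reduced - ss_full) / ss_reduced
print(partial_r2)   # proportion of the remaining variation explained by adding x2
```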

Generalizing and decomposing R2


As explained above, model selection heuristics such as the adjusted R2 criterion and the F-test examine whether the total R2 sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the total R2 will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.[24]

Geometric representation of r2.

Alternatively, one can decompose a generalized version of R2 to quantify the relevance of deviating from a hypothesis.[24] As Hoornweg (2018) shows, several shrinkage estimators – such as Bayesian linear regression, ridge regression, and the (adaptive) lasso – make use of this decomposition of R2 when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as

$$y = X\beta + \varepsilon.$$

It is assumed that the matrix X is standardized with Z-scores and that the column vector y is centered to have a mean of zero. Let the column vector $\beta_0$ refer to the hypothesized regression parameters and let the column vector b denote the estimated parameters. We can then define

$$R^2 = 1 - \frac{(y - Xb)'(y - Xb)}{(y - X\beta_0)'(y - X\beta_0)}.$$

An R2 of 75% means that the in-sample accuracy improves by 75% if the data-optimized b solutions are used instead of the hypothesized $\beta_0$ values. In the special case that $\beta_0$ is a vector of zeros, we obtain the traditional R2 again.

The individual effect on R2 of deviating from a hypothesis can be computed with a p times p matrix denoted R⊗ ('R-outer'). The diagonal elements of R⊗ exactly add up to R2. If regressors are uncorrelated and $\beta_0$ is a vector of zeros, then the jth diagonal element of R⊗ simply corresponds to the r2 value between the jth regressor and y. When regressors are correlated, one diagonal element might increase at the cost of a decrease in another. As a result, the diagonal elements of R⊗ may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements of R⊗ to quantify the relevance of deviating from a hypothesized value.[24]

R2 in logistic regression


In the case of logistic regression, usually fit by maximum likelihood, there are several choices of pseudo-R2.

One is the generalized R2 originally proposed by Cox & Snell,[25] and independently by Magee:[26]

$$R^2 = 1 - \left(\frac{\mathcal{L}(0)}{\mathcal{L}(\hat{\theta})}\right)^{2/n},$$

where $\mathcal{L}(0)$ is the likelihood of the model with only the intercept, $\mathcal{L}(\hat{\theta})$ is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) and n is the sample size. It is easily rewritten to:

$$R^2 = 1 - e^{-D/n},$$

where D is the test statistic of the likelihood ratio test.

Nico Nagelkerke noted that it had the following properties:[27][22]

  1. It is consistent with the classical coefficient of determination when both can be computed;
  2. Its value is maximised by the maximum likelihood estimation of a model;
  3. It is asymptotically independent of the sample size;
  4. The interpretation is the proportion of the variation explained by the model;
  5. The values are between 0 and 1, with 0 denoting that the model does not explain any variation and 1 denoting that it perfectly explains the observed variation;
  6. It does not have any unit.

However, in the case of a logistic model, where $\mathcal{L}(\hat{\theta})$ cannot be greater than 1, R2 is between 0 and $R^2_\max = 1 - \mathcal{L}(0)^{2/n}$: thus, Nagelkerke suggested the possibility to define a scaled R2 as R2/R2max.[22]
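A sketch of these pseudo-R2 calculations from the two models' log-likelihoods (the function and variable names are illustrative, not from any particular package; the log-likelihood values are hypothetical):

```python
import numpy as np

def cox_snell_r2(loglik_null, loglik_model, n):
    """Cox & Snell pseudo-R^2: 1 - (L0 / L1)^(2/n), computed on the log scale."""
    return 1.0 - np.exp((2.0 / n) * (loglik_null - loglik_model))

def nagelkerke_r2(loglik_null, loglik_model, n):
    """Cox & Snell value rescaled by its maximum possible value, 1 - L0^(2/n)."""
    r2_max = 1.0 - np.exp((2.0 / n) * loglik_null)
    return cox_snell_r2(loglik_null, loglik_model, n) / r2_max

# Hypothetical log-likelihoods of an intercept-only model and a fitted model.
n = 500
ll_null, ll_model = -346.574, -322.489
print(cox_snell_r2(ll_null, ll_model, n))   # ~0.092
print(nagelkerke_r2(ll_null, ll_model, n))  # ~0.122
```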

Comparison with residual statistics


Occasionally, residual statistics are used for indicating goodness of fit. The norm of residuals is calculated as the square-root of the sum of squares of residuals (SSR):

$$\text{norm of residuals} = \sqrt{SS_\text{res}} = \lVert e \rVert.$$

Similarly, the reduced chi-square is calculated as the SSR divided by the degrees of freedom.

Both R2 and the norm of residuals have their relative merits. For least squares analysis R2 varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage of R2 is that the SStot term acts to normalize the value. If the yi values are all multiplied by a constant, the norm of residuals will also change by that constant but R2 will stay the same. As a basic example, for the linear least squares fit to the set of data:

x 1 2 3 4 5
y 1.9 3.7 5.8 8.0 9.6

R2 = 0.998, and norm of residuals = 0.302. If all values of y are multiplied by 1000 (for example, in an SI prefix change), then R2 remains the same, but norm of residuals = 302.

Another single-parameter indicator of fit is the RMSE of the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept.[28]
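A sketch reproducing these figures for the table above (np.polyfit is used here purely as a convenient OLS fit with an unforced intercept):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])

def fit_stats(x, y):
    slope, intercept = np.polyfit(x, y, 1)      # OLS line with intercept
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    r2 = 1 - ss_res / ss_tot
    norm_resid = np.sqrt(ss_res)
    rmse = np.sqrt(ss_res / len(y))
    return r2, norm_resid, rmse

print(fit_stats(x, y))         # ~(0.998, 0.302, 0.135)
print(fit_stats(x, 1000 * y))  # R^2 unchanged; norm of residuals scales to ~302
```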

History


The creation of the coefficient of determination has been attributed to the geneticist Sewall Wright and was first published in 1921.[29]

from Grokipedia
The coefficient of determination, often denoted as R², is a statistical measure in regression analysis that quantifies the proportion of the total variance in the dependent variable that can be explained by the independent variable(s) in a model. It ranges from 0 to 1, where a value of 0 indicates that the model explains none of the variability, a value of 1 indicates a perfect fit, and intermediate values represent the proportion of variance accounted for by the model (e.g., an R² of 0.75 means 75% of the variance is explained). Introduced by Sewall Wright in his 1921 paper "Correlation and Causation," the concept emerged in the context of path analysis to assess relationships in complex systems, such as those studied in agriculture and genetics.

In simple linear regression, R² is equivalent to the square of the Pearson correlation coefficient (r) between the observed and predicted values, providing a direct link to measures of linear association. For multiple regression models of the form

$$Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \epsilon_i,$$

where $Y_i$ is the dependent variable, $X_{ij}$ are predictors, $\beta_j$ are coefficients, and $\epsilon_i$ is the error term, R² is calculated as the ratio of the regression (explained) sum of squares (SSR) to the total sum of squares (SST):

$$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST},$$

with SSE denoting the sum of squared residuals (unexplained variance). This decomposition, SST = SSR + SSE, highlights how R² partitions total variability into explained and residual components, making it a key goodness-of-fit statistic.

While widely used across many fields, including the social sciences, to evaluate model performance, R² has limitations: it does not imply causation, can increase with irrelevant predictors in multiple regression (leading to the development of adjusted R²), and its "good" value depends on context; for instance, values above 0.8 may be excellent in physical sciences but modest in behavioral studies. Despite these caveats, R² remains a foundational tool for interpreting the explanatory power of regression models in statistical analysis.

Definitions

Proportion of explained variance

The coefficient of determination, denoted R², quantifies the proportion of the total variance in the dependent variable that is explained by the independent variables in a regression model. It is formally defined as

$$R^2 = 1 - \frac{SS_\text{res}}{SS_\text{tot}},$$

where $SS_\text{res}$ is the residual sum of squares representing the unexplained variance, and $SS_\text{tot}$ is the total sum of squares capturing the overall variability in the data. This measure ranges from 0 to 1, where R² = 0 indicates that the model explains none of the variance (equivalent to using the mean as the predictor), and R² = 1 signifies a perfect fit with no residual variance.

The total sum of squares, $SS_\text{tot} = \sum (y_i - \bar{y})^2$, measures the total variability of the observed values $y_i$ around their mean $\bar{y}$, serving as a baseline for the dispersion in the dependent variable before any modeling. After fitting the regression model, the residual sum of squares, $SS_\text{res} = \sum (y_i - \hat{y}_i)^2$, quantifies the remaining unexplained variability between the observed values and the predicted values $\hat{y}_i$. Thus, R² directly reflects the fraction of $SS_\text{tot}$ that the model accounts for, highlighting its effectiveness in capturing patterns in the data.

From the perspective of variance reduction in prediction, R² arises as the complement of the proportion of variance left unexplained by the model. In predictive terms, the variance of the prediction error is proportional to $SS_\text{res}$, while the model reduces the expected error variance from the total level $SS_\text{tot}$ by the amount attributable to the predictors. This decomposition underscores R² as a metric of how much the regression improves predictions over a naive mean-based approach, with higher values indicating greater reduction in prediction error.

To illustrate, consider a simple dataset with four observations of an independent variable x (e.g., dosage levels: 1, 2, 3, 4) and dependent variable y (e.g., response rates: 2, 4, 5, 4). The mean of y is $\bar{y} = 3.75$, so $SS_\text{tot} = (2-3.75)^2 + (4-3.75)^2 + (5-3.75)^2 + (4-3.75)^2 = 4.75$. Fitting a simple linear regression yields predicted values $\hat{y} = 2.7, 3.4, 4.1, 4.8$, with residuals leading to $SS_\text{res} = (2-2.7)^2 + (4-3.4)^2 + (5-4.1)^2 + (4-4.8)^2 = 2.3$. Thus, $R^2 = 1 - \frac{2.3}{4.75} \approx 0.516$, meaning approximately 51.6% of the variance in y is explained by x.

Relation to unexplained variance

The complement of the coefficient of determination, denoted as 1 − R², quantifies the proportion of the total variance in the dependent variable that remains unexplained by the regression model. This value, sometimes called the coefficient of non-determination, directly measures the model's failure to account for variability in the response variable. The unexplained variance is formally computed as the ratio of the residual sum of squares ($SS_\text{res}$) to the total sum of squares ($SS_\text{tot}$):

$$1 - R^2 = \frac{SS_\text{res}}{SS_\text{tot}} = \frac{\sum (y_i - \hat{y}_i)^2}{\sum (y_i - \bar{y})^2},$$

where $y_i$ are the observed values, $\hat{y}_i$ are the predicted values from the model, and $\bar{y}$ is the mean of the observed values. This component captures irreducible error (inherent stochastic variability in the data that no model can eliminate) as well as variance attributable to omitted variables, model misspecification, or unmodeled interactions.

A high value of 1 − R² signals inadequate model performance, as it reflects a large portion of the data's variability left unaccounted for, potentially indicating the need for additional predictors or a different modeling approach. For instance, in regression analyses of skin cancer mortality rates versus latitude, an R² of approximately 0.68 implies 1 − R² ≈ 0.32, meaning 32% of the variance in the rates is unexplained by latitude alone, possibly due to omitted factors such as differences in exposure or demographics. In contrast, a dataset with a strong linear relationship might yield R² = 0.80, so 1 − R² = 0.20, where the residuals $e_i = y_i - \hat{y}_i$ contribute squared values representing only 20% of total variability (e.g., $SS_\text{res}$ = 200 when $SS_\text{tot}$ = 1000), highlighting better but still imperfect model adequacy.

Squared correlation coefficient

In simple linear regression, the coefficient of determination R² is equal to the square of the sample Pearson correlation coefficient r between the observed response values y and the predicted values $\hat{y}$. This relationship holds specifically for the bivariate case with one predictor variable.

The mathematical equivalence arises because $R^2 = \frac{SSR}{SST}$, where SSR is the regression sum of squares and SST is the total sum of squares, and this simplifies to the squared correlation. To see this, note that the Pearson correlation is $r = \frac{\operatorname{cov}(x, y)}{s_x s_y}$, where $s_x$ and $s_y$ are the standard deviations of the predictor x and response y. In simple linear regression, the slope is $\hat{\beta}_1 = r \frac{s_y}{s_x}$, and substituting into the expression for SSR yields $R^2 = \left( \frac{\operatorname{cov}(x, y)}{s_x s_y} \right)^2 = r^2$. Equivalently, since the predicted values $\hat{y}$ are a linear transformation of x, r also equals the correlation between y and $\hat{y}$, confirming $R^2 = [\operatorname{cor}(y, \hat{y})]^2$. This equivalence is valid under the assumptions of ordinary least squares simple linear regression, particularly that the relationship between the predictor and response is linear, and the analysis is limited to two variables without additional predictors.

For example, consider data on college GPA (colgpa) and high school GPA (hsgpa) for n = 141 students. The Pearson correlation r between colgpa and hsgpa is 0.4146. Squaring this gives $r^2 = 0.4146^2 = 0.1719$. Fitting the simple linear regression model yields SSR = 3.335 and SST = 19.406, so $R^2 = \frac{3.335}{19.406} = 0.1719$, matching the squared correlation.

Interpretation

In simple linear regression

In simple linear regression, the coefficient of determination, denoted R², represents the proportion of the total variance in the response variable Y that is explained by the predictor variable X. For instance, an R² value of 0.75 indicates that 75% of the variability in Y can be attributed to its linear relationship with X, while the remaining 25% is due to other factors or random error. This measure provides a straightforward way to assess how well the fitted line captures the underlying pattern in the data.

The magnitude of R² in this context reflects the degree to which the model's predictions align with the actual observed values along the fitted straight line. Higher values suggest that the data points cluster closely around the regression line, implying more reliable predictions for new observations within the range of X. Conversely, a low R² indicates greater scatter, meaning the linear fit offers limited insight into Y's behavior. In simple linear regression, R² is equivalent to the square of the Pearson correlation coefficient between X and Y, reinforcing its role as a measure of linear association strength.

To illustrate intuitively, consider a scatterplot of points representing height (X) and weight (Y) for a group of individuals, with a straight regression line fitted through them. The total deviation of points from the mean weight decomposes into explained deviations (vertical distances from the line to the mean) and residual deviations (vertical distances from points to the line). An R² of 0.80 here would mean 80% of the spread in weights is accounted for by the linear trend with height, visualized by the line passing near most points, while the residuals show the unexplained scatter.

The value of R² ranges from 0 to 1, where 0 signifies no linear relationship (the line explains none of the variance, as points are randomly scattered) and 1 indicates a perfect linear fit (all points lie exactly on the line). However, this range applies specifically to linear associations; a strong nonlinear relationship may yield a low R² despite a clear pattern, as the metric does not capture curvature or other non-straight forms.

In multiple linear regression

In multiple linear regression, the coefficient of determination, denoted R², quantifies the collective explanatory power of all predictor variables in accounting for the variability in the response variable. It represents the fraction of the total variance in the response that the model captures through the combined effects of multiple predictors, providing a measure of overall model fit. This value always ranges between 0 and 1, where a higher R² indicates that a larger proportion of the response variance is explained by the predictors together, though the interpretation emphasizes the model's performance relative to a baseline intercept-only model that explains none of the variance beyond the mean.

As additional predictors are incorporated into the model, R² will not decrease and typically increases, reflecting the added variables' contribution to reducing residual variance; however, this rise does not necessarily signify a substantial or meaningful enhancement in understanding, particularly if the new predictors overlap substantially with existing ones. For instance, in a hierarchical approach where predictors are added sequentially based on their presumed importance, each step can show an incremental increase in R², with the marginal contribution of a new predictor interpreted as the change in R² attributable to its inclusion, highlighting how the model's explanatory power accumulates but requires caution against overinterpretation.

Multicollinearity, arising when predictors are moderately or highly correlated, can result in a high overall R² while complicating the attribution of explanatory effects to individual predictors, as it increases the variance of coefficient estimates and leads to less reliable assessments of their unique roles despite the strong combined fit. This extension from simple linear regression, where R² reflects the squared correlation between one predictor and the response, underscores the cumulative nature of explanation in multivariate settings.

Limitations and inflation effects

One key limitation of the coefficient of determination, R², arises because adding more predictor variables, even those that are irrelevant or purely noisy, will always increase (or at least not decrease) the value of R² when fitted to the sample data. This inflation occurs because the model gains flexibility to fit the specific quirks and noise in the training data, rather than capturing true underlying patterns, which promotes overfitting and reduces the model's generalizability.

Several caveats further underscore the risks of over-relying on R². A high R² does not imply causation between predictors and the response variable; it only measures association, and spurious correlations can yield misleadingly strong fits. Similarly, R² can appear elevated in misspecified models, such as those omitting key variables or assuming incorrect functional forms, masking structural flaws in the analysis. Moreover, R² is computed solely from in-sample data and provides no insight into out-of-sample error, potentially overestimating a model's predictive accuracy for new observations.

To illustrate the inflation effect, consider a simulated dataset with 50 observations and an initial simple model using one relevant predictor, yielding an R² of around 0.3; upon adding several irrelevant variables (randomly generated), the in-sample R² increases due to overfitting, as the model begins to fit the noise rather than the signal, even though this apparent improvement fails to hold on unseen data. A sketch of this effect appears below. To mitigate these issues, R² should be interpreted alongside other diagnostics, such as p-values to assess predictor significance and cross-validation techniques to evaluate out-of-sample performance and detect overfitting.
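An illustrative sketch of the in-sample versus out-of-sample gap (assumed setup: one informative predictor plus purely random extra columns, with a held-out test split; all names and data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_test = 50, 50
x_all = rng.normal(size=n_train + n_test)
y_all = 0.6 * x_all + rng.normal(size=n_train + n_test)
noise_all = rng.normal(size=(n_train + n_test, 9))   # irrelevant regressors

def r2(y, f):
    return 1 - np.sum((y - f) ** 2) / np.sum((y - np.mean(y)) ** 2)

def fit_and_score(k):
    """OLS with intercept, the real predictor, and k noise columns."""
    X_all = np.column_stack([np.ones_like(x_all), x_all, noise_all[:, :k]])
    X_tr, y_tr = X_all[:n_train], y_all[:n_train]
    X_te, y_te = X_all[n_train:], y_all[n_train:]
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return r2(y_tr, X_tr @ beta), r2(y_te, X_te @ beta)

for k in (0, 3, 6, 9):
    train_r2, test_r2 = fit_and_score(k)
    # Train R^2 never decreases as k grows; test R^2 typically stagnates or drops.
    print(k, round(train_r2, 3), round(test_r2, 3))
```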

Extensions

Adjusted coefficient of determination

The adjusted coefficient of determination, denoted $\bar{R}^2$, modifies the ordinary coefficient of determination R² by incorporating a penalty for the number of predictors in the model, yielding a less biased estimate of the population proportion of explained variance. Unlike R², which monotonically increases or stays the same when additional predictors are included regardless of their relevance, $\bar{R}^2$ decreases if the added predictors do not sufficiently improve the model fit, thereby discouraging overfitting.

The formula for the adjusted coefficient of determination is

$$\bar{R}^2 = 1 - (1 - R^2) \frac{n-1}{n - k - 1},$$

where n is the sample size and k is the number of predictors (excluding the intercept). This adjustment arises from a derivation that accounts for degrees of freedom in variance estimation: the total sum of squares (TSS) is divided by its degrees of freedom, n − 1, to obtain an unbiased estimate of the total variance, while the residual sum of squares (RSS) is divided by n − k − 1 for an unbiased estimate of the error variance; $\bar{R}^2$ then equals 1 minus the ratio of the unbiased error variance to the unbiased total variance.

To illustrate, consider a regression with n = 30 observations where the unadjusted R² = 0.60. For a model with k = 1 predictor, $\bar{R}^2 = 1 - (1 - 0.60) \frac{29}{28} \approx 0.586$, indicating a slight downward adjustment. If the same R² = 0.60 holds for a model with k = 5 predictors, $\bar{R}^2 = 1 - (1 - 0.60) \frac{29}{24} \approx 0.517$, demonstrating how the penalty grows with model complexity even without improvement in fit.

Partial coefficient of determination

The partial coefficient of determination, often denoted as $R^2_{Y j \mid \mathbf{X}_{-j}}$, quantifies the marginal contribution of a specific predictor variable $X_j$ to explaining the variance in the response variable Y in a multiple linear regression model, after controlling for the effects of all other predictors $\mathbf{X}_{-j}$. It is defined as the proportional reduction in the residual sum of squares (SSE) when $X_j$ is added to the model containing the other predictors:

$$R^2_{Y j \mid \mathbf{X}_{-j}} = 1 - \frac{\text{SSE}(\mathbf{X}_{-j}, X_j)}{\text{SSE}(\mathbf{X}_{-j})} = \frac{\text{SSR}(X_j \mid \mathbf{X}_{-j})}{\text{SSE}(\mathbf{X}_{-j})},$$

where $\text{SSR}(X_j \mid \mathbf{X}_{-j})$ is the extra sum of squares due to $X_j$, $\text{SSE}(\mathbf{X}_{-j})$ is the error sum of squares for the reduced model excluding $X_j$, and $\text{SSE}(\mathbf{X}_{-j}, X_j)$ is the error sum of squares for the full model. This measure isolates the unique explanatory power of $X_j$, ranging from 0 (no additional contribution) to 1 (complete explanation of remaining variance).

In interpretation, the partial R² represents the proportion of the variance in Y that remains unexplained by the other predictors and is subsequently accounted for by adding $X_j$. Unlike the overall coefficient of determination, which assesses the full model's fit, the partial version highlights the incremental benefit of an individual predictor, making it valuable for identifying redundant variables or effects where predictors overlap in their explanations. For instance, a partial R² near 0 indicates that $X_j$ adds little unique information beyond the other variables already in the model.

The partial coefficient of determination can also be expressed in terms of correlations. It equals the squared partial correlation $pr^2_{Y j \cdot \mathbf{X}_{-j}}$, and it is related to the squared semi-partial correlation $sr^2_{Y j (\mathbf{X}_{-j})}$ by

$$R^2_{Y j \mid \mathbf{X}_{-j}} = pr^2_{Y j \cdot \mathbf{X}_{-j}} = \frac{sr^2_{Y j (\mathbf{X}_{-j})}}{1 - R^2_{Y \mid \mathbf{X}_{-j}}},$$

where $sr^2_{Y j (\mathbf{X}_{-j})} = R^2_{Y \mid \mathbf{X}_{-j}, X_j} - R^2_{Y \mid \mathbf{X}_{-j}}$ is the squared semi-partial correlation, measuring the unique contribution to total variance, while the denominator adjusts for the variance already explained by the reduced model. This formulation underscores how the partial R² normalizes the semi-partial contribution to the unexplained variance.

Consider an example from a multiple regression of body fat (Y) predicted by triceps skinfold thickness ($X_1$) and thigh circumference ($X_2$). The reduced model with only $X_1$ yields $R^2_{Y \mid X_1} = 0.71$ and SSE = 143.12, while the full model gives SSE = 109.95. The partial R² for $X_2$ given $X_1$ is then $R^2_{Y 2 \mid X_1} = (143.12 - 109.95)/143.12 = 0.232$, indicating that $X_2$ explains an additional 23.2% of the variance in body fat not accounted for by $X_1$ alone. This value is modest compared to the overall $R^2 \approx 0.78$ for the full model, illustrating how predictor overlap can diminish an individual variable's partial contribution despite a strong total fit.

Generalizations and decompositions

In regression models with orthogonal predictors, the coefficient of determination decomposes additively into the sum of the individual R² values (or squared partial correlations) contributed by each predictor, reflecting their independent effects on explained variance. This follows from the orthogonality of the regressors, where the projection onto the fitted values is the sum of orthogonal projections onto each predictor space, yielding $R^2 = \sum_{j=1}^p R_j^2$, with $R_j^2 = \frac{\| P_j y \|^2}{\| y - \bar{y} \|^2}$ for the orthogonal projection $P_j$ onto the j-th predictor.

When predictors are correlated (non-orthogonal cases), such additive decomposition no longer holds directly, but hierarchical partitioning addresses this by evaluating all possible subsets of predictors and allocating variance based on the average independent contribution of each across models, thus providing a measure of relative importance while accounting for multicollinearity. Alternatively, the Shapley value method from cooperative game theory decomposes R² by computing the average marginal contribution of each predictor (or group) over all possible combinations, ensuring an equitable partition that sums to the total R² and handles shared variance. For instance, in a multiple regression with environmental and socioeconomic predictors, this approach might attribute 0.15 of an overall R² = 0.45 to climate variables and 0.20 to income factors, after averaging marginal gains across coalitions. A sketch of this decomposition for two predictors is given below.

A broader geometric generalization interprets R² within the vector space of centered observations, where it equals the squared cosine of the angle θ* between the observed response $y - \bar{y}$ and the fitted values $\hat{y} - \bar{y}$, i.e.,

$$R^2 = \cos^2(\theta^*) = \left( \frac{(\hat{y} - \bar{y})' (y - \bar{y})}{\| y - \bar{y} \| \cdot \| \hat{y} - \bar{y} \|} \right)^2,$$

emphasizing the directional alignment between actual and predicted data.

Extensions to nonlinear models introduce pseudo-R² forms to approximate goodness-of-fit. McFadden's pseudo-R², commonly used for models such as logistic regression, is given by $\rho^2 = 1 - \frac{\ln L(M)}{\ln L(M_0)}$, where L(M) is the likelihood of the full model and L(M_0) that of the intercept-only null model; values near 0.2–0.4 often indicate reasonable fit, though it tends to run lower than a comparable linear R².
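The following sketch illustrates a Shapley-style decomposition of R² for two correlated predictors (the names and toy data are illustrative; the general case averages marginal contributions over all orderings of predictors):

```python
import numpy as np
from itertools import permutations
from math import factorial

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)           # correlated with x1
y = 1.0 + 0.7 * x1 + 0.4 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])
p = X.shape[1]

def r2_of_subset(cols):
    """R^2 of an OLS fit (with intercept) on the given predictor columns."""
    design = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

shapley = np.zeros(p)
for order in permutations(range(p)):         # all orderings of the predictors
    included = []
    for j in order:
        before = r2_of_subset(included)
        included.append(j)
        shapley[j] += (r2_of_subset(included) - before) / factorial(p)

# Each predictor's averaged marginal contribution; the parts sum to the full R^2.
print(shapley, shapley.sum(), r2_of_subset(list(range(p))))
```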

Application in logistic regression

In logistic regression, which models binary outcomes, the coefficient of determination cannot be directly applied as in linear regression due to the non-linear nature of the link function and the absence of a straightforward variance decomposition. Instead, pseudo-R² measures are used to assess model fit by quantifying the improvement in the likelihood over a baseline null model. These measures are derived from likelihood ratios and provide a way to evaluate how well the predictors explain the observed data relative to an intercept-only model.

One common pseudo-R² variant is the Cox and Snell measure, defined as

$$R^2_{CS} = 1 - \left( \frac{L_0}{L_1} \right)^{2/n},$$

where $L_0$ is the likelihood of the null (intercept-only) model, $L_1$ is the likelihood of the fitted model, and n is the sample size. This measure, proposed by Cox and Snell, ranges between 0 and less than 1, reflecting the proportional reduction in the deviance but bounded by the null model's likelihood. To address the limitation that Cox and Snell's R² cannot reach 1 even for a perfect model, Nagelkerke introduced a scaled version:

$$R^2_N = \frac{R^2_{CS}}{1 - L_0^{2/n}}.$$

This adjustment normalizes the measure so its maximum value is 1, making it more intuitive for comparing fit across models while still based on likelihood ratios. Nagelkerke's formulation is widely adopted in statistical software for binary logistic regression.

Interpreting these pseudo-R² values presents challenges distinct from linear regression. Unlike the ordinary R², which represents the proportion of total variance explained by the model, pseudo-R² measures indicate the relative improvement in predictive likelihood rather than variance reduction. For instance, a value of 0.10 does not mean 10% of the "variance" is explained but rather that the full model improves the likelihood by about 10% relative to the null, adjusted for sample size; values are typically lower than in linear models for similar data. These measures are most useful for comparing nested models rather than assessing absolute explanatory power.

Consider a logistic regression example predicting binary income (1 for above-median, 0 otherwise) from years of education, with a sample of n = 500. The null model's log-likelihood is -346.574, while the fitted model's is -322.489. The resulting Cox and Snell R² is 0.092, and Nagelkerke's R² is 0.122. In a corresponding linear regression on the same data, the ordinary R² is 0.11, showing rough concordance in scale but highlighting that pseudo-R² values remain modest and emphasize likelihood gains over variance fit.

Despite their utility, pseudo-R² measures in logistic regression have limitations: they are not directly comparable to the linear R² due to differing underlying assumptions about error distributions, and they cannot be interpreted as proportions of variance explained in the binary outcome. Instead, they serve primarily for relative model comparison within the same dataset, such as evaluating whether adding predictors meaningfully improves fit beyond the null. Over-reliance on a single pseudo-R² can mislead, so complementary diagnostics like AIC or Hosmer-Lemeshow tests are recommended.

Comparisons

With other goodness-of-fit measures

The coefficient of determination, R², quantifies the proportion of variance in the response variable explained by the model in linear regression, but it does not penalize model complexity and tends to increase with additional predictors, potentially leading to overfitting. In contrast, information criteria such as the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) provide alternative goodness-of-fit measures that balance explanatory power against model complexity, making them suitable for model selection. These criteria are particularly useful when comparing models for predictive accuracy rather than just the in-sample fit that R² emphasizes.

The AIC is defined as $\text{AIC} = -2 \log L + 2k$, where L is the maximized likelihood of the model and k is the number of estimated parameters, imposing a fixed penalty of 2 per parameter. Unlike the adjusted R², which penalizes complexity through a degrees-of-freedom correction of the unexplained variance, AIC derives from information theory and asymptotically approximates the expected Kullback-Leibler divergence, favoring models with lower values for out-of-sample prediction. The BIC, formulated as $\text{BIC} = -2 \log L + k \log n$ with n as the sample size, applies a stronger penalty that grows with n, making it more conservative in selecting parsimonious models, especially in large datasets. This logarithmic penalty in BIC contrasts with AIC's constant one, leading BIC to favor simpler models more aggressively than AIC or adjusted R².

In generalized linear models (GLMs), the deviance serves as a goodness-of-fit measure analogous to the residual sum of squares in linear regression, defined as $D = -2 (\log L_m - \log L_s)$, where $L_m$ is the likelihood of the fitted model and $L_s$ is the likelihood of the saturated model. Lower deviance indicates better fit, and reductions in deviance can test model improvements, much like changes in 1 − R². For instance, in logistic regression, a common GLM application, deviance assesses fit similarly to pseudo-R² measures, though it focuses on likelihood rather than variance explained.

R² is preferred for interpreting explanatory power within the training data, particularly in simple linear contexts, while AIC and BIC are favored for model selection aimed at prediction, as they incorporate penalties to avoid overfitting. Deviance is ideal for GLMs where likelihood-based inference is central, offering a direct parallel to R²'s role in ordinary least squares. For example, in a linear regression with n = 100 observations, a parsimonious model might yield R² = 0.70, while adding two extraneous predictors nudges R² up to 0.72 but raises AIC and BIC because the complexity penalties outweigh the modest fit gain, so the information criteria select the simpler model.
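A sketch of this comparison for Gaussian linear models, where the log-likelihood can be written in terms of the residual sum of squares (all names and the simulated data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.9 * x + rng.normal(size=n)
extras = rng.normal(size=(n, 2))               # two extraneous predictors

def ols_summary(X, y):
    """Return (R^2, AIC, BIC) of a Gaussian OLS fit; X already contains the intercept."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                         # coefficients plus the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)   # Gaussian log-likelihood at the MLE
    r2 = 1 - rss / np.sum((y - y.mean()) ** 2)
    return r2, -2 * loglik + 2 * k, -2 * loglik + k * np.log(n)

small = np.column_stack([np.ones(n), x])
large = np.column_stack([small, extras])
print(ols_summary(small, y))   # parsimonious model
print(ols_summary(large, y))   # R^2 slightly higher, but AIC/BIC typically worse
```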

Relation to residual statistics

The coefficient of determination, denoted R², directly relates to residual statistics through its foundational formula, which quantifies the proportion of variance explained by the model in terms of the mean squared error (MSE) and the total mean square (MSTot). Specifically,

$$R^2 = 1 - \frac{\text{MSE}}{\text{MSTot}},$$

where MSE represents the average squared residual (the difference between observed and predicted values), and MSTot is the total variance in the dependent variable. This connection highlights how R² measures error reduction: a higher R² indicates a lower MSE relative to the total variability, implying the model's predictions deviate less from actual outcomes. The standard error of the estimate, defined as $s = \sqrt{\text{MSE}}$, expresses this remaining unexplained variation in the original units of the dependent variable.