Reduced chi-squared statistic
In statistics, the reduced chi-square statistic is used extensively in goodness of fit testing. It is also known as mean squared weighted deviation (MSWD) in isotopic dating and variance of unit weight in the context of weighted least squares.
Its square root is called the regression standard error, standard error of the regression, or standard error of the equation (see Ordinary least squares § Reduced chi-squared).
It is defined as chi-square per degree of freedom, $\chi_\nu^2 = \chi^2/\nu$, where the chi-squared is a weighted sum of squared deviations, $\chi^2 = \sum_i \frac{(O_i - C_i)^2}{\sigma_i^2}$, with inputs: variance $\sigma_i^2$, observations $O$, and calculated data $C$. The degree of freedom, $\nu = n - m$, equals the number of observations $n$ minus the number of fitted parameters $m$.
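As a minimal sketch, the definition above can be computed directly. The function name and example values here are illustrative, and NumPy is assumed:

```python
import numpy as np

def reduced_chi_squared(observed, calculated, variance, n_params):
    """Chi-square per degree of freedom: chi2 / (n - m)."""
    observed = np.asarray(observed, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    variance = np.asarray(variance, dtype=float)
    # Weighted sum of squared deviations
    chi2 = np.sum((observed - calculated) ** 2 / variance)
    nu = observed.size - n_params  # degrees of freedom, nu = n - m
    return chi2 / nu

# Example: 5 observations, a 1-parameter model (a constant), unit variances
O = [1.0, 2.0, 3.0, 4.0, 5.0]
C = [3.0] * 5       # hypothetical model predictions
sigma2 = [1.0] * 5
print(reduced_chi_squared(O, C, sigma2, 1))  # chi2 = 10, nu = 4 -> 2.5
```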
In weighted least squares, the definition is often written in matrix notation as $\chi_\nu^2 = \frac{r^\mathsf{T} W r}{\nu}$, where $r$ is the vector of residuals, and $W$ is the weight matrix, the inverse of the input (diagonal) covariance matrix of observations. If $W$ is non-diagonal, then generalized least squares applies.
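The matrix form can be checked against the elementwise sum with a small illustrative example (a diagonal weight matrix and one fitted parameter are assumed):

```python
import numpy as np

# Residual vector and diagonal weight matrix W (inverse covariance)
r = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
sigma2 = np.ones(5)          # assumed observation variances
W = np.diag(1.0 / sigma2)
nu = r.size - 1              # assume one fitted parameter

chi2_nu = (r @ W @ r) / nu   # matrix form
elementwise = np.sum(r ** 2 / sigma2) / nu
print(chi2_nu, elementwise)  # identical: 2.5 2.5
```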
In ordinary least squares, the definition simplifies to $\chi_\nu^2 = \frac{\mathrm{RSS}}{\nu} = \frac{\sum_i (O_i - C_i)^2}{\nu}$, where the numerator is the residual sum of squares (RSS).
When the fit is just an ordinary mean, then $\chi_\nu^2$ equals the sample variance $s^2$, the squared sample standard deviation.
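This equivalence is easy to verify numerically: fitting only a mean uses one parameter, so $\nu = n - 1$, which is exactly the divisor of the sample variance (the data values below are arbitrary):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mean = x.mean()
nu = x.size - 1                          # one fitted parameter: the mean
chi2_nu = np.sum((x - mean) ** 2) / nu   # unit weights (ordinary least squares)

# Matches the sample variance (ddof=1 divides by n - 1)
assert np.isclose(chi2_nu, x.var(ddof=1))
```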
As a general rule, when the variance of the measurement error is known a priori, a $\chi_\nu^2 \gg 1$ indicates a poor model fit. A $\chi_\nu^2 > 1$ indicates that the fit has not fully captured the data (or that the error variance has been underestimated). In principle, a value of $\chi_\nu^2$ around $1$ indicates that the extent of the match between observations and estimates is in accord with the error variance. A $\chi_\nu^2 < 1$ indicates that the model is "overfitting" the data: either the model is improperly fitting noise, or the error variance has been overestimated.
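The rule of thumb can be illustrated by a quick simulation (the seed, sample size, and noise level are arbitrary choices; NumPy is assumed): data drawn with known noise and fitted by its mean should give a reduced chi-square close to 1.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0                    # known measurement error
true_value = 10.0
O = true_value + rng.normal(0.0, sigma, size=10_000)

C = O.mean()                   # fit: an ordinary mean (one parameter)
nu = O.size - 1
chi2_nu = np.sum((O - C) ** 2 / sigma**2) / nu
print(chi2_nu)                 # close to 1, since the error variance is correct
```

Repeating this with an underestimated `sigma` would inflate the statistic above 1, and an overestimated `sigma` would push it below 1.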
When the variance of the measurement error is only partially known, the reduced chi-squared may serve as a correction estimated a posteriori.
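One common form of this a posteriori correction, sketched here under the assumption that only relative weights are known (all values are illustrative), rescales the assumed variances by the observed $\chi_\nu^2$ so that the corrected statistic becomes 1:

```python
import numpy as np

O = np.array([1.2, 1.9, 3.1, 4.2, 4.8])   # observations
C = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical model predictions
sigma2_assumed = np.full(5, 0.01)          # relative weights only
nu = O.size - 1                            # assume one fitted parameter

chi2_nu = np.sum((O - C) ** 2 / sigma2_assumed) / nu

# Rescale the variances so the corrected reduced chi-square equals 1
sigma2_corrected = sigma2_assumed * chi2_nu
chi2_nu_after = np.sum((O - C) ** 2 / sigma2_corrected) / nu
print(chi2_nu, chi2_nu_after)              # e.g. 3.5 and exactly 1.0
```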