Explained variation

In statistics, explained variation measures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. Often, variation is quantified as variance; then, the more specific term explained variance can be used.

The complementary part of the total variation is called unexplained or residual variation; likewise, when discussing variance as such, this is referred to as unexplained or residual variance.
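The split into explained and residual variance can be sketched numerically. The following is a minimal illustration on synthetic data (the data, seed, and linear model are assumptions for the example, not part of the definition): the explained fraction is one minus the ratio of residual variance to total variance.

```python
import numpy as np

# Hypothetical data: y depends linearly on x plus Gaussian noise.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=1.0, size=200)

# Fit a straight line by least squares (the "model").
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept

residuals = y - y_hat
total_var = np.var(y)              # total variation, quantified as variance
residual_var = np.var(residuals)   # unexplained / residual variance
explained_fraction = 1.0 - residual_var / total_var

print(f"explained fraction of variance: {explained_fraction:.3f}")
```

With these parameters the signal variance is about 4 and the noise variance about 1, so the explained fraction comes out near 0.8.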

Following Kent (1983), we use the Fraser information (Fraser 1965)

$$F(\theta) = \int \mathrm{d}r\, g(r)\, \ln f(r;\theta),$$

where $g(r)$ is the probability density of a random variable $R$, and $f(r;\theta)$ with $\theta \in \Theta_i$ ($i = 0, 1$) are two families of parametric models. Model family 0 is the simpler one, with a restricted parameter space $\Theta_0 \subset \Theta_1$.

Parameters are determined by maximum likelihood estimation,

$$\hat\theta_i = \operatorname*{arg\,max}_{\theta \in \Theta_i} F(\theta).$$

The information gain of model 1 over model 0 is written as

$$\Gamma(\hat\theta_1 : \hat\theta_0) = 2\left[F(\hat\theta_1) - F(\hat\theta_0)\right],$$

where a factor of 2 is included for convenience. $\Gamma$ is always nonnegative; it measures the extent to which the best model of family 1 is better than the best model of family 0 in explaining $g(r)$. Kent (1983) then defined the proportion of variation explained by model 1 over model 0 as $\rho^2 = 1 - e^{-\Gamma}$.
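The information gain can be sketched for a simple Gaussian case (the choice of Gaussian families and the synthetic data are assumptions for illustration): family 0 models $Y$ with a constant mean, family 1 with a linear predictor in $X$, and $F$ is approximated by the average log-density at the maximum-likelihood fit.

```python
import numpy as np

# Hypothetical data for the illustration.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 1.5 * x + rng.normal(size=500)

def fraser_information_gaussian(y, mean):
    """Empirical Fraser information F(theta_hat) for a Gaussian model:
    the average log-density of y at the maximum-likelihood fit."""
    s2 = np.mean((y - mean) ** 2)           # MLE of the variance
    return -0.5 * np.log(2 * np.pi * s2) - 0.5

# Family 0: best constant mean (restricted parameter space).
F0 = fraser_information_gaussian(y, np.mean(y))

# Family 1: best linear predictor a + b*x (larger parameter space).
b, a = np.polyfit(x, y, deg=1)
F1 = fraser_information_gaussian(y, a + b * x)

gamma = 2.0 * (F1 - F0)   # information gain; nonnegative by construction
print(f"Gamma = {gamma:.3f}, 1 - exp(-Gamma) = {1 - np.exp(-gamma):.3f}")
```

For this Gaussian pair, $\Gamma$ reduces to the log ratio of the two residual variances, and $1 - e^{-\Gamma}$ coincides with the familiar coefficient of determination $R^2$.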

Assume a two-dimensional random variable $R = (X, Y)$, where $X$ shall be considered as an explanatory variable and $Y$ as a dependent variable. Models of family 1 "explain" $Y$ in terms of $X$,

$$f(r;\theta) = f(y \mid x;\theta)\, f(x),$$

whereas in family 0, $X$ and $Y$ are assumed to be independent.
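In this two-dimensional Gaussian setting the comparison can be sketched directly (the data and model choices below are assumptions for the example): family 0 ignores $X$, family 1 uses the conditional mean linear in $X$, and the resulting explained variation coincides with the squared sample correlation.

```python
import numpy as np

# Hypothetical two-dimensional sample (X explanatory, Y dependent).
rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = 0.8 * x + rng.normal(scale=0.6, size=1000)

# Family 0: X and Y independent, so the best fit for Y ignores X.
var0 = np.var(y)                     # residual variance under family 0
# Family 1: Y explained through X via a linear conditional mean.
b, a = np.polyfit(x, y, deg=1)
var1 = np.var(y - (a + b * x))       # residual variance under family 1

# For Gaussian models the information gain reduces to a variance ratio.
gamma = np.log(var0 / var1)
rho2 = 1 - np.exp(-gamma)            # explained variation

# rho2 coincides with the squared sample correlation of X and Y.
corr2 = np.corrcoef(x, y)[0, 1] ** 2
print(rho2, corr2)
```

The agreement between `rho2` and `corr2` is exact here because least-squares regression with an intercept makes the residual variance equal to $(1 - r^2)$ times the total variance.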
