Kuiper's test
Kuiper's test is used in statistics to test whether a data sample comes from a given distribution (one-sample Kuiper test), or whether two data samples came from the same unknown distribution (two-sample Kuiper test). It is named after Dutch mathematician Nicolaas Kuiper.
Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K-S test as it is often called). As with the K-S test, the discrepancy statistics D+ and D− represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions that are being compared. The trick with Kuiper's test is to use the quantity D+ + D− as the test statistic. This small change makes Kuiper's test as sensitive in the tails as at the median and also makes it invariant under cyclic transformations of the independent variable. The Anderson–Darling test is another test that provides equal sensitivity at the tails as the median, but it does not provide the cyclic invariance.
This invariance under cyclic transformations makes Kuiper's test invaluable when testing for cyclic variations by time of year or day of the week or time of day, and more generally for testing the fit of, and differences between, circular probability distributions.
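The cyclic invariance described above can be checked numerically. The sketch below is an illustration under an assumed setup: angles mapped to [0, 1) tested against the uniform distribution on the circle. Rotating the origin of the circle changes the Kolmogorov–Smirnov statistic D = max(D+, D−) but leaves the Kuiper statistic D+ + D− unchanged; the sample values and the 0.5 shift are arbitrary choices for the demonstration.

```python
def ecdf_stats(sample, cdf):
    # D+ and D-: the largest positive and negative deviations of the
    # empirical CDF from the hypothesized CDF; for a step ECDF these
    # extremes occur at the sample points themselves.
    xs = sorted(sample)
    n = len(xs)
    d_plus = max((i + 1) / n - cdf(x) for i, x in enumerate(xs))
    d_minus = max(cdf(x) - i / n for i, x in enumerate(xs))
    return d_plus, d_minus

# Angles mapped to [0, 1); the null hypothesis is uniformity on the circle.
sample = [0.05, 0.1, 0.15, 0.2, 0.9]
shifted = [(x + 0.5) % 1.0 for x in sample]  # rotate the circle's origin

uniform = lambda x: x
dp1, dm1 = ecdf_stats(sample, uniform)
dp2, dm2 = ecdf_stats(shifted, uniform)

print("Kuiper V:", dp1 + dm1, dp2 + dm2)        # unchanged by the rotation
print("K-S D:", max(dp1, dm1), max(dp2, dm2))   # generally differs
```

Here the rotation moves mass across the "seam" at 0, which the one-sided K-S maximum is sensitive to, while the sum D+ + D− is not.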
The one-sample test statistic, V_n, for Kuiper's test is defined as follows. Let F be the continuous cumulative distribution function which is to be the null hypothesis. Denote by F_n the empirical distribution function for n independent and identically distributed (i.i.d.) observations X_i, which is defined as

    F_n(x) = (1/n) · #{ i : X_i ≤ x }.

Then the one-sided Kolmogorov–Smirnov statistics for the given cumulative distribution function F(x) are

    D_n^+ = sup_x [F_n(x) − F(x)],
    D_n^− = sup_x [F(x) − F_n(x)],

where sup is the supremum function. And finally the one-sample Kuiper statistic is defined as

    V_n = D_n^+ + D_n^−,

or equivalently

    V_n = sup_x [F_n(x) − F(x)] − inf_x [F_n(x) − F(x)],

where inf is the infimum function.
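These definitions translate directly into code. The sketch below computes V_n for a sample against a hypothesized continuous CDF; the uniform CDF on [0, 1] in the example is an arbitrary illustration, not part of the test itself.

```python
def kuiper_statistic(sample, cdf):
    """One-sample Kuiper statistic V_n = D_n^+ + D_n^- for i.i.d.
    observations tested against a hypothesized continuous CDF."""
    xs = sorted(sample)
    n = len(xs)
    # The ECDF jumps from i/n to (i+1)/n at the i-th order statistic,
    # so the supremum and infimum are attained at the sample points.
    d_plus = max((i + 1) / n - cdf(x) for i, x in enumerate(xs))
    d_minus = max(cdf(x) - i / n for i, x in enumerate(xs))
    return d_plus + d_minus

# Example: five points tested against the Uniform(0, 1) CDF,
# giving D+ = 0.1, D- = 0.2, hence V_n = 0.3.
v_n = kuiper_statistic([0.1, 0.35, 0.5, 0.8, 0.95], lambda x: x)
```

Note that only the statistic is computed here; a full test would compare V_n against critical values of its (n-dependent) null distribution.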