Frequentist inference

Frequentist inference is a type of statistical inference based on frequentist probability, which treats "probability" as equivalent to "frequency" and draws conclusions from sample data by emphasizing the frequency or proportion of findings in the data. Frequentist inference underlies frequentist statistics, in which the well-established methodologies of statistical hypothesis testing and confidence intervals are grounded.

Frequentism is based on the presumption that statistics represent probabilistic frequencies. This view was developed primarily by Ronald Fisher and by the team of Jerzy Neyman and Egon Pearson. Fisher contributed to frequentist statistics by developing the frequentist concept of "significance testing", which assesses whether an observed value of a statistic would be improbable under a stated hypothesis.
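As a minimal sketch of significance testing in the Fisherian sense, the following computes a two-sided p-value for the hypothesis that a population mean equals a reference value, assuming (hypothetically) a normal population with known standard deviation; the data here are invented for illustration.

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_p_value(sample, mu0, sigma):
    """Two-sided z-test of the hypothesis mean == mu0, with known sigma."""
    n = len(sample)
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Hypothetical data: is the sample mean significantly different from 0?
sample = [0.9, 1.4, 0.3, 1.1, 0.8, 1.6, 0.5, 1.2]
p = z_test_p_value(sample, mu0=0.0, sigma=1.0)
print(p)
```

A small p-value indicates that a statistic this extreme would rarely arise under the hypothesis; it is this frequency interpretation that makes the test "frequentist".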

Neyman and Pearson extended Fisher's ideas to comparisons between multiple hypotheses. They showed that a test based on the ratio of the probabilities of the data under two given hypotheses, rejecting when that ratio is sufficiently extreme, maximizes the power of the test at a fixed significance level. This relationship serves as the basis of type I and type II errors and of confidence intervals.
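The type I/type II trade-off can be sketched for two simple hypotheses; the specific model below (a single normal observation with unit variance) is an assumed toy example, not drawn from the text.

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Two simple hypotheses about one observation x:
#   H0: x ~ N(0, 1)    vs    H1: x ~ N(1, 1)
# The likelihood ratio f1(x)/f0(x) = exp(x - 1/2) is increasing in x,
# so the most powerful test rejects H0 when x exceeds a cutoff k.
k = 1.6448536269514722            # z with P(N(0,1) > z) = 0.05
type_I = 1.0 - normal_cdf(k)      # P(reject H0 | H0 true) = significance level
type_II = normal_cdf(k - 1.0)     # P(accept H0 | H1 true), x shifted by 1
power = 1.0 - type_II             # P(reject H0 | H1 true)
print(type_I, type_II, power)
```

Fixing the type I error rate and then minimizing the type II error rate (maximizing power) is exactly the Neyman–Pearson framing described above.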

For statistical inference, the statistic about which we want to make inferences is y ∈ Y, where the random vector y is a function of an unknown parameter, θ.

The parameter θ, in turn, is partitioned into θ = (ψ, λ), where ψ is the parameter of interest and λ is the nuisance parameter. For concreteness, ψ might be the population mean, μ, and the nuisance parameter λ the standard deviation of the population mean, σ.

Thus, statistical inference is concerned with the expectation of the random vector y, E(y) = E(y; θ) = ∫ y f(y; θ) dy.
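The expectation integral can be illustrated with a Monte Carlo approximation; the model below (y normally distributed with mean θ, so that E(y; θ) = θ) is an assumed example for illustration.

```python
import random

# Monte Carlo sketch of E(y; theta) = ∫ y f(y; theta) dy for a
# hypothetical model y ~ Normal(theta, 1), whose expectation is theta.
random.seed(0)
theta = 2.0
n = 100_000
estimate = sum(random.gauss(theta, 1.0) for _ in range(n)) / n
print(estimate)  # should be close to theta = 2.0
```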

To construct areas of uncertainty in frequentist inference, a pivot is used which defines the area around ψ that can be used to provide an interval to estimate uncertainty. The pivot is a probability such that for a pivot, p, which is a function, p(t, ψ) is strictly increasing in ψ, where t ∈ T is a random vector.

This allows that, for some 0 < c < 1, we can define P{p(t, ψ) ≤ c}, which is the probability that the pivot function is less than some well-defined value. This implies P{ψ ≤ q(t, c)} = 1 − c, where q(t, c) is a 1 − c upper limit for ψ.
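The construction above can be sketched for the most common pivot, z = (x̄ − ψ)/(σ/√n), which has a standard normal distribution regardless of ψ when σ is known; the data and the known-σ assumption below are hypothetical choices for illustration.

```python
import math

def upper_limit(sample, sigma, c=0.05):
    """One-sided (1 - c) upper confidence limit q(t, c) for the mean psi.

    Pivot: z = (xbar - psi) / (sigma / sqrt(n)) ~ N(0, 1), so
    P(psi <= xbar + z_{1-c} * sigma / sqrt(n)) = 1 - c.
    """
    n = len(sample)
    xbar = sum(sample) / n
    z_1mc = 1.6448536269514722  # z with P(N(0,1) <= z) = 0.95, for c = 0.05
    return xbar + z_1mc * sigma / math.sqrt(n)

# Hypothetical sample with assumed known sigma = 0.5
sample = [4.1, 5.0, 4.6, 5.3, 4.8, 4.4, 5.1, 4.7]
q = upper_limit(sample, sigma=0.5)
print(q)  # upper limit exceeding the sample mean of 4.75
```

In repeated sampling, an interval built this way covers the true ψ with frequency 1 − c, which is precisely the frequentist reading of a confidence limit.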
