Statistical significance
from Wikipedia

In statistical hypothesis testing,[1][2] a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true.[3] More precisely, a study's defined significance level, denoted by α, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true;[4] and the p-value of a result, p, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true.[5] The result is said to be statistically significant, by the standards of the study, when p ≤ α.[6][7][8][9][10][11][12] The significance level for a study is chosen before data collection, and is typically set to 5%[13] or much lower—depending on the field of study.[14]

In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone.[15][16] But if the p-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population,[1] thereby rejecting the null hypothesis.[17]

This technique for testing the statistical significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research significance, theoretical significance, or practical significance.[1][2][18][19] For example, the term clinical significance refers to the practical importance of a treatment effect.[20]

History

Statistical significance dates to the 18th century, in the work of John Arbuthnot and Pierre-Simon Laplace, who computed the p-value for the human sex ratio at birth, assuming a null hypothesis of equal probability of male and female births; see p-value § History for details.[21][22][23][24][25][26][27]

In 1925, Ronald Fisher advanced the idea of statistical hypothesis testing, which he called "tests of significance", in his publication Statistical Methods for Research Workers.[28][29][30] Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.[31] In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named α. They recommended that α be set ahead of time, prior to any data collection.[31][32]

Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed. In his 1956 publication Statistical Methods and Scientific Inference, he recommended that significance levels be set according to specific circumstances.[31]

Related terms

The significance level α is the threshold for p below which the null hypothesis is rejected, that is, below which the investigator concludes that the observed result reflects something other than chance alone. This means that α is also the probability of mistakenly rejecting the null hypothesis, if the null hypothesis is in fact true.[4] Such a mistake is also called a false positive or a type I error.

Sometimes researchers talk about the confidence level γ = (1 − α) instead. This is the probability of not rejecting the null hypothesis given that it is true.[33][34] Confidence levels and confidence intervals were introduced by Neyman in 1937.[35]

Role in statistical hypothesis testing

In a two-tailed test, the rejection region for a significance level of α = 0.05 is partitioned to both ends of the sampling distribution and makes up 5% of the area under the curve (white areas).

Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the hypothesis that no effect exists in the phenomenon being studied.[36] For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observed p-value is less than the pre-specified significance level α.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true.[5][12] The null hypothesis is rejected if the p-value is less than (or equal to) a predetermined level α. This level α is also called the significance level, and is the probability of rejecting the null hypothesis given that it is true (a type I error). It is usually set at or below 5%.

For example, when α is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%,[37] and a statistically significant result is one where the observed p-value is less than (or equal to) 5%.[38] When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution.[39] These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.

The use of a one-tailed test is dependent on whether the research question or alternative hypothesis specifies a direction such as whether a group of objects is heavier or the performance of students on an assessment is better.[3] A two-tailed test may still be used but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used.[40] The one-tailed test is only more powerful than a two-tailed test if the specified direction of the alternative hypothesis is correct. If it is wrong, however, then the one-tailed test has no power.
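The contrast can be made concrete with a small numerical sketch (not from the source; the observed statistic z = 1.80 is hypothetical and a standard normal null distribution is assumed). At α = 0.05 the one-tailed critical value is about 1.645, while the two-tailed critical values are about ±1.96, so the same result can be significant under one test but not the other.

```python
# Minimal sketch: one-tailed vs. two-tailed z-test at alpha = 0.05.
# The observed statistic z_obs is a hypothetical value chosen for illustration.
from scipy.stats import norm

alpha = 0.05
z_obs = 1.80  # hypothetical observed test statistic

# Critical values defining the rejection regions under the standard normal null
z_crit_one = norm.isf(alpha)       # ~1.645: all 5% in the upper tail
z_crit_two = norm.isf(alpha / 2)   # ~1.960: 2.5% in each tail

# Corresponding p-values for the observed statistic
p_one = norm.sf(z_obs)             # upper-tail probability
p_two = 2 * norm.sf(abs(z_obs))    # both tails

print(f"one-tailed: critical z = {z_crit_one:.3f}, p = {p_one:.4f}, reject H0: {p_one <= alpha}")
print(f"two-tailed: critical z = {z_crit_two:.3f}, p = {p_two:.4f}, reject H0: {p_two <= alpha}")
```

Here z = 1.80 exceeds the one-tailed critical value but not the two-tailed one (p ≈ 0.036 versus p ≈ 0.072), illustrating how a one-tailed test can reject the null hypothesis with a less extreme result, provided the direction was specified correctly in advance.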

Significance thresholds in specific fields

In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (for example 5σ).[41][42] For instance, the certainty of the Higgs boson particle's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.[42][43]
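The quoted "1 in 3.5 million" figure can be checked directly: it is the one-tailed tail probability beyond five standard deviations of a normal distribution (a minimal sketch assuming the one-tailed convention used in particle physics).

```python
# Tail probability beyond +5 sigma of a standard normal distribution.
from scipy.stats import norm

p_5sigma = norm.sf(5)       # one-tailed area beyond 5 standard deviations
print(p_5sigma)             # ~2.9e-07
print(1 / p_5sigma)         # ~3.5e+06, i.e. about 1 in 3.5 million
```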

In other fields of scientific research such as genome-wide association studies, significance levels as low as 5×10⁻⁸ are not uncommon[44][45]—as the number of tests performed is extremely large.
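One common way to motivate such a threshold, shown here only as an illustrative assumption rather than a claim about how the cited studies derived it, is a Bonferroni-style correction of a familywise α = 0.05 for roughly one million effectively independent tests.

```python
# Illustrative Bonferroni-style correction (assumed numbers).
family_alpha = 0.05          # desired familywise significance level
n_tests = 1_000_000          # assumed number of effectively independent tests
per_test_alpha = family_alpha / n_tests
print(per_test_alpha)        # 5e-08, matching the genome-wide threshold above
```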

Limitations

Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive[46] and not replicable.[47][48] There is also a difference between statistical significance and practical significance. A study that is found to be statistically significant may not necessarily be practically significant.[49][19]

Effect size

Effect size is a measure of a study's practical significance.[49] A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values. An effect size measure quantifies the strength of an effect, such as the distance between two means in units of standard deviation (cf. Cohen's d), the correlation coefficient between two variables or its square, and other measures.[50]
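As a minimal sketch (using simulated data, not data from any study), the following computes a p-value together with Cohen's d, the difference between two group means in units of the pooled standard deviation.

```python
# Reporting an effect size (Cohen's d) alongside a p-value, on simulated data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)   # hypothetical control group
group_b = rng.normal(loc=11.0, scale=2.0, size=50)   # hypothetical treatment group

t_stat, p_value = ttest_ind(group_a, group_b)

# Cohen's d: mean difference divided by the pooled standard deviation
n_a, n_b = len(group_a), len(group_b)
pooled_var = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
cohens_d = (group_b.mean() - group_a.mean()) / np.sqrt(pooled_var)

print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```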

Reproducibility

A statistically significant result may not be easy to reproduce.[48] In particular, some statistically significant results will in fact be false positives. Each failed attempt to reproduce a result increases the likelihood that the result was a false positive.[51]
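How often a "significant" result is in fact a false positive depends not only on α but also on the statistical power of the study and the prior plausibility of the effect being tested. The following sketch uses assumed numbers purely for illustration.

```python
# Share of significant findings that are false positives, under assumed numbers.
alpha = 0.05          # type I error rate
power = 0.80          # probability of detecting a real effect
prior_real = 0.10     # assumed prior probability that the tested effect is real

p_true_positive = power * prior_real         # effect is real and test is significant
p_false_positive = alpha * (1 - prior_real)  # effect is absent but test is significant
false_positive_share = p_false_positive / (p_false_positive + p_true_positive)
print(f"{false_positive_share:.2f}")         # ~0.36 under these assumptions
```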

Challenges

Overuse in some journals

Starting in the 2010s, some journals began questioning whether significance testing, and particularly using a threshold of α=5%, was being relied on too heavily as the primary measure of validity of a hypothesis.[52] Some journals encouraged authors to do more detailed analysis than just a statistical significance test. In social psychology, the journal Basic and Applied Social Psychology banned the use of significance testing altogether from papers it published,[53] requiring authors to use other measures to evaluate hypotheses and impact.[54][55]

Other editors, commenting on this ban have noted: "Banning the reporting of p-values, as Basic and Applied Social Psychology recently did, is not going to solve the problem because it is merely treating a symptom of the problem. There is nothing wrong with hypothesis testing and p-values per se as long as authors, reviewers, and action editors use them correctly."[56] Some statisticians prefer to use alternative measures of evidence, such as likelihood ratios or Bayes factors.[57] Using Bayesian statistics can avoid confidence levels, but also requires making additional assumptions,[57] and may not necessarily improve practice regarding statistical testing.[58]

The widespread abuse of statistical significance represents an important topic of research in metascience.[59]

Redefining significance

In 2016, the American Statistical Association (ASA) published a statement on p-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p ≤ 0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process".[57] In 2017, a group of 72 authors proposed to enhance reproducibility by changing the p-value threshold for statistical significance from 0.05 to 0.005.[60] Other researchers responded that imposing a more stringent significance threshold would aggravate problems such as data dredging; alternative propositions are thus to select and justify flexible p-value thresholds before collecting data,[61] or to interpret p-values as continuous indices, thereby discarding thresholds and statistical significance.[62] Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.[63]

In 2019, over 800 statisticians and scientists signed a message calling for the abandonment of the term "statistical significance" in science,[64] and the ASA published a further official statement[65] declaring (page 2):

We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term "statistically significant" entirely. Nor should variants such as "significantly different," "p < 0.05," and "nonsignificant" survive, whether expressed in words, by asterisks in a table, or in some other way.

from Grokipedia
Statistical significance is a fundamental concept in statistical hypothesis testing that assesses whether observed results in a study are unlikely to have arisen from random variation alone, thereby providing evidence against the null hypothesis. It is quantified primarily through the p-value, which represents the probability of observing data at least as extreme as that obtained, given that the null hypothesis is true; a p-value below a conventional threshold, such as 0.05, is interpreted as indicating statistical significance.

The origins of statistical significance as a formal method trace back to the early 20th century, pioneered by the British statistician and geneticist Ronald A. Fisher during his work at the Rothamsted Experimental Station. In his seminal 1925 book Statistical Methods for Research Workers, Fisher introduced the idea of using p-values to evaluate the strength of evidence against a null hypothesis of no effect, selecting the 0.05 level as a practical benchmark because it approximates the point where results lie beyond two standard deviations from the mean in a normal distribution—roughly a 1-in-20 chance occurrence. This threshold gained widespread adoption in fields like agriculture, biology, and medicine, though Fisher himself viewed it as a guideline rather than a rigid rule, emphasizing the need for judgment in interpretation. Subsequent developments by Jerzy Neyman and Egon Pearson in the 1930s refined the framework into the Neyman-Pearson approach, which formalized hypothesis testing with explicit error rates (Type I and Type II errors), contrasting with Fisher's more inductive approach focused on p-values.

Today, statistical significance is computed using various tests (e.g., t-tests, chi-square tests) that derive the p-value from the test statistic's distribution under the null hypothesis, often assuming normality or other conditions. Despite its ubiquity, the concept has faced criticism for potential misinterpretation; for instance, a statistically significant result does not quantify effect size, practical relevance, or the probability that the studied hypothesis is true. In response to widespread misuse, the American Statistical Association issued a 2016 statement outlining six key principles for p-values, stressing that they indicate incompatibility with a statistical model but should not drive decisions alone, and advocating for complementary approaches like confidence intervals and effect-size estimates to ensure robust scientific inference. Recent discussions, including a 2021 ASA task force report, further urge moving beyond binary "significant/non-significant" dichotomies and emphasize the distinction between statistical evidence and substantive importance across disciplines, including clinical trials.

Fundamentals

Definition

Statistical significance refers to the determination that an observed effect in a study is unlikely to have occurred due to random chance alone, serving as a key criterion in hypothesis testing to infer whether the results reflect a genuine underlying relationship or difference in the population. This assessment evaluates the consistency of the data with a null hypothesis of no effect, providing evidence against the possibility that the observed outcome arose merely from sampling variability.

A critical distinction exists between statistical significance and practical (or clinical) significance: the former addresses whether an effect is detectable beyond chance, while the latter evaluates the effect's magnitude and its meaningfulness in real-world applications. For instance, a pharmaceutical study might demonstrate statistical significance for a treatment that improves an average outcome by just 10 minutes, yet this tiny gain may lack practical value for patients or healthcare systems because its real-world impact is negligible. Such cases highlight how large sample sizes can yield statistical significance for trivially small effects, underscoring the need to consider both types of significance together.

At its core, statistical significance builds on foundational concepts of probability, which measures the likelihood of specific outcomes in uncertain processes, and sampling distributions, which model the expected range of variation in sample statistics drawn from a population. These elements enable researchers to quantify uncertainty and determine whether sample evidence reliably points to a population-level effect.
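A minimal simulated sketch of this distinction (the numbers and scenario are invented for illustration): with a million observations per group, a shift of half a minute in an outcome measured in minutes is detected as highly statistically significant even though it is practically negligible.

```python
# Large samples can make a practically trivial difference statistically significant.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n = 1_000_000                                          # observations per group
control = rng.normal(loc=480.0, scale=60.0, size=n)    # hypothetical outcome, in minutes
treated = rng.normal(loc=480.5, scale=60.0, size=n)    # true mean shifted by 0.5 minutes

t_stat, p_value = ttest_ind(treated, control)
print(f"observed difference = {treated.mean() - control.mean():.2f} minutes, p = {p_value:.2g}")
# p falls far below 0.05, yet a half-minute improvement is of little practical value.
```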

Key Components in Hypothesis Testing

Hypothesis testing begins with the formulation of two competing hypotheses: the null hypothesis and the alternative hypothesis. The null hypothesis, denoted H0, represents the default assumption of no effect, no difference, or no association between variables in the population. It serves as the baseline against which evidence is evaluated, positing that any observed variation in sample data arises solely from random variation rather than a systematic effect. For instance, in testing whether a coin is fair, H0 would state that the probability of heads is exactly 0.5. This concept was introduced by Ronald A. Fisher as a tool for assessing the improbability of observed data under the assumption of no real effect.

The alternative hypothesis, denoted H1 or Ha, posits the existence of an effect, difference, or association that contradicts the null. It encapsulates the researcher's claim or expectation, guiding the direction of the inquiry. Alternatives can be one-sided, specifying the direction of the effect (e.g., the coin is biased toward heads, with probability > 0.5), or two-sided, allowing for deviation in either direction (e.g., the coin is biased, with probability ≠ 0.5). This framework for specifying alternatives was formalized by Jerzy Neyman and Egon Pearson to enable the design of tests that maximize the detection of true effects while controlling error rates.

Central to hypothesis testing are the risks associated with decision-making under uncertainty, embodied in Type I and Type II errors. A Type I error occurs when the null hypothesis is true but is incorrectly rejected, representing a false positive conclusion. The probability of committing a Type I error is denoted by α, often set at a conventional level like 0.05, which defines the significance threshold for rejecting H0. Conversely, a Type II error happens when the null hypothesis is false but fails to be rejected, resulting in a false negative. Its probability, β, depends on factors such as sample size and the magnitude of the true effect. Neyman and Pearson introduced these error types to quantify the reliability of tests, emphasizing the trade-off between controlling α and minimizing β. To balance these risks, the concept of statistical power is employed, defined as 1 − β, which measures the probability of correctly rejecting a false null hypothesis. Higher power indicates greater ability to detect true effects, achieved through larger samples or more sensitive tests. This metric underscores the importance of designing studies that not only limit false positives but also enhance detection of meaningful differences.

These components collectively frame the hypothesis testing process as a structured decision framework preceding the assessment of statistical significance. The null and alternative hypotheses delineate the question, the error probabilities set the boundaries for acceptable risk, and power ensures interpretability. By assuming H0 initially, researchers collect data to evaluate whether the observed result is sufficiently improbable under H0 to warrant rejection, thereby informing conclusions about the alternative. This logical sequence, integrating Fisher's significance testing with Neyman and Pearson's error-controlled approach, forms the foundation for rigorous statistical inference.
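These quantities can be computed directly for a simple case. The sketch below assumes a one-sided, one-sample z-test with known standard deviation, an assumed true effect of 0.3 standard deviations, and n = 100 observations; none of these numbers come from the text above.

```python
# Type I error rate (alpha), Type II error rate (beta), and power = 1 - beta
# for a one-sided one-sample z-test with known sigma (assumed scenario).
from math import sqrt
from scipy.stats import norm

alpha = 0.05        # chosen type I error rate
effect_d = 0.3      # assumed true effect size in standard-deviation units
n = 100             # sample size

z_crit = norm.isf(alpha)            # rejection threshold under H0
z_shift = effect_d * sqrt(n)        # mean of the test statistic when H1 is true
beta = norm.cdf(z_crit - z_shift)   # probability of failing to reject a false H0
power = 1 - beta
print(f"beta = {beta:.3f}, power = {power:.3f}")   # beta ~0.09, power ~0.91 here
```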

Historical Context

Origins in Early Statistics

The concept of statistical significance traces its roots to early probabilistic reasoning in the 18th century, with John Arbuthnot providing one of the first explicit applications to empirical data. In his 1710 paper published in the Philosophical Transactions of the Royal Society, Arbuthnot examined christening records in London from 1629 to 1710, observing a consistent excess of male births over females each year. Assuming an equal probability of male and female births under random chance (a binomial model with p = 0.5), he calculated the probability of this pattern occurring by chance alone as extraordinarily low, specifically less than 1 in 2^82 for the 82 years of data. Arbuthnot concluded that such regularity could not be attributed to chance but rather to divine providence, marking an early use of probability to assess the implausibility of random variation in observed outcomes.

Building on these foundations, Pierre-Simon Laplace advanced the theoretical underpinnings of assessing evidence against chance in the late 18th and early 19th centuries through his development of inverse probability. In his 1774 memoir "Mémoire sur la probabilité des causes par les événements," Laplace formalized the idea of inferring the probability of underlying causes from observed events, using Bayesian-like principles to update beliefs based on data. This work, expanded in his 1812 Théorie Analytique des Probabilités, applied probability theory to astronomical observations and demographic data, enabling quantitative judgments about whether deviations from expected patterns were likely due to chance or systematic causes. Laplace's approach emphasized the ratio of likelihoods under competing hypotheses, providing a framework for what would later evolve into significance testing by quantifying the improbability of observations under a null assumption of pure chance.

By the early 20th century, Karl Pearson synthesized these ideas into more structured statistical tools, notably through his development of the chi-squared test and the concept of the probable error. In his 1900 paper in the Philosophical Magazine, Pearson introduced the chi-squared statistic as a measure to determine whether observed deviations in categorical data from an expected distribution could reasonably be ascribed to random sampling. The test computes the sum of squared differences between observed and expected frequencies, scaled by the expected values, to yield a quantity whose distribution under the null hypothesis approximates a chi-squared form for large samples. Concurrently, Pearson's work in the Philosophical Transactions elaborated on the probable error, a measure rooted in Gaussian theory representing the deviation within which half the estimates would fall, offering a practical way to gauge the reliability of statistical constants like means and correlations against sampling variability. These contributions formalized probabilistic assessments of fit, bridging early probability calculations to systematic significance testing.

Discussions on these emerging significance-like ideas gained prominence at the International Statistical Congress in Paris, where statisticians including Pearson debated the role of probability in distinguishing systematic patterns from random fluctuations in empirical data. Convened amid the Exposition Universelle, the congress featured presentations on probabilistic methods for census data and vital statistics, highlighting the need for criteria to evaluate whether observed discrepancies warranted rejection of chance-based explanations. These exchanges underscored the growing consensus on using threshold probabilities to guide scientific inference, setting the stage for broader adoption in empirical research.

Evolution in the 20th Century

The concept of statistical significance began to take formal shape in the early 1920s through the work of Ronald A. Fisher. In a 1921 presentation to the Royal Society of London, Fisher outlined foundational ideas for theoretical statistics, emphasizing the role of probability in assessing deviations from expected results under a null hypothesis, which laid the groundwork for modern significance testing.

Fisher's ideas gained wider traction with the publication of his seminal book Statistical Methods for Research Workers in 1925, where he introduced the p-value as a measure of the probability of observing data as extreme as that obtained, assuming the null hypothesis is true, and advocated for significance levels around 0.05 to guide scientific inference in fields like agriculture and biology. This approach formalized significance testing as a tool for research workers, promoting its use without rigid decision rules, and rapidly influenced experimental practices by providing accessible methods for evaluating evidence against hypotheses.

In 1933, Jerzy Neyman and Egon Pearson advanced the framework by developing the Neyman-Pearson lemma, which established the likelihood-ratio test as the most powerful method for distinguishing between two simple hypotheses while controlling the Type I error rate (alpha level) at a fixed threshold, such as 0.05. This lemma shifted the focus toward a decision-theoretic framework, emphasizing Type I and Type II errors and fixed significance levels to optimize test power, contrasting with Fisher's more inductive, p-value-based approach. The Neyman-Pearson formulation provided a structured alternative that complemented and sometimes rivaled Fisher's methods, becoming integral to hypothesis testing theory.

Following World War II, statistical significance testing saw widespread adoption in experimental design across the sciences, particularly in agriculture, medicine, and the social sciences, as institutions like Rothamsted Experimental Station under Fisher's influence integrated these methods into randomized trials and data-analysis protocols. In the 1950s, Neyman critiqued Fisher's significance testing for its lack of emphasis on power and alternative hypotheses, arguing in publications that it failed to adequately address decision-making under uncertainty and should incorporate explicit error control, intensifying the philosophical divide between the two schools. By the 1960s, Bayesian approaches began challenging both the Fisherian and Neyman-Pearson frameworks, with statisticians like Dennis Lindley and Leonard Savage promoting prior probabilities and posterior inference as superior for incorporating subjective knowledge and avoiding arbitrary significance thresholds in hypothesis evaluation.

Computation and Interpretation

Calculating P-Values

The p-value represents the probability of observing sample data at least as extreme as the data actually obtained, assuming the null hypothesis H0 is true. This measure quantifies the evidence against H0 provided by the sample, serving as the foundation for assessing statistical significance in hypothesis testing.

In general, the p-value is computed from the sampling distribution of a test statistic under H0. For a one-tailed test in the upper tail it is given by p = P(T ≥ t_obs | H0), where T denotes the random variable representing the test statistic and t_obs is its observed value; for a two-tailed test, the p-value accounts for both tails by doubling the one-tailed probability or by integrating over the relevant regions. The exact form depends on the distribution assumed under H0, such as the normal or t-distribution. Common test statistics include the z-score for scenarios with large sample sizes or known population variance, calculated as

z = (x̄ − μ0) / (σ / √n),

where x̄ is the sample mean, μ0 the hypothesized population mean, σ the known population standard deviation, and n the sample size.
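A brief sketch of this calculation with hypothetical numbers (a hypothesized mean of 100, known σ = 15, and n = 50 are assumptions chosen for illustration):

```python
# z statistic and one- and two-tailed p-values for a one-sample z-test.
from math import sqrt
from scipy.stats import norm

x_bar, mu_0 = 103.0, 100.0     # sample mean and hypothesized population mean
sigma, n = 15.0, 50            # known population standard deviation and sample size

z = (x_bar - mu_0) / (sigma / sqrt(n))
p_one_tailed = norm.sf(z)               # P(Z >= z) under H0
p_two_tailed = 2 * norm.sf(abs(z))      # probability in both tails

print(f"z = {z:.2f}, one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
```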