Effect size
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of one parameter for a hypothetical population, or the equation that operationalizes how statistics or parameters lead to the effect size value.[1] Examples of effect sizes include the correlation between two variables,[2] the regression coefficient in a regression, the mean difference, and the risk of a particular event (such as a heart attack). Effect sizes are a complementary tool for statistical hypothesis testing, and play an important role in statistical power analyses to assess the sample size required for new experiments.[3] Effect size calculations are fundamental to meta-analysis, which aims to provide the combined effect size based on data from multiple studies. The group of data-analysis methods concerning effect sizes is referred to as estimation statistics.
Effect size is an essential component in the evaluation of the strength of a statistical claim, and it is the first item (magnitude) in the MAGIC criteria. The standard deviation of the effect size is of critical importance, as it indicates how much uncertainty is included in the observed measurement. A standard deviation that is too large will make the measurement nearly meaningless. In meta-analysis, which aims to summarize multiple effect sizes into a single estimate, the uncertainty in studies' effect sizes is used to weight the contribution of each study, so larger studies are considered more important than smaller ones. The uncertainty in the effect size is calculated differently for each type of effect size, but generally only requires knowing the study's sample size (N), or the number of observations (n) in each group.
Reporting effect sizes or estimates thereof (effect estimate [EE], estimate of effect) is considered good practice when presenting empirical research findings in many fields.[4][5] The reporting of effect sizes facilitates the interpretation of the importance of a research result, in contrast to its statistical significance.[6] Effect sizes are particularly significant in social science and medical research, with the latter emphasizing the importance of the magnitude of the average treatment effect.
Effect sizes may be measured in relative or absolute terms. Relative effect sizes compare two groups directly with each other, as in odds ratios and relative risks; absolute effect sizes are expressed in the original units of measurement, and a larger absolute value indicates a stronger effect. Many types of measurements can be expressed either way, and the two forms can be reported together because they convey different information. A prominent task force in the psychology research community made the following recommendation:
Always present effect sizes for primary outcomes...If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).[4]
Overview
Population and sample effect sizes
As in statistical estimation, the true effect size is distinguished from the observed effect size. For example, to measure the risk of disease in a population (the population effect size) one can measure the risk within a sample of that population (the sample effect size). Conventions for describing true and observed effect sizes follow standard statistical practices—one common approach is to use Greek letters like ρ [rho] to denote population parameters and Latin letters like r to denote the corresponding statistic. Alternatively, a "hat" can be placed over the population parameter to denote the statistic, e.g. with $\hat\rho$ being the estimate of the parameter $\rho$.
As in any statistical setting, effect sizes are estimated with sampling error, and may be biased unless the effect size estimator that is used is appropriate for the manner in which the data were sampled and the manner in which the measurements were made. An example of this is publication bias, which occurs when scientists report results only when the estimated effect sizes are large or are statistically significant. As a result, if many researchers carry out studies with low statistical power, the reported effect sizes will tend to be larger than the true (population) effects, if any.[7] Another example where effect sizes may be distorted is in a multiple-trial experiment, where the effect size calculation is based on the averaged or aggregated response across the trials.[8]
Smaller studies sometimes show different, often larger, effect sizes than larger studies. This phenomenon is known as the small-study effect, which may signal publication bias.[9]
Relationship to test statistics
Sample-based effect sizes are distinguished from test statistics used in hypothesis testing, in that they estimate the strength (magnitude) of, for example, an apparent relationship, rather than assigning a significance level reflecting whether the magnitude of the relationship observed could be due to chance. The effect size does not directly determine the significance level, or vice versa. Given a sufficiently large sample size, a non-null statistical comparison will always show a statistically significant result unless the population effect size is exactly zero (and even then it will show statistical significance at the rate of the Type I error used). For example, a sample Pearson correlation coefficient of 0.01 becomes statistically significant once the sample size reaches roughly 40,000. Reporting only the significant p-value from such an analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application.
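To make this arithmetic concrete, the short Python sketch below (an illustration, not part of the cited sources) converts a sample correlation r and sample size n into the usual t statistic and two-sided p-value, showing that a negligible r becomes "significant" only once n is very large.

```python
# Illustrative sketch: sample size, not effect size, drives the p-value of r.
from scipy import stats

def p_value_for_r(r: float, n: int) -> float:
    """Two-sided p-value for testing H0: rho = 0 given a sample correlation r and size n."""
    t = r * ((n - 2) ** 0.5) / ((1 - r ** 2) ** 0.5)  # t statistic with n - 2 degrees of freedom
    return 2 * stats.t.sf(abs(t), df=n - 2)

for n in (100, 1000, 40000):
    print(n, round(p_value_for_r(0.01, n), 3))  # r = 0.01 only reaches p < 0.05 near n ~ 40,000
```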
Standardized and unstandardized effect sizes
The term effect size can refer to a standardized measure of effect (such as r, Cohen's d, or the odds ratio), or to an unstandardized measure (e.g., the difference between group means or the unstandardized regression coefficients). Standardized effect size measures are typically used when:
- the metrics of variables being studied do not have intrinsic meaning (e.g., a score on a personality test on an arbitrary scale),
- results from multiple studies are being combined,
- some or all of the studies use different scales, or
- it is desired to convey the size of an effect relative to the variability in the population.
In meta-analyses, standardized effect sizes are used as a common measure that can be calculated for different studies and then combined into an overall summary.
Interpretation
The interpretation of an effect size as small, medium, or large depends on its substantive context and its operational definition. Jacob Cohen[10] suggested interpretation guidelines that are nearly ubiquitous across many fields. However, Cohen also cautioned:
"The terms 'small,' 'medium,' and 'large' are relative, not only to each other, but to the area of behavioral science or even more particularly to the specific content and research method being employed in any given investigation... In the face of this relativity, there is a certain risk inherent in offering conventional operational definitions for these terms for use in power analysis in as diverse a field of inquiry as behavioral science. This risk is nevertheless accepted in the belief that more is to be gained than lost by supplying a common conventional frame of reference which is recommended for use only when no better basis for estimating the ES index is available." (p. 25)
Sawilowsky[11] recommended that the rules of thumb for effect sizes should be revised, and expanded the descriptions to include very small, very large, and huge. Funder and Ozer [12] suggested that effect sizes should be interpreted based on benchmarks and consequences of findings, resulting in adjustment of guideline recommendations.
Lenth[13] noted for a medium effect size, "you'll choose the same n regardless of the accuracy or reliability of your instrument, or the narrowness or diversity of your subjects. Clearly, important considerations are being ignored here. Researchers should interpret the substantive significance of their results by grounding them in a meaningful context or by quantifying their contribution to knowledge, and Cohen's effect size descriptions can be helpful as a starting point."[6] Similarly, a U.S. Dept of Education sponsored report argued that the widespread indiscriminate use of Cohen's interpretation guidelines can be inappropriate and misleading.[14] They instead suggested that norms should be based on distributions of effect sizes from comparable studies. Thus a small effect (in absolute numbers) could be considered large if the effect is larger than similar studies in the field. See Abelson's paradox and Sawilowsky's paradox for related points.[15][16][17]
The table below contains descriptors for various magnitudes of d, r, f and omega, as initially suggested by Jacob Cohen,[10] and later expanded by Sawilowsky,[11] and by Funder & Ozer.[12]
| Effect size | d | r | f | omega |
|---|---|---|---|---|
| Very small | 0.01[11] | 0.005[11] | 0.005[11] | |
| Small | 0.20[10][11] | 0.10[10][11] | 0.10[10][11] | 0.10[10] |
| Medium | 0.41,[12] 0.50[10] | 0.20,[12] 0.24[10] | 0.20,[12] 0.31[10] | 0.30[10] |
| Large | 0.63,[12] 0.80[10] | 0.30,[12] 0.37[10] | 0.32,[12] 0.40[10] | 0.50[10] |
| Very large | 0.87,[12] 1.20[11] | 0.40,[12] 0.51[11] | 0.44,[11] 0.60[12] | |
| Huge | 2.0[11] | 0.71[11] | 1.0[11] | |
Types
About 50 to 100 different measures of effect size are known. Many effect sizes of different types can be converted to other types, as many estimate the separation of two distributions, so are mathematically related. For example, a correlation coefficient can be converted to a Cohen's d and vice versa.
Correlation family: Effect sizes based on "variance explained"
These effect sizes estimate the amount of the variance within an experiment that is "explained" or "accounted for" by the experiment's model (see explained variation).
Pearson r or correlation coefficient
Pearson's correlation, often denoted r and introduced by Karl Pearson, is widely used as an effect size when paired quantitative data are available; for instance if one were studying the relationship between birth weight and longevity. The correlation coefficient can also be used when the data are binary. Pearson's r can vary in magnitude from −1 to 1, with −1 indicating a perfect negative linear relation, 1 indicating a perfect positive linear relation, and 0 indicating no linear relation between two variables.
Coefficient of determination (r2 or R2)
A related effect size is r2, the coefficient of determination (also referred to as R2 or "r-squared"), calculated as the square of the Pearson correlation r. In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, with an r of 0.21 the coefficient of determination is 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r2 is always positive, so does not convey the direction of the correlation between the two variables.
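As a minimal illustration (hypothetical data, using SciPy's pearsonr), the snippet below computes r and the corresponding r² for a small paired sample.

```python
# Hypothetical paired data: compute Pearson's r and the coefficient of determination r^2.
import numpy as np
from scipy.stats import pearsonr

birth_weight = np.array([2.9, 3.1, 3.4, 3.6, 3.8, 4.0])   # made-up values (kg)
longevity    = np.array([70., 72., 71., 75., 78., 77.])   # made-up values (years)

r, p = pearsonr(birth_weight, longevity)
print(f"r = {r:.2f}, r^2 = {r**2:.2f} (share of variance in common)")
```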
Eta-squared (η2)
Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors, making it analogous to r2. Eta-squared is a biased estimator of the variance explained by the model in the population (it estimates only the effect size in the sample): $\eta^2 = \frac{SS_\text{Treatment}}{SS_\text{Total}}.$ This estimate shares the weakness with r2 that each additional variable will automatically increase the value of η2. In addition, it measures the variance explained of the sample, not the population, meaning that it will always overestimate the effect size, although the bias grows smaller as the sample grows larger.
Omega-squared (ω2)
A less biased estimator of the variance explained in the population is ω2:[18] $\omega^2 = \frac{SS_\text{treatment} - df_\text{treatment}\cdot MS_\text{error}}{SS_\text{total} + MS_\text{error}}.$
This form of the formula is limited to between-subjects analysis with equal sample sizes in all cells.[18] Since it is less biased (although not unbiased), ω2 is preferable to η2; however, it can be more inconvenient to calculate for complex analyses. A generalized form of the estimator has been published for between-subjects and within-subjects analysis, repeated measure, mixed design, and randomized block design experiments.[19] In addition, methods to calculate partial ω2 for individual factors and combined factors in designs with up to three independent variables have been published.[19]
Cohen's f2
Cohen's f2 is one of several effect size measures to use in the context of an F-test for ANOVA or multiple regression. Its amount of bias (overestimation of the effect size for the ANOVA) depends on the bias of its underlying measurement of variance explained (e.g., R2, η2, ω2).
The f2 effect size measure for multiple regression is defined as $f^2 = \frac{R^2}{1 - R^2},$ where R2 is the squared multiple correlation. Likewise, f2 can be defined as $f^2 = \frac{\eta^2}{1 - \eta^2}$ or $f^2 = \frac{\omega^2}{1 - \omega^2}$ for models described by those effect size measures.[20]
The effect size measure for sequential multiple regression, also common in PLS modeling,[21] is defined as $f^2 = \frac{R^2_{AB} - R^2_A}{1 - R^2_{AB}},$ where R2A is the variance accounted for by a set of one or more independent variables A, and R2AB is the combined variance accounted for by A and another set of one or more independent variables of interest B. By convention, f2 effect sizes of 0.02, 0.15, and 0.35 are termed small, medium, and large, respectively.[10]
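The following sketch (hypothetical R² values, not taken from the cited sources) applies the sequential-regression form of f² to quantify the incremental contribution of a block of predictors B over a baseline set A.

```python
# Cohen's f^2 for the incremental contribution of predictors B beyond A.
def cohens_f2_incremental(r2_a: float, r2_ab: float) -> float:
    """f^2 = (R^2_AB - R^2_A) / (1 - R^2_AB)."""
    return (r2_ab - r2_a) / (1.0 - r2_ab)

# Hypothetical values: baseline model explains 20% of variance, full model 30%.
print(round(cohens_f2_incremental(0.20, 0.30), 3))  # ~0.143, close to Cohen's "medium" benchmark of 0.15
```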
Cohen's $\hat f$ can also be found for factorial analysis of variance (ANOVA) working backwards, using: $\hat f_\text{effect} = \sqrt{\frac{F_\text{effect}\, df_\text{effect}}{N}}.$
In a balanced design (equivalent sample sizes across groups) of ANOVA, the corresponding population parameter of f2 is $f^2 = \frac{SS(\mu_1, \mu_2, \dots, \mu_K)}{K\,\sigma^2},$ wherein μj denotes the population mean within the jth group of the total K groups, and σ the common population standard deviation within each group. SS(·) denotes the sum of squares of its arguments about their mean.
Cohen's q
Another measure that is used with correlation differences is Cohen's q. This is the difference between two Fisher-transformed Pearson correlation coefficients. In symbols this is
$q = \frac{1}{2}\log\frac{1+r_1}{1-r_1} - \frac{1}{2}\log\frac{1+r_2}{1-r_2},$
where r1 and r2 are the correlations being compared. The expected value of q is zero and its variance is
$\operatorname{var}(q) = \frac{1}{N_1 - 3} + \frac{1}{N_2 - 3},$
where N1 and N2 are the number of data points in the first and second correlation respectively.
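A small sketch of the computation (hypothetical correlations and sample sizes): Fisher-transform each correlation, take the difference, and use 1/(N − 3) per sample for the variance.

```python
# Cohen's q: difference between two Fisher-transformed correlations, with its standard error.
import math

def cohens_q(r1: float, r2: float) -> float:
    z = lambda r: 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform
    return z(r1) - z(r2)

def se_q(n1: int, n2: int) -> float:
    return math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))

q = cohens_q(0.5, 0.3)                     # hypothetical correlations from two independent samples
print(round(q, 3), round(se_q(103, 103), 3))
```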
Difference family: Effect sizes based on differences between means
The raw effect size pertaining to a comparison of two groups is inherently calculated as the difference between the two means. However, to facilitate interpretation it is common to standardise the effect size; various conventions for statistical standardisation are presented below.
Standardized mean difference
A (population) effect size θ based on means usually considers the standardized mean difference (SMD) between two populations:[22]: 78 $\theta = \frac{\mu_1 - \mu_2}{\sigma},$ where μ1 is the mean for one population, μ2 is the mean for the other population, and σ is a standard deviation based on either or both populations.
In the practical setting the population values are typically not known and must be estimated from sample statistics. The several versions of effect sizes based on means differ with respect to which statistics are used.
This form for the effect size resembles the computation for a t-test statistic, with the critical difference that the t-test statistic includes a factor of $\sqrt{n}$. This means that for a given effect size, the significance level increases with the sample size. Unlike the t-test statistic, the effect size aims to estimate a population parameter and is not affected by the sample size.
SMD values of 0.2 to 0.5 are considered small, 0.5 to 0.8 are considered medium, and greater than 0.8 are considered large.[23]
Cohen's d
Cohen's d is defined as the difference between two means divided by a standard deviation for the data, i.e. $d = \frac{\bar{x}_1 - \bar{x}_2}{s}.$
Jacob Cohen defined s, the pooled standard deviation, as (for two independent samples):[10]: 67 $s = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}},$ where the variance for one of the groups is defined as $s_1^2 = \frac{1}{n_1-1}\sum_{i=1}^{n_1}(x_{1,i} - \bar{x}_1)^2,$ and similarly for the other group.
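A minimal sketch of the pooled-standard-deviation version of Cohen's d for two independent samples (hypothetical data; ddof=1 gives the usual unbiased sample variances):

```python
# Cohen's d with the pooled standard deviation defined above.
import numpy as np

def cohens_d(x1, x2):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    s_pooled = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    return (x1.mean() - x2.mean()) / s_pooled

treatment = [5.1, 4.9, 6.2, 5.8, 6.0, 5.5]   # made-up scores
control   = [4.2, 4.8, 5.0, 4.4, 4.9, 4.6]
print(round(cohens_d(treatment, control), 2))
```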
Other authors choose a slightly different computation of the standard deviation when referring to "Cohen's d", with the denominator $n_1+n_2$ rather than $n_1+n_2-2$.[24][25]: 14 This definition of "Cohen's d" is termed the maximum likelihood estimator by Hedges and Olkin,[22] and it is related to Hedges' g by a scaling factor (see below).
With two paired samples, an approach is to look at the distribution of the difference scores. In that case, s is the standard deviation of this distribution of difference scores (of note, the standard deviation of difference scores depends on the correlation between the paired samples). This creates a direct relationship between the t-statistic used to test for a difference in the means of the two paired groups and Cohen's d' computed from difference scores: the t-statistic equals d' multiplied by the square root of the sample size. However, for paired samples, Cohen states that d' does not provide the correct estimate to obtain the power of the test for d, and that before looking up the values in the tables provided for d, it should be corrected for r, the correlation between paired measurements, as in the formula he provides.[26] Given the same sample size, the higher r, the higher the power for a test of paired difference.
Since d' depends on r, as a measure of effect size it is difficult to interpret; therefore, in the context of paired analyses, since it is possible to compute d' or d (estimated with a pooled standard deviation or that of a group or time-point), it is necessary to explicitly indicate which one is being reported. As a measure of effect size, d (estimated with a pooled standard deviation or that of a group or time-point) is more appropriate, for instance in meta-analysis.[27]
Cohen's d is frequently used in estimating sample sizes for statistical testing. A lower Cohen's d indicates the necessity of larger sample sizes, and vice versa, as can subsequently be determined together with the additional parameters of desired significance level and statistical power.[28]
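As a hedged illustration of that use, the sketch below (relying on statsmodels' power utilities; the specific numbers are only an example) solves for the per-group sample size needed to detect an assumed d at the usual α = 0.05 and 80% power.

```python
# A priori sample size for a two-sample t-test, given an assumed Cohen's d.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(round(n_per_group))  # about 64 per group for an assumed "medium" d of 0.5
```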
Glass' Δ
In 1976, Gene V. Glass proposed an estimator of the effect size that uses only the standard deviation of the second group:[22]: 78 $\Delta = \frac{\bar{x}_1 - \bar{x}_2}{s_2}.$
The second group may be regarded as a control group, and Glass argued that if several treatments were compared to the control group it would be better to use just the standard deviation computed from the control group, so that effect sizes would not differ under equal means and different variances.
Under a correct assumption of equal population variances a pooled estimate for σ is more precise.
Hedges' g
Hedges' g, suggested by Larry Hedges in 1981,[29] is like the other measures based on a standardized difference:[22]: 79 $g = \frac{\bar{x}_1 - \bar{x}_2}{s^*},$ where the pooled standard deviation $s^*$ is computed as $s^* = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}.$
However, as an estimator for the population effect size θ it is biased. Nevertheless, this bias can be approximately corrected through multiplication by a factor: $g^* = J(n_1+n_2-2)\,g \approx \left(1 - \frac{3}{4(n_1+n_2)-9}\right) g.$ Hedges and Olkin refer to this less-biased estimator $g^*$ as d,[22] but it is not the same as Cohen's d. The exact form for the correction factor J(a) involves the gamma function:[22]: 104 $J(a) = \frac{\Gamma(a/2)}{\sqrt{a/2}\,\Gamma\!\left(\frac{a-1}{2}\right)}.$ There are also multilevel variants of Hedges' g, e.g., for use in cluster randomised controlled trials (CRTs).[30] CRTs involve randomising clusters, such as schools or classrooms, to different conditions and are frequently used in education research.
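A brief sketch of the small-sample correction (assuming Cohen's d with a pooled standard deviation has already been computed; the approximate factor 1 − 3/(4(n1 + n2) − 9) is used here rather than the exact gamma-function form):

```python
# Hedges' g: bias-corrected standardized mean difference.
def hedges_g(d: float, n1: int, n2: int) -> float:
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # approximation to the exact J(df) factor
    return correction * d

print(round(hedges_g(0.8, 10, 10), 3))  # small samples shrink d = 0.8 to roughly 0.77
```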
Ψ, root-mean-square standardized effect
A similar effect size estimator for multiple comparisons (e.g., ANOVA) is the Ψ root-mean-square standardized effect:[20] $\Psi = \sqrt{\frac{1}{k-1}\sum_{j=1}^{k}\left(\frac{m_j - \bar{m}}{s}\right)^2},$ where k is the number of groups in the comparisons, $m_j$ is the mean of the jth group, $\bar{m}$ the grand mean, and s the pooled standard deviation.
This essentially presents the omnibus difference of the entire model adjusted by the root mean square, analogous to d or g.
In addition, a generalization for multi-factorial designs has been provided.[20]
Distribution of effect sizes based on means
Provided that the data are Gaussian distributed, a scaled Hedges' g, $\sqrt{\tfrac{n_1 n_2}{n_1+n_2}}\,g$, follows a noncentral t-distribution with noncentrality parameter $\sqrt{\tfrac{n_1 n_2}{n_1+n_2}}\,\theta$ and (n1 + n2 − 2) degrees of freedom. Likewise, the scaled Glass' Δ is distributed with n2 − 1 degrees of freedom.
From the distribution it is possible to compute the expectation and variance of the effect sizes.
In some cases large-sample approximations for the variance are used. One suggestion for the variance of Hedges' unbiased estimator is[22]: 86 $\hat\sigma^2(g^*) = \frac{n_1+n_2}{n_1 n_2} + \frac{(g^*)^2}{2(n_1+n_2)}.$
Strictly standardized mean difference (SSMD)
As a statistical parameter, SSMD (denoted as β) is defined as the ratio of mean to standard deviation of the difference of two random values drawn respectively from two groups. Assume that one group with random values has mean $\mu_1$ and variance $\sigma_1^2$ and another group has mean $\mu_2$ and variance $\sigma_2^2$, and that the covariance between the two groups is $\sigma_{12}$. Then the SSMD for the comparison of these two groups is defined as[31] $\beta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2 - 2\sigma_{12}}}.$
If the two groups are independent, $\beta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}.$
If the two independent groups have equal variances $\sigma^2$, $\beta = \frac{\mu_1 - \mu_2}{\sqrt{2}\,\sigma}.$
Other metrics
Mahalanobis distance (D) is a multivariate generalization of Cohen's d, which takes into account the relationships between the variables.[32]
Categorical family: Effect sizes for associations among categorical variables
Commonly used measures of association for the chi-squared test are the Phi coefficient and Cramér's V (sometimes referred to as Cramér's phi and denoted as φc). Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2 × 2).[33] Cramér's V may be used with variables having more than two levels.
Phi can be computed as the square root of the chi-squared statistic divided by the sample size: $\varphi = \sqrt{\frac{\chi^2}{N}}.$
Similarly, Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the length of the minimum dimension: $\varphi_c = \sqrt{\frac{\chi^2}{N(k-1)}},$ where k is the smaller of the number of rows r or columns c.
φc is the intercorrelation of the two discrete variables[34] and may be computed for any value of r or c. However, as chi-squared values tend to increase with the number of cells, the greater the difference between r and c, the more likely V will tend to 1 without strong evidence of a meaningful correlation.
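A short sketch using SciPy's chi-squared test on a hypothetical contingency table; phi applies to the 2 × 2 case and Cramér's V to the general r × c case.

```python
# Phi and Cramer's V from a contingency table of observed counts.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 30],    # hypothetical 2 x 2 counts
                  [40, 10]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
phi = np.sqrt(chi2 / n)
k = min(table.shape)                        # smaller of rows and columns
cramers_v = np.sqrt(chi2 / (n * (k - 1)))
print(round(phi, 2), round(cramers_v, 2))   # identical for a 2 x 2 table
```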
Cohen's omega (ω)
Another measure of effect size used for chi-squared tests is Cohen's omega (ω). This is defined as $\omega = \sqrt{\sum_{i=1}^{m}\frac{(p_{1i} - p_{0i})^2}{p_{0i}}},$ where p0i is the proportion of the ith cell under H0, p1i is the proportion of the ith cell under H1 and m is the number of cells.
Odds ratio
The odds ratio (OR) is another useful effect size. It is appropriate when the research question focuses on the degree of association between two binary variables. For example, consider a study of spelling ability. In a control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3). Therefore, the odds ratio is 3. Odds ratio statistics are on a different scale than Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
Relative risk
The relative risk (RR), also called risk ratio, is simply the ratio of the risk (probability) of an event in one group to the risk in another. This measure of effect size differs from the odds ratio in that it compares probabilities instead of odds, but asymptotically approaches the latter for small probabilities. Using the example above, the probabilities of passing for those in the control group and treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively. The effect size can be computed the same as above, but using the probabilities instead. Therefore, the relative risk is approximately 1.29. Since rather large probabilities of passing were used, there is a large difference between relative risk and odds ratio. Had failure (a smaller probability) been used as the event (rather than passing), the difference between the two measures of effect size would not be so great.
While both measures are useful, they have different statistical uses. In medical research, the odds ratio is commonly used for case-control studies, as odds, but not probabilities, are usually estimated.[35] Relative risk is commonly used in randomized controlled trials and cohort studies, but relative risk contributes to overestimations of the effectiveness of interventions.[36]
Risk difference
The risk difference (RD), sometimes called absolute risk reduction, is simply the difference in risk (probability) of an event between two groups. It is a useful measure in experimental research, since RD tells you the extent to which an experimental intervention changes the probability of an event or outcome. Using the example above, the probabilities of passing for those in the control group and treatment group are 2/3 (or 0.67) and 6/7 (or 0.86), respectively, and so the RD effect size is 0.86 − 0.67 = 0.19 (or 19%). RD is the superior measure for assessing effectiveness of interventions.[36]
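Using the spelling-class example above (pass/fail counts of 6 vs. 1 in the treatment group and 2 vs. 1 in the control group), a quick sketch of the three measures:

```python
# Odds ratio, relative risk, and risk difference from 2 x 2 counts.
def binary_effect_sizes(a, b, c, d):
    """a, b: treatment pass/fail counts; c, d: control pass/fail counts."""
    odds_ratio = (a / b) / (c / d)
    p_treat, p_ctrl = a / (a + b), c / (c + d)
    relative_risk = p_treat / p_ctrl
    risk_difference = p_treat - p_ctrl
    return odds_ratio, relative_risk, risk_difference

print(binary_effect_sizes(6, 1, 2, 1))  # (3.0, ~1.29, ~0.19)
```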
Cohen's h
One measure used in power analysis when comparing two independent proportions is Cohen's h. This is defined as $h = 2\arcsin\sqrt{p_1} - 2\arcsin\sqrt{p_2},$ where p1 and p2 are the proportions of the two samples being compared and arcsin is the arcsine transformation.
Probability of superiority
To more easily describe the meaning of an effect size to people outside statistics, the common language effect size, as the name implies, was designed to communicate it in plain English. It is used to describe a difference between two groups and was proposed, as well as named, by Kenneth McGraw and S. P. Wong in 1992.[37] They used the following example (about heights of men and women): "in any random pairing of young adult males and females, the probability of the male being taller than the female is .92, or in simpler terms yet, in 92 out of 100 blind dates among young adults, the male will be taller than the female",[37] when describing the population value of the common language effect size.
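For two independent samples, the probability of superiority can be estimated by counting, over all cross-group pairs, how often a value from the first group exceeds one from the second (ties counted as half); a rough sketch with hypothetical data:

```python
# Common language effect size: P(random value from group 1 exceeds random value from group 2).
import numpy as np

def probability_of_superiority(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (len(x) * len(y))

heights_m = [178, 183, 171, 190, 175]   # made-up heights (cm)
heights_f = [162, 170, 158, 166, 173]
print(probability_of_superiority(heights_m, heights_f))
```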
Effect size for ordinal data
Cliff's delta (d), originally developed by Norman Cliff for use with ordinal data,[38] is a measure of how often the values in one distribution are larger than the values in a second distribution. Crucially, it does not require any assumptions about the shape or spread of the two distributions.
The sample estimate is given by: $d = \frac{\sum_{i,j}\bigl([x_i > y_j] - [x_i < y_j]\bigr)}{nm},$ where the two distributions are of size $n$ and $m$ with items $x_i$ and $y_j$, respectively, and $[\cdot]$ is the Iverson bracket, which is 1 when the contents are true and 0 when false.
Cliff's d is linearly related to the Mann–Whitney U statistic; however, it captures the direction of the difference in its sign. Given the Mann–Whitney $U$, d is: $d = \frac{2U}{nm} - 1.$
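Cliff's delta can be computed either by direct pair counting or from the Mann–Whitney U via the identity above; a small sketch with hypothetical ordinal scores:

```python
# Cliff's delta by direct pair counting: (#(x_i > y_j) - #(x_i < y_j)) / (n * m).
import numpy as np

def cliffs_delta(x, y):
    x, y = np.asarray(x), np.asarray(y)
    diff = x[:, None] - y[None, :]
    return ((diff > 0).sum() - (diff < 0).sum()) / (len(x) * len(y))

group_a = [3, 4, 4, 5, 5, 5]   # made-up ordinal ratings
group_b = [2, 2, 3, 3, 4, 4]
print(round(cliffs_delta(group_a, group_b), 2))
```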
Cohen's g
One of the simplest effect sizes for measuring how much a proportion differs from 50% is Cohen's g.[10]: 147 For example, if 85.2% of arrests for car theft are males, then the effect size of sex on arrest when measured with Cohen's g is $g = 0.852 - 0.50 = 0.352$. In general, for an observed proportion P: $g = P - 0.50.$
The units of Cohen's g are more intuitive (a proportion) than those of some other effect sizes. It is sometimes used in combination with the binomial test.
Confidence intervals by means of noncentrality parameters
Confidence intervals of standardized effect sizes, especially Cohen's d and f, rely on the calculation of confidence intervals of noncentrality parameters (ncp). A common approach to construct the confidence interval of ncp is to find the critical ncp values that fit the observed statistic to the tail quantiles α/2 and (1 − α/2). SAS and the R package MBESS provide functions to find critical values of ncp.
t-test for mean difference of single group or two related groups
For a single group, M denotes the sample mean, μ the population mean, SD the sample's standard deviation, σ the population's standard deviation, and n the sample size of the group. The t value is used to test the hypothesis on the difference between the mean and a baseline μbaseline (usually zero). In the case of two related groups, the single group is constructed from the differences in each pair of samples, while SD and σ denote the sample's and population's standard deviations of the differences rather than within the original two groups.
$t := \frac{M - \mu_\text{baseline}}{SD/\sqrt{n}},$ and the noncentrality parameter is $ncp = \sqrt{n}\,\frac{\mu - \mu_\text{baseline}}{\sigma}.$ Cohen's $d := \frac{M - \mu_\text{baseline}}{SD}$ is the point estimate of $\frac{\mu - \mu_\text{baseline}}{\sigma}.$
So, $\tilde{d} = \frac{ncp}{\sqrt{n}}.$
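A sketch of the noncentrality-parameter approach for the single-group case (assuming the relationships above): find the ncp values whose noncentral-t tail probabilities at the observed t equal α/2 and 1 − α/2, then rescale by √n to get a confidence interval for d. Simple root-finding over a generous bracket is one way to invert the tail probability.

```python
# Approximate confidence interval for a one-sample Cohen's d via the noncentral t distribution.
from scipy.stats import nct
from scipy.optimize import brentq

def d_confidence_interval(t_obs: float, n: int, alpha: float = 0.05):
    df = n - 1
    # ncp at which the observed t sits at the required tail quantile.
    lo = brentq(lambda ncp: nct.sf(t_obs, df, ncp) - alpha / 2, -50, 50)
    hi = brentq(lambda ncp: nct.sf(t_obs, df, ncp) - (1 - alpha / 2), -50, 50)
    return lo / n ** 0.5, hi / n ** 0.5   # d = ncp / sqrt(n) for a single group

print(d_confidence_interval(t_obs=2.5, n=25))  # hypothetical observed t from n = 25
```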
t-test for mean difference between two independent groups
n1 and n2 are the respective sample sizes. The test statistic is
$t := \frac{\bar{x}_1 - \bar{x}_2}{SD_\text{within}\,\sqrt{1/n_1 + 1/n_2}},$ wherein $SD_\text{within} := \sqrt{\frac{SS_\text{within}}{n_1+n_2-2}},$
and the noncentrality parameter is $ncp = \sqrt{\frac{n_1 n_2}{n_1+n_2}}\,\frac{\mu_1 - \mu_2}{\sigma_\text{within}}.$ Cohen's $d := \frac{\bar{x}_1 - \bar{x}_2}{SD_\text{within}}$ is the point estimate of $\frac{\mu_1 - \mu_2}{\sigma_\text{within}}.$
So, $\tilde{d} = ncp\,\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}.$
One-way ANOVA test for mean difference across multiple independent groups
The one-way ANOVA test applies the noncentral F distribution, while with a given (known) population standard deviation σ the same test question applies the noncentral chi-squared distribution.
For the j-th observation $X_{i,j}$ within the i-th of the K groups, denote the group sample mean $m_i := \frac{1}{n_i}\sum_{j} X_{i,j}$ and the grand mean $m := \frac{1}{N}\sum_{i}\sum_{j} X_{i,j}$.
Under the alternative hypothesis, the between-group sum of squares divided by σ² follows a noncentral chi-squared distribution, and the ANOVA F statistic a noncentral F distribution.
So, the noncentrality parameters of both F and χ² equal $\lambda = \sum_{i=1}^{K} n_i\,\frac{(\mu_i - \bar{\mu})^2}{\sigma^2},$ where $\bar{\mu}$ is the grand population mean.
In the case of equal group sizes $n := n_1 = n_2 = \dots = n_K$ for K independent groups, the total sample size is N := n·K.
The t-test for a pair of independent groups is a special case of one-way ANOVA. Note that the noncentrality parameter $ncp_F$ of F is not comparable to the noncentrality parameter $ncp_t$ of the corresponding t. Actually, $ncp_F = ncp_t^2$, and $\tilde{d} = 2\tilde{f}$.
See also
- Estimation statistics
- Statistical significance
- Z-factor, an alternative measure of effect size
References
- ^ Kelley, Ken; Preacher, Kristopher J. (2012). "On Effect Size". Psychological Methods. 17 (2): 137–152. doi:10.1037/a0028086. PMID 22545595. S2CID 34152884.
- ^ Rosenthal, Robert, H. Cooper, and L. Hedges. "Parametric measures of effect size." The handbook of research synthesis 621 (1994): 231–244. ISBN 978-0871541635
- ^ Cohen, J. (2016). "A power primer". In A. E. Kazdin (ed.). Methodological issues and strategies in clinical research (4th ed.). American Psychological Association. pp. 279–284. doi:10.1037/14805-018. ISBN 978-1-4338-2091-5.
- ^ a b Wilkinson, Leland (1999). "Statistical methods in psychology journals: Guidelines and explanations". American Psychologist. 54 (8): 594–604. doi:10.1037/0003-066X.54.8.594. S2CID 428023.
- ^ Nakagawa, Shinichi; Cuthill, Innes C (2007). "Effect size, confidence interval and statistical significance: a practical guide for biologists". Biological Reviews of the Cambridge Philosophical Society. 82 (4): 591–605. doi:10.1111/j.1469-185X.2007.00027.x. PMID 17944619. S2CID 615371.
- ^ a b Ellis, Paul D. (2010). The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge University Press. ISBN 978-0-521-14246-5.[page needed]
- ^ Brand A, Bradley MT, Best LA, Stoica G (2008). "Accuracy of effect size estimates from published psychological research" (PDF). Perceptual and Motor Skills. 106 (2): 645–649. doi:10.2466/PMS.106.2.645-649. PMID 18556917. S2CID 14340449. Archived from the original (PDF) on 2008-12-17. Retrieved 2008-10-31.
- ^ Brand A, Bradley MT, Best LA, Stoica G (2011). "Multiple trials may yield exaggerated effect size estimates" (PDF). The Journal of General Psychology. 138 (1): 1–11. doi:10.1080/00221309.2010.520360. PMID 21404946. S2CID 932324. Archived from the original on July 13, 2011.
- ^ Sterne, Jonathan A. C.; Gavaghan, David; Egger, Matthias (2000-11-01). "Publication and related bias in meta-analysis: Power of statistical tests and prevalence in the literature". Journal of Clinical Epidemiology. 53 (11): 1119–1129. doi:10.1016/S0895-4356(00)00242-0. ISSN 0895-4356. PMID 11106885.
- ^ a b c d e f g h i j k l m n o p q Cohen, Jacob (1988). Statistical Power Analysis for the Behavioral Sciences. Routledge. ISBN 978-1-134-74270-7.
- ^ a b c d e f g h i j k l m n Sawilowsky, S (2009). "New effect size rules of thumb". Journal of Modern Applied Statistical Methods. 8 (2): 467–474. doi:10.22237/jmasm/1257035100. http://digitalcommons.wayne.edu/jmasm/vol8/iss2/26/
- ^ a b c d e f g h i j k Funder, D.C.; Ozer, D.J. (2019). "Evaluating effect size in psychological research: Sense and nonsense". Advances in Methods and Practices in Psychological Science. 2 (2): 156–168. doi:10.1177/2515245919847202.
- ^ Russell V. Lenth. "Java applets for power and sample size". Division of Mathematical Sciences, the College of Liberal Arts or The University of Iowa. Retrieved 2008-10-08.
- ^ Lipsey, M.W.; et al. (2012). Translating the Statistical Representation of the Effects of Education Interventions Into More Readily Interpretable Forms (PDF). United States: U.S. Dept of Education, National Center for Special Education Research, Institute of Education Sciences, NCSER 2013–3000.
- ^ Sawilowsky, S. S. (2005). "Abelson's paradox and the Michelson-Morley experiment". Journal of Modern Applied Statistical Methods. 4 (1): 352. doi:10.22237/jmasm/1114907520.
- ^ Sawilowsky, S.; Sawilowsky, J.; Grissom, R. J. (2010). "Effect Size". In Lovric, M. (ed.). International Encyclopedia of Statistical Science. Springer.
- ^ Sawilowsky, S. (2003). "Deconstructing Arguments from the Case Against Hypothesis Testing". Journal of Modern Applied Statistical Methods. 2 (2): 467–474. doi:10.22237/jmasm/1067645940.
- ^ a b Tabachnick, B.G. & Fidell, L.S. (2007). Chapter 4: "Cleaning up your act. Screening data prior to analysis", p. 55 In B.G. Tabachnick & L.S. Fidell (Eds.), Using Multivariate Statistics, Fifth Edition. Boston: Pearson Education, Inc. / Allyn and Bacon.
- ^ a b Olejnik, S.; Algina, J. (2003). "Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs" (PDF). Psychological Methods. 8 (4): 434–447. doi:10.1037/1082-989x.8.4.434. PMID 14664681. S2CID 6931663. Archived from the original (PDF) on 2010-06-10. Retrieved 2011-10-24.
- ^ a b c Steiger, J. H. (2004). "Beyond the F test: Effect size confidence intervals and tests of close fit in the analysis of variance and contrast analysis" (PDF). Psychological Methods. 9 (2): 164–182. doi:10.1037/1082-989x.9.2.164. PMID 15137887.
- ^ Hair, J.; Hult, T. M.; Ringle, C. M. and Sarstedt, M. (2014) A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), Sage, pp. 177–178. ISBN 1452217440
- ^ a b c d e f g Larry V. Hedges & Ingram Olkin (1985). Statistical Methods for Meta-Analysis. Orlando: Academic Press. ISBN 978-0-12-336380-0.
- ^ Andrade, Chittaranjan (22 September 2020). "Mean Difference, Standardized Mean Difference (SMD), and Their Use in Meta-Analysis". The Journal of Clinical Psychiatry. 81 (5). doi:10.4088/JCP.20f13681. eISSN 1555-2101. PMID 32965803. S2CID 221865130.
SMD values of 0.2-0.5 are considered small, values of 0.5-0.8 are considered medium, and values > 0.8 are considered large. In psychopharmacology studies that compare independent groups, SMDs that are statistically significant are almost always in the small to medium range. It is rare for large SMDs to be obtained.
- ^ Robert E. McGrath; Gregory J. Meyer (2006). "When Effect Sizes Disagree: The Case of r and d" (PDF). Psychological Methods. 11 (4): 386–401. CiteSeerX 10.1.1.503.754. doi:10.1037/1082-989x.11.4.386. PMID 17154753. Archived from the original (PDF) on 2013-10-08. Retrieved 2014-07-30.
- ^ Hartung, Joachim; Knapp, Guido; Sinha, Bimal K. (2008). Statistical Meta-Analysis with Applications. John Wiley & Sons. ISBN 978-1-118-21096-3.
- ^ Cohen 1988, p. 49.
- ^ Dunlap, William P.; Cortina, Jose M.; Vaslow, Joel B.; Burke, Michael J. (1996). "Meta-analysis of experiments with matched groups or repeated measures designs". Psychological Methods. 1 (2): 170–177. doi:10.1037/1082-989X.1.2.170. ISSN 1082-989X.
- ^ Kenny, David A. (1987). "Chapter 13" (PDF). Statistics for the Social and Behavioral Sciences. Little, Brown. ISBN 978-0-316-48915-7.
- ^ Larry V. Hedges (1981). "Distribution theory for Glass' estimator of effect size and related estimators". Journal of Educational Statistics. 6 (2): 107–128. doi:10.3102/10769986006002107. S2CID 121719955.
- ^ Hedges, L. V. (2011). Effect sizes in three-level cluster-randomized experiments. Journal of Educational and Behavioral Statistics, 36(3), 346-380.
- ^ Zhang, XHD (2007). "A pair of new statistical parameters for quality control in RNA interference high-throughput screening assays". Genomics. 89 (4): 552–61. doi:10.1016/j.ygeno.2006.12.014. PMID 17276655.
- ^ Del Giudice, Marco (2013-07-18). "Multivariate Misgivings: Is D a Valid Measure of Group and Sex Differences?". Evolutionary Psychology. 11 (5): 1067–1076. doi:10.1177/147470491301100511. PMC 10434404. PMID 24333840.
- ^ Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
- ^ Sheskin, David J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures (Third ed.). CRC Press. ISBN 978-1-4200-3626-8.
- ^ Deeks J (1998). "When can odds ratios mislead? : Odds ratios should be used only in case-control studies and logistic regression analyses". BMJ. 317 (7166): 1155–6. doi:10.1136/bmj.317.7166.1155a. PMC 1114127. PMID 9784470.
- ^ a b Stegenga, J. (2015). "Measuring Effectiveness". Studies in History and Philosophy of Biological and Biomedical Sciences. 54: 62–71. doi:10.1016/j.shpsc.2015.06.003. PMID 26199055.
- ^ a b McGraw KO, Wong SP (1992). "A common language effect size statistic". Psychological Bulletin. 111 (2): 361–365. doi:10.1037/0033-2909.111.2.361.
- ^ Cliff, Norman (1993). "Dominance statistics: Ordinal analyses to answer ordinal questions". Psychological Bulletin. 114 (3): 494–509. doi:10.1037/0033-2909.114.3.494.
Further reading
- Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
- Bonett, D. G. (2008). "Confidence intervals for standardized linear contrasts of means". Psychological Methods. 13 (2): 99–109. doi:10.1037/1082-989x.13.2.99. PMID 18557680.
- Bonett, D. G. (2009). "Estimating standardized linear contrasts of means with desired precision". Psychological Methods. 14 (1): 1–5. doi:10.1037/a0014270. PMID 19271844.
- Brooks, M.E.; Dalal, D.K.; Nolan, K.P. (2013). "Are common language effect sizes easier to understand than traditional effect sizes?". Journal of Applied Psychology. 99 (2): 332–340. doi:10.1037/a0034745. PMID 24188393.
- Cumming, G.; Finch, S. (2001). "A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions". Educational and Psychological Measurement. 61 (4): 530–572. doi:10.1177/0013164401614002. S2CID 120672914.
- Kelley, K (2007). "Confidence intervals for standardized effect sizes: Theory, application, and implementation". Journal of Statistical Software. 20 (8): 1–24. doi:10.18637/jss.v020.i08.
- Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Sage: Thousand Oaks, CA.
External links
Further explanations
- Effect Size (ES)
- EffectSizeFAQ.com
- EstimationStats.com Web app for generating effect-size plots.
- Measuring Effect Size
- Computing and Interpreting Effect size Measures with ViSta Archived 2014-12-27 at the Wayback Machine
- effsize package for the R Project for Statistical Computing
Effect size
Introduction
Definition and Purpose
Effect size is a quantitative measure that assesses the magnitude of a phenomenon, such as the strength of a relationship between variables or the size of a difference between groups, in a way that is independent of sample size.[5] Specifically, it represents the degree to which the null hypothesis is false, indexed by the discrepancy between the null and an alternative hypothesis, and is typically scale-free and continuous, ranging from zero upward.[5] This focus on magnitude allows researchers to evaluate the substantive importance of findings beyond whether they meet a threshold for statistical significance.[3]

The primary purpose of effect size is to distinguish practical significance—the real-world relevance of an effect—from statistical significance, which only indicates the unlikelihood that an observed result occurred by chance.[3] It enables the comparison of results across different studies by providing a standardized metric, facilitating meta-analyses that synthesize evidence from multiple sources.[3] In fields such as psychology, medicine, and social sciences, effect sizes support evidence-based decision-making by quantifying the potential impact of interventions or associations, helping practitioners determine if findings warrant application in practice.[6]

A basic example of an effect size for mean differences takes the general form $d = \frac{\mu_1 - \mu_2}{\sigma}$, where $\mu_1$ and $\mu_2$ are the population means of two groups and $\sigma$ is the population standard deviation; this illustrates how effect size normalizes differences relative to variability.[5]

The concept originated in the 1960s–1970s amid growing concerns over overreliance on p-values in hypothesis testing, with Jacob Cohen leading the development through his 1962 review of power in psychological research and his 1969 book on statistical power analysis.[7][6]

Historical Development
The origins of effect size concepts trace back to late 19th-century statistical innovations, where Karl Pearson developed the correlation coefficient as a measure of linear association between variables, formalized in his 1896 publication. This coefficient, ranging from -1 to 1, offered an early quantitative indicator of relationship strength beyond mere significance, influencing subsequent measures of association.[8] Building on this, Ronald Fisher in the 1920s introduced analysis of variance techniques, emphasizing the partitioning of total variance into components attributable to experimental factors, which provided a foundation for assessing practical magnitude in group comparisons.[9] However, these early contributions focused primarily on descriptive and inferential statistics rather than standardized indices of effect magnitude, with formal emphasis on effect sizes emerging in the behavioral sciences during the 1960s.[10]

A pivotal advancement occurred in 1969 with Jacob Cohen's seminal book, Statistical Power Analysis for the Behavioral Sciences, which introduced practical guidelines for interpreting effect sizes and coined conventional benchmarks such as "small," "medium," and "large" effects to aid researchers in evaluating substantive importance alongside statistical significance. Cohen's standardized mean difference (d) became a cornerstone for mean-comparison studies, promoting power analysis to detect meaningful effects and critiquing overreliance on p-values.[11] Following Cohen, the 1980s and 1990s saw expansions addressing methodological limitations, notably Larry V. Hedges' 1981 development of a bias-corrected estimator (g) to mitigate small-sample inflation in standardized mean differences, enhancing accuracy in meta-analytic syntheses.[12] These refinements, detailed in Hedges' work on distribution theory for effect size estimators, facilitated more robust aggregation across studies.[13]

In the 2000s, effect size reporting gained institutional momentum through the American Psychological Association's (APA) 1999 Task Force on Statistical Inference guidelines, which mandated inclusion of effect size estimates in publications to complement null hypothesis significance testing and improve comparability.[14] This shift reflected broader calls for transparent, cumulative science. As of 2025, effect sizes have increasingly been integrated into open science practices, emphasizing preregistration and replication to contextualize magnitudes, while Bayesian approaches offer probabilistic effect size estimation, prioritizing context-specific benchmarks over universal conventions to address variability in fields like psychology and education.[15][16]

Core Concepts
Population and Sample Effect Sizes
In statistics, the population effect size refers to a fixed but unknown parameter that quantifies the magnitude of a phenomenon or relationship in the entire target population. For instance, in the context of correlation, this parameter is denoted by ρ (the Greek letter rho), representing the true linear association between two variables across all members of the population. Similarly, for standardized mean differences, the population parameter δ captures the true difference between population means expressed in standard deviation units.[6] These parameters are theoretical ideals, estimated from data but not directly observable, and they provide a benchmark for assessing the substantive importance of effects independent of sampling considerations.[17]

The sample effect size, in contrast, is a data-derived estimator of the population parameter, subject to sampling variability and potential bias. For correlations, the sample estimator r is computed directly from the observed data pairs, serving as an unbiased estimate of ρ under normal distribution assumptions, though its sampling distribution is skewed for small samples. For mean differences, the common sample estimator is Cohen's d, defined as the difference between group sample means divided by the pooled standard deviation: $d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},$ where $\bar{X}_1$ and $\bar{X}_2$ are the sample means of the two groups, and $s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}$, with $n_1$, $n_2$ as sample sizes and $s_1$, $s_2$ as group standard deviations.[6] This estimator approximates the population δ but tends to overestimate it in small samples due to positive bias, particularly when the true effect is moderate to large.[17]

To address this bias, Hedges' g applies a correction factor to Cohen's d, yielding an unbiased estimator suitable for small samples. The formula is $g \approx \left(1 - \frac{3}{4N - 9}\right) d$, where N is the total sample size; this multiplier, often denoted J, approaches 1 as N increases and reduces the upward bias by a few percentage points for N around 20.[17] Hedges derived this correction through distributional analysis, ensuring g more accurately estimates δ in meta-analytic contexts.[17]

The sampling distribution of these estimators influences their reliability, with variance decreasing as sample size grows. For Cohen's d under equal group sizes n, the approximate variance is $\operatorname{Var}(d) \approx \frac{2}{n} + \frac{d^2}{4n}$, though a common practical approximation ignores the true δ and uses $\frac{2}{n}$ alone. The standard error, the square root of this variance, quantifies the precision of the estimate and is essential for constructing confidence intervals or weighting in meta-analyses. These properties highlight that while sample effect sizes provide practical insights, their variability underscores the need for larger samples to achieve stable estimates of population parameters.[6]

Standardized versus Unstandardized Measures
Unstandardized measures of effect size, often referred to as raw or simple effect sizes, quantify the magnitude of an effect using the original units of the variables involved. For instance, a mean difference in height between two groups might be expressed as 5 centimeters, providing a direct and contextually meaningful interpretation within the study's domain. These measures retain the substantive scale of the data, making them particularly useful for practical applications where the original units hold inherent significance, such as clinical or educational settings.[18]

In contrast, standardized effect sizes transform the raw effect by dividing it by a measure of variability, typically the standard deviation, to yield a unitless index that facilitates comparison across diverse studies and measurement scales. This scaling produces metrics akin to z-scores, allowing researchers to gauge the effect's size relative to the variability in the data. The general standardization process can be represented as: $\text{standardized effect} = \frac{\text{raw effect}}{\sigma},$ where the raw effect is the unstandardized difference or association and $\sigma$ denotes the standard deviation or an equivalent variability metric.[19][18]

Researchers select unstandardized measures when domain-specific interpretability is paramount, such as reporting treatment effects in familiar units like points on a psychological scale, whereas standardized measures are favored for meta-analytic syntheses or cross-study comparisons where units differ. Unstandardized approaches offer robustness against distortions from factors like measurement reliability or range restriction and support versatile reporting with confidence intervals, but they hinder direct comparability across heterogeneous datasets. Standardized measures promote universality in evaluating effect magnitude, yet they may assume distributional normality and require corrections for biases introduced by study design or imperfect reliability to maintain accuracy.[18][19]

Relationship to Statistical Power and Test Statistics
Effect sizes play a central role in hypothesis testing by informing the noncentrality parameter of the sampling distribution under the alternative hypothesis. In the context of t-tests, for a two-sample t-test with equal group sizes n (total N = 2n), the noncentrality parameter is given by $\lambda = \delta\sqrt{n/2}$, where $\delta$ is the standardized effect size (such as Cohen's $d$). This parameter quantifies the shift in the distribution of the test statistic away from the null hypothesis, directly influencing the probability of detecting a true effect.[20]

Statistical power, defined as $1 - \beta$ where $\beta$ is the probability of a Type II error, depends on the effect size, the significance level $\alpha$, and the sample size. For a two-sided two-sample t-test, power increases as the effect size grows or as the sample size enlarges, since a larger $\delta$ or $n$ amplifies the noncentrality parameter and shifts the distribution toward values exceeding the critical threshold. To determine the required sample size for achieving a desired power, the approximate formula for the sample size per group is $n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^2}{\delta^2}$, where $z$ denotes the standard normal quantile (total N = 2n); this derivation assumes large samples and equal group sizes.[21]

The p-value from a significance test is independent of effect size magnitude in isolation, as it primarily reflects sample size and variability rather than practical importance. A small effect size may fail to produce a significant p-value (e.g., p > 0.05) when the sample size is modest, limiting detection; however, with sufficiently large samples, even trivial effects can yield statistical significance, potentially misleading interpretations without effect size consideration. In contrast, a large effect size reliably produces low p-values and high power across a range of sample sizes, ensuring the effect is detectable if present. This underscores the limitations of relying solely on p-values, as they do not convey the substantive scale of the phenomenon.[3]

In practice, effect sizes enable a priori power analysis to design studies with adequate resources, specifying a minimum detectable $\delta$, desired power (often 0.80), and $\alpha$ (typically 0.05) to compute the necessary $n$. Post-hoc power analysis, applied after data collection, uses observed effect sizes to evaluate whether the study was sufficiently powered to detect the estimated effect, aiding interpretation of non-significant results without assuming they prove the null hypothesis. These applications promote more robust research planning and reduce the risks of underpowered studies.

Interpretation Guidelines
Magnitude Benchmarks
In interpreting effect sizes, Jacob Cohen proposed conventional benchmarks to classify the magnitude of effects as small, medium, or large, providing a rule-of-thumb framework primarily for behavioral and social sciences. For Cohen's d (standardized mean difference), these are d = 0.2 (small), 0.5 (medium), and 0.8 (large); for the correlation coefficient r, they are r = 0.1 (small), 0.3 (medium), and 0.5 (large); and for Cohen's f2 (used in ANOVA and regression), they are f2 = 0.02 (small), 0.15 (medium), and 0.35 (large). These guidelines aim to offer intuitive anchors but are not absolute thresholds, as Cohen himself emphasized their arbitrary nature and context-dependence.[22]

| Effect Size Measure | Small | Medium | Large |
|---|---|---|---|
| Cohen's d | 0.2 | 0.5 | 0.8 |
| Correlation r | 0.1 | 0.3 | 0.5 |
| Cohen's f2 | 0.02 | 0.15 | 0.35 |
Contextual Factors Influencing Interpretation
The interpretation of effect sizes is profoundly shaped by the design and characteristics of the study itself, particularly the composition of the sample and the precision of measurements employed. Sample heterogeneity, which refers to variability in participant characteristics such as demographics, baseline traits, or environmental factors, can introduce unmodeled moderators that alter observed effect sizes, often leading to underestimation of the true effect in aggregate analyses if subgroup differences are not accounted for. For instance, in studies where participants come from diverse subpopulations with differing responses to an intervention, the pooled effect size may appear diluted due to increased within-group variance, complicating direct comparisons across studies. Similarly, measurement precision directly impacts the reliability of effect size estimates; random measurement error attenuates observed effects by inflating the denominator in standardized metrics like Cohen's d, thereby reducing the apparent magnitude and potentially masking meaningful relationships. High-precision instruments or validated scales mitigate this attenuation, enhancing the trustworthiness of interpretations. Disciplinary norms further contextualize what constitutes a "large" or "small" effect size, as conventions vary based on the typical variability and measurement accuracy inherent to each field. In psychology and behavioral sciences, Jacob Cohen's benchmarks classify a Cohen's d of 0.2 as small, 0.5 as medium, and 0.8 as large, reflecting the inherent noisiness of human behavior and subjective measures. In contrast, fields like medicine often yield larger effect sizes due to more precise instrumentation and controlled conditions that minimize extraneous variance, making even moderate psychological effects appear modest by comparison. Clinical contexts, such as medical interventions, may prioritize even smaller effects (e.g., d < 0.2) as meaningful when aligned with disease prevalence or patient outcomes, whereas experimental settings in laboratory sciences demand larger magnitudes to demonstrate robustness. Beyond numerical benchmarks, practical significance evaluates whether an effect size translates to real-world utility, often through cost-benefit analyses that weigh scalability, implementation feasibility, and potential impact. A small effect size in a drug trial, such as a hazard ratio reduction of 10-20% (d ≈ 0.1-0.2), may justify widespread adoption if the intervention is low-cost, has minimal side effects, and affects a large population, as seen in preventive therapies for chronic conditions where aggregate benefits outweigh marginal per-individual gains. Conversely, the same magnitude in a high-stakes, resource-intensive context like surgical innovations might be deemed negligible, highlighting how economic and logistical factors reframe interpretive value. Cultural and ethical considerations add layers of nuance, urging caution against overgeneralizing effect sizes from homogeneous or Western, Educated, Industrialized, Rich, and Democratic (WEIRD) samples to diverse global populations. Cross-cultural research reveals that effect sizes for psychological constructs can vary due to differing cultural norms, emphasizing the need for culturally sensitive interpretations to avoid implying inherent superiority or inferiority. 
Ethically, overgeneralization risks perpetuating biases, such as applying intervention effects validated in one cultural context to underrepresented groups without validation, potentially leading to ineffective policies or stigmatization; researchers must prioritize inclusive sampling and subgroup analyses to ensure equitable application.

Effect Size Families
Variance-Explained Measures
Variance-explained measures quantify the proportion of variability in one or more outcome variables that can be attributed to one or more predictor variables, providing insight into the practical significance of relationships in statistical analyses. These metrics are particularly useful in contexts involving continuous variables, where the focus is on the overlap in variance rather than differences in central tendency. Common examples include correlation-based indices and those derived from regression or analysis of variance (ANOVA) models, which express effect sizes as ratios of explained to unexplained variance.

Pearson's correlation coefficient, $r$, assesses the strength and direction of the linear association between two continuous variables.[23] It is computed as $r = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y},$ where $\operatorname{cov}(X, Y)$ denotes the covariance between variables $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ are their standard deviations.[11] The value of $r$ ranges from -1 (perfect negative linear relationship) to +1 (perfect positive linear relationship), with 0 indicating no linear association.[11] The squared correlation, $r^2$, represents the proportion of variance in one variable explained by the other, serving as a direct measure of shared variance.[11] Interpretation guidelines proposed by Cohen classify |r| = 0.10 as small (explaining 1% of variance), |r| = 0.30 as medium (9% of variance), and |r| = 0.50 as large (25% of variance).[11]

In multiple regression and ANOVA contexts, Cohen's $f^2$ extends this approach to evaluate the effect size for a set of predictors or factors.[11] Defined as $f^2 = \frac{R^2}{1 - R^2},$ where $R^2$ is the coefficient of determination, it quantifies the ratio of explained variance to unexplained variance, highlighting the incremental contribution of predictors beyond a null model.[11] This measure is especially valuable for assessing overall model fit or the impact of specific terms in complex designs. Cohen's benchmarks are $f^2 = 0.02$ for small effects, $0.15$ for medium effects, and $0.35$ for large effects.[11]

For ANOVA specifically, eta squared ($\eta^2$) measures the proportion of total variance in the dependent variable attributable to group differences.[11] It is calculated as $\eta^2 = \frac{SS_\text{between}}{SS_\text{total}},$ where $SS_\text{between}$ is the sum of squares between groups and $SS_\text{total}$ is the total sum of squares.[11] The partial eta squared ($\eta_p^2$) adjusts for other factors in the model by using $\eta_p^2 = \frac{SS_\text{effect}}{SS_\text{effect} + SS_\text{error}},$ providing a more precise estimate of an individual factor's unique contribution while controlling for covariates or other terms.[11] Cohen recommended thresholds of 0.01 for small effects, 0.06 for medium effects, and 0.14 for large effects, applicable to both general and partial forms.[11]

When comparing the strengths of two independent correlations, Cohen's $q$ serves as an effect size for their difference.[11] The formula is $q = z_1 - z_2$, where $z_i = \frac{1}{2}\ln\frac{1+r_i}{1-r_i}$ is the Fisher transform of each correlation, placing the two correlations on a common standardized scale before differencing, analogous to Cohen's $d$ in mean comparisons.[11] This metric facilitates tests of whether one association is meaningfully stronger than another, with interpretation scaled similarly to $d$ (e.g., |q| ≈ 0.2 small, 0.5 medium, 0.8 large).[11]

Conversions within and across effect size families enhance comparability; for instance, Pearson's $r$ can be related to Cohen's $d$ (a mean-difference measure) using approximations derived from Fisher's z-transformation for stabilizing the sampling distribution of correlations.[11] Fisher's z is given by $z = \frac{1}{2}\ln\frac{1+r}{1-r}$, allowing correlations to be averaged in meta-analyses before back-transformation, and conversion to $d$ for certain contexts like dichotomous predictors.
These transformations underscore the interconnectedness of variance-explained metrics with other effect size families, though benchmarks remain context-specific to the variance-overlap paradigm.[11]
Mean-Difference Measures
Mean-difference measures quantify the magnitude of the difference between the means of two groups, typically standardized by a measure of variability to facilitate interpretation and comparison across studies with different scales or units. These measures are particularly useful in experimental designs comparing treatment and control groups, providing a scale-free indicator of practical significance independent of sample size. Unlike unstandardized differences, standardized versions allow researchers to gauge whether the observed mean separation is small, medium, or large relative to the variability in the data.

The most widely adopted mean-difference effect size is Cohen's d, which standardizes the mean difference using the pooled standard deviation under the assumption of equal population variances: d = (x̄₁ - x̄₂) / s_pooled. Here, x̄₁ and x̄₂ are the sample means of the two groups, and s_pooled is calculated as s_pooled = √{[(n₁ - 1)s₁² + (n₂ - 1)s₂²] / (n₁ + n₂ - 2)}, where n₁ and n₂ are the sample sizes, and s₁ and s₂ are the standard deviations of the respective groups. Cohen introduced this measure to emphasize effect magnitude in behavioral sciences research, and it assumes homogeneity of variances for valid pooling.

When variances are unequal, Glass' Δ provides an alternative by standardizing the mean difference using only the control group's standard deviation: Δ = (x̄₁ - x̄₂) / s_control. This avoids assumptions about variance equality and focuses on changes relative to baseline variability; it is recommended in meta-analyses of interventions where treatment may alter variability, as proposed by Glass in his foundational work on integrating primary, secondary, and meta-analytic research.

To address the positive bias of Cohen's d in small samples, Hedges' g applies a correction factor, yielding an approximately unbiased estimator that is particularly valuable when total sample size is low (e.g., N < 50): g ≈ d × [1 - 3 / (4·df - 1)], where df = n₁ + n₂ - 2. This bias correction, derived from the sampling distribution of d, improves accuracy in meta-analyses by reducing overestimation of the population effect size.[17]

Other variants account for unequal variances without pooling. The strictly standardized mean difference (SSMD) uses the square root of the sum of the two groups' variances in the denominator, providing a robust measure for comparing group separation in contexts like biopharmaceutical quality control. The corresponding population parameter, β = (μ₁ - μ₂) / √(σ₁² + σ₂²), expresses the standardized difference at the population level. SSMD and β are applied in high-throughput screening and statistical comparisons where variance heterogeneity is expected, offering interpretability without normality assumptions beyond the central limit theorem.[24]

The sampling distribution of these standardized mean differences, such as Cohen's d, follows a noncentral t distribution, with the noncentrality parameter reflecting the population effect size scaled by the sample sizes; this property enables power calculations and confidence interval estimation.[17] For interpretation, Cohen proposed benchmarks where |d| ≈ 0.2 indicates a small effect, 0.5 a medium effect, and 0.8 a large effect, though these are context-dependent guidelines rather than universal thresholds. Similar conventions apply to Δ, g, SSMD, and β, emphasizing relative rather than absolute magnitude.
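A minimal Python sketch, assuming NumPy is available and using hypothetical treatment and control arrays, shows how Cohen's d, Hedges' g, and Glass' Δ are obtained from raw group data:

import numpy as np

# Hypothetical scores for two independent groups
treatment = np.array([5.1, 6.3, 5.8, 7.0, 6.1, 5.5, 6.8])
control = np.array([4.2, 5.0, 4.8, 5.6, 4.4, 5.2, 4.9])

n1, n2 = len(treatment), len(control)
s1, s2 = treatment.std(ddof=1), control.std(ddof=1)

# Pooled standard deviation (assumes roughly equal population variances)
s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

d = (treatment.mean() - control.mean()) / s_pooled   # Cohen's d
df = n1 + n2 - 2
g = d * (1 - 3 / (4 * df - 1))                        # Hedges' g (small-sample correction)
delta = (treatment.mean() - control.mean()) / s2      # Glass' Delta (control-group SD only)

print(f"d = {d:.3f}, g = {g:.3f}, Delta = {delta:.3f}")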
Association Measures for Categorical Variables
Association measures quantify the strength and direction of relationships between categorical variables, such as binary outcomes in contingency tables, and are essential for interpreting dependencies beyond mere statistical significance. These measures are particularly prominent in fields like epidemiology and the social sciences, where they help assess risks, odds, or proportional associations in discrete data. Unlike variance-explained metrics that parallel correlation coefficients, association measures for categorical data focus on risk-based or chi-square-derived indices tailored to nominal or ordinal structures.[25]

The odds ratio (OR) is a widely used measure for binary categorical variables, calculated from a 2×2 contingency table as OR = (a/b) / (c/d), where a and b are the counts in the exposed row and c and d the counts in the unexposed row, or equivalently OR = ad/bc. It represents the multiplicative change in the odds of an outcome given exposure, with OR > 1 indicating increased odds and log(OR) providing a symmetric scale for analysis. In meta-analyses, the OR serves as an effect size when converted via ln(OR)/1.81 to align with standardized mean-difference metrics, emphasizing its role in summarizing dichotomous associations.[26][1][27]

Relative risk (RR), also known as the risk ratio, compares the probability of an outcome in exposed versus unexposed groups, computed as RR = [a/(a+b)] / [c/(c+d)] from the same 2×2 table. An RR > 1 signifies elevated risk in the exposed group, making it well suited to prospective studies where incidence rates are directly observable; for instance, RR = 2 implies the exposed group is twice as likely to experience the outcome. This measure is favored in epidemiology for its intuitive interpretation of relative probabilities, and it approximates the OR when the outcome is rare.[28][25][29]

The risk difference (RD), or absolute risk reduction, captures the absolute change in probability, defined as RD = p₁ - p₂, where p₁ and p₂ are the proportions with the outcome in the two groups. Unlike relative measures, RD highlights the net probability shift, such as a value of 0.10 indicating a 10-percentage-point absolute increase in risk, which is crucial for public health decisions about intervention impacts. It is particularly useful when baseline risks vary, providing a direct gauge of effect magnitude without multiplicative assumptions.[25][30]

For comparing two independent proportions, Cohen's h standardizes the difference on an arcsine scale: h = 2 arcsin(√p₁) - 2 arcsin(√p₂), where p₁ and p₂ are the proportions; this transformation stabilizes the variance across proportion levels. Cohen proposed benchmarks of |h| ≈ 0.2 for small effects, 0.5 for medium, and 0.8 for large, facilitating power analyses and cross-study comparisons in behavioral research.[31][32]

The phi coefficient (φ) measures association strength in 2×2 contingency tables as φ = √(χ² / N), where χ² is the chi-square statistic and N the total sample size; its signed form, φ = (ad - bc) / √[(a+b)(c+d)(a+c)(b+d)], yields values from -1 to 1 akin to a correlation. For larger nominal tables, Cohen's w generalizes this as w = √(χ² / N), with guidelines of w = 0.1 (small), 0.3 (medium), and 0.5 (large) for interpreting nominal dependencies. These indices derive directly from chi-square tests, emphasizing proportional deviation from independence in categorical data.[33][32]

Ordinal extensions like Somers' d and gamma address ranked categorical data by accounting for order. Gamma assesses symmetric association as γ = (C - D) / (C + D), where C and D are the numbers of concordant and discordant pairs and tied pairs are excluded; it ranges from -1 to 1 and is suitable for ordinal scales.
Somers' d, an asymmetric variant, treats one variable as dependent: d = (C - D) divided by the number of pairs not tied on the independent variable, measuring the improvement in predicting the ordinal outcome. Both measures are nonparametric, with values near ±1 indicating strong monotonic relationships.[34][35]
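As a brief illustration, assuming NumPy and hypothetical cell counts a, b, c, d for a 2×2 table, the Python sketch below computes the OR, RR, RD, Cohen's h, and the signed phi coefficient:

import numpy as np

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = event/no event
a, b = 30, 70   # exposed: event, no event
c, d = 15, 85   # unexposed: event, no event

odds_ratio = (a * d) / (b * c)             # OR = ad/bc
p1, p2 = a / (a + b), c / (c + d)          # risk of the event in each group
relative_risk = p1 / p2                    # RR
risk_difference = p1 - p2                  # RD
h = 2 * np.arcsin(np.sqrt(p1)) - 2 * np.arcsin(np.sqrt(p2))   # Cohen's h

# Signed phi coefficient from the cell counts
phi = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}, RD = {risk_difference:.2f}")
print(f"h = {h:.2f}, phi = {phi:.2f}")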
Advanced Applications
Confidence Intervals via Noncentrality Parameters
Confidence intervals for effect sizes, such as Cohen's d, can be constructed using noncentral distributions, which account for the sampling variability of the effect size estimator under the assumption of a true nonzero effect in the population. This approach, introduced by Hedges, leverages the noncentral t-distribution to derive intervals that are more accurate than those based on the central t-distribution, particularly for small samples, because the noncentral distribution is asymmetric and centered away from zero.[13] The noncentrality parameter, denoted λ, quantifies the shift in the distribution caused by the population effect size δ (where δ corresponds to the population value of d), and its value determines the width and location of the confidence interval.

For the independent two-group t-test, the noncentrality parameter is λ = δ √(n₁ n₂ / (n₁ + n₂)), where n₁ and n₂ are the sample sizes in each group; when the group sizes are equal (n₁ = n₂ = n), this simplifies to λ = δ √(n / 2).[36] The observed t-statistic follows a noncentral t-distribution with degrees of freedom ν = n₁ + n₂ - 2 and noncentrality λ. To obtain a 95% confidence interval for δ, solve for a lower bound λ_L and an upper bound λ_U such that the observed t lies in the appropriate 0.025 tail of each corresponding noncentral distribution:

Lower bound: δ_L = λ_L / √(n₁ n₂ / (n₁ + n₂)),
Upper bound: δ_U = λ_U / √(n₁ n₂ / (n₁ + n₂)),

where λ_L and λ_U are found by inverting the noncentral t quantile function (e.g., using numerical methods to satisfy P(T ≥ t | λ_L, ν) = 0.025 and P(T ≤ t | λ_U, ν) = 0.025 for the observed t). This inversion ensures the interval captures the range of population effect sizes consistent with the data.

For the one-sample t-test or the paired (two related groups) t-test, the procedure is analogous, with the noncentral t-distribution parameterized by ν = n - 1 degrees of freedom (where n is the sample size) and λ = δ √n in the one-sample case, or adjusted for the correlation in paired designs as λ = δ √(n / (2(1 - r))), where r is the pre-post correlation.[36] The confidence interval bounds for δ are then δ_L = λ_L / √n (one-sample), or scaled analogously by the denominator involving r for paired tests, obtained via the same noncentral quantile inversion applied to the observed t-statistic.[37] In general, the approach first converts the test statistic to an estimate of the effect size, then uses the relationship between the effect size and λ to invert the noncentral distribution's quantiles and solve for the interval bounds numerically. When effect sizes are estimated with Hedges' g (an unbiased correction to d for small samples), the confidence interval can be derived in the same way and scaled by the bias-correction factor.[13] Software implementations facilitate these calculations; for example, in R, the ci.smd function from the MBESS package computes noncentral t-based confidence intervals for standardized mean differences in both independent and dependent designs.
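A minimal Python sketch of this inversion, assuming SciPy is available and using hypothetical sample sizes and an observed t-statistic, finds λ_L and λ_U by root-finding on the noncentral t cumulative distribution function and rescales them to bounds on δ:

import numpy as np
from scipy import stats, optimize

# Hypothetical two-group design and observed t-statistic
n1, n2 = 25, 25
t_obs = 2.40
df = n1 + n2 - 2
scale = np.sqrt(n1 * n2 / (n1 + n2))   # lambda = delta * scale

# Lower bound: lambda_L such that P(T >= t_obs | lambda_L) = 0.025,
# i.e. the noncentral t CDF at t_obs equals 0.975.
lam_lo = optimize.brentq(lambda lam: stats.nct.cdf(t_obs, df, lam) - 0.975, -10, 10)
# Upper bound: lambda_U such that P(T <= t_obs | lambda_U) = 0.025.
lam_hi = optimize.brentq(lambda lam: stats.nct.cdf(t_obs, df, lam) - 0.025, -10, 10)

delta_lo, delta_hi = lam_lo / scale, lam_hi / scale
d_hat = t_obs / scale                  # point estimate of the standardized mean difference
print(f"d = {d_hat:.3f}, 95% CI for delta: ({delta_lo:.3f}, {delta_hi:.3f})")

The same inversion logic underlies noncentral t-based routines such as ci.smd mentioned above; only the point estimate and any bias correction differ across implementations.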
