Observational error
Observational error (or measurement error) is the difference between a measured value of a quantity and its unknown true value.[1] Such errors are inherent in the measurement process; for example, lengths measured with a ruler calibrated in whole centimeters will have a measurement error of several millimeters. The error or uncertainty of a measurement can be estimated, and is specified with the measurement as, for example, 32.3 ± 0.5 cm.
Scientific observations are marred by two distinct types of errors: systematic errors on the one hand, and random errors on the other. The effects of random errors can be mitigated by repeated measurements. Constant or systematic errors, on the contrary, must be carefully avoided, because they arise from one or more causes which constantly act in the same way and have the effect of always altering the result of the experiment in the same direction. They therefore alter the observed value, and repeated identical measurements do not reduce such errors.[2]
Measurement errors can be summarized in terms of accuracy and precision. For example, length measurements with a ruler accurately calibrated in whole centimeters will be subject to random error, with each use on the same distance giving a slightly different value, resulting in limited precision; a metallic ruler whose temperature is not controlled will be affected by thermal expansion, causing an additional systematic error and resulting in limited accuracy.[3]
Science and experiments
When either randomness or uncertainty modeled by probability theory is attributed to such errors, they are "errors" in the sense in which that term is used in statistics; see errors and residuals in statistics.

Every time a measurement is repeated, slightly different results are obtained. The common statistical model used is that the error has two additive parts:[4]
- Random error which may vary from one observation to another.
- Systematic error which always occurs, with the same value, when we use the instrument in the same way and in the same case.
Some errors are not clearly random or systematic such as the uncertainty in the calibration of an instrument.[4]
Random errors or statistical errors in measurement lead to measurable values being inconsistent when repeated measurements of a constant attribute or quantity are taken. Random errors create measurement uncertainty. These errors are uncorrelated between measurements. Repeated measurements will fall in a pattern, and in a large set of such measurements a standard deviation can be calculated as an estimate of the amount of statistical error.[4]: 147
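For illustration, a minimal Python sketch (with made-up readings) estimates the random error of a set of repeated measurements as their sample standard deviation:

```python
import statistics

# Hypothetical repeated readings (cm) of the same length; the scatter
# reflects random error only.
readings = [32.1, 32.4, 32.3, 32.5, 32.2, 32.4, 32.3, 32.2]

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)  # sample standard deviation (n - 1 denominator)

print(f"mean = {mean:.2f} cm, estimated random error (1 sigma) = {sigma:.2f} cm")
```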
Systematic errors are errors that are not determined by chance but are introduced by repeatable processes inherent to the system.[5] Sources of systematic errors include errors in equipment calibration, uncertainty in correction terms applied during experimental analysis, and errors due to the use of approximate theoretical models.[4]: supl Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures.
Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error. Over a long period of time, systematic errors in science can be resolved and become a form of "negative knowledge": scientists build up an understanding of how to avoid specific kinds of systematic errors.[6]
Propagation of errors
When two or more observations or two or more instruments are combined, the errors in each combine. Estimates of the error in the result of such combinations depend upon the statistical characteristics of each individual measurement and on the possible statistical correlation between them.[7]: 92
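As a sketch of how such combinations behave, the snippet below (with hypothetical uncertainties) adds the variances of two independent measurement errors in quadrature and shows how a correlation between them contributes an extra cross term:

```python
import math

def combined_uncertainty(u1: float, u2: float, r: float = 0.0) -> float:
    """Standard uncertainty of the sum of two measurements.

    For uncorrelated errors the variances simply add; a non-zero
    correlation coefficient r contributes the cross term 2*r*u1*u2.
    """
    return math.sqrt(u1 ** 2 + u2 ** 2 + 2.0 * r * u1 * u2)

# Hypothetical uncertainties of two length measurements, in millimetres.
print(combined_uncertainty(0.5, 0.3))       # independent errors: ~0.58
print(combined_uncertainty(0.5, 0.3, 0.8))  # positively correlated errors: ~0.76
```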
Characterization
Measurement errors can be divided into two components: random error and systematic error.[2]
Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter's interpretation of the instrumental reading. Additionally, these fluctuations may be in part due to interference of the environment with the measurement process. Random errors show up as different results for ostensibly the same repeated measurement. They can be estimated by comparing multiple measurements and reduced by averaging multiple measurements. The concept of random error is closely related to the concept of precision. The higher the precision of a measurement instrument, the smaller the variability (standard deviation) of the fluctuations in its readings.
Systematic error is predictable and typically constant or proportional to the true value. If the cause of the systematic error can be identified, then it usually can be eliminated. Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods of observation, or interference of the environment with the measurement process, and always affect the results of an experiment in a predictable direction. Incorrect zeroing of an instrument is an example of systematic error in instrumentation.
The Performance Test Standard PTC 19.1-2005 "Test Uncertainty", published by the American Society of Mechanical Engineers (ASME), discusses systematic and random errors in considerable detail. In fact, it conceptualizes its basic uncertainty categories in these terms.
Sources
Sources of systematic error
Imperfect calibration
Sources of systematic error may be imperfect calibration of measurement instruments (zero error), changes in the environment which interfere with the measurement process, and sometimes imperfect methods of observation; the resulting error can be either a zero error or a percentage error. Consider an experimenter taking a reading of the time period of a pendulum swinging past a fiducial marker: if their stop-watch or timer starts with 1 second on the clock, then all of their results will be off by 1 second (zero error). If the experimenter repeats this experiment twenty times (starting at 1 second each time), then there will be a percentage error in the calculated average of their results; the final result will be slightly larger than the true period.
Distance measured by radar will be systematically overestimated if the slight slowing down of the waves in air is not accounted for. Incorrect zeroing of an instrument is an example of systematic error in instrumentation.
Systematic errors may also be present in the result of an estimate based upon a mathematical model or physical law. For instance, the estimated oscillation frequency of a pendulum will be systematically in error if slight movement of the support is not accounted for.
Quantity
Systematic errors can be either constant, or related (e.g. proportional or a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of a ruler can be affected by environmental temperature). When it is constant, it is simply due to incorrect zeroing of the instrument. When it is not constant, it can change its sign. For instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, and the actual temperature is 200°, 0°, or −100°, the measured temperature will be 204° (systematic error = +4°), 0° (null systematic error) or −102° (systematic error = −2°), respectively. Thus the temperature will be overestimated when it is above zero and underestimated when it is below zero.
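The proportional case can be reproduced with a short sketch; the 2% figure follows the example above, while the function name and formatting are purely illustrative:

```python
def measured_temperature(true_temp: float, proportional_error: float = 0.02) -> float:
    """Reading of a thermometer whose systematic error is proportional
    (2% by default) to the actual temperature."""
    return true_temp * (1.0 + proportional_error)

for t in (200.0, 0.0, -100.0):
    reading = measured_temperature(t)
    print(f"true {t:+7.1f}  ->  measured {reading:+7.1f}  (systematic error {reading - t:+.1f})")
```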
Drift
Systematic errors which change during an experiment (drift) are easier to detect. Measurements indicate trends with time rather than varying randomly about a mean. Drift is evident if a measurement of a constant quantity is repeated several times and the measurements drift one way during the experiment. If each measurement is higher than the previous one, as may occur if an instrument becomes warmer during the experiment, then the measured quantity is variable, and a drift can be detected by checking the zero reading during the experiment as well as at the start of the experiment (indeed, the zero reading is a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If this cannot be eliminated, potentially by resetting the instrument immediately before the experiment, then it needs to be allowed for by subtracting its (possibly time-varying) value from the readings, and by taking it into account while assessing the accuracy of the measurement.
If no pattern in a series of repeated measurements is evident, the presence of fixed systematic errors can only be found if the measurements are checked, either by measuring a known quantity or by comparing the readings with readings made using a different apparatus, known to be more accurate. For example, if the timing of a pendulum is measured several times with an accurate stopwatch, the readings will be randomly distributed about the mean. A systematic error is present if the stopwatch is then checked against the 'speaking clock' of the telephone system and found to be running slow or fast. Clearly, the pendulum timings need to be corrected according to how fast or slow the stopwatch was found to be running.
Measuring instruments such as ammeters and voltmeters need to be checked periodically against known standards.
Systematic errors can also be detected by measuring already known quantities. For example, a spectrometer fitted with a diffraction grating may be checked by using it to measure the wavelength of the D-lines of the sodium electromagnetic spectrum, which are at 589.0 nm and 589.6 nm. The measurements may be used to determine the number of lines per millimetre of the diffraction grating, which can then be used to measure the wavelength of any other spectral line.
Constant systematic errors are very difficult to deal with as their effects are only observable if they can be removed. Such errors cannot be removed by repeating measurements or averaging large numbers of results. A common method to remove systematic error is through calibration of the measurement instrument.
Sources of random error
The random or stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be normally distributed when the stochastic error is the sum of many independent random errors because of the central limit theorem. Stochastic errors added to a regression equation account for the variation in Y that cannot be explained by the included Xs.
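A small simulation (hypothetical noise model, not from the cited sources) illustrates why such sums of many independent disturbances look approximately normal:

```python
import random
import statistics

random.seed(0)

# Each observation's error is modeled as the sum of many small,
# independent disturbances; by the central limit theorem the total
# error tends toward a normal distribution centred on zero.
def one_error(n_sources: int = 100, scale: float = 0.01) -> float:
    return sum(random.uniform(-scale, scale) for _ in range(n_sources))

errors = [one_error() for _ in range(10_000)]
sigma = statistics.stdev(errors)
within_1sigma = sum(abs(e) <= sigma for e in errors) / len(errors)

print(f"mean  = {statistics.mean(errors):+.4f}")           # close to 0
print(f"sigma = {sigma:.4f}")
print(f"fraction within 1 sigma = {within_1sigma:.2f}")    # about 0.68 for a normal shape
```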
Surveys
[edit]The term "observational error" is also sometimes used to refer to response errors and some other types of non-sampling error.[1] In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996).[8][9]
These errors can be random or systematic. Random errors are caused by unintended mistakes by respondents, interviewers and/or coders. Systematic error can occur if there is a systematic reaction of the respondents to the method used to formulate the survey question. Thus, the exact formulation of a survey question is crucial, since it affects the level of measurement error.[10] Different tools are available to help researchers decide on the exact formulation of their questions, for instance by estimating the quality of a question using MTMM experiments. This information about quality can also be used to correct for measurement error.[11][12]
Effect on regression analysis
If the dependent variable in a regression is measured with error, regression analysis and associated hypothesis testing are unaffected, except that the R² will be lower than it would be with perfect measurement.
However, if one or more independent variables are measured with error, then the regression coefficients and standard hypothesis tests are invalid.[13] This is known as attenuation bias.[14]
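A minimal simulation (with arbitrary parameter values) illustrates attenuation bias: adding noise to the regressor pulls the estimated slope toward zero, while the outcome and the true relationship are unchanged:

```python
import random

random.seed(0)
n = 100_000
beta = 2.0  # true slope

x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * x + random.gauss(0.0, 1.0) for x in x_true]
x_noisy = [x + random.gauss(0.0, 1.0) for x in x_true]  # regressor observed with error

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

print(f"slope with error-free regressor:  {ols_slope(x_true, y):.2f}")   # about 2.0
print(f"slope with mismeasured regressor: {ols_slope(x_noisy, y):.2f}")  # about 1.0 (attenuated)
```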
See also
- Bias (statistics)
- Cognitive bias
- Correction for measurement error (for Pearson correlations)
- Error
- Errors and residuals in statistics
- Errors-in-variables models
- Instrument error
- Measurement uncertainty
- Metrology
- Outlier
- Propagation of uncertainty
- Regression dilution
- Replication (statistics)
- Statistical theory
- Systemic bias
- Test method
References
- ^ a b Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms, OUP. ISBN 978-0-19-920613-1
- ^ a b John Robert Taylor (1999). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books. p. 94, §4.1. ISBN 978-0-935702-75-0.
- ^ Ritter, Elie. Manuel théorique et pratique de l'application de la méthode des moindres carrés au calcul des observations. Mallet-Bachelier. p. 7. Retrieved 16 February 2025.
- ^ a b c d Heinrich, Joel; Lyons, Louis (2007-11-01). "Systematic Errors". Annual Review of Nuclear and Particle Science. 57 (1): 145–169. doi:10.1146/annurev.nucl.57.090506.123052. ISSN 0163-8998.
- ^ "Systematic error". Merriam-webster.com. Retrieved 2016-09-10.
- ^ Allchin, Douglas (March 2001). "Error Types". Perspectives on Science. 9 (1): 38–58. doi:10.1162/10636140152947786. ISSN 1063-6145.
- ^ Young, Hugh D. (1996). Statistical treatment of experimental data: an introduction to statistical methods (Repr ed.). Long Grove, Ill: Waveland Press. ISBN 978-0-88133-913-0.
- ^ Salant, P.; Dillman, D. A. (1994). How to conduct your survey. New York: John Wiley & Sons. ISBN 0-471-01273-4.
- ^ Bland, J. Martin; Altman, Douglas G. (1996). "Statistics Notes: Measurement Error". BMJ. 313 (7059): 744. doi:10.1136/bmj.313.7059.744. PMC 2352101. PMID 8819450.
- ^ Saris, W. E.; Gallhofer, I. N. (2014). Design, Evaluation and Analysis of Questionnaires for Survey Research (Second ed.). Hoboken: Wiley. ISBN 978-1-118-63461-5.
- ^ DeCastellarnau, A. and Saris, W. E. (2014). A simple procedure to correct for measurement errors in survey research. European Social Survey Education Net (ESS EduNet). Available at: http://essedunet.nsd.uib.no/cms/topics/measurement Archived 2019-09-15 at the Wayback Machine
- ^ Saris, W. E.; Revilla, M. (2015). "Correction for measurement errors in survey research: necessary and possible" (PDF). Social Indicators Research. 127 (3): 1005–1020. doi:10.1007/s11205-015-1002-x. hdl:10230/28341. S2CID 146550566.
- ^ Hayashi, Fumio (2000). Econometrics. Princeton University Press. p. 187. ISBN 978-0-691-01018-2.
- ^ Angrist, Joshua David; Pischke, Jörn-Steffen (2015). Mastering 'Metrics: The Path from Cause to Effect. Princeton, New Jersey: Princeton University Press. p. 221. ISBN 978-0-691-15283-7. OCLC 877846199. "The bias generated by this sort of measurement error in regressors is called attenuation bias."
Further reading
- Cochran, W. G. (1968). "Errors of Measurement in Statistics". Technometrics. 10 (4): 637–666. doi:10.2307/1267450. JSTOR 1267450. S2CID 120645541.
Observational error
Fundamentals
Definition
Observational error, also known as measurement error, refers to the difference between the value obtained from an observation or measurement and the true value of the quantity being measured.[1] This discrepancy arises because no measurement process is perfect, and the true value is typically unknown, requiring statistical methods to estimate and quantify the error.[3] In scientific, engineering, and statistical contexts, observational error is a fundamental concept that underscores the limitations of empirical data collection and influences the reliability of conclusions drawn from observations.[1]

The theory of observational errors emerged in the late 18th and early 19th centuries as astronomers and mathematicians grappled with inaccuracies in celestial observations, particularly in predicting planetary positions.[4] Carl Friedrich Gauss played a pivotal role in formalizing this theory through his development of the method of least squares, detailed in his seminal work Theoria Combinationis Observationum Erroribus Minimis Obnoxiae (1821–1823), which provides a mathematical framework for combining multiple observations to minimize the impact of errors by assuming they follow a normal distribution around the true value.[5] This approach revolutionized error handling by treating errors not as mistakes but as random deviations amenable to probabilistic analysis, enabling more accurate estimates in fields like geodesy and astronomy.[4]

In practice, observational errors are characterized by their magnitude and distribution, often modeled using probability distributions such as the Gaussian (normal) distribution, where the error is the deviation ε = x − μ, such that the observed value is x = μ + ε, with μ as the true value and ε having mean zero for unbiased measurements.[1] While the exact true value remains elusive, repeated measurements allow for estimation of error properties like variance, which quantifies the spread of observations around the expected value.[3] Recognizing observational error is essential for designing robust experiments and interpreting results, as unaccounted errors can lead to biased inferences or overstated precision in scientific findings.[1]

Classification
Observational errors, defined as the discrepancy between a measured value and the true value of a quantity, are primarily classified into three broad categories: gross errors, systematic errors, and random errors. This classification is fundamental in fields such as physics, engineering, and statistics, allowing researchers to identify, mitigate, and account for deviations in observations. Gross errors, also known as blunders, arise from human mistakes or procedural lapses, such as misreading an instrument scale, incorrect data transcription, or computational oversights; these are not inherent to the measurement process but can be minimized through careful repetition and verification.[6][7]

Systematic errors produce consistent biases that affect all measurements in a predictable direction, often stemming from instrumental imperfections, environmental influences, or methodological flaws. For instance, a poorly calibrated thermometer might consistently underreport temperature, leading to offsets in all readings. These errors can be subclassified further—such as instrumental (e.g., zero error in a scale), environmental (e.g., temperature-induced expansion of equipment), observational (e.g., parallax in visual readings), or theoretical (e.g., approximations in models)—but their key characteristic is repeatability, making them correctable once identified through calibration or control experiments.[7][1]

Random errors, in contrast, are unpredictable fluctuations that vary irregularly around the true value, typically due to uncontrollable factors like thermal noise, slight vibrations, or inherent instrument resolution limits; they tend to follow a statistical distribution, such as the normal distribution, and can be reduced by averaging multiple observations. Unlike systematic errors, random errors cannot be eliminated, but their effects diminish with increased sample size, as quantified by standard deviation or variance in statistical analysis.[1][8]

In modern metrology, particularly under the Guide to the Expression of Uncertainty in Measurement (GUM), the evaluation of uncertainty components arising from these errors is classified into Type A and Type B methods. Type A evaluations rely on statistical analysis of repeated observations to characterize random effects, yielding estimates like standard deviations from experimental data. Type B evaluations address systematic effects or other non-statistical sources, such as manufacturer specifications or expert judgment, providing bounds or distributions based on prior knowledge. This framework shifts focus from raw error classification to quantifiable uncertainty propagation, ensuring rigorous assessment in scientific measurements.[9]

Sources
Systematic Errors
Systematic errors, also known as biases, are consistent and repeatable deviations in observational data that shift measurements or estimates away from the true value in a predictable direction, rather than varying randomly around it.[10] These errors arise from flaws in the measurement process, instrumentation, or study design, and they do not diminish with increased sample size or repeated trials, unlike random errors. In observational contexts, such as scientific experiments or epidemiological studies, systematic errors can lead to overestimation or underestimation of effects, compromising the validity of conclusions.[11]

Common sources of systematic errors include imperfections in measuring instruments, such as poor calibration or drift over time, which introduce offsets in all readings.[12] Observer-related biases, like consistent misinterpretation of data due to preconceived notions or improper techniques, also contribute significantly. Environmental factors, including uncontrolled variables like temperature fluctuations affecting sensor performance, or methodological issues such as non-representative sampling in observational studies, further propagate these errors.[13] In epidemiology, information bias occurs when exposure or outcome data are systematically misclassified, often due to differential recall between groups, while selection bias arises from non-random inclusion of participants, skewing associations.[14]

For example, in physical measurements, a thermometer with a fixed calibration error of +2°C would systematically overreport temperatures in all observations, regardless of replication.[15] In astronomical observations, parallax errors from improper instrument alignment can consistently displace star positions.[16] In survey-based studies, interviewer bias—where question phrasing influences responses predictably—exemplifies how human factors introduce systematic distortion.[17] These errors are theoretically identifiable and correctable through calibration, blinding, or design adjustments, but their persistence requires vigilant assessment to ensure accurate inference.[18]

Random Errors
Random errors, also referred to as random measurement errors, constitute the component of overall measurement error that varies unpredictably in replicate measurements of the same measurand under stated measurement conditions.[19] This variability arises from temporal or spatial fluctuations in influence quantities that affect the measurement process, such as minor changes in environmental conditions, instrument sensitivity, or operator actions that cannot be fully controlled or anticipated.[20] In contrast to systematic errors, which consistently bias results in one direction, random errors are unbiased, with their expectation value equal to zero over an infinite number of measurements, leading to scatter around the true value.[19]

The primary causes of random errors include inherent noise in detection systems, like thermal fluctuations in electronic sensors or photon shot noise in optical measurements, as well as uncontrollable variations in the sample or surroundings, such as slight pressure changes in a gas volume determination.[20] Human factors, such as inconsistent reaction times in timing experiments, also contribute, as do limitations in the resolution of measuring instruments when interpolating between scale marks. These errors are inherent to the observational process and cannot be eliminated entirely, but they can be quantified through statistical analysis of repeated observations.

Random errors are typically characterized by their dispersion, often assuming a Gaussian (normal) distribution centered on the mean value, which allows for probabilistic confidence intervals: approximately 68% of measurements fall within one standard deviation, 95% within two, and 99.7% within three.[15] In metrology, the standard uncertainty associated with random effects is evaluated using Type A methods, involving the experimental standard deviation of the mean from n replicate measurements, u = s/√n, where s is the sample standard deviation, calculated as s = √( (1/(n − 1)) Σᵢ (xᵢ − x̄)² ), and x̄ is the arithmetic mean.[20] This approach provides a measure of precision, reflecting the agreement among repeated measurements rather than absolute accuracy.

To mitigate the impact of random errors, multiple replicate measurements are averaged, reducing the uncertainty of the result in proportion to 1/√n and thereby improving its reliability without altering the true value. For instance, in timing a free-fall experiment with a stopwatch, averaging ten trials minimizes variations due to reaction time, yielding a more precise estimate of gravitational acceleration. In broader observational contexts, such as astronomical imaging, random errors from atmospheric turbulence are averaged out through longer exposure times or multiple frames, enhancing signal-to-noise ratios.[20] Overall, while random errors limit precision, their statistical treatment enables robust inference in scientific observations.
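As a minimal sketch of a Type A evaluation (the timings below are hypothetical), the standard uncertainty of the mean is the sample standard deviation divided by the square root of the number of replicates:

```python
import math
import statistics

# Hypothetical stopwatch timings (s) of the same free-fall drop.
timings = [0.452, 0.447, 0.459, 0.450, 0.455, 0.448, 0.453, 0.451, 0.456, 0.449]

n = len(timings)
x_bar = statistics.mean(timings)
s = statistics.stdev(timings)   # sample standard deviation
u = s / math.sqrt(n)            # Type A standard uncertainty of the mean

print(f"mean = {x_bar:.4f} s, s = {s:.4f} s, u = s/sqrt(n) = {u:.4f} s")
# Averaging more replicates shrinks u roughly in proportion to 1/sqrt(n).
```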
Characterization
Bias Assessment
Bias assessment in observational error evaluation focuses on identifying and quantifying systematic deviations that cause observed values to consistently differ from true values, often due to flaws in data collection, instrumentation, or study design. In observational contexts, such as scientific measurements or surveys, bias arises from sources like selection processes, measurement inaccuracies, or confounding factors, leading to distorted inferences. Assessing bias involves both qualitative judgment and quantitative techniques to determine the direction and magnitude of these errors, enabling researchers to adjust estimates or evaluate study validity.[21]

Qualitative risk of bias (RoB) tools provide structured frameworks for appraising potential biases in non-randomized observational studies. The ROBINS-I tool, developed for assessing risk of bias in non-randomized studies of interventions, evaluates seven domains including confounding, selection of participants, and measurement of outcomes, rating each as low, moderate, serious, critical, or no information. This approach compares the study to an ideal randomized trial, highlighting how deviations introduce bias, and has been widely adopted in evidence syntheses like systematic reviews. Similarly, the RoBANS tool for non-randomized studies assesses selection, performance, detection, attrition, and reporting biases through domain-based checklists, promoting transparent evaluation in fields like epidemiology and clinical research.[22][23]

Quantitative bias assessment employs sensitivity analyses and simulation-based methods to estimate the impact of unobserved or unmeasured biases on results. For instance, quantitative bias analysis, as outlined in methodological guides, involves specifying plausible bias parameters—such as misclassification rates or confounding effects—and recalculating effect estimates to bound the true value, providing intervals that reflect uncertainty due to systematic error. In measurement error contexts, techniques like regression calibration correct for bias by modeling the relationship between observed and true exposures, particularly useful in epidemiological studies where instrument error leads to attenuation bias. These methods prioritize sensitivity to key assumptions, with seminal applications demonstrating that even small biases can substantially alter conclusions in observational data.[24][25]

In practice, bias assessment integrates these approaches to inform robustness checks; for example, in survey polling, funnel plots detect publication bias by visualizing study effect sizes against precision, where asymmetry indicates selective reporting. High-impact contributions emphasize that comprehensive assessment requires domain expertise and multiple tools to avoid over-reliance on any single method, ensuring credible interpretation of observational errors across applications like experiments and regression analyses.[26][11]

Precision Evaluation
Precision evaluation quantifies the variability and reproducibility of observational measurements, distinct from bias assessment, which focuses on systematic deviation from the true value. In metrology and statistics, precision is formally defined as the closeness of agreement between independent measurements obtained under specified conditions, often characterized by the dispersion of results around their mean.[27] This evaluation is essential for determining the reliability of data in fields ranging from scientific experimentation to surveys, where high precision indicates low random error and consistent outcomes under repeated trials.

A primary method for assessing precision involves replicate measurements to compute statistical metrics of dispersion. The standard deviation (s) of a set of repeated observations measures the typical deviation from the mean, providing a direct indicator of precision for a single measurement; smaller values denote higher precision. For enhanced reliability, the standard error of the mean (SEM = s/√n, where n is the number of replicates) evaluates the precision of the average, emphasizing how well the sample mean estimates the population parameter. The coefficient of variation (CV = s/x̄, with x̄ as the mean) normalizes this for scale, facilitating comparisons across different measurement magnitudes. These metrics are derived from Type A uncertainty evaluations in the Guide to the Expression of Uncertainty in Measurement (GUM), which rely on statistical analysis of repeated observations.[20]

In measurement systems, precision is further dissected through repeatability and reproducibility. Repeatability assesses variation under identical conditions (e.g., same operator, equipment, and environment), typically yielding a short-term standard deviation, while reproducibility examines consistency across varying conditions (e.g., different operators or laboratories), capturing broader random effects. These are quantified via interlaboratory studies as outlined in ISO 5725-2, where precision is estimated from standard deviations of laboratory means. For instance, in surface metrology applications, repeatability limits below 1 nm and reproducibility below 2 nm have been reported for atomic force microscopy parameters. Measurement system analysis (MSA), such as Gage R&R, partitions total variation into components from equipment, operators, and interactions; a Gage R&R percentage below 10% of study variation or tolerance indicates acceptable precision.[27][28]

For observational studies in statistics, precision evaluation often incorporates confidence intervals and standard errors to reflect uncertainty in estimates, particularly in meta-analyses where inverse-variance weighting prioritizes studies with lower variability. However, spurious precision, arising from practices like p-hacking or selective model choices, can artificially narrow standard errors, biasing pooled results. Simulations demonstrate that such issues exacerbate bias more than publication bias alone, with unweighted averages sometimes outperforming weighted methods in affected datasets. To mitigate this, approaches like the Meta-Analysis Instrumental Variable Estimator (MAIVE) use sample size as an instrument to adjust reported precisions, reducing bias in up to 75% of psychological meta-analyses.
Advanced uncertainty propagation via Monte Carlo simulations (JCGM 101) complements these approaches by modeling the distributions of the input quantities for nonlinear cases, yielding expanded uncertainty intervals (e.g., with a coverage factor of k ≈ 2 for approximately 95% confidence).[29][30]

Propagation
Basic Rules
In observational error analysis, the propagation of uncertainties refers to the process of determining how errors in measured input quantities affect the uncertainty in a derived result obtained through mathematical operations. This is essential in scientific measurements to quantify the overall reliability of computed values. The standard approach uses a first-order Taylor series approximation to linearize the functional relationship y = f(x₁, x₂, …, x_N), assuming small uncertainties relative to the input values.[20]

The basic law of propagation of uncertainty, as outlined in the Guide to the Expression of Uncertainty in Measurement (GUM), calculates the combined standard uncertainty u_c(y) for uncorrelated input quantities as the square root of the sum of the squared contributions from each input:

u_c(y) = √( Σᵢ (∂f/∂xᵢ)² u²(xᵢ) ),

where u(xᵢ) is the standard uncertainty in input xᵢ, and ∂f/∂xᵢ is the sensitivity coefficient representing the partial derivative of f with respect to xᵢ, evaluated at the best estimates of the inputs. This formula applies under the assumption that the inputs are independent (uncorrelated) and follows from the variance propagation in probability theory for linear approximations.[20] For correlated inputs, covariance terms are added, but the basic rules typically assume independence unless evidence of correlation exists.[20]

Specific rules derive from this general law for common operations, assuming uncorrelated uncertainties and Gaussian error distributions for simplicity. For addition or subtraction, such as y = x₁ ± x₂, the absolute uncertainties add in quadrature:

u(y) = √( u²(x₁) + u²(x₂) ).

This reflects that variances are additive for independent sums or differences; for example, if two lengths are added to find a total length, the uncertainty in the total is the quadrature combination of the two individual uncertainties.[20][31]

For multiplication or division, such as y = x₁x₂ or y = x₁/x₂, the relative uncertainties propagate in quadrature:

u(y)/|y| = √( (u(x₁)/x₁)² + (u(x₂)/x₂)² ).

This is particularly useful for quantities like resistance R = V/I, where the relative uncertainties of the voltage V and the current I combine to give the relative uncertainty in R.[20][31]

For powers, such as y = xⁿ, the relative uncertainty scales with the exponent:

u(y)/|y| = |n| · u(x)/|x|.

More generally, for a product of powers y = x₁^a · x₂^b ⋯, the relative uncertainty is √( (a·u(x₁)/x₁)² + (b·u(x₂)/x₂)² + ⋯ ). This rule extends to logarithms or other functions via the general law, emphasizing that higher powers amplify relative errors. These rules assume the uncertainties are small compared to the values, ensuring the linear approximation holds; for larger errors, higher-order methods or Monte Carlo simulations may be needed.[20][31]

The following table summarizes these basic propagation rules for uncorrelated uncertainties:

| Operation | Formula for u(y) | Notes |
|---|---|---|
| Addition/Subtraction (y = x₁ ± x₂) | u(y) = √( u²(x₁) + u²(x₂) ) | Absolute uncertainties; independent of signs. |
| Multiplication/Division (y = x₁x₂ or y = x₁/x₂) | u(y)/\|y\| = √( (u(x₁)/x₁)² + (u(x₂)/x₂)² ) | Relative uncertainties combine in quadrature. |
| Power (y = xⁿ) | u(y)/\|y\| = \|n\| · u(x)/\|x\| | Relative uncertainty scales with the exponent. |
| General (y = f(x₁, …, x_N)) | u(y) = √( Σᵢ (∂f/∂xᵢ)² u²(xᵢ) ) | First-order Taylor approximation; sensitivity coefficients required. |
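As an illustration of these rules (with hypothetical readings and uncertainties), the sketch below applies the division rule to a resistance R = V/I and cross-checks the linearised result with a simple Monte Carlo draw in the spirit of JCGM 101:

```python
import math
import random

# Hypothetical readings with standard uncertainties (assumed uncorrelated).
V, u_V = 12.00, 0.05   # voltage (V)
I, u_I = 2.000, 0.010  # current (A)

R = V / I

# Division rule: relative uncertainties combine in quadrature.
u_R = abs(R) * math.sqrt((u_V / V) ** 2 + (u_I / I) ** 2)
print(f"R = {R:.3f} ohm, u(R) = {u_R:.3f} ohm  (linearised rule)")

# Monte Carlo cross-check: sample the inputs and propagate each draw
# through the model, then take the standard deviation of the results.
random.seed(1)
draws = [random.gauss(V, u_V) / random.gauss(I, u_I) for _ in range(200_000)]
mean_R = sum(draws) / len(draws)
u_mc = math.sqrt(sum((r - mean_R) ** 2 for r in draws) / (len(draws) - 1))
print(f"Monte Carlo: mean R = {mean_R:.3f} ohm, u(R) = {u_mc:.3f} ohm")
```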
