Measurement uncertainty

In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a quantity measured on an interval or ratio scale.

All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.

The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. The relative uncertainty is the measurement uncertainty divided by the absolute value of a particular, nonzero choice of value for the quantity. This choice is usually called the measured value, and it may be optimal in some well-defined sense (e.g., a mean, median, or mode).
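As a minimal sketch of these definitions, the following Python snippet computes the uncertainty (sample standard deviation), the measured value (here the mean), and the relative uncertainty for a set of hypothetical, made-up readings; the numbers are illustrative only.

```python
import statistics

# Hypothetical repeated readings of a battery's potential difference, in volts.
readings = [1.52, 1.49, 1.51, 1.50, 1.48]

# The measurement uncertainty, taken as the sample standard deviation.
uncertainty = statistics.stdev(readings)

# The mean is one common choice for the measured value.
measured_value = statistics.mean(readings)

# Relative uncertainty: uncertainty divided by |measured value| (nonzero).
relative_uncertainty = uncertainty / abs(measured_value)
```

With these readings the measured value is 1.50 V and the relative uncertainty is about 1%; a relative figure like this is what allows uncertainties of quantities with very different magnitudes to be compared.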

The purpose of measurement is to provide information about a quantity of interest – a measurand. Measurands on ratio or interval scales include the size of a cylindrical feature, the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water.

No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects. Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values.

The dispersion of the measured values would relate to how well the measurement is performed. If measured on a ratio or interval scale, their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value. The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value. However, this information would not generally be adequate.
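One common way the dispersion and the number of measured values combine into information about the average is the standard error of the mean, which shrinks as more readings are taken. A sketch with invented readings:

```python
import math
import statistics

# Hypothetical repeated readings of one quantity (arbitrary units).
readings = [10.2, 10.4, 10.1, 10.3, 10.5, 10.2]

n = len(readings)
mean = statistics.mean(readings)          # estimate of the true value
s = statistics.stdev(readings)            # dispersion of individual readings

# Standard error of the mean: the dispersion divided by sqrt(n).
# It characterises the average as an estimate, not any single reading.
sem = s / math.sqrt(n)
```

Because the standard error is smaller than the dispersion of the individual readings, the average is generally a more reliable estimate than any single measured value, as stated above.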

The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person's mass were re-measured, the effect of this offset would be inherently present in the average of the values.
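The bathroom-scale example can be simulated to show why averaging does not remove a systematic offset: random noise averages out, but the offset does not. The true mass, offset, and noise level below are invented for illustration.

```python
import random
import statistics

random.seed(0)

true_mass = 70.0   # assumed true mass, in kilograms
offset = 1.5       # scale mis-zeroed by +1.5 kg (systematic error)

# Many repeated measurements with small random noise on each reading.
readings = [true_mass + offset + random.gauss(0, 0.3) for _ in range(10_000)]

average = statistics.mean(readings)
# The average settles near true_mass + offset, not true_mass:
# averaging reduces the random dispersion but leaves the offset intact.
```

Here the average converges to about 71.5 kg rather than 70 kg, no matter how many readings are taken, which is why systematic effects must be identified and corrected rather than averaged away.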

The "Guide to the Expression of Uncertainty in Measurement" (commonly known as the GUM) is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs), is required by international laboratory accreditation standards such as ISO/IEC 17025 (General requirements for the competence of testing and calibration laboratories), and is employed in most modern national and international documentary standards on measurement methods and technology. See Joint Committee for Guides in Metrology.
