Measurement
Measurement is the quantification of attributes of an object or event, which can be used to compare with other objects or events.[1][2] In other words, measurement is a process of determining how large or small a physical quantity is as compared to a basic reference quantity of the same kind.[3] The scope and application of measurement are dependent on the context and discipline. In natural sciences and engineering, measurements do not apply to nominal properties of objects or events, which is consistent with the guidelines of the International Vocabulary of Metrology (VIM) published by the International Bureau of Weights and Measures (BIPM).[2] However, in other fields such as statistics as well as the social and behavioural sciences, measurements can have multiple levels, which would include nominal, ordinal, interval and ratio scales.[1][4]
Measurement is a cornerstone of trade, science, technology and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology.
More simply, measurement is defined as the process of comparing an unknown quantity with a known or standard quantity.
History
Methodology
The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. Together, they enable unambiguous comparisons between measurements.
- The level of measurement is a taxonomy for the methodological character of a comparison. For example, two states of a property may be compared by ratio, difference, or ordinal preference. The type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure.
- The magnitude is the numerical value of the characterization, usually obtained with a suitably chosen measuring instrument.
- A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artifact used as standard or a natural physical quantity.
- An uncertainty represents the random and systematic errors of the measurement procedure; it indicates a confidence level in the measurement. Errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument.
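As a minimal illustration of how repeated measurements yield both a magnitude and an uncertainty (the readings below are invented for illustration):

```python
import statistics

# Hypothetical repeated readings of a rod's length, in metres
readings = [1.002, 0.998, 1.001, 0.999, 1.000]

magnitude = statistics.mean(readings)     # best estimate of the value
uncertainty = statistics.stdev(readings)  # spread of individual readings

# The standard uncertainty of the mean shrinks as readings are repeated
std_error = uncertainty / len(readings) ** 0.5

print(f"{magnitude:.4f} m +/- {std_error:.4f} m")
```

Reporting a range (mean plus or minus an uncertainty) rather than a single number is exactly the practice described in the criteria above.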
Standardization of measurement units
Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven fundamental units: the kilogram, metre, candela, second, ampere, kelvin, and mole. All of these units are defined without reference to a particular physical object serving as a standard. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable natural phenomenon, in contrast to reliance on standard artifacts, which are subject to deterioration or destruction. As a result, a measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to.

The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce (1839–1914),[6] who proposed to define the metre in terms of the wavelength of a spectral line.[7] This directly influenced the Michelson–Morley experiment; Michelson and Morley cited Peirce and improved on his method.[8]
Standards
[edit]With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then out of necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce.
Units of measurement are generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the Metre Convention of 1875, which established the General Conference on Weights and Measures (CGPM), the body overseeing the International System of Units (SI). For example, the metre was redefined in 1983 by the CGPM in terms of the speed of light, the kilogram was redefined in 2019 in terms of the Planck constant, and the international yard was defined in 1960 by the governments of the United States, United Kingdom, Australia and South Africa as being exactly 0.9144 metres.
In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory (NPL), in Australia by the National Measurement Institute,[9] in South Africa by the Council for Scientific and Industrial Research, and in India by the National Physical Laboratory of India.
Units and systems
A unit is a known or standard quantity in terms of which other physical quantities are measured.

Imperial and US customary systems
Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems, after the imperial units for length, weight and time, even though tons, hundredweights, gallons, and nautical miles, for example, have different values in the U.S. and imperial systems.

Many imperial units remain in use in Britain, which has officially switched to the SI system, with a few exceptions: road signs show distances in miles (or yards for short distances) and speed limits in miles per hour; draught beer and cider must be sold by the imperial pint; and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight in stone and pounds, to give just a few examples.

Imperial units are used in many other places as well: for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated.
Metric system
The metric system is a decimal system of measurement based on its units for length, the metre, and for mass, the kilogram. It exists in several variations, with different choices of base units, though these do not affect its day-to-day use. Since the 1960s, the International System of Units (SI) has been the internationally recognised metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes.
International System of Units
The International System of Units (abbreviated as SI from the French language name Système International d'Unités) is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre–kilogram–second (MKS) system, rather than the centimetre–gram–second (CGS) system, which, in turn, had many variants. The SI units for the seven base physical quantities are:[10]
| Base quantity | Base unit | Symbol | Defining constant |
|---|---|---|---|
| time | second | s | hyperfine splitting in caesium-133 |
| length | metre | m | speed of light, c |
| mass | kilogram | kg | Planck constant, h |
| electric current | ampere | A | elementary charge, e |
| temperature | kelvin | K | Boltzmann constant, k |
| amount of substance | mole | mol | Avogadro constant, NA |
| luminous intensity | candela | cd | luminous efficacy of a 540 THz source, Kcd |
In the SI, base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units: for example, the watt, the unit of power, is defined from the base units as m²·kg·s⁻³. Other physical properties may be measured in compound units, such as material density, measured in kg·m⁻³.
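How derived units compose from base-unit exponents can be sketched in a few lines (the representation here is invented for illustration, not a standard library):

```python
# Represent a dimensional factor as a dict of base-unit exponents.
def combine(*factors):
    """Multiply dimensional factors by summing base-unit exponents."""
    result = {}
    for factor in factors:
        for unit, exp in factor.items():
            result[unit] = result.get(unit, 0) + exp
    return {u: e for u, e in result.items() if e != 0}

metre_squared = {"m": 2}
kilogram = {"kg": 1}
per_second_cubed = {"s": -3}

# watt (power) = m^2 * kg * s^-3
watt = combine(metre_squared, kilogram, per_second_cubed)
print(watt)  # {'m': 2, 'kg': 1, 's': -3}

# density = kg * m^-3
density = combine(kilogram, {"m": -3})
print(density)  # {'kg': 1, 'm': -3}
```

Because every derived unit is just a combination of exponents, dimensional consistency of an equation can be checked mechanically.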
Converting prefixes
The SI allows easy multiplication when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Conversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01, or divides it by 100.
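The prefix arithmetic above can be sketched as a small conversion helper (the prefix table shown is a subset, chosen for illustration):

```python
# Powers of ten for a few SI prefixes; "" denotes the unprefixed base unit.
PREFIX_FACTORS = {"k": 1e3, "": 1.0, "c": 1e-2, "m": 1e-3}

def convert(value, from_prefix, to_prefix):
    """Convert between prefixed forms of the same base unit."""
    return value * PREFIX_FACTORS[from_prefix] / PREFIX_FACTORS[to_prefix]

print(convert(2.5, "", "c"))  # 2.5 m   -> 250.0 cm
print(convert(250, "c", ""))  # 250 cm  -> 2.5 m
```

The same two-line rule covers every prefix pair, which is the practical advantage of a decimal system.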
Length
A ruler or rule is a tool used in, for example, geometry, technical drawing, engineering, and carpentry, to measure lengths or distances or to draw straight lines. Strictly speaking, the ruler is the instrument used to rule straight lines, while the calibrated instrument used for determining length is called a measure; however, common usage calls both instruments rulers, and the special name straightedge is used for an unmarked rule. The use of the word measure, in the sense of a measuring instrument, survives only in the phrase tape measure, an instrument that can be used to measure but cannot be used to draw straight lines. For portability, a two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre-long tape measure easily retracts to fit within a small housing.
Time
Time is an abstract measurement of elemental changes over a non-spatial continuum. It is denoted by numbers and/or named periods such as hours, days, weeks, months and years. It is an apparently irreversible series of occurrences within this non-spatial continuum. It is also used to denote an interval between two relative points on this continuum.
Mass
Mass refers to the intrinsic property of all material objects to resist changes in their momentum. Weight, on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall (with no net gravitational force) objects lack weight but retain their mass. The imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass.
One device for measuring weight or mass is called a weighing scale or, often, simply a "scale". A spring scale measures force but not mass, while a balance compares weights; both require a gravitational field to operate. Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out; these also require a gravitational field to function and would not work in free fall.
Economics
The measures used in economics are physical measures, nominal price value measures and real price measures. These measures differ from one another by the variables they measure and by the variables excluded from measurements.
Survey research
In the field of survey research, measures are taken from individual attitudes, values, and behavior using questionnaires as a measurement instrument. Like all other measurements, measurement in survey research is vulnerable to measurement error, i.e. the departure of the value provided by the measurement instrument from the true value.[11] In substantive survey research, measurement error can lead to biased conclusions and wrongly estimated effects. To obtain accurate results, when measurement errors appear, the results need to be corrected for them.
Exactness designation
The following rules generally apply for displaying the exactness of measurements:[12]
- All non-zero digits and any zeros appearing between them are significant for the exactness of any number. For example, the number 12000 has two significant digits, and has implied limits of 11500 and 12500.
- Additional zeros may be added after a decimal separator to denote greater exactness, increasing the number of decimals. For example, 1 has implied limits of 0.5 and 1.5, whereas 1.0 has implied limits of 0.95 and 1.05.
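The implied-limits convention in these rules can be sketched as a toy parser (assuming, as the rules above do, that trailing zeros before the decimal point are not significant):

```python
def implied_limits(text):
    """Return the implied (lower, upper) limits of a stated value:
    the value plus/minus half of its last significant place."""
    if "." in text:
        decimals = len(text.split(".")[1])
        half_step = 0.5 * 10 ** -decimals
    else:
        # Trailing zeros in e.g. "12000" are not significant, so the
        # step is set by the position of the last non-zero digit.
        stripped = text.rstrip("0")
        half_step = 0.5 * 10 ** (len(text) - len(stripped))
    value = float(text)
    return value - half_step, value + half_step

print(implied_limits("12000"))  # (11500.0, 12500.0)
print(implied_limits("1"))      # (0.5, 1.5)
print(implied_limits("1.0"))    # (0.95, 1.05)
```

Each example reproduces the limits quoted in the rules above.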
Difficulties
Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, great care must be taken to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (about 39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 seconds to fall one metre. However, the following are just some of the sources of error that arise:
- The computation used 9.8 metres per second squared (32 ft/s²) for the acceleration of gravity. Neither of these figures is exact; each is precise only to two significant digits.
- The Earth's gravitational field varies slightly depending on height above sea level and other factors.
- The computation of 0.45 seconds involved extracting a square root, a mathematical operation that required rounding off to some number of significant digits, in this case two significant digits.
Additionally, other sources of experimental error include:
- carelessness,
- determining the exact time at which the object is released and the exact time it hits the ground,
- errors in measuring both the height and the time,
- air resistance,
- posture of human participants.[13]
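The 0.45-second figure in the example follows from the constant-acceleration relation h = (1/2)·g·t², i.e. t = sqrt(2h/g):

```python
import math

# Time for an object to fall h = 1 m under g = 9.8 m/s^2.
# Note g itself is only precise to two significant digits here.
g = 9.8   # m/s^2
h = 1.0   # m

t = math.sqrt(2 * h / g)
print(round(t, 2))  # 0.45
```

Because g carries only two significant digits, quoting t beyond two significant digits would overstate the exactness of the result, as the error list above explains.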
Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic.
Definitions and theories
Classical definition
In the classical definition, which is standard throughout the physical sciences, measurement is the determination or estimation of ratios of quantities.[14] Quantity and measurement are mutually defined: quantitative attributes are those possible to measure, at least in principle. The classical concept of quantity can be traced back to John Wallis and Isaac Newton, and was foreshadowed in Euclid's Elements.[14]
Representational theory
In the representational theory, measurement is defined as "the correlation of numbers with entities that are not numbers".[15] The most technically elaborated form of representational theory is also known as additive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work of Stanley Smith Stevens,[16] numbers need only be assigned according to a rule.
The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria.
Three types of representational theory

- Empirical relation: In science, an empirical relationship is a relationship or correlation based solely on observation rather than theory. An empirical relationship requires only confirmatory data, irrespective of theoretical basis.
- The rule of mapping: The real world is the domain of the mapping, and the mathematical world is the range. When we map an attribute to a mathematical system, we have many choices for both the mapping and the range.
- The representation condition of measurement
Theory
All data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity."[17] This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements. In practical terms, one begins with an initial guess as to the expected value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. In this view, unlike the positivist representational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction between estimation and measurement.
Quantum mechanics
In quantum mechanics, a measurement is an action that determines a particular property (such as position, momentum, or energy) of a quantum system. Quantum measurements are always statistical samples from a probability distribution; the distribution for many quantum phenomena is discrete, not continuous.[18]: 197 Quantum measurements alter quantum states, and yet repeated measurements on a quantum state are reproducible. The measurement appears to act as a filter, changing the quantum state into one with the single measured quantum value.[18] The precise meaning of quantum measurement is an unresolved fundamental problem in quantum mechanics; the most common interpretation is that when a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value.[19]
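The statistical character of quantum measurement described above can be sketched numerically. This is a toy model of a two-level system (qubit), not a simulation library; the amplitudes are illustrative, and the assumed rule is that each outcome occurs with probability equal to its squared amplitude magnitude:

```python
import math
import random

# Toy two-level system: amplitudes for outcomes 0 and 1.
# They are chosen so that |a0|^2 + |a1|^2 = 1 (equal superposition).
a0 = 1 / math.sqrt(2)
a1 = 1 / math.sqrt(2)

def measure():
    """Return 0 or 1 with probability given by the squared amplitude,
    modelling the 'filtering' of the state to the observed outcome."""
    return 0 if random.random() < abs(a0) ** 2 else 1

random.seed(0)  # fixed seed so the sketch is repeatable
samples = [measure() for _ in range(10_000)]
print(samples.count(0) / len(samples))  # close to 0.5
```

A single measurement yields one definite value; only the distribution over many repetitions reveals the underlying probabilities, which is the sense in which quantum measurements are statistical samples.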
Biology
In biology, there is generally no well-established theory of measurement. However, the importance of the theoretical context is emphasized.[20] Moreover, the theoretical context stemming from the theory of evolution leads to the articulation of a theory of measurement in which historicity is a fundamental notion.[21] Among the most developed fields of measurement in biology are the measurement of genetic diversity and species diversity.[22]
See also
- Conversion of units
- Electrical measurements
- History of measurement
- ISO 10012, Measurement management systems
- Levels of measurement
- List of humorous units of measurement
- List of unusual units of measurement
- Measurement in quantum mechanics
- Measurement uncertainty
- NCSL International
- Observable quantity
- Orders of magnitude
- Quantification (science)
- Standard (metrology)
- Timeline of temperature and pressure measurement technology
- Timeline of time measurement technology
- Weights and measures
References
- ^ a b Pedhazur, Elazar J.; Schmelkin, Leora and Albert (1991). Measurement, Design, and Analysis: An Integrated Approach (1st ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 15–29. ISBN 978-0-8058-1063-9.
- ^ a b International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM) (PDF) (3rd ed.). International Bureau of Weights and Measures. 2008. p. 16.
- ^ Young, Hugh D; Freedman, Roger A. (2012). University Physics (13 ed.). Pearson Education Inc. ISBN 978-0-321-69686-1.
- ^ Kirch, Wilhelm, ed. (2008). "Level of measurement". Encyclopedia of Public Health. Vol. 2. Springer. p. 81. ISBN 978-0-321-02106-9.
- ^ "A Brief History of Metrology - bowersUK". bowers rest of world. Retrieved 2024-10-10.
- ^ Crease 2011, pp. 182–4
- ^ C.S. Peirce (July 1879) "Note on the Progress of Experiments for Comparing a Wave-length with a Metre" American Journal of Science, as referenced by Crease 2011, p. 203
- ^ Crease, Robert P. (2011). World in the Balance: The Historical Quest for an Absolute System of Measurement. New York & London: W. W. Norton. p. 203. ISBN 978-0-393-34354-0.
- ^ "About Us". National Measurement Institute of Australia. 3 December 2020.
- ^ The International System of Units (PDF), V3.01 (9th ed.), International Bureau of Weights and Measures, Aug 2024, ISBN 978-92-822-2272-0
- ^ Groves, Robert (2004). Survey Methodology. New Jersey: Wiley. ISBN 9780471483489. "By measurement error we mean a departure from the value of the measurement as applied to a sample unit and the value provided." pp. 51–52.
- ^ Page 41 in: VanPool, Todd (2011). Quantitative analysis in archaeology. Chichester Malden: Wiley-Blackwell. ISBN 978-1-4443-9017-9. OCLC 811317577.
- ^ Gill, Simeon; Parker, Christopher J. (2017). "Scan posture definition and hip girth measurement: the impact on clothing design and body scanning". Ergonomics. 60 (8): 1123–1136. doi:10.1080/00140139.2016.1251621. PMID 27764997. S2CID 23758581.
- ^ a b Michell, J. (1999). Measurement in psychology: a critical history of a methodological concept. New York: Cambridge University Press.
- ^ Ernest Nagel: "Measurement", Erkenntnis, Volume 2, Number 1 / December 1931, pp. 313–335, published by Springer, the Netherlands
- ^ Stevens, S. S. (1946). "On the Theory of Scales of Measurement". Science. 103: 677–680.
- ^ Douglas Hubbard: "How to Measure Anything", Wiley (2007), p. 21
- ^ a b Messiah, Albert (1966). Quantum Mechanics. North Holland, John Wiley & Sons. ISBN 0486409244.
- ^ Penrose, Roger (2007). The road to reality : a complete guide to the laws of the universe. New York: Vintage Books. ISBN 978-0-679-77631-4. "The jumping of the quantum state to one of the eigenstates of Q is the process referred to as state-vector reduction or collapse of the wavefunction. It is one of quantum theory's most puzzling features ..." "[T]he way in which quantum mechanics is used in practice is to take the state indeed to jump in this curious way whenever a measurement is deemed to take place." p 528 Later Chapter 29 is entitled the Measurement paradox.
- ^ Houle, David; Pélabon, Christophe; Wagner, Günter P.; Hansen, Thomas F. (2011). "Measurement and Meaning in Biology" (PDF). The Quarterly Review of Biology. 86 (1): 3–34. doi:10.1086/658408. ISSN 0033-5770. PMID 21495498. S2CID 570080. Archived from the original (PDF) on 2019-05-29.
- ^ Montévil, Maël (2019). "Measurement in biology is methodized by theory". Biology & Philosophy. 34 (3). doi:10.1007/s10539-019-9687-x. ISSN 0169-3867. S2CID 96447209.
- ^ Magurran, A.E. & McGill, B.J. (Hg.) 2011: Biological Diversity: Frontiers in Measurement and Assessment Oxford University Press.
External links
- Media related to Measurement at Wikimedia Commons
- Schlaudt, Oliver 2020: "measurement". In: Kirchhoff, Thomas (ed.): Online Encyclopedia Philosophy of Nature. Heidelberg: Universitätsbibliothek Heidelberg.
- Tal, Eran 2020: "Measurement in Science". In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy (Fall 2020 ed.).
- Ball, Robert Stawell (1883). Encyclopædia Britannica. Vol. XV (9th ed.). pp. 659–668.
- A Dictionary of Units of Measurement Archived 2018-10-06 at the Wayback Machine
- 'Metrology – in short' 3rd ed., July 2008 ISBN 978-87-988154-5-7
Definitions and Fundamentals
Core Definition
Measurement is the assignment of numerals to objects or events according to rules, a foundational concept in the study of quantification.[9] This definition, introduced by psychologist Stanley Smith Stevens, underscores that measurement requires systematic rules to ensure consistency and meaningful representation, distinguishing it from arbitrary numerical labeling.[10] The process focuses on quantitative assessments, which involve numerical values that can be ordered or scaled, in contrast to qualitative assessments that use descriptive terms without numerical assignment.[10] For example, determining the length of a rod by applying a ruler yields a numerical value such as 2.5 meters, enabling precise scaling, while describing the rod's texture as "rough" remains descriptive and non-numerical.[9]

In scientific inquiry, measurement facilitates comparison across observations, supports predictions through mathematical modeling, and allows quantification of phenomena to test hypotheses empirically.[10] Assigning a temperature reading, like 25°C, to a sample of air not only quantifies its thermal state but also enables researchers to forecast atmospheric behavior and validate physical laws.[10]

Classical and Representational Theories
The classical theory of measurement posits that numerical relations exist inherently in nature as objective properties, and that measurement involves discovering and assigning these pre-existing magnitudes to empirical phenomena. This realist perspective, prominent in ancient and pre-modern science, assumes that quantities like length or area are real attributes independent of observation, which can be uncovered through geometric or arithmetic methods. For instance, in Euclidean geometry, measurement is framed as the quantification of spatial relations based on axioms such as the ability to extend a line segment or construct equilateral triangles, allowing ratios and proportions to be derived directly from the structure of physical space.[11][12]

In contrast, the representational theory of measurement, developed in the 20th century, conceptualizes measurement as the assignment of numbers to objects or events according to rules that preserve empirical relational structures through mappings to numerical systems. Pioneered by Norman Campbell in his 1920 work, this approach distinguishes fundamental measurement—where numbers directly represent additive empirical concatenations, as in length via rulers or mass via balances—from derived measurement, which infers quantities indirectly through scientific laws, such as density from mass and volume. Campbell emphasized that valid measurement requires empirical operations that mirror mathematical addition, ensuring numerals reflect qualitative relations like "greater than" or "concatenable with".[10] Later formalized by Patrick Suppes and others, representational theory views measurement as establishing homomorphisms (structure-preserving mappings) from qualitative empirical domains—defined by relations like order or concatenation—to quantitative numerical domains, often aiming for isomorphisms where the structures are uniquely equivalent.[13]

A key contribution to representational theory is S. S. Stevens' classification of measurement scales, introduced in 1946, which delineates four levels based on the properties preserved in the numerical assignment and the admissible transformations. These levels are:

| Scale Type | Properties | Examples | Admissible Transformations |
|---|---|---|---|
| Nominal | Identity (categories distinguished) | Gender, blood types | Permutations (relabeling) |
| Ordinal | Identity and magnitude (order preserved) | Rankings, hardness scales | Monotonic increasing functions |
| Interval | Identity, magnitude, equal intervals (additive differences) | Temperature (Celsius), IQ scores | Linear transformations (aX + b, a > 0) |
| Ratio | Identity, magnitude, equal intervals, absolute zero (multiplicative structure) | Length, weight, time | Positive scale multiplications (aX, a > 0) |
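The table's admissible transformations can be demonstrated with a small sketch: applying a positive linear map to interval-scale data (here, Celsius to Fahrenheit, chosen for illustration) preserves ordering and ratios of differences, but not ratios of the values themselves, which is why "40 °C is twice as hot as 20 °C" is not a meaningful statement:

```python
def linear(x, a=1.8, b=32.0):
    """Admissible interval-scale transformation (Celsius -> Fahrenheit)."""
    return a * x + b

celsius = [10.0, 20.0, 40.0]
fahrenheit = [linear(c) for c in celsius]  # [50.0, 68.0, 104.0]

# Ratios of differences survive a linear map (interval-scale property)...
ratio_c = (celsius[2] - celsius[1]) / (celsius[1] - celsius[0])
ratio_f = (fahrenheit[2] - fahrenheit[1]) / (fahrenheit[1] - fahrenheit[0])
print(ratio_c == ratio_f)  # True

# ...but ratios of raw values do not: 2.0 in Celsius vs about 1.53 in Fahrenheit
print(celsius[2] / celsius[1], fahrenheit[2] / fahrenheit[1])
```

On a ratio scale with a true zero (such as the kelvin scale), only multiplications aX with a > 0 are admissible, and value ratios are preserved.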
Key Concepts in Measurability
Operationalism provides a foundational framework for defining measurable quantities by linking concepts directly to observable and verifiable operations. Pioneered by physicist Percy Williams Bridgman in his seminal 1927 work The Logic of Modern Physics, operationalism asserts that the meaning of a physical concept is synonymous with the set of operations used to measure it, ensuring definitions remain grounded in empirical procedures rather than abstract speculation.[15] This approach arose from Bridgman's experiences in high-pressure physics and the conceptual challenges posed by Einstein's relativity, where traditional definitions failed to account for context-dependent measurements, such as length varying by method (e.g., rigid rod versus light interferometry).[15] By insisting on operational ties, Bridgman aimed to eliminate ambiguity, influencing measurement practices across sciences by promoting definitions that specify exact procedures for replication.[15] In the International Vocabulary of Metrology (VIM), this aligns with the notion of a measurand as a quantity defined by a documented measurement procedure, allowing for consistent application in diverse contexts.[16]

A critical distinction in measurement practices is between direct and indirect methods, which determines how a quantity's value is ascertained.
Direct measurement involves obtaining the measurand's value through immediate comparison to a standard or by direct counting, without requiring supplementary computations or models; for instance, using a calibrated ruler to gauge an object's length exemplifies this by yielding the value straightforwardly from the instrument's indication.[16] Indirect measurement, conversely, infers the measurand from other directly measured quantities via a known functional relationship, often incorporating mathematical derivations to account for influence factors; a common example is calculating an object's mass from its weight measured on a scale, adjusted for local gravitational acceleration using Newton's law.[16] While direct methods offer simplicity and minimal error propagation, indirect approaches enable assessment of quantities inaccessible to direct observation, such as internal temperature via infrared spectroscopy, though they demand rigorous validation of the underlying model to maintain reliability.[16]

Foundational attributes of any measurement—accuracy, precision, and resolution—characterize its quality and suitability for scientific or practical use.
Accuracy quantifies the closeness of agreement between a measured value and the true value of the measurand, encompassing both systematic and random errors to reflect overall correctness; for example, a thermometer reading 100.0 °C for boiling water at sea level under ideal conditions demonstrates high accuracy if the true value is indeed 99.9839 °C per international standards.[16] Precision, in contrast, measures the closeness of agreement among repeated measurements under specified conditions, focusing on variability rather than truth; it is often expressed via standard deviation, where tight clustering of values (e.g., multiple length readings of 5.01 cm, 5.02 cm, 5.01 cm) indicates high precision, even if offset from the true 5.00 cm.[16] Resolution defines the smallest detectable change in the measurand that alters the instrument's indication, limiting the granularity of measurements; a digital scale with 0.01 g resolution can distinguish masses differing by at least that amount, but finer variations remain undetectable.[16] These attributes interrelate—high resolution supports precision, but only accuracy ensures meaningful results—guiding instrument selection and uncertainty evaluation in metrology.[17]

Measurability requires adherence to core criteria: reproducibility, objectivity, and independence from the observer, which collectively ensure results are reliable and universally verifiable.
Reproducibility assesses measurement precision under varied conditions, including changes in location, operator, measuring system, and time, confirming that the same value emerges despite such factors; per VIM standards, it is quantified by the dispersion of results from multiple laboratories or sessions, with low variability (e.g., standard deviation below 1% for inter-lab voltage measurements) signaling robust measurability.[16] Objectivity demands that procedures minimize subjective influences, relying on standardized protocols and automated instruments to produce impartial outcomes; this is evident in protocols like those in ISO 5725, where trueness and precision evaluations exclude observer bias through blind replications.[17] Independence from the observer further reinforces this by requiring results invariant to who conducts the measurement, achieved via reproducibility conditions that incorporate operator variation; for instance, gravitational constant determinations across global teams yield consistent values only if operator-independent, underscoring the criterion's role in establishing quantities as objectively measurable.[18] These criteria, rooted in metrological principles, distinguish measurable phenomena from those reliant on qualitative judgment, enabling cumulative scientific progress.[17]

Historical Development
Ancient and Pre-Modern Measurement
Measurement practices in ancient civilizations emerged from practical needs in construction, agriculture, trade, and astronomy, often relying on body-based or natural units that varied by region but laid foundational principles for standardization. These early systems prioritized utility over uniformity, with lengths derived from human anatomy, areas from plowed land, and time from celestial observations. In ancient Egypt around 3000 BCE, the royal cubit (meh niswt) represented one of the earliest attested standardized linear measures, defined as approximately 523–525 mm and used extensively in pyramid construction and monumental architecture during the Old Kingdom.[19] This unit, based on the forearm length from elbow to middle fingertip, facilitated precise engineering feats, such as aligning structures with astronomical precision.[20] The Babylonians, inheriting the sexagesimal (base-60) system from the Sumerians in the 3rd millennium BCE, applied it to time and angular measurements, dividing the circle into 360 degrees and hours into 60 minutes and seconds—a framework still used today.[21] This positional numeral system enabled sophisticated astronomical calculations, including predictions of planetary positions, by allowing efficient handling of fractions and large numbers in cuneiform tablets.[22] Greek scholars advanced measurement through theoretical geometry and experimental methods. Euclid's Elements, composed around 300 BCE, systematized geometric principles with axioms and postulates that grounded the measurement of lengths, areas, and volumes, treating them as magnitudes comparable via ratios without numerical scales.[23] Complementing this, Archimedes (c. 
287–212 BCE) pioneered hydrostatics, demonstrating that the buoyant force on an object equals the weight of displaced fluid, which provided a practical method to measure irregular volumes, as illustrated in the apocryphal account of his testing the purity of King Hiero II's gold crown.[24] Roman engineering adopted and adapted earlier units, with the mille passus (thousand paces) defining the mile as roughly 1,480 meters—each pace equaling two steps or about 1.48 meters—used for road networks and military logistics across the empire.[25] In medieval Europe, land measurement evolved with the acre, a unit of area standardized around the 8th–10th centuries CE as the amount of land a yoke of oxen could plow in one day, measuring approximately 4,047 square meters (or 43,560 square feet in a 66-by-660-foot rectangle), reflecting agrarian practices in Anglo-Saxon England.[26] Craft guilds further enforced local consistency in weights and measures during this period, verifying scales and bushels through inspections and royal assizes to prevent fraud in markets, as mandated by statutes from the 12th century onward.[27] Cultural variations highlighted diverse approaches: the Maya of Mesoamerica developed interlocking calendars for time measurement, including the 260-day Tzolk'in ritual cycle, the 365-day Haab' solar year, and the Long Count for historical epochs spanning thousands of years, achieving remarkable accuracy in tracking celestial events.[28] In ancient China, the li served as a primary distance unit from the Zhou dynasty (c. 1046–256 BCE), originally varying between 400–500 meters but standardized over time relative to paces or the earth's circumference, facilitating imperial surveys and Silk Road trade.[29] These pre-modern systems, while localized, influenced subsequent global efforts toward uniformity.
Modern Standardization Efforts
The push for modern standardization of measurements began during the French Revolution, as reformers sought to replace the fragmented and arbitrary units of the Ancien Régime with a universal, decimal-based system to promote equality and scientific progress. In 1791, the French Academy of Sciences defined the meter as one ten-millionth of the distance from the North Pole to the equator along the meridian passing through Paris, establishing it as the fundamental unit of length in the proposed metric system.[30] This definition was intended to ground measurements in natural phenomena, with the kilogram similarly derived from the mass of a cubic decimeter of water, though practical implementation involved extensive surveys to determine the exact length.[31] The metric system was officially adopted in France by 1795, but initial resistance from traditionalists and logistical challenges delayed widespread use.[32] By the mid-19th century, the need for international uniformity became evident amid growing global trade and scientific collaboration, leading to diplomatic efforts to promote the metric system beyond France. The pivotal 1875 Metre Convention, signed by representatives from 17 nations in Paris, formalized the metric system's international status and established the Bureau International des Poids et Mesures (BIPM) to maintain and disseminate standards.[33] The BIPM, headquartered in Sèvres, France, was tasked with preserving prototypes and coordinating metrological activities, marking the first permanent intergovernmental organization dedicated to measurement science.[34] This treaty laid the groundwork for global adoption, though progress varied by country. Adoption faced significant challenges, particularly from nations with entrenched customary systems. 
In Britain, despite participation in the 1875 Convention, resistance stemmed from imperial pride, economic concerns over retooling industries, and legislative inertia; the metric system was permitted but not mandated, preserving the imperial system's dominance in trade and daily life.[35] The United States legalized metric use in 1866 and signed the Metre Convention, but adoption remained partial, limited mainly to scientific and engineering contexts while customary units prevailed in commerce and public use due to familiarity and the vast scale of existing infrastructure.[36] These hurdles highlighted the tension between national traditions and the benefits of standardization. In response to inaccuracies in early provisional standards, 19th-century reforms refined the metric prototypes for greater precision and durability. At the first General Conference on Weights and Measures in 1889, the meter was redefined as the distance between two marks on an international prototype bar made of 90% platinum and 10% iridium alloy, maintained at the melting point of ice (0°C).[37] This artifact-based standard, selected from ten similar bars for its stability, replaced the original meridian-derived definition and served as the global reference until later revisions, ensuring reproducibility across borders.[38] Such advancements solidified the metric system's role as the foundation of modern metrology.
Evolution in the 20th and 21st Centuries
The International System of Units (SI) was formally established in 1960 by the 11th General Conference on Weights and Measures (CGPM), providing a coherent framework built on seven base units: the metre for length, kilogram for mass, second for time, ampere for electric current, kelvin for temperature, mole for amount of substance, and candela for luminous intensity.[39] This system replaced earlier metric variants and aimed to unify global scientific and industrial measurements through decimal-based coherence.[40] Throughout the 20th century, advancements in physics prompted iterative refinements to SI units, culminating in the 2019 redefinition approved by the 26th CGPM, which anchored all base units to fixed values of fundamental physical constants rather than artifacts or processes.[41] For instance, the kilogram was redefined using the Planck constant (h = 6.62607015 × 10⁻³⁴ J s), eliminating reliance on the platinum-iridium prototype and enabling more stable, reproducible mass standards via quantum methods like the Kibble balance.[42] Similarly, the ampere, kelvin, and mole were tied to the elementary charge, Boltzmann constant, and Avogadro constant, respectively, enhancing precision across electrical, thermal, and chemical measurements.[43] In the 21st century, time measurement evolved significantly with the deployment of cesium fountain atomic clocks, such as NIST-F2, operational since 2014 and serving as the U.S.
civilian time standard with an accuracy that neither gains nor loses a second in over 300 million years.[44] This clock, using laser-cooled cesium atoms in a fountain configuration, contributes to International Atomic Time (TAI) and underpins GPS and telecommunications by defining the second as 9,192,631,770 oscillations of the cesium-133 hyperfine transition.[45] For mass, quantum standards emerged, including silicon-sphere-based Avogadro experiments and watt balances, which realize the kilogram through quantum electrical effects and have achieved uncertainties below 10 parts per billion, supporting applications in nanotechnology and precision manufacturing.[46][47] These evolutions had profound global impacts, exemplified by the 1999 loss of NASA's Mars Climate Orbiter, where a mismatch between metric (newton-seconds) and imperial (pound-force seconds) units in software led to the spacecraft entering Mars' atmosphere at an altitude of 57 km instead of the planned 150 km, resulting in its destruction and a $327 million setback that underscored the need for universal SI adoption in international space missions.[48][49] Digital metrology advanced concurrently, with 20th-century innovations like coordinate measuring machines (CMMs) evolving into 21st-century laser trackers and computed tomography systems, enabling sub-micron accuracy in three-dimensional inspections for industries such as aerospace and automotive, while integrating with Industry 4.0 through AI-driven data analytics and blockchain for traceable calibrations.[50][51]
Units and Measurement Systems
Imperial and US Customary Systems
The Imperial and US customary systems of measurement originated from ancient influences, including Anglo-Saxon and Roman traditions, where units were often derived from human body parts and natural references for practicality in daily trade and construction.[52] The inch, for instance, traces back to the width of a thumb or the length of three barley grains placed end to end, as standardized in medieval England under King Edward II in 1324.[25] Similarly, the yard evolved from the approximate length of an outstretched arm or the distance from the nose to the thumb tip, as defined by King Henry I of England around 1100–1135, reflecting a shift from inconsistent local measures to more uniform standards in the British Isles.[53] These systems formalized in Britain through the Weights and Measures Act of 1824, establishing the Imperial system, while the US retained pre-independence English units with minor adaptations after 1776.[52] Key units in these systems emphasize length, weight, and volume, with non-decimal relationships that differ from modern decimal-based alternatives. For length, the foot equals 12 inches (0.3048 meters), the yard comprises 3 feet (0.9144 meters), and the mile consists of 1,760 yards (1.609 kilometers), all inherited from English precedents.[52] Weight units include the avoirdupois pound (0.45359237 kilograms), subdivided into 16 ounces, used for general commodities, while the troy pound (containing 12 troy ounces) applies to precious metals.[54] Volume measures feature the gallon as a primary unit: the US gallon holds 231 cubic inches (3.785 liters), divided into 4 quarts or 128 fluid ounces, suitable for liquid capacities like fuel or beverages.[52] The US customary and British Imperial systems diverged notably after 1824, when Britain redefined its standards independently of American practices. 
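The non-decimal relationships among these units can be sketched in code. This is an illustrative snippet built from the exact definitions given above (1 inch = 25.4 mm, avoirdupois pound = 0.45359237 kg, US gallon = 231 cubic inches); the Imperial gallon value of 4.54609 litres is the modern exact definition, used here as an assumption for comparison:

```python
# US customary length units, built up from the exact metric inch.
INCH_M = 0.0254               # 1 inch = 25.4 mm exactly
FOOT_M = 12 * INCH_M          # 0.3048 m
YARD_M = 3 * FOOT_M           # 0.9144 m
MILE_M = 1760 * YARD_M        # 1609.344 m

# Avoirdupois weight.
POUND_KG = 0.45359237         # exact definition
OUNCE_KG = POUND_KG / 16

# The two gallons that diverged after 1824.
US_GALLON_L = 231 * INCH_M**3 * 1000   # 231 cubic inches in litres
IMP_GALLON_L = 4.54609                 # assumed modern exact value

print(f"1 mile = {MILE_M / 1000:.6f} km")
print(f"US gallon = {US_GALLON_L:.6f} L")
print(f"US/Imperial gallon ratio = {US_GALLON_L / IMP_GALLON_L:.3f}")
```

The computed ratio of roughly 0.833 matches the "about 83.3%" relationship between the two gallons discussed below.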
The US gallon, based on the 18th-century English wine gallon of 231 cubic inches, contrasts with the Imperial gallon of 277.42 cubic inches (4.546 liters), defined as the volume occupied by 10 pounds of water at 62°F, making the US version about 83.3% of the Imperial.[52] This post-1824 split also affected derived units, such as the fluid ounce (US: 29.5735 milliliters; Imperial: 28.4131 milliliters) and the bushel (US: 35.239 liters for dry goods; Imperial: 36.368 liters), complicating transatlantic trade and requiring precise conversions.[52] Other differences include the ton, with the US short ton at 2,000 pounds versus the Imperial long ton at 2,240 pounds.[54] These systems persist today in specific sectors despite global metric adoption, particularly in the United States for everyday and industrial applications. In construction, US customary units dominate for dimensions like lumber (e.g., 2x4 inches) and site plans, as federal guidelines allow their continued use where practical.[55] Aviation relies on them for altitude (feet above sea level) and pressure (inches of mercury), with international standards incorporating customary measures to align with US-dominated aircraft manufacturing.[56] In the UK and some Commonwealth nations, Imperial units linger in informal contexts like road signs (miles) and recipes (pints), though official metrication since 1965 has reduced their scope.[52] Conversion factors, such as 1 mile equaling exactly 1.609344 kilometers, often lead to errors when rounded in international contexts, underscoring the systems' historical entrenchment over decimal simplicity.[52]
Metric System and International System of Units
The metric system is a decimal-based framework for measurement that employs powers of ten to form multiples and submultiples of base units, facilitating straightforward conversions and calculations across scales.[57] This principle underpins the International System of Units (SI), the contemporary evolution of the metric system, which serves as the worldwide standard for scientific, technical, and everyday measurements due to its coherence and universality.[58] Coherence in the SI means that derived units can be expressed directly from base units without additional conversion factors, enhancing precision in fields like physics and engineering.[57] The SI comprises seven base units, each defined by fixed numerical values of fundamental physical constants to ensure stability and reproducibility independent of artifacts or environmental conditions.[57] These are:
- Metre (m) for length: the distance traveled by light in vacuum in 1/299 792 458 of a second.[57]
- Kilogram (kg) for mass: defined via Planck's constant.[57]
- Second (s) for time: the duration of 9 192 631 770 periods of radiation corresponding to the transition between two hyperfine levels of the caesium-133 atom.[57]
- Ampere (A) for electric current: defined via the elementary charge.[57]
- Kelvin (K) for thermodynamic temperature: defined via the Boltzmann constant.[57]
- Mole (mol) for amount of substance: defined via the Avogadro constant.[57]
- Candela (cd) for luminous intensity: defined via the luminous efficacy of monochromatic radiation.[57]
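The fixed-constant definitions above can be made concrete with a short illustrative sketch. The numerical values below are the exact defining constants of the 2019 SI; the derived quantity printed at the end simply inverts the metre's definition:

```python
# Exact defining constants of the 2019 SI (a selection).
C = 299_792_458              # speed of light, m/s (fixes the metre)
H = 6.62607015e-34           # Planck constant, J s (fixes the kilogram)
E = 1.602176634e-19          # elementary charge, C (fixes the ampere)
K_B = 1.380649e-23           # Boltzmann constant, J/K (fixes the kelvin)
N_A = 6.02214076e23          # Avogadro constant, 1/mol (fixes the mole)
DELTA_NU_CS = 9_192_631_770  # Cs-133 hyperfine frequency, Hz (fixes the second)

# The metre follows from the second and c: light travels exactly
# 299 792 458 m in one second, so one metre corresponds to
one_metre_time = 1 / C       # seconds for light to cross one metre
print(f"light crosses 1 m in {one_metre_time:.4e} s")
```

Because every base unit is anchored to such a constant, any laboratory that can realize the constant can realize the unit, with no dependence on a physical artifact.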
The SI prefixes and their decimal factors are:
| Prefix | Symbol | Factor |
|---|---|---|
| quecto | q | 10⁻³⁰ |
| ronto | r | 10⁻²⁷ |
| yocto | y | 10⁻²⁴ |
| zepto | z | 10⁻²¹ |
| atto | a | 10⁻¹⁸ |
| femto | f | 10⁻¹⁵ |
| pico | p | 10⁻¹² |
| nano | n | 10⁻⁹ |
| micro | µ | 10⁻⁶ |
| milli | m | 10⁻³ |
| centi | c | 10⁻² |
| deci | d | 10⁻¹ |
| deca | da | 10¹ |
| hecto | h | 10² |
| kilo | k | 10³ |
| mega | M | 10⁶ |
| giga | G | 10⁹ |
| tera | T | 10¹² |
| peta | P | 10¹⁵ |
| exa | E | 10¹⁸ |
| zetta | Z | 10²¹ |
| yotta | Y | 10²⁴ |
| ronna | R | 10²⁷ |
| quetta | Q | 10³⁰ |
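Because every prefix is a pure power of ten, rescaling a quantity between prefixed forms of the same unit is a single multiplication. The following minimal sketch (the `convert` helper is hypothetical, not a standard library function) illustrates this with a few of the prefixes tabulated above:

```python
# Decimal factors for a few SI prefixes ("" is the unprefixed unit).
PREFIX = {
    "n": 1e-9, "µ": 1e-6, "m": 1e-3, "": 1.0,
    "k": 1e3, "M": 1e6, "G": 1e9,
}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Rescale a quantity between prefixed forms of the same unit."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(1500.0, "m", ""))  # 1500 mV expressed in V
print(convert(2.5, "G", "M"))    # 2.5 GHz expressed in MHz
```

This decimal regularity is precisely what the non-decimal customary systems lack, where each conversion (12 inches to a foot, 16 ounces to a pound) needs its own factor.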