Parameter
A parameter (from Ancient Greek παρά (pará) 'beside, subsidiary' and μέτρον (métron) 'measure'), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.
Parameter has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition.
In addition to its technical uses, there are also extended uses, especially in non-scientific contexts, where it is used to mean defining characteristics or boundaries, as in the phrases 'test parameters' or 'game play parameters'.[citation needed]
Modelization
When a system is modeled by equations, the values that describe the system are called parameters. For example, in mechanics, the masses, the dimensions and shapes (for solid bodies), and the densities and viscosities (for fluids) appear as parameters in the equations modeling movements. There are often several choices for the parameters, and choosing a convenient set of parameters is called parametrization.
For example, if one were considering the movement of an object on the surface of a sphere much larger than the object (e.g. the Earth), there are two commonly used parametrizations of its position: angular coordinates (like latitude/longitude), which neatly describe large movements along circles on the sphere, and directional distance from a known point (e.g. "10km NNW of Toronto" or equivalently "8km due North, and then 6km due West, from Toronto" ), which are often simpler for movement confined to a (relatively) small area, like within a particular country or region. Such parametrizations are also relevant to the modelization of geographic areas (i.e. map drawing).
Mathematical functions
Mathematical functions have one or more arguments that are designated in the definition by variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general quadratic function by declaring
- $f(x) = ax^2 + bx + c$;
Here, the variable x designates the function's argument, but a, b, and c are parameters (in this instance, also called coefficients) that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base-b logarithm by the formula
$\log_b(x) = \frac{\ln x}{\ln b}$
where b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative $\log_b'(x) = (x \ln b)^{-1}$.
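As a concrete illustration, here is a minimal Python sketch (helper names like make_quadratic are ours, chosen for illustration) of how fixing parameters selects one member of a function family while x remains the argument:

```python
import math

# Fixing the parameters a, b, c selects one member of the quadratic
# family f(x) = a*x**2 + b*x + c; x remains the function's argument.
def make_quadratic(a, b, c):
    def f(x):
        return a * x**2 + b * x + c
    return f

f1 = make_quadratic(1, 0, 0)   # f(x) = x^2
f2 = make_quadratic(2, -3, 1)  # f(x) = 2x^2 - 3x + 1
print(f1(2))  # 4
print(f2(2))  # 3

# The same idea for the base-b logarithm: b is a parameter, x the argument.
def make_log(b):
    return lambda x: math.log(x) / math.log(b)

log2 = make_log(2)
print(log2(8))  # 3.0
```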
In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object. For instance, the notation for the falling factorial power
- $n^{\underline{k}} = n(n-1)(n-2)\cdots(n-k+1)$,
defines a polynomial function of n (when k is considered a parameter), but is not a polynomial function of k (when n is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters") such as
$f(n, k) = n^{\underline{k}}$
as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of currying.
Sometimes it is useful to consider all functions with certain parameters as a parametric family, i.e. as an indexed family of functions. Examples from probability theory are given further below.
Examples
- In a section on frequently misused words in his book The Writer's Art, James J. Kilpatrick quoted a letter from a correspondent, giving examples to illustrate the correct use of the word parameter:
W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal.
[Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner. You have changed a parameter"
- A parametric equaliser is an audio filter that allows the frequency of maximum cut or boost to be set by one control, and the size of the cut or boost by another. These settings, the frequency and level of the peak or trough, are two of the parameters of a frequency response curve, and in a two-control equaliser they completely describe the curve. More elaborate parametric equalisers may allow other parameters to be varied, such as skew. These parameters each describe some aspect of the response curve seen as a whole, over all frequencies. A graphic equaliser provides individual level controls for various frequency bands, each of which acts only on that particular frequency band.
- If asked to imagine the graph of the relationship y = ax², one typically visualizes a range of values of x, but only one value of a. Of course a different value of a can be used, generating a different relation between x and y. Thus a is a parameter: it is less variable than the variable x or y, but it is not an explicit constant like the exponent 2. More precisely, changing the parameter a gives a different (though related) problem, whereas the variations of the variables x and y (and their interrelation) are part of the problem itself.
- In calculating income based on wage and hours worked (income equals wage multiplied by hours worked), it is typically assumed that the number of hours worked is easily changed, but the wage is more static. This makes wage a parameter, hours worked an independent variable, and income a dependent variable.
Mathematical models
In the context of a mathematical model, such as a probability distribution, the distinction between variables and parameters was described by Bard as follows:
- We refer to the relations which supposedly describe a certain physical situation, as a model. Typically, a model consists of one or more equations. The quantities appearing in the equations we classify into variables and parameters. The distinction between these is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces "constants" which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters.[1]
Analytic geometry
In analytic geometry, a curve can be described as the image of a function whose argument, typically called the parameter, lies in a real interval.
For example, the unit circle can be specified in the following two ways:
- implicit form, the curve is the locus of points (x, y) in the Cartesian plane that satisfy the relation $x^2 + y^2 = 1$;
- parametric form, the curve is the image of the function $t \mapsto (\cos t, \sin t)$ with parameter $t \in [0, 2\pi)$. As a parametric equation this can be written $(x, y) = (\cos t, \sin t)$.
The parameter t in this equation would elsewhere in mathematics be called the independent variable.
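To make the parametric form concrete, here is a short Python sketch (the helper name unit_circle is ours) that samples the parameter t and checks each resulting point against the implicit form:

```python
import math

# Trace the unit circle by sampling the parameter t in [0, 2*pi).
# Each t yields a point (cos t, sin t): t is the parameter, (x, y) the point.
def unit_circle(t):
    return (math.cos(t), math.sin(t))

n = 4
for i in range(n):
    t = 2 * math.pi * i / n
    x, y = unit_circle(t)
    print(f"t = {t:.3f}: ({x:+.3f}, {y:+.3f})")
    assert abs(x**2 + y**2 - 1) < 1e-12  # each point satisfies x^2 + y^2 = 1
```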
Mathematical analysis
In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form
$F(t) = \int_{x_0}^{x_1} f(x; t)\, dx.$
In this formula, t is the argument of the function F, and on the right-hand side the parameter on which the integral depends. When evaluating the integral, t is held constant, and so it is considered to be a parameter. If we are interested in the value of F for different values of t, we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration).
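A brief numerical sketch, assuming SciPy is available (the integrand f and the values of t are illustrative choices), showing how t is held fixed while x is integrated out:

```python
import math
from scipy.integrate import quad

# F(t) = integral over [0, 1] of f(x; t) = exp(-t * x**2).
# x is the variable of integration; the parameter t is held fixed by quad.
def f(x, t):
    return math.exp(-t * x**2)

for t in (0.5, 1.0, 2.0):
    F, _err = quad(f, 0.0, 1.0, args=(t,))  # args supplies the parameter
    print(f"F({t}) = {F:.6f}")
```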
Statistics and econometrics
In statistics and econometrics, the probability framework described below still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described as a distribution.[2]
In statistical estimation theory, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to the populations from which the samples are taken. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn.
For example, the sample mean (estimator), denoted x̄, can be used as an estimate of the mean parameter (estimand), denoted μ, of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted S², can be used to estimate the variance parameter (estimand), denoted σ², of the population from which the sample was drawn. (Note that the sample standard deviation (S) is not an unbiased estimate of the population standard deviation (σ): see Unbiased estimation of standard deviation.)
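A small Python sketch using the standard-library statistics module (the sample data are made up for illustration), computing these estimators:

```python
import statistics

# Sample statistics serve as estimators of the population parameters.
sample = [4.1, 5.3, 4.8, 5.9, 5.0, 4.6]

x_bar = statistics.mean(sample)   # estimator of the population mean mu
s2 = statistics.variance(sample)  # unbiased estimator of sigma^2 (divides by n-1)
s = statistics.stdev(sample)      # sqrt(s2); NOT unbiased for sigma itself

print(f"sample mean  = {x_bar:.3f}")
print(f"sample var   = {s2:.3f}")
print(f"sample stdev = {s:.3f}")
```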
It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric, since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas tests based on the Pearson product-moment correlation coefficient are parametric, since that statistic is computed directly from the data values and thus estimates the parameter known as the population correlation.
Probability theory
In probability theory, one may describe the distribution of a random variable as belonging to a family of probability distributions, distinguished from each other by the values of a finite number of parameters. For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function) is:
$f(k; \lambda) = \frac{e^{-\lambda} \lambda^k}{k!}.$
This example nicely illustrates the distinction between constants, parameters, and variables. e is Euler's number, a fundamental mathematical constant. The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing $k_1$ occurrences, we plug it into the function to get $f(k_1; \lambda)$. Without altering the system, we can take multiple samples, which will have a range of values of k, but the system is always characterized by the same λ.
For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of k, and if the sample behaves according to Poisson statistics, then each value of k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase.
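The following Python sketch (the helper name poisson_pmf is ours) evaluates the probability mass function above for λ = 5, as in the radioactive-sample example:

```python
import math

# Poisson pmf f(k; lam) = exp(-lam) * lam**k / k!
# lam is the parameter (fixed by the system); k is the observed count.
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 5  # mean emissions per ten-minute interval, as in the example above
for k in range(9):
    print(f"P(k = {k}) = {poisson_pmf(k, lam):.4f}")
```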
Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ².
In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution.
It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter.
Computer programming
In computer programming, two notions of parameter are commonly used, and are referred to as parameters and arguments—or more formally as a formal parameter and an actual parameter.
For example, in the definition of a function such as
- y = f(x) = x + 2,
x is the formal parameter (the parameter) of the defined function.
When the function is evaluated for a given value, as in
- f(3): or, y = f(3) = 3 + 2 = 5,
3 is the actual parameter (the argument) for evaluation by the defined function; it is a given value (actual value) that is substituted for the formal parameter of the defined function. (In casual usage the terms parameter and argument might inadvertently be interchanged, and thereby used incorrectly.)
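In Python terms, a minimal sketch of the same function:

```python
def f(x):   # x is the formal parameter
    return x + 2

y = f(3)    # 3 is the actual parameter (the argument)
print(y)    # 5
```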
These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic. Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention.
Artificial intelligence
In artificial intelligence, a model describes the probability that something will occur. Parameters in a model are the weights of the various probabilities. Tiernan Ray, in an article on GPT-3, described parameters this way:
A parameter is a calculation in a neural network that applies a great or lesser weighting to some aspect of the data, to give that aspect greater or lesser prominence in the overall calculation of the data. It is these weights that give shape to the data, and give the neural network a learned perspective on the data.[3]
Engineering
In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage is not consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel.
"Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal."[4]
The term can also be used in engineering contexts, however, as it is typically used in the physical sciences.
Environmental science
In environmental science and particularly in chemistry and microbiology, a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but it may also be a logical entity (present or absent), a statistical result such as a 95th percentile value, or in some cases a subjective value.
Linguistics
Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework.
Logic
In logic, the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz's Natural Deduction;[5] Paulson's Designing a theorem prover). Parameters locally defined within the predicate are called variables. This extra distinction pays off when defining substitution (without this distinction special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables, and when defining substitution have to distinguish between free variables and bound variables.
Music
In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements. The term is used particularly for pitch, loudness, duration, and timbre, though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music, where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense,[6] but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.).
See also
- Coefficient
- Coordinate system
- Function parameter
- Occam's razor (with regards to the trade-off of many or few parameters in data fitting)
References
- ^ Bard, Yonathan (1974). Nonlinear Parameter Estimation. New York: Academic Press. p. 11. ISBN 0-12-078250-2.
- ^ Efron, Bradley (2014-09-10). "Frequentist Accuracy of Bayesian Estimates". researchgate.net. Retrieved 2023-04-12.
- ^ "OpenAI's gigantic GPT-3 hints at the limits of language models for AI". ZDNet.
- ^ Trimmer, John D. (1950). Response of Physical Systems. New York: Wiley. p. 13.
- ^ Prawitz, Dag (2006). Natural Deduction: A Proof-Theoretic Study. Mineola, New York: Dover Publications. ISBN 9780486446554. OCLC 61296001.
- ^ Lansky, Paul & Perle, George (2001). "Parameter". In Sadie, Stanley & Tyrrell, John (eds.). The New Grove Dictionary of Music and Musicians (2nd ed.). London: Macmillan Publishers. ISBN 978-1-56159-239-5.
Fundamentals
Definition and Usage
A parameter is a quantity or variable that defines or characterizes a system, function, or model, often held constant during a specific analysis while remaining adjustable to explore different scenarios or variations.[2] In mathematical contexts, it serves as an input that shapes the behavior or properties of the entity under study without being the primary focus of variation.[2]

The term "parameter" originates from the Greek roots para-, meaning "beside" or "subsidiary", and metron, meaning "measure", reflecting its role as a supplementary measure that accompanies the main elements of a system.[5] This etymology underscores its historical use in geometry as a line or quantity parallel to another, which evolved into a broader concept for fixed descriptors in analytical frameworks. The English term "parameter" entered mathematical usage in the 1650s, initially referring to quantities in conic sections.[5]

Unlike variables, which vary freely within a given domain to represent changing states or inputs, parameters are typically fixed within a particular context to maintain the structure of the model or equation.[2] This distinction allows parameters to provide stability and specificity, while variables enable exploration of dynamic relationships. Common examples include the radius in the equation describing a circle, which determines the shape's size and is held constant for that geometric figure, or the growth rate in a population model, which characterizes the rate of expansion and can be adjusted to simulate different environmental conditions.[2] These cases illustrate parameters' utility in simplifying complex systems without delving into field-specific computations.

Parameters facilitate abstraction in scientific and mathematical modeling by encapsulating essential characteristics, enabling the creation of generalizable frameworks that can be applied or adapted across diverse contexts with minimal reconfiguration.[6] This role promotes efficiency in representing real-world phenomena, allowing researchers to focus on core dynamics rather than unique details for each instance.

Historical Context
The concept of a parameter traces its roots to ancient Greek geometry, where it referred to a constant quantity used to define the properties of conic sections. Although the modern term "parameter" derives from the Greek words para- (beside) and metron (measure), denoting a subsidiary measure, early applications appear in the works of mathematicians like Euclid and Archimedes, who described conic sections through proportional relations and auxiliary lines that functioned parametrically. For instance, Archimedes utilized analogous fixed measures in his quadrature of the parabola around 250 BCE to determine areas.[7] Apollonius of Perga further systematized this approach in his Conics circa 200 BCE, using the term orthia pleura (upright side) for the fixed chord parallel to the tangent at the vertex, now known as the parameter or latus rectum, essential for classifying ellipses, parabolas, and hyperbolas.[8][9][10]

Advancements in the 17th and 18th centuries integrated parameters into analytic geometry and curve theory. René Descartes, in his 1637 treatise La Géométrie, revolutionized the field by representing geometric curves algebraically using coordinates, where constants in the equations served as parameters defining the loci, bridging algebra and geometry without relying solely on synthetic methods. This laid the groundwork for parametric equations in modern form. Leonhard Euler expanded on this in the 18th century, developing parametric representations for complex curves, such as in his studies of elastic curves (elastica) and spirals during the 1740s, where parameters like arc length and curvature enabled precise descriptions of plane figures and variational problems. Euler's work, including his 1744 paper on the elastica, emphasized parameters as tools for solving differential equations governing curve shapes.[11][12]

In the 19th and early 20th centuries, parameters gained prominence in statistics, physics, and estimation theory. Carl Friedrich Gauss introduced parameter estimation via the least squares method in his 1809 Theoria Motus Corporum Coelestium, applying it to astronomical data to minimize errors in orbital parameters, marking the birth of rigorous statistical inference. Ronald A. Fisher advanced this in the 1920s with maximum likelihood estimation, detailed in his 1922 paper "On the Mathematical Foundations of Theoretical Statistics", where parameters represent unknown population characteristics maximized for observed data likelihood. In physics, James Clerk Maxwell incorporated parameters like permittivity and permeability in his 1865 electromagnetic theory, formalized in equations that unified electricity, magnetism, and light, treating these as constants scaling field interactions.[13][14]

The mid-20th century saw parameters adopted across interdisciplinary fields, particularly computing and artificial intelligence. In computing, the term emerged in the 1950s with the development of subroutines in early programming languages like FORTRAN (1957), where parameters passed values between procedures, enabling modular code as seen in IBM's mathematical subroutine libraries. In AI, parameters proliferated in the 1980s amid the expert systems boom and the revival of neural networks; for example, backpropagation algorithms optimized network parameters (weights) for learning, as in Rumelhart, Hinton, and Williams' 1986 seminal work, scaling AI from rule-based to data-driven models.
Notably, while parameters are central to modern generative linguistics since Chomsky's 1981 principles-and-parameters framework, pre-20th-century linguistic usage remains underexplored, with sparse evidence in 19th-century descriptive grammars treating structural constants analogously but without the formalized term.[15][16]

Mathematics
Parameters in Functions
In mathematics, a parameter is a quantity that influences the output or behavior of a function but is viewed as being held constant during the evaluation of that function for varying inputs.[1] This distinguishes parameters from variables, which are the inputs that change to produce different outputs. Parameters effectively define the specific form or characteristics of the function, allowing it to be part of a broader family of related functions.

Functions with parameters are often denoted using a semicolon to separate the variable from the parameter, such as $f(x; \theta)$, where $x$ is the independent variable and $\theta$ represents one or more parameters.[2] Here, $\theta$ is fixed for a given function instance, but varying $\theta$ generates different functions within the same family, enabling the modeling of diverse behaviors through a single parameterized expression. For instance, the exponential family of functions, such as $f(x) = a^x$ for $a > 0$, illustrates how parameters create a versatile class of functions applicable in various mathematical contexts.

Key properties of parameters in functions include linearity, identifiability, and sensitivity. A function is linear in its parameters if the output can be expressed as a linear combination of those parameters, meaning no products, powers, or other nonlinear operations involving the parameters appear in the expression.[17] This linearity simplifies analysis and estimation, as seen in polynomial functions where parameters multiply powers of the variable but not each other. Identifiability refers to the ability to uniquely determine parameter values from the function's observed behavior; for example, in a linear function, parameters are identifiable provided the inputs span the necessary range to distinguish their effects.[18] Sensitivity measures how changes in a parameter affect the function's output, typically quantified by the partial derivative with respect to the parameter, $\partial f / \partial \theta$, which indicates the rate of change in the function for small perturbations in $\theta$.[19]

Basic examples highlight these concepts. Consider the linear function $f(x) = mx + b$, where $m$ is the slope parameter controlling the steepness and $b$ is the intercept parameter setting the y-value at $x = 0$.[1] Varying $m$ and $b$ produces a family of straight lines, with the function linear in both parameters. Similarly, the quadratic function $f(x) = ax^2 + bx + c$ involves three parameters: $a$ determines the parabola's direction and curvature, $b$ affects its tilt, and $c$ shifts it vertically. This form is also linear in $a$, $b$, and $c$, allowing straightforward adjustments to fit observed data patterns.[1]

Parameter estimation in functions typically involves curve fitting, where observed data points are used to determine parameter values that best match the function to the data. A fundamental method is least squares fitting, which minimizes the sum of squared differences between observed values and the function's predictions.[20] For linear and quadratic functions, this approach yields closed-form solutions for the parameters, such as solving normal equations derived from the data. This method, dating back to the work of Gauss and Legendre in the early 19th century, provides reliable estimates when the data noise is minimal and the function form is appropriate.[20]
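A short sketch of least-squares curve fitting, assuming NumPy is available (np.polyfit solves the normal equations for polynomial families; the data here are synthetic, with "true" parameter values chosen for illustration):

```python
import numpy as np

# Least-squares fit of the quadratic family f(x) = a*x**2 + b*x + c
# to noisy data; a, b, c are the parameters being estimated.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 2.0 * x**2 - 1.0 * x + 0.5 + rng.normal(0, 0.5, x.size)  # "true" a, b, c

a, b, c = np.polyfit(x, y, deg=2)  # closed-form normal-equation solution
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")  # near 2, -1, 0.5
```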
Parameters in Models

In mathematical models, parameters act as tunable components that encapsulate key system properties, enabling the simulation and prediction of dynamic behaviors through differential equations or computational frameworks. These parameters allow models to represent real-world processes by adjusting rates of change, interactions, or thresholds, thereby facilitating the exploration of scenarios that would otherwise be infeasible to observe directly. For example, in epidemiological simulations, the parameter β in the SIR model quantifies the transmission rate of infection from susceptible to infected individuals, influencing the spread dynamics within a population.[21]

Parameters in models are broadly categorized into structural ones, which define the underlying form and assumptions of the model, such as the choice of differential equation structure, and observational ones, which are empirically fitted to align model outputs with available data. Structural parameters establish the model's architecture, often derived from theoretical principles, while observational parameters are adjusted during calibration to reflect measurement outcomes. A critical challenge is identifiability, where parameters may not be uniquely recoverable from outputs due to correlations or insufficient data, leading to non-unique solutions that undermine prediction reliability; this issue is particularly pronounced in nonlinear systems.[22][23]

Model calibration involves optimizing parameters to minimize discrepancies between simulated results and empirical observations, with least squares fitting being a foundational technique that minimizes the sum of squared residuals. In the Lotka-Volterra predator-prey model, for instance, the parameters α (prey growth rate), β (predation efficiency), γ (predator mortality rate), and δ (predator conversion efficiency from prey) are calibrated to capture oscillatory population dynamics, often using time-series data on species abundances. The calibrated model is given by the system

$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,$

where $x$ is the prey population and $y$ the predator population.
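A minimal simulation sketch, assuming SciPy is available; the parameter values below are illustrative, not calibrated to any data set:

```python
from scipy.integrate import solve_ivp

# Simulate the Lotka-Volterra system for one choice of the parameters
# alpha, beta, gamma, delta (illustrative values).
alpha, beta, gamma, delta = 1.0, 0.1, 1.5, 0.075

def lotka_volterra(t, z):
    x, y = z  # prey, predator populations
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

sol = solve_ivp(lotka_volterra, (0, 30), [10.0, 5.0])
print(sol.y[:, -1])  # populations at t = 30; oscillatory for these parameters
```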
Analytic Geometry

In analytic geometry, parametric equations provide a method to represent geometric objects such as curves and surfaces by expressing their coordinates as functions of one or more parameters, offering a flexible alternative to Cartesian or implicit forms. For instance, a straight line through the point $(x_0, y_0)$ with direction vector $(a, b)$ can be parameterized as $x = x_0 + at$, $y = y_0 + bt$, where $t$ is the parameter that traces points along the line.[26] Similarly, a circle of radius $r$ centered at the origin is given by $x = r\cos t$, $y = r\sin t$, with $t$ ranging from $0$ to $2\pi$ to complete the loop.[27] For an ellipse centered at the origin with semi-major axis $a$ and semi-minor axis $b$, the equations become $x = a\cos t$, $y = b\sin t$, allowing the parameter $t$ to control the position around the ellipse.[28]

Parametric representations offer several advantages over Cartesian equations, particularly in handling intersections, tracing paths, and facilitating animations, as they explicitly incorporate direction and parameterization by time or angle.[29] For example, computing intersections between curves is often simpler parametrically, as it involves solving for parameter values rather than eliminating variables from implicit equations. A historical development in this area includes Plücker coordinates, introduced by Julius Plücker in the mid-19th century, which use six homogeneous parameters to describe lines in three-dimensional projective space, advancing the analytic treatment of line geometry.[30]

In higher dimensions, parametric equations extend to curves and surfaces, enabling descriptions of complex shapes. A sphere of radius $r$ can be parameterized using two parameters, $\theta$ and $\phi$, as

$x = r\sin\theta\cos\phi, \quad y = r\sin\theta\sin\phi, \quad z = r\cos\theta,$

with $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi)$.
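A quick Python sketch (the helper name sphere_point is ours) verifying that the parameterization above satisfies the implicit equation x² + y² + z² = r²:

```python
import math

# Sample the parametric sphere and check the implicit equation.
def sphere_point(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

r = 2.0
for theta, phi in [(0.3, 1.1), (1.2, 4.0), (2.5, 0.7)]:
    x, y, z = sphere_point(r, theta, phi)
    assert abs(x**2 + y**2 + z**2 - r**2) < 1e-12
print("all sampled points lie on the sphere")
```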
Mathematical Analysis

In mathematical analysis, parameters often appear in functions where their variation affects the behavior of limits, derivatives, and integrals, enabling the study of how solutions depend continuously or differentiably on these parameters. A key tool for handling parameter dependence in integrals is the Leibniz integral rule, which allows differentiation under the integral sign. This rule states that if $f(x, t)$ is continuous in $x$ and $t$, and differentiable in $t$, with the partial derivative $\frac{\partial f}{\partial t}$ continuous, then for fixed limits of integration

$\frac{d}{dt} \int_a^b f(x, t)\, dx = \int_a^b \frac{\partial}{\partial t} f(x, t)\, dx.$
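A numerical sanity check of the rule, assuming SciPy is available; the integrand is an arbitrary illustrative choice:

```python
import math
from scipy.integrate import quad

# Check d/dt of the integral of exp(-t*x**2) over [0, 1] against the
# integral of the t-partial derivative (differentiation under the sign).
def F(t):
    return quad(lambda x: math.exp(-t * x**2), 0.0, 1.0)[0]

t0, h = 1.0, 1e-5
lhs = (F(t0 + h) - F(t0 - h)) / (2 * h)                          # finite difference
rhs = quad(lambda x: -x**2 * math.exp(-t0 * x**2), 0.0, 1.0)[0]  # integral of df/dt
print(f"lhs = {lhs:.9f}, rhs = {rhs:.9f}")                       # agree closely
```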
Statistics and Econometrics

In statistics, parameters are unknown quantities in a model that are estimated from data to describe the underlying population or process. Parameter estimation involves methods that use observed data to infer these values, with two foundational approaches being the method of moments and maximum likelihood estimation. The method of moments, introduced by Karl Pearson, equates sample moments to population moments to solve for parameters; for the normal distribution, the first moment yields the mean as the sample average, and the second central moment gives the variance as the sample variance (adjusted for bias).[40] Maximum likelihood estimation, developed by Ronald Fisher, maximizes the likelihood function, the probability of observing the data given the parameters, to obtain point estimates; for the normal distribution, this produces the same estimators for $\mu$ and $\sigma^2$ as the method of moments, but the approach generalizes more efficiently to complex distributions by leveraging the data's joint density.

Once estimated, statistical inference assesses the reliability of parameters through confidence intervals and hypothesis tests. Confidence intervals provide a range within which the true parameter likely lies, with coverage probability determined by the interval's construction; for example, a 95% confidence interval for $\mu$ in a normal model with known variance uses the sample mean plus or minus 1.96 standard errors. Hypothesis testing evaluates specific claims about parameters, such as equality to a null value, using test statistics like the t-statistic in Student's t-test, which William Sealy Gosset introduced to handle small-sample inference on means when the variance is unknown, comparing the sample mean to the hypothesized value under a t-distribution.

In econometrics, parameters often represent relationships between economic variables, estimated via regression models to inform policy and prediction. Ordinary least squares (OLS) estimates the regression coefficients $\beta_0$ and $\beta_1$ in the linear model $y = \beta_0 + \beta_1 x + \varepsilon$ by minimizing the sum of squared residuals, a method formalized by Carl Friedrich Gauss for error-prone observations. When endogeneity biases OLS estimates, such as due to omitted variables or reverse causality, instrumental variables (IV) estimation uses exogenous instruments correlated with the regressor but not the error term to identify causal parameters, as advanced in modern causal inference frameworks.[41]

Time-series analysis treats parameters as characterizing temporal dependencies in data, with autoregressive integrated moving average (ARIMA) models specifying orders $p$, $d$, and $q$ for the autoregressive, differencing, and moving average components, respectively; estimation typically employs maximum likelihood on differenced series to achieve stationarity. While classical ARIMA relies on frequentist methods, Bayesian approaches for parameter inference, incorporating priors to handle uncertainty in volatile series like economic indicators, have been integrated since the late 1990s.[42]
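A sketch of the OLS estimate described above, assuming NumPy is available; the data are synthetic and the "true" coefficients are chosen for illustration:

```python
import numpy as np

# OLS estimates of beta0, beta1 in y = b0 + b1*x + e, obtained by
# minimizing the sum of squared residuals.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.7 * x + rng.normal(0, 1.0, x.size)  # "true" b0 = 2.0, b1 = 0.7

X = np.column_stack([np.ones_like(x), x])       # design matrix with intercept
(beta0, beta1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"b0 = {beta0:.3f}, b1 = {beta1:.3f}")
```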
Probability Theory

In probability theory, parameters specify the properties of probability distributions, enabling the modeling of random phenomena. These parameters are typically classified into location, scale, and shape categories. The location parameter, often denoted by μ, determines the central tendency or shift of the distribution, such as the mean in the normal distribution. The scale parameter, denoted by σ, controls the spread or dispersion, representing the standard deviation in the normal case. Shape parameters alter the form of the distribution, for instance, the success probability p in the Bernoulli distribution, which governs the probability mass at 0 or 1, or the rate λ in the Poisson distribution, which sets the expected number of events in a fixed interval.[43][44][45]

A prominent class of distributions unified by their parametric structure is the exponential family, which encompasses many common distributions like the normal, Poisson, and Bernoulli. In this family, the probability density or mass function can be expressed as

$f(x \mid \theta) = h(x)\, \exp\!\big(\eta(\theta)\, T(x) - A(\theta)\big),$

where $\theta$ is the parameter, $T(x)$ is a sufficient statistic, $\eta(\theta)$ is the natural parameter, $A(\theta)$ is the log-partition function, and $h(x)$ is a base measure.
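In code, assuming SciPy is available, the location/scale/shape classification looks like this (the distribution choices are illustrative):

```python
from scipy import stats

# Location, scale, and shape parameters in practice.
# For the normal distribution, loc is mu (location), scale is sigma.
norm = stats.norm(loc=2.0, scale=0.5)
print(norm.mean(), norm.std())  # 2.0 0.5

# For the gamma distribution, `a` is a shape parameter; mean = a * scale.
gamma = stats.gamma(a=3.0, loc=0.0, scale=1.0)
print(gamma.mean())             # 3.0
```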
Computer Programming
In computer programming, parameters serve as placeholders for values or references passed to functions, subroutines, or methods, enabling modular code by allowing external inputs to influence execution without hardcoding specifics.[54] They facilitate reusability and abstraction, distinguishing between formal parameters (defined in the function signature) and actual arguments (provided during invocation). Early programming languages emphasized parameters for numerical computations, evolving to support diverse passing mechanisms and scoping rules in modern contexts.

The concept of parameters originated in the 1950s with FORTRAN, developed by IBM for scientific computing on the IBM 704. FORTRAN I (1957) introduced function statements using dummy arguments in an assignment-like syntax, such as `function(arg) = expression`, where arguments were passed by address, allowing functions to modify values indirectly.[55] FORTRAN II (1958) enhanced this by supporting user-defined subroutines with separate compilation, retaining symbolic information for parameter references, while FORTRAN III (late 1950s) permitted function and subroutine names as arguments themselves, expanding flexibility for alphanumeric handling.[55] These innovations marked parameters as essential for procedural abstraction in early high-level languages.
Function parameters vary by type and passing mechanism across languages. Positional parameters are matched by order of declaration, as in Python's `def add(a, b): return a + b`, invoked as `add(2, 3)`.[56] Keyword parameters allow named passing for clarity and flexibility, such as `greet(name="Alice", greeting="Hello")` in `def greet(name, greeting="Hello"): ...`.[57] Default parameters provide fallback values, evaluated once at definition; for example, `def power(base, exponent=2): return base ** exponent` yields 9 when called as `power(3)`.[58]
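A runnable consolidation of these parameter kinds (the greet function mirrors the inline examples above):

```python
# Positional, keyword, and default parameters in one definition.
def greet(name, greeting="Hello"):    # greeting has a default value
    return f"{greeting}, {name}!"

print(greet("Alice"))                 # positional; default used -> Hello, Alice!
print(greet("Bob", greeting="Hi"))    # keyword overrides the default -> Hi, Bob!
```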
Parameters can be passed by value or by reference, affecting mutability. Pass-by-value copies the argument's value into the formal parameter, isolating changes; in C++, `void swapByVal(int num1, int num2)` leaves the originals unchanged (e.g., inputs 10 and 20 remain 10 and 20 post-call).[59] Pass-by-reference passes the address, enabling modifications to the original; C++'s `void swapByRef(int& num1, int& num2)` swaps the values (10 and 20 become 20 and 10).[59] This distinction balances efficiency for small types (value) with avoidance of copies for large structures (reference, often with `const` for read-only access).[59]
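Python has no C++-style reference parameters; a sketch of the closest analogue, contrasting rebinding (value-like behavior) with mutation (reference-like behavior):

```python
# Python passes object references by value, so the visible behavior
# depends on whether the function mutates the object or rebinds the name.
def rebind(n):
    n = n + 1          # rebinding: the caller's int is untouched

def mutate(items):
    items.append(99)   # mutation: visible to the caller

x = 10
rebind(x)
print(x)      # 10 -- unchanged, like pass-by-value

lst = [1, 2]
mutate(lst)
print(lst)    # [1, 2, 99] -- changed, like pass-by-reference
```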
Configuration parameters configure program behavior at runtime, often via command-line arguments or APIs. In C++, the `main` function receives them as `int main(int argc, char* argv[])`: `argc` counts arguments (minimum 1), and `argv` is an array where `argv[0]` is the program name and subsequent elements are strings; for instance, invoking `./program input.txt -v` sets `argc=3` and `argv[1]="input.txt"`.[60] This mechanism supports scripting and external control without recompilation.
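The Python analogue of this mechanism uses sys.argv (the script and argument names below are hypothetical):

```python
import sys

# sys.argv[0] is the script name, the rest are the command-line arguments;
# len(sys.argv) plays the role of argc.
def main(argv):
    print(f"argc = {len(argv)}")
    for i, arg in enumerate(argv):
        print(f"argv[{i}] = {arg}")

if __name__ == "__main__":
    main(sys.argv)  # e.g. `python program.py input.txt -v` prints argc = 3
```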
Parameter scope defines accessibility, while binding associates names with values or types. Local parameters, declared within a function or block, are confined to that lexical scope (e.g., Python's function arguments act as locals), preventing unintended interference.[54] Global parameters, declared at module level, are accessible program-wide but risk namespace pollution; languages like C++ use static binding at compile time for types (e.g., `int x`) and dynamic binding at runtime for values.[54] Type systems enforce parameter constraints, such as strong typing in Java to prevent mismatches.
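A minimal scope sketch in Python (the names are illustrative):

```python
# Parameter names are local to the function's scope.
x = "global"

def show(x):    # this parameter x shadows the module-level name
    return x

print(show("local"))  # local
print(x)              # global -- unchanged
```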
Modern languages extend parameters with generics for type-safe reusability. TypeScript's generics use type parameters like `<Type>` in functions: `function identity<Type>(arg: Type): Type { return arg; }`, callable as `identity<string>("hello")` to infer and preserve the string type.[61] In functional programming paradigms, lambda expressions with parameters gained mainstream adoption in the 2000s, enabling anonymous functions for concise higher-order operations; C++11 (2011) and Java 8 (2014) integrated them, building on earlier influences like Python's 1994 lambda but accelerating use in object-oriented contexts for tasks like event handling.[62]
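A Python counterpart using typing.TypeVar (the names mirror the TypeScript example):

```python
from typing import TypeVar

T = TypeVar("T")  # a type parameter, analogous to <Type> in TypeScript

# A generic identity function; a type checker infers T per call site.
def identity(arg: T) -> T:
    return arg

s = identity("hello")  # T inferred as str
n = identity(42)       # T inferred as int
print(s, n)
```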
