Parameter

Wikipedia

from Wikipedia

A parameter (from Ancient Greek παρά (pará) 'beside, subsidiary' and μέτρον (métron) 'measure'), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.

Parameter has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition.

In addition to its technical uses, there are also extended uses, especially in non-scientific contexts, where it is used to mean defining characteristics or boundaries, as in the phrases 'test parameters' or 'game play parameters'.[citation needed]

Modelization


When a system is modeled by equations, the values that describe the system are called parameters. For example, in mechanics, the masses, the dimensions and shapes (for solid bodies), the densities and the viscosities (for fluids), appear as parameters in the equations modeling movements. There are often several choices for the parameters, and choosing a convenient set of parameters is called parametrization.

For example, if one were considering the movement of an object on the surface of a sphere much larger than the object (e.g. the Earth), there are two commonly used parametrizations of its position: angular coordinates (like latitude/longitude), which neatly describe large movements along circles on the sphere, and directional distance from a known point (e.g. "10 km NNW of Toronto" or equivalently "8 km due North, and then 6 km due West, from Toronto"), which are often simpler for movement confined to a (relatively) small area, like within a particular country or region. Such parametrizations are also relevant to the modelization of geographic areas (i.e. map drawing).

Mathematical functions


Mathematical functions have one or more arguments that are designated in the definition by variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general quadratic function by declaring

f(x) = ax^2 + bx + c;

Here, the variable x designates the function's argument, but a, b, and c are parameters (in this instance, also called coefficients) that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base-b logarithm by the formula

log_b(x) = log(x) / log(b),

where b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative (d/dx) log_b(x) = 1/(x ln b).
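This parameter dependence can be sketched in Python: the parameter b is fixed when each logarithm function is created, while x remains the argument (a minimal illustration; the helper name `make_log` is invented here):

```python
import math

def make_log(b):
    # The parameter b is fixed at creation time; x remains the
    # argument of the resulting base-b logarithm function.
    return lambda x: math.log(x) / math.log(b)

log2 = make_log(2)
log10 = make_log(10)

print(abs(log2(8) - 3.0) < 1e-9)      # True
print(abs(log10(1000) - 3.0) < 1e-9)  # True
```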

In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object. For instance, the notation for the falling factorial power

n^{\underline{k}} = n(n-1)(n-2) \cdots (n-k+1),

defines a polynomial function of n (when k is considered a parameter), but is not a polynomial function of k (when n is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters") such as

f(n, k) = n^{\underline{k}}

as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of currying.
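The currying step can be sketched in Python: the two-argument falling factorial is the fundamental object, and fixing k as a parameter yields a one-variable polynomial function of n (illustrative code; the function names are invented here):

```python
def falling_factorial(n, k):
    # f(n, k) = n (n-1) (n-2) ... (n-k+1): the two-variable function.
    result = 1
    for i in range(k):
        result *= n - i
    return result

def fix_k(k):
    # Currying: fix k as a parameter, returning a polynomial function of n.
    return lambda n: falling_factorial(n, k)

cubic = fix_k(3)   # n(n-1)(n-2), a polynomial in n
print(cubic(5))    # 60
```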

Sometimes it is useful to consider all functions with certain parameters as a parametric family, i.e. as an indexed family of functions. Examples from probability theory are given further below.

Examples

  • In a section on frequently misused words in his book The Writer's Art, James J. Kilpatrick quoted a letter from a correspondent, giving examples to illustrate the correct use of the word parameter:

W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a parameter is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal.

[Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner. You have changed a parameter"

  • A parametric equaliser is an audio filter that allows the frequency of maximum cut or boost to be set by one control, and the size of the cut or boost by another. These settings, the frequency and level of the peak or trough, are two of the parameters of a frequency response curve, and in a two-control equaliser they completely describe the curve. More elaborate parametric equalisers may allow other parameters to be varied, such as skew. These parameters each describe some aspect of the response curve seen as a whole, over all frequencies. A graphic equaliser provides individual level controls for various frequency bands, each of which acts only on that particular frequency band.
  • If asked to imagine the graph of the relationship y = ax^2, one typically visualizes a range of values of x, but only one value of a. Of course a different value of a can be used, generating a different relation between x and y. Thus a is a parameter: it is less variable than the variable x or y, but it is not an explicit constant like the exponent 2. More precisely, changing the parameter a gives a different (though related) problem, whereas the variations of the variables x and y (and their interrelation) are part of the problem itself.
  • In calculating income based on wage and hours worked (income equals wage multiplied by hours worked), it is typically assumed that the number of hours worked is easily changed, but the wage is more static. This makes wage a parameter, hours worked an independent variable, and income a dependent variable.
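A minimal Python sketch of the last example, with the wage as the fixed parameter and hours worked as the variable (the function names here are illustrative):

```python
def income_at_wage(wage):
    # wage is the parameter (relatively static);
    # hours is the independent variable; the return value is income.
    return lambda hours: wage * hours

income = income_at_wage(20.0)  # fix the wage parameter at 20.0 per hour
print(income(40))              # 800.0
print(income(35))              # 700.0
```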

Mathematical models


In the context of a mathematical model, such as a probability distribution, the distinction between variables and parameters was described by Bard as follows:

We refer to the relations which supposedly describe a certain physical situation, as a model. Typically, a model consists of one or more equations. The quantities appearing in the equations we classify into variables and parameters. The distinction between these is not always clear cut, and it frequently depends on the context in which the variables appear. Usually a model is designed to explain the relationships that exist among quantities which can be measured independently in an experiment; these are the variables of the model. To formulate these relationships, however, one frequently introduces "constants" which stand for inherent properties of nature (or of the materials and equipment used in a given experiment). These are the parameters.[1]

Analytic geometry


In analytic geometry, a curve can be described as the image of a function whose argument, typically called the parameter, lies in a real interval.

For example, the unit circle can be specified in the following two ways:

  • implicit form, the curve is the locus of points (x, y) in the Cartesian plane that satisfy the relation

    x^2 + y^2 = 1;

  • parametric form, the curve is the image of the function

    t ↦ (cos t, sin t)

    with parameter t ∈ [0, 2π). As a parametric equation this can be written

    (x, y) = (cos t, sin t).

    The parameter t in this equation would elsewhere in mathematics be called the independent variable.
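As a quick numerical check, a Python sketch of the parametric form, confirming that every value of the parameter t yields a point satisfying the implicit relation (illustrative code):

```python
import math

def circle_point(t):
    # Parametric form of the unit circle: t in [0, 2*pi) -> (cos t, sin t).
    return (math.cos(t), math.sin(t))

# Each parameter value yields a point on the implicit locus x^2 + y^2 = 1.
for k in range(8):
    x, y = circle_point(2 * math.pi * k / 8)
    assert abs(x * x + y * y - 1.0) < 1e-12
print("all sampled points satisfy x^2 + y^2 = 1")
```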

Mathematical analysis


In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form

F(t) = \int_{x_0}^{x_1} f(x; t) \, dx.

In this formula, t is the argument of the function F, and, on the right-hand side, the parameter on which the integral depends. When evaluating the integral, t is held constant, and so it is considered to be a parameter. If we are interested in the value of F for different values of t, we then consider t to be a variable. The quantity x is a dummy variable or variable of integration (confusingly, also sometimes called a parameter of integration).
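A numerical Python sketch of such an integral, using the hypothetical example F(t) = ∫₀¹ x^t dx (which equals 1/(t+1) analytically): t is the parameter, x the variable of integration.

```python
def F(t, n=10_000):
    # Midpoint Riemann sum for F(t) = integral of x**t over [0, 1];
    # t is held constant (the parameter), x is the variable of integration.
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** t for i in range(n)) * h

# Analytically, F(t) = 1 / (t + 1) for t > -1.
for t in (0.0, 1.0, 2.5):
    assert abs(F(t) - 1.0 / (t + 1.0)) < 1e-5
print("midpoint sums match 1/(t+1)")
```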

Statistics and econometrics


In statistics and econometrics, the probability framework above still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described as a distribution.[citation needed][2]

In estimation theory of statistics, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to the populations from which the samples are taken. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn.

For example, the sample mean (estimator), denoted x̄, can be used as an estimate of the mean parameter (estimand), denoted μ, of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted S2, can be used to estimate the variance parameter (estimand), denoted σ2, of the population from which the sample was drawn. (Note that the sample standard deviation (S) is not an unbiased estimate of the population standard deviation (σ): see Unbiased estimation of standard deviation.)
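A Python sketch of this estimator/estimand relationship, drawing a seeded sample from a population with known μ = 10 and σ² = 4 (the population values here are illustrative):

```python
import random
import statistics

random.seed(0)
mu, sigma = 10.0, 2.0  # population parameters (estimands)

# The sample's mean and variance are statistics (estimators) for mu
# and sigma**2; with a large sample they land close to the estimands.
sample = [random.gauss(mu, sigma) for _ in range(10_000)]
x_bar = statistics.mean(sample)    # estimate of mu
s2 = statistics.variance(sample)   # unbiased estimate of sigma**2

assert abs(x_bar - mu) < 0.2
assert abs(s2 - sigma ** 2) < 0.3
```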

It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests since it is computed directly from the data values and thus estimates the parameter known as the population correlation.

Probability theory

[Figure: These traces all represent Poisson distributions, but with different values for the parameter λ.]

In probability theory, one may describe the distribution of a random variable as belonging to a family of probability distributions, distinguished from each other by the values of a finite number of parameters. For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function) is:

f(k; λ) = (e^{-λ} λ^k) / k!

This example nicely illustrates the distinction between constants, parameters, and variables. e is Euler's number, a fundamental mathematical constant. The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. k is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing k1 occurrences, we plug it into the function to get f(k1; λ). Without altering the system, we can take multiple samples, which will have a range of values of k, but the system is always characterized by the same λ.

For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of k, and if the sample behaves according to Poisson statistics, then each value of k will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase.

Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ².

In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution.

It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter.

Computer programming


In computer programming, two notions of parameter are commonly used, and are referred to as parameters and arguments—or more formally as a formal parameter and an actual parameter.

For example, in the definition of a function such as

y = f(x) = x + 2,

x is the formal parameter (the parameter) of the defined function.

When the function is evaluated for a given value, as in

f(3): or, y = f(3) = 3 + 2 = 5,

3 is the actual parameter (the argument) for evaluation by the defined function; it is a given value (actual value) that is substituted for the formal parameter of the defined function. (In casual usage the terms parameter and argument might inadvertently be interchanged, and thereby used incorrectly.)

These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic. Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention.
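In Python, using the function from the example above, the distinction looks like this (C-style terminology, per the convention described here):

```python
def f(x):        # x is the formal parameter of the defined function
    return x + 2

y = f(3)         # 3 is the actual parameter (the argument) substituted for x
print(y)         # 5
```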

Artificial intelligence


In artificial intelligence, a model describes the probability that something will occur. Parameters in a model are the weights of the various probabilities. Tiernan Ray, in an article on GPT-3, described parameters this way:

A parameter is a calculation in a neural network that applies a great or lesser weighting to some aspect of the data, to give that aspect greater or lesser prominence in the overall calculation of the data. It is these weights that give shape to the data, and give the neural network a learned perspective on the data.[3]

Engineering


In engineering (especially involving data acquisition) the term parameter sometimes loosely refers to an individual measured item. This usage is not consistent, as sometimes the term channel refers to an individual measured item, with parameter referring to the setup information about that channel.

"Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal."[4]

The term can also be used in engineering contexts, however, as it is typically used in the physical sciences.

Environmental science


In environmental science and particularly in chemistry and microbiology, a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but it may also be a logical entity (present or absent), a statistical result such as a 95th percentile value, or in some cases a subjective value.

Linguistics


Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework.

Logic


In logic, the parameters passed to (or operated on by) an open predicate are called parameters by some authors (e.g., Prawitz's Natural Deduction;[5] Paulson's Designing a theorem prover). Parameters locally defined within the predicate are called variables. This extra distinction pays off when defining substitution (without this distinction special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate variables, and when defining substitution have to distinguish between free variables and bound variables.

Music


In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements. The term is used particularly for pitch, loudness, duration, and timbre, though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music, where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense,[6] but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.).

Grokipedia

from Grokipedia
In mathematics, a parameter is a quantity that influences the output or behavior of a mathematical object, such as a function or equation, but is viewed as being held constant within a specific context.[1] Unlike variables, which are manipulated to produce different outputs in a given instance, parameters remain fixed for that instance while allowing variation across a family of related objects; for example, in the equation of an ellipse \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1, the values a and b serve as parameters that define the shape and size without varying during evaluation.[2] In statistics, a parameter refers to a numerical characteristic or summary measure that describes an entire population, such as the population mean \mu or standard deviation \sigma, which is typically unknown and estimated from sample data.[3] This contrasts with a statistic, which is a similar measure computed from a sample subset of the population.[3]

In computer science, a parameter is a value or variable passed to a function, method, or subroutine during its invocation, enabling reusable code by specifying inputs like data or configuration options; for instance, formal parameters are placeholders declared in the function definition, while actual parameters supply the concrete values at call time.[4] Parameters facilitate modularity and abstraction in programming, appearing in diverse contexts from algorithm design to machine learning models where they are tuned to optimize performance.[4]

Beyond these core disciplines, parameters play a critical role in fields like engineering and physics, where they quantify system properties, such as coefficients in differential equations modeling physical phenomena, that are adjusted to fit experimental data or simulate behaviors.[1] Their consistent use across domains underscores their utility in defining boundaries, constraints, and tunable elements within complex models and analyses.

Fundamentals

Definition and Usage

A parameter is a quantity or variable that defines or characterizes a system, function, or model, often held constant during a specific analysis while remaining adjustable to explore different scenarios or variations.[2] In mathematical contexts, it serves as an input that shapes the behavior or properties of the entity under study without being the primary focus of variation.[2]

The term "parameter" originates from the Greek roots para- meaning "beside" or "subsidiary" and metron meaning "measure", reflecting its role as a supplementary measure that accompanies the main elements of a system.[5] This etymology underscores its historical use in geometry as a line or quantity parallel to another, which evolved into a broader concept for fixed descriptors in analytical frameworks. The English term "parameter" entered mathematical usage in the 1650s, initially referring to quantities in conic sections.[5]

Unlike variables, which vary freely within a given domain to represent changing states or inputs, parameters are typically fixed within a particular context to maintain the structure of the model or equation.[2] This distinction allows parameters to provide stability and specificity, while variables enable exploration of dynamic relationships. Common examples include the radius in the equation describing a circle, which determines the shape's size and is held constant for that geometric figure, or the growth rate in a population model, which characterizes the rate of expansion and can be adjusted to simulate different environmental conditions.[2] These cases illustrate parameters' utility in simplifying complex systems without delving into field-specific computations.
Parameters facilitate abstraction in scientific and mathematical modeling by encapsulating essential characteristics, enabling the creation of generalizable frameworks that can be applied or adapted across diverse contexts with minimal reconfiguration.[6] This role promotes efficiency in representing real-world phenomena, allowing researchers to focus on core dynamics rather than unique details for each instance.

Historical Context

The concept of a parameter traces its roots to ancient Greek geometry, where it referred to a constant quantity used to define the properties of conic sections. Although the modern term "parameter" derives from the Greek words para- (beside) and metron (measure), denoting a subsidiary measure, early applications appear in the works of mathematicians like Euclid and Archimedes, who described conic sections through proportional relations and auxiliary lines that functioned parametrically. For instance, Archimedes utilized analogous fixed measures in his quadrature of the parabola around 250 BCE to determine areas.[7] Apollonius of Perga further systematized this approach in his Conics circa 200 BCE, using the term orthia pleura (upright side) for the fixed chord parallel to the tangent at the vertex (now known as the parameter or latus rectum), essential for classifying ellipses, parabolas, and hyperbolas.[8][9][10]

Advancements in the 17th and 18th centuries integrated parameters into analytic geometry and curve theory. René Descartes, in his 1637 treatise La Géométrie, revolutionized the field by representing geometric curves algebraically using coordinates, where constants in the equations served as parameters defining the loci, bridging algebra and geometry without relying solely on synthetic methods. This laid the groundwork for parametric equations in modern form. Leonhard Euler expanded on this in the 18th century, developing parametric representations for complex curves, such as in his studies of elastic curves (elastica) and spirals during the 1740s, where parameters like arc length and curvature enabled precise descriptions of plane figures and variational problems. Euler's work, including his 1744 paper on the elastica, emphasized parameters as tools for solving differential equations governing curve shapes.[11][12]

In the 19th and early 20th centuries, parameters gained prominence in statistics, physics, and estimation theory. Carl Friedrich Gauss introduced parameter estimation via the least squares method in his 1809 Theoria Motus Corporum Coelestium, applying it to astronomical data to minimize errors in orbital parameters, marking the birth of rigorous statistical inference. Ronald A. Fisher advanced this in the 1920s with maximum likelihood estimation, detailed in his 1922 paper "On the Mathematical Foundations of Theoretical Statistics," where parameters represent unknown population characteristics maximized for observed data likelihood. In physics, James Clerk Maxwell incorporated parameters like permittivity and permeability in his 1865 electromagnetic theory, formalized in equations that unified electricity, magnetism, and light, treating these as constants scaling field interactions.[13][14]

The mid-20th century saw parameters adopted across interdisciplinary fields, particularly computing and artificial intelligence. In computing, the term emerged in the 1950s with the development of subroutines in early programming languages like FORTRAN (1957), where parameters passed values between procedures, enabling modular code as seen in IBM's mathematical subroutine libraries. In AI, parameters proliferated in the 1980s amid the expert systems boom and the revival of neural networks; for example, backpropagation algorithms optimized network parameters (weights) for learning, as in Rumelhart, Hinton, and Williams' 1986 seminal work, scaling AI from rule-based to data-driven models. Notably, while parameters are central to modern generative linguistics since Chomsky's 1981 principles-and-parameters framework, pre-20th-century linguistic usage remains underexplored, with sparse evidence in 19th-century descriptive grammars treating structural constants analogously but without the formalized term.[15][16]

Mathematics

Parameters in Functions

In mathematics, a parameter is a quantity that influences the output or behavior of a function but is viewed as being held constant during the evaluation of that function for varying inputs.[1] This distinguishes parameters from variables, which are the inputs that change to produce different outputs. Parameters effectively define the specific form or characteristics of the function, allowing it to be part of a broader family of related functions.

Functions with parameters are often denoted using a semicolon to separate the variable from the parameter, such as f(x; \theta), where x is the independent variable and \theta represents one or more parameters.[2] Here, \theta is fixed for a given function instance, but varying \theta generates different functions within the same family, enabling the modeling of diverse behaviors through a single parameterized expression. For instance, the exponential family of functions, such as f(x; \theta) = \theta^x for \theta > 0, illustrates how parameters create a versatile class of functions applicable in various mathematical contexts.

Key properties of parameters in functions include linearity, identifiability, and sensitivity. A function is linear in its parameters if the output can be expressed as a linear combination of those parameters, meaning no products, powers, or other nonlinear operations involving the parameters appear in the expression.[17] This linearity simplifies analysis and estimation, as seen in polynomial functions where parameters multiply powers of the variable but not each other. Identifiability refers to the ability to uniquely determine parameter values from the function's observed behavior; for example, in a linear function, parameters are identifiable provided the inputs span the necessary range to distinguish their effects.[18] Sensitivity measures how changes in a parameter affect the function's output, typically quantified by the partial derivative with respect to the parameter, \frac{\partial f}{\partial \theta}, which indicates the rate of change in the function for small perturbations in \theta.[19]

Basic examples highlight these concepts. Consider the linear function y = mx + b, where m is the slope parameter controlling the steepness and b is the intercept parameter setting the y-value at x = 0.[1] Varying m and b produces a family of straight lines, with the function linear in both parameters. Similarly, the quadratic function y = ax^2 + bx + c involves three parameters: a determines the parabola's direction and curvature, b affects its tilt, and c shifts it vertically. This form is also linear in a, b, and c, allowing straightforward adjustments to fit observed data patterns.[1]

Parameter estimation in functions typically involves curve fitting, where observed data points are used to determine parameter values that best match the function to the data. A fundamental method is least squares fitting, which minimizes the sum of squared differences between observed values and the function's predictions.[20] For linear and quadratic functions, this approach yields closed-form solutions for the parameters, such as solving normal equations derived from the data. This method, dating back to the work of Gauss and Legendre in the early 19th century, provides reliable estimates when the data noise is minimal and the function form is appropriate.[20]
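A pure-Python sketch of closed-form least squares for the linear case y = mx + b, solving the normal equations directly (illustrative, noise-free data generated here with m = 2 and b = 1):

```python
def fit_line(xs, ys):
    # Closed-form least squares estimates of the parameters m and b
    # in y = m*x + b, via the normal equations for the linear case.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Noise-free data generated with m = 2, b = 1 is recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]
m, b = fit_line(xs, ys)
print(m, b)  # 2.0 1.0
```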

Parameters in Models

In mathematical models, parameters act as tunable components that encapsulate key system properties, enabling the simulation and prediction of dynamic behaviors through differential equations or computational frameworks. These parameters allow models to represent real-world processes by adjusting rates of change, interactions, or thresholds, thereby facilitating the exploration of scenarios that would otherwise be infeasible to observe directly. For example, in epidemiological simulations, the parameter β in the SIR model quantifies the transmission rate of infection from susceptible to infected individuals, influencing the spread dynamics within a population.[21] Parameters in models are broadly categorized into structural ones, which define the underlying form and assumptions of the model—such as the choice of differential equation structure—and observational ones, which are empirically fitted to align model outputs with available data. Structural parameters establish the model's architecture, often derived from theoretical principles, while observational parameters are adjusted during calibration to reflect measurement outcomes. A critical challenge is identifiability, where parameters may not be uniquely recoverable from outputs due to correlations or insufficient data, leading to non-unique solutions that undermine prediction reliability; this issue is particularly pronounced in nonlinear systems.[22][23] Model calibration involves optimizing parameters to minimize discrepancies between simulated results and empirical observations, with least squares fitting being a foundational technique that minimizes the sum of squared residuals. In the Lotka-Volterra predator-prey model, for instance, the parameters α (prey growth rate), β (predation efficiency), γ (predator mortality rate), and δ (predator conversion efficiency from prey) are calibrated to capture oscillatory population dynamics, often using time-series data on species abundances. 
The calibrated model is given by the system:
\begin{align*}
\frac{dx}{dt} &= \alpha x - \beta x y, \\
\frac{dy}{dt} &= \delta x y - \gamma y,
\end{align*}
where x and y denote prey and predator populations, respectively; least squares methods integrate numerical solutions of these equations with data to estimate the parameters.[24]

Post-2000 advancements have emphasized sensitivity analysis to evaluate parameter influence on model robustness, particularly through global methods that explore parameter ranges holistically rather than locally. These techniques, such as variance-based decomposition, quantify how variations in individual or combined parameters propagate to output uncertainty, aiding in model simplification and prioritization of calibration efforts in complex simulations.[25]
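A forward-Euler sketch of the Lotka-Volterra system in Python; the parameter values, initial populations, and step size here are hypothetical choices for illustration, not calibrated estimates:

```python
def lv_step(x, y, alpha, beta, gamma, delta, dt):
    # One forward-Euler step of dx/dt = alpha*x - beta*x*y,
    #                           dy/dt = delta*x*y - gamma*y.
    return (x + (alpha * x - beta * x * y) * dt,
            y + (delta * x * y - gamma * y) * dt)

# Hypothetical parameter values; in calibration these would be fitted to data.
alpha, beta, gamma, delta = 1.0, 0.1, 1.5, 0.075
x, y = 10.0, 5.0
for _ in range(10_000):          # integrate 10 time units with dt = 0.001
    x, y = lv_step(x, y, alpha, beta, gamma, delta, 0.001)
assert x > 0 and y > 0           # populations remain positive
```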

Analytic Geometry

In analytic geometry, parametric equations provide a method to represent geometric objects such as curves and surfaces by expressing their coordinates as functions of one or more parameters, offering a flexible alternative to Cartesian or implicit forms. For instance, a straight line passing through the point (x_0, y_0) with direction vector (a, b) can be parameterized as x = x_0 + at, y = y_0 + bt, where t is the parameter that traces points along the line.[26] Similarly, a circle of radius r centered at the origin is given by x = r \cos t, y = r \sin t, with t ranging from 0 to 2\pi to complete the loop.[27] For an ellipse centered at the origin with semi-major axis a and semi-minor axis b, the equations become x = a \cos t, y = b \sin t, allowing the parameter t to control the position around the ellipse.[28]

Parametric representations offer several advantages over Cartesian equations, particularly in handling intersections, tracing paths, and facilitating animations, as they explicitly incorporate direction and parameterization by time or angle.[29] For example, computing intersections between curves is often simpler parametrically, as it involves solving for parameter values rather than eliminating variables from implicit equations. A historical development in this area includes Plücker coordinates, introduced by Julius Plücker in the mid-19th century, which use six homogeneous parameters to describe lines in three-dimensional projective space, advancing the analytic treatment of line geometry.[30]

In higher dimensions, parametric equations extend to curves and surfaces, enabling descriptions of complex shapes. A sphere of radius r can be parameterized using two parameters, \theta and \phi, as:
\begin{align*} x &= r \sin \theta \cos \phi, \\ y &= r \sin \theta \sin \phi, \\ z &= r \cos \theta, \end{align*}
where $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi]$, covering the entire surface.[31] A helix, as a space curve, is represented by $x = r \cos t$, $y = r \sin t$, $z = ct$, with $t$ as the parameter controlling both rotation and linear ascent.[32] Unlike implicit forms, which define surfaces via equations like $F(x, y, z) = 0$ (e.g., $x^2 + y^2 + z^2 = r^2$ for a sphere), parametric forms allow direct mapping from parameter domains to the surface, aiding in visualization and computation without solving for coordinates implicitly.[33]
These parametric approaches find foundational applications in computer graphics, where they model smooth curves and surfaces for rendering and animation, such as tracing object paths or generating wireframe models without delving into algorithmic implementation.[34]
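The circle and helix parameterizations above map a parameter value directly to coordinates; a short sketch verifying that sampled circle points also satisfy the implicit equation $x^2 + y^2 = r^2$:

```python
import math

def circle_point(r, t):
    """Point on a circle of radius r at parameter t."""
    return (r * math.cos(t), r * math.sin(t))

def helix_point(r, c, t):
    """Point on a helix: rotation in the x-y plane, linear ascent in z."""
    return (r * math.cos(t), r * math.sin(t), c * t)

# Every sampled point lies on the implicit curve x^2 + y^2 = r^2.
r = 2.0
for k in range(8):
    t = 2 * math.pi * k / 8
    x, y = circle_point(r, t)
    assert abs(x * x + y * y - r * r) < 1e-9

print(helix_point(1.0, 0.5, math.pi))  # (-1.0, ~0.0, ~1.571)
```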

Mathematical Analysis

In mathematical analysis, parameters often appear in functions where their variation affects the behavior of limits, derivatives, and integrals, enabling the study of how solutions depend continuously or differentiably on these parameters. A key tool for handling parameter dependence in integrals is the Leibniz integral rule, which allows differentiation under the integral sign. This rule states that if $ f(x, t) $ is continuous in $ x $ and $ t $, and differentiable in $ t $, with the partial derivative $ \frac{\partial f}{\partial t} $ continuous, then for fixed limits of integration,
\frac{d}{dt} \int_a^b f(x, t) \, dx = \int_a^b \frac{\partial}{\partial t} f(x, t) \, dx.
The rule, first employed by Gottfried Wilhelm Leibniz in the late 17th century, facilitates the analysis of parameter-dependent integrals by interchanging differentiation and integration under suitable conditions on the domain and function regularity.[35] For series expansions, parameter-dependent functions can be approximated using Taylor series centered at a point, where the coefficients involve derivatives with respect to the primary variable but may themselves depend on the parameter. Consider a function $ f(x; p) $ analytic in $ x $ near $ x_0 $; its Taylor series is $ f(x; p) = \sum_{n=0}^\infty \frac{f^{(n)}(x_0; p)}{n!} (x - x_0)^n $, allowing assessment of how the approximation varies with $ p $. This parametric form underpins perturbation theory, where a small parameter $ \epsilon $ perturbs a solvable base problem $ P_0(x) = 0 $ to $ P_\epsilon(x) = P_0(x) + \epsilon Q(x) + O(\epsilon^2) = 0 $, and solutions are sought as asymptotic series $ x(\epsilon) = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots $. In regular perturbation cases, these series converge for small $ \epsilon $, providing quantitative dependence; singular cases require rescaling for uniform validity across domains.[36] The properties of continuity and differentiability for parameter-dependent functions $ f(x; p) $ rely on convergence behaviors of approximating sequences or series. Uniform convergence of a sequence of continuous functions $ f_n(x; p) $ to $ f(x; p) $ on a domain preserves continuity in both $ x $ and $ p $, ensuring the limit function inherits these properties uniformly. For differentiability, if $ \{f_n\} $ converges uniformly and their derivatives $ \{f_n'\} $ converge uniformly to some $ g(x; p) $, then $ f $ is differentiable with $ f' = g $, critical for analyzing parameter sensitivity in limits and integrals.
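The Leibniz rule stated above can be checked numerically; a sketch using the test function $f(x, t) = e^{-t x^2}$ on $[0, 1]$ (the function, step sizes, and tolerance are arbitrary choices):

```python
# Numerical check of the Leibniz rule for f(x, t) = exp(-t * x^2) on [0, 1]:
# d/dt of the integral should equal the integral of the t-partial derivative.
import math

def trapezoid(g, a, b, n=2000):
    """Composite trapezoid rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

def I(t):
    """The parameter-dependent integral of f(x, t) over x in [0, 1]."""
    return trapezoid(lambda x: math.exp(-t * x * x), 0.0, 1.0)

t = 0.5
eps = 1e-5

# Left side: central finite difference of the integral with respect to t.
lhs = (I(t + eps) - I(t - eps)) / (2 * eps)

# Right side: integral of df/dt = -x^2 * exp(-t * x^2).
rhs = trapezoid(lambda x: -x * x * math.exp(-t * x * x), 0.0, 1.0)

print(abs(lhs - rhs) < 1e-6)  # True: the two sides agree numerically
```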
This framework extends to parameter families, where uniform convergence prevents pathologies like pointwise limits yielding discontinuous parameter dependence.[37] In advanced settings, such as dynamical systems, parameters serve as bifurcation points where qualitative solution structures change abruptly. A bifurcation parameter $ r $ in an ordinary differential equation $ \dot{x} = f(x; r) $ induces transitions like the supercritical pitchfork bifurcation, governed by the normal form $ \dot{x} = r x - x^3 $, where the origin shifts from stable (for $ r < 0 $) to unstable (for $ r > 0 $), spawning two new stable equilibria. These phenomena, analyzed via local Taylor expansions around equilibria, reveal how small parameter variations destabilize systems and generate complex behaviors like periodic orbits. Complementing this, the implicit function theorem provides local solvability for parameter-dependent equations $ F(x; p) = 0 $, ensuring unique, differentiable solutions $ x(p) $ near points where $ \frac{\partial F}{\partial x} \neq 0 $. In early 20th-century analysis, Ulisse Dini's rigorous formulations (1907–1915) applied the theorem to real analytic implicit functions and differential geometry, enabling studies of singularities and manifold structures, while extensions by Gilbert A. Bliss (1909) addressed existence in higher dimensions for Riemann surfaces.[38][39]
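The pitchfork normal form $\dot{x} = r x - x^3$ discussed above makes the parameter's role concrete; a sketch tabulating equilibria and their stability from the sign of $f'(x) = r - 3x^2$ (the sampled values of $r$ are arbitrary):

```python
# Equilibria of the pitchfork normal form dx/dt = r*x - x^3.
# For r <= 0 only x = 0 exists; for r > 0 two new equilibria appear at +/-sqrt(r).
import math

def equilibria(r):
    """Real solutions of r*x - x^3 = 0."""
    if r <= 0:
        return [0.0]
    s = math.sqrt(r)
    return [-s, 0.0, s]

def is_stable(x, r):
    """An equilibrium is stable when f'(x) = r - 3x^2 < 0."""
    return (r - 3 * x * x) < 0

for r in (-1.0, 1.0):
    print(r, [(x, is_stable(x, r)) for x in equilibria(r)])
# r = -1: origin stable; r = 1: origin unstable, +/-1 stable
```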

Statistics and Econometrics

In statistics, parameters are unknown quantities in a model that are estimated from data to describe the underlying population or process. Parameter estimation involves methods that use observed data to infer these values, with two foundational approaches being the method of moments and maximum likelihood estimation. The method of moments, introduced by Karl Pearson, equates sample moments to population moments to solve for parameters; for the normal distribution, the first moment yields the mean $\mu$ as the sample average, and the second central moment gives the variance $\sigma^2$ as the sample variance (adjusted for bias).[40] Maximum likelihood estimation, developed by Ronald Fisher, maximizes the likelihood function—the probability of observing the data given the parameters—to obtain point estimates; for the normal distribution, this produces the same estimators for $\mu$ and $\sigma^2$ as the method of moments, but the approach generalizes more efficiently to complex distributions by leveraging the data's joint density. Once estimated, statistical inference assesses the reliability of parameters through confidence intervals and hypothesis tests. Confidence intervals provide a range within which the true parameter likely lies, with coverage probability determined by the interval's construction; for example, a 95% confidence interval for $\mu$ in a normal model with known variance uses the sample mean plus or minus 1.96 standard errors. Hypothesis testing evaluates specific claims about parameters, such as equality to a null value, using test statistics like the t-statistic in Student's t-test, which William Sealy Gosset introduced to handle small-sample inference on means when the variance is unknown, comparing the sample mean to the hypothesized value under a t-distribution. In econometrics, parameters often represent relationships between economic variables, estimated via regression models to inform policy and prediction.
Ordinary least squares (OLS) estimates regression coefficients $\beta_0$ and $\beta_1$ in the linear model $y = \beta_0 + \beta_1 x + \epsilon$ by minimizing the sum of squared residuals, a method formalized by Carl Friedrich Gauss for error-prone observations. When endogeneity biases OLS estimates—such as due to omitted variables or reverse causality—instrumental variables (IV) estimation uses exogenous instruments correlated with the regressor but not the error term to identify causal parameters, as advanced in modern causal inference frameworks.[41] Time-series analysis treats parameters as characterizing temporal dependencies in data, with autoregressive integrated moving average (ARIMA) models specifying orders $p$, $d$, and $q$ for autoregressive, differencing, and moving average components, respectively; estimation typically employs maximum likelihood on differenced series to achieve stationarity. While classical ARIMA relies on frequentist methods, Bayesian approaches for parameter inference, incorporating priors to handle uncertainty in volatile series like economic indicators, have been integrated since the late 1990s.[42]
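For the simple linear model above, the OLS estimates have the closed form $\beta_1 = \operatorname{cov}(x, y)/\operatorname{var}(x)$ and $\beta_0 = \bar{y} - \beta_1 \bar{x}$; a sketch on noiseless synthetic data (the data values are made up):

```python
# Ordinary least squares for y = b0 + b1*x + e, via the closed-form solution.
def ols(xs, ys):
    """Return (b0, b1) minimizing the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Noiseless data generated from y = 2 + 3x: OLS recovers the parameters exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 3.0 * x for x in xs]
b0, b1 = ols(xs, ys)
print(round(b0, 6), round(b1, 6))  # 2.0 3.0
```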

Probability Theory

In probability theory, parameters specify the properties of probability distributions, enabling the modeling of random phenomena. These parameters are typically classified into location, scale, and shape categories. The location parameter, often denoted by μ, determines the central tendency or shift of the distribution, such as the mean in the normal distribution. The scale parameter, denoted by σ, controls the spread or dispersion, representing the standard deviation in the normal case. Shape parameters alter the form of the distribution, for instance, the success probability p in the Bernoulli distribution, which governs the probability mass at 0 or 1, or the rate λ in the Poisson distribution, which sets the expected number of events in a fixed interval.[43][44][45] A prominent class of distributions unified by their parametric structure is the exponential family, which encompasses many common distributions like the normal, Poisson, and Bernoulli. In this family, the probability density or mass function can be expressed as
f(x \mid \eta) = h(x) \exp\left( \eta^T T(x) - A(\eta) \right),
where η is the natural parameter vector, T(x) is the sufficient statistic, h(x) is the base measure, and A(η) is the log-partition function ensuring normalization. The natural parameters η reparameterize the distribution in a form that simplifies inference, as they directly multiply the sufficient statistic T(x), facilitating properties like convexity of the log-partition function. This parameterization highlights the role of parameters in capturing the essential variability across family members.[46][47] Parameters also define stochastic processes, which model sequences of random variables evolving over time. In Markov chains, a discrete-time stochastic process with the Markov property, the transition probabilities $p_{ij} = P(X_{t+1} = j \mid X_t = i)$ serve as the key parameters, forming the rows of the transition matrix that dictate the probability of moving between states. These parameters fully characterize the chain's stationary behavior and long-term dynamics when the matrix is stochastic. For continuous-time processes like Brownian motion, also known as the Wiener process, the parameters include the drift μ, which specifies the expected linear trend, and the volatility σ, which measures the instantaneous variance per unit time, yielding the stochastic differential equation $dX_t = \mu\, dt + \sigma\, dW_t$, where $W_t$ is standard Brownian motion.[48][49][50] Sufficient statistics play a crucial role in parameter inference within probability theory by encapsulating all relevant information about the parameters from the data. A statistic T(X) is sufficient for a parameter θ if the conditional distribution of the data given T(X) is independent of θ, allowing inference to proceed solely from T(X) without loss of information. In exponential families, the natural sufficient statistic T(x) directly informs estimation of η, as it appears linearly in the likelihood.
This concept underpins efficient inference procedures, reducing dimensionality while preserving probabilistic structure.[51][52] Lévy processes, a broad class of stochastic processes with independent and stationary increments generalizing Brownian motion, developed through contributions starting in the early 1900s, received key formalization by Paul Lévy in the 1930s. These processes are parameterized by a triplet (b, σ², ν), where b is the drift vector, σ² is the Gaussian covariance matrix for the diffusion component, and ν is the Lévy measure describing the intensity and size of jumps. This parameterization captures jumps, diffusion, and drift, enabling the modeling of heavy-tailed phenomena beyond Gaussian assumptions.[53]
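The transition probabilities $p_{ij}$ fully parameterize a Markov chain's long-run behavior; a minimal sketch computing a two-state chain's stationary distribution by power iteration (the matrix entries are illustrative):

```python
# Stationary distribution of a two-state Markov chain by power iteration.
# P[i][j] is the probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

dist = [0.5, 0.5]  # arbitrary initial distribution
for _ in range(1000):
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

print([round(p, 4) for p in dist])  # [0.8333, 0.1667], i.e. [5/6, 1/6]
```

For this matrix the fixed point of π = πP can also be found exactly by hand: 0.1 π₀ = 0.5 π₁ with π₀ + π₁ = 1 gives π = (5/6, 1/6), which the iteration converges to geometrically.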

Computing

Computer Programming

In computer programming, parameters serve as placeholders for values or references passed to functions, subroutines, or methods, enabling modular code by allowing external inputs to influence execution without hardcoding specifics.[54] They facilitate reusability and abstraction, distinguishing between formal parameters (defined in the function signature) and actual arguments (provided during invocation). Early programming languages emphasized parameters for numerical computations, evolving to support diverse passing mechanisms and scoping rules in modern contexts. The concept of parameters originated in the 1950s with FORTRAN, developed by IBM for scientific computing on the IBM 704. FORTRAN I (1957) introduced function statements using dummy arguments in an assignment-like syntax, such as function(arg) = expression, where arguments were passed by address, allowing functions to modify values indirectly.[55] FORTRAN II (1958) enhanced this by supporting user-defined subroutines with separate compilation, retaining symbolic information for parameter references, while FORTRAN III (late 1950s) permitted function and subroutine names as arguments themselves, expanding flexibility for alphanumeric handling.[55] These innovations marked parameters as essential for procedural abstraction in early high-level languages. Function parameters vary by type and passing mechanism across languages. Positional parameters are matched by order of declaration, as in Python's def add(a, b): return a + b, invoked as add(2, 3).[56] Keyword parameters allow named passing for clarity and flexibility, such as greet(name="Alice", greeting="Hello") in def greet(name, greeting="Hello"): ....[57] Default parameters provide fallback values, evaluated once at definition; for example, def power(base, exponent=2): return base ** exponent yields 9 when called as power(3).[58] Parameters can be passed by value or by reference, affecting mutability. 
Pass-by-value copies the argument's value into the formal parameter, isolating changes; in C++, void swapByVal(int num1, int num2) leaves originals unchanged (e.g., inputs 10 and 20 remain 10 and 20 post-call).[59] Pass-by-reference passes the address, enabling modifications to the original; C++'s void swapByRef(int& num1, int& num2) swaps values (10 and 20 become 20 and 10).[59] This distinction balances efficiency for small types (value) with avoidance of copies for large structures (reference, often with const for read-only access).[59] Configuration parameters control program behavior at runtime, often via command-line arguments or APIs. In C++, the main function receives them as int main(int argc, char* argv[]): argc counts arguments (minimum 1), and argv is an array where argv[0] is the program name and subsequent elements are strings; for instance, invoking ./program input.txt -v sets argc=3 and argv[1]="input.txt".[60] This mechanism supports scripting and external control without recompilation. Parameter scope defines accessibility, while binding associates names with values or types. Local parameters, declared within a function or block, are confined to that lexical scope (e.g., Python's function arguments act as locals), preventing unintended interference.[54] Global parameters, declared at module level, are accessible program-wide but risk namespace pollution; languages like C++ use static binding at compile time for types (e.g., int x) and dynamic binding at runtime for values.[54] Type systems enforce parameter constraints, such as strong typing in Java to prevent mismatches. Modern languages extend parameters with generics for type-safe reusability.
TypeScript's generics use type parameters like <Type> in functions: function identity<Type>(arg: Type): Type { return arg; }, callable as identity<string>("hello") to infer and preserve string type.[61] In functional programming paradigms, lambda expressions with parameters gained mainstream adoption in the 2000s, enabling anonymous functions for concise higher-order operations; C++11 (2011) and Java 8 (2014) integrated them, building on earlier influences like Python's 1994 lambda but accelerating use in object-oriented contexts for tasks like event handling.[62]
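The positional, keyword, and default parameter mechanisms described above, together with Python's pass-by-object-reference behavior, can be shown in a few lines (the function names here are made up for illustration):

```python
def power(base, exponent=2):
    """'exponent' is a default parameter, evaluated once at definition time."""
    return base ** exponent

# Positional, keyword, and default-valued calls:
print(power(3))                   # 9  (default exponent=2)
print(power(3, 3))                # 27 (positional)
print(power(exponent=4, base=2))  # 16 (keyword order is free)

def scale(values, factor=2):
    """Rebinding the parameter name does not affect the caller;
    mutating a mutable argument in place would."""
    values = [v * factor for v in values]  # rebinds the local name only
    return values

data = [1, 2]
print(scale(data), data)  # [2, 4] [1, 2] -- the original list is unchanged
```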

Artificial Intelligence

In artificial intelligence, particularly in machine learning models, parameters refer to the internal variables that are learned during training to optimize model performance. Model parameters, often denoted as $\theta$, include the weights and biases in neural networks, which are adjusted to minimize a loss function that quantifies the difference between predicted and actual outputs. This optimization process typically employs backpropagation, an algorithm that computes gradients of the loss with respect to each parameter and updates them iteratively using gradient descent. The seminal introduction of backpropagation in multilayer networks enabled efficient training of deep architectures by propagating errors backward through the layers.[63] For instance, in deep feedforward networks, the objective is to find $\theta$ that minimizes the expected loss $\mathbb{E}_{(x,y)}[L(y, f(x; \theta))]$, where $f$ is the model function. Hyperparameters, in contrast, are configuration settings external to the model that are not learned from data but must be specified before training. Common examples include the learning rate $\alpha$, which controls the step size in parameter updates during optimization, and the batch size, which determines the number of training examples processed per iteration. These are tuned through methods such as grid search, which exhaustively evaluates combinations on a predefined grid, or more efficient approaches like random search, which samples hyperparameters randomly and often outperforms grid search by focusing on promising regions of the space. Bayesian optimization further advances this by modeling the hyperparameter-performance relationship as a probabilistic surrogate, such as a Gaussian process, to intelligently select configurations that balance exploration and exploitation.[64] Specific examples illustrate the role of parameters in AI architectures.
In transformer models, the model dimension $d_{model}$ serves as a key hyperparameter that sets the size of input embeddings and hidden states, influencing the network's capacity to capture complex dependencies; for instance, the original transformer used $d_{model} = 512$. In reinforcement learning, the discount factor $\gamma \in [0, 1)$ is a hyperparameter that weights future rewards in the value function, balancing immediate versus long-term gains in agents trained via methods like Q-learning. Recent developments have emphasized parameter efficiency and scaling in large-scale AI systems. Transfer learning, prominent since the 2010s, involves initializing model parameters with representations learned on large source tasks and fine-tuning them on target tasks, leveraging the transferability of lower-layer features while adapting higher layers. Studies show that early layers retain general features transferable across domains, whereas later layers become task-specific, guiding efficient fine-tuning strategies. In large language models, scaling laws have revealed optimal parameter regimes; for example, the Chinchilla model demonstrated that compute-optimal training allocates equal resources to model size (parameters) and data tokens, achieving better performance than larger undertrained models like Gopher by using 70 billion parameters trained on 1.4 trillion tokens.[65]
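The learning rate's role as a hyperparameter can be seen with plain gradient descent on a one-dimensional quadratic loss; a sketch (the loss function and the sampled α values are illustrative):

```python
# Gradient descent on L(theta) = (theta - 5)^2; alpha is a hyperparameter.
def descend(alpha, theta=0.0, steps=100):
    """Run 'steps' gradient-descent updates with learning rate alpha."""
    for _ in range(steps):
        grad = 2 * (theta - 5.0)   # dL/dtheta
        theta -= alpha * grad      # parameter update, scaled by alpha
    return theta

# A moderate learning rate converges to the minimizer theta = 5;
# too large a rate (here alpha = 1.1) makes the iterates diverge.
print(round(descend(0.1), 6))   # 5.0
print(abs(descend(1.1)) > 1e6)  # True: divergence
```

The contrast illustrates why α must be tuned rather than learned: it governs the update rule itself, not the model's fit to data.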

Applied Sciences

Engineering

In engineering, parameters are essential variables that define the behavior, performance, and constraints of systems during design, analysis, and optimization. These include physical properties, operational settings, and dimensional specifications that engineers adjust to meet functional requirements while ensuring reliability and efficiency. For instance, in mechanical and electrical systems, parameters such as material strengths, load conditions, and circuit elements directly influence system stability and output.[66] In control theory, system parameters like the proportional gain $K_p$ and time constant $\tau$ are critical for tuning proportional-integral-derivative (PID) controllers, which regulate processes in applications ranging from robotics to industrial automation. The gain $K_p$ determines the controller's responsiveness to error, amplifying the corrective action proportionally, while $\tau$ represents the system's inherent response time, affecting settling and overshoot in dynamic systems. Proper selection of these parameters, often through methods like Ziegler-Nichols tuning, ensures stable operation by balancing speed and accuracy.[67][68] Design optimization in engineering relies on parameters such as tolerances in manufacturing, which specify allowable deviations in part dimensions to achieve interchangeability and precision. For example, in machining processes, tolerance parameters define limits for linear dimensions (e.g., ±0.1 mm for general fits) to minimize defects and assembly issues. Similarly, finite element analysis (FEA) uses input parameters like material Young's modulus, Poisson's ratio, and boundary loads to simulate stress distributions and predict failure modes in structures. These parameters enable iterative optimization, reducing material use while maintaining safety factors.[69][70] Representative examples illustrate parameter roles in specific domains.
In electrical circuits, resistance $R$ (measured in ohms) quantifies opposition to current flow, while capacitance $C$ (in farads) indicates charge storage capacity, both governing time constants in RC networks for filtering and timing applications. In fluid dynamics, the Reynolds number $Re$, defined as $Re = \frac{\rho v D}{\mu}$ where $\rho$ is density, $v$ is velocity, $D$ is diameter, and $\mu$ is viscosity, serves as a dimensionless parameter to characterize flow regimes—low $Re$ (<2000) indicates laminar flow, while high $Re$ (>4000) signals turbulence, guiding pipeline and aerodynamic designs.[71][72] Engineering standards formalize these parameters for consistency. The ISO 2768 standard establishes general tolerances for linear and angular dimensions in manufacturing, categorizing them into fine (f), medium (m), coarse (c), and very coarse (v) classes to suit production methods like casting or milling. In sustainable engineering, parameters such as carbon footprint factors—quantifying embodied emissions per unit mass—have evolved in the 2020s to incorporate lifecycle assessments, aiding low-carbon material selection and reducing global warming potential by up to 30% in building designs.[73][74]
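The Reynolds number formula and the flow-regime thresholds quoted above translate directly into code; a sketch (the fluid properties are illustrative, roughly water in a 5 cm pipe):

```python
def reynolds(rho, v, D, mu):
    """Re = rho * v * D / mu (dimensionless)."""
    return rho * v * D / mu

def regime(Re):
    """Classify pipe flow using the thresholds quoted in the text."""
    if Re < 2000:
        return "laminar"
    if Re > 4000:
        return "turbulent"
    return "transitional"

# Roughly water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa.s) at 0.02 m/s in a 0.05 m pipe:
Re = reynolds(rho=1000.0, v=0.02, D=0.05, mu=1e-3)
print(Re, regime(Re))  # 1000.0 laminar
```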

Environmental Science

In environmental science, parameters are essential for modeling complex systems such as climate and ecosystems, enabling predictions of natural processes and human impacts. Climate models, particularly general circulation models (GCMs), rely on key inputs like the climate sensitivity parameter λ, defined as the equilibrium change in global mean surface temperature per unit radiative forcing (ΔT = λ × ΔF), typically around 0.5 K/(W m⁻²) in one-dimensional radiative-convective models and exhibiting 20-30% variation in three-dimensional atmosphere-ocean GCMs due to feedbacks such as water vapor and clouds.[75] This parameter quantifies the Earth's radiative response and is nearly invariant across forcings like well-mixed greenhouse gases and solar radiation, though it varies more for stratospheric ozone perturbations.[75] Surface properties in GCMs, including albedo (the fraction of incident solar radiation reflected by the surface) and emissivity (the efficiency of surface thermal radiation emission), further govern energy balance; albedo influences absorbed solar energy and atmospheric circulation, while emissivity determines upward longwave flux in radiative transfer equations like F↑ = ε_s σ T_s^4, where σ is the Stefan-Boltzmann constant.[76] These parameters, often assumed constant over global surfaces in simplified GCMs, are tuned to match observed climate states and drive simulations of future scenarios.[76] Ecological models incorporate parameters to capture population dynamics and community structure, with the carrying capacity K in the logistic growth equation dN/dt = rN(1 - N/K) representing the maximum sustainable population size limited by resources, where r is the intrinsic growth rate and N is population size.[77] This parameter levels off exponential growth as environmental constraints intensify, providing a foundational metric for predicting species responses to habitat changes in metapopulation contexts.[78] Biodiversity indices, used as parameters 
to evaluate ecosystem diversity, include the Shannon index H = -∑ p_i ln(p_i), where p_i is the proportion of species i, measuring uncertainty in species identity and balancing rare and common taxa via geometric mean rarity.[79] Similarly, the Simpson index D = 1 / ∑ p_i^2 quantifies the probability of interspecific encounters, emphasizing dominant species through harmonic mean rarity and serving as a robust input for models assessing community stability and resilience.[79] These indices enable comparisons across ecosystems, informing conservation strategies by parameterizing diversity gradients standardized by sampling coverage.[79] Uncertainty in environmental parameters arises from incomplete knowledge and model variability, with the Intergovernmental Panel on Climate Change (IPCC) addressing it through qualitative confidence levels (e.g., high based on evidence quality and agreement) and probabilistic likelihood terms (e.g., likely: 66-100% probability).[80] For instance, IPCC models report parameter ranges like climate sensitivity with 90-95% confidence intervals to capture tails of distributions, ensuring traceable judgments in projections.[80] Scenario analysis employs Representative Concentration Pathways (RCPs) to parameterize emissions trajectories, such as RCP2.6 (low forcing, ~2.6 W m⁻² by 2100) limiting warming to 1.6°C and RCP8.5 (high forcing, ~8.5 W m⁻²) projecting 4.3°C, influencing outcomes like sea level rise (0.43 m vs. 0.84 m by 2100).[81] These pathways integrate parameters for greenhouse gas concentrations, land use, and socioeconomic drivers to evaluate risks across low- and high-emission futures.[81] Recent assessments of biodiversity loss highlight parameters quantifying exploitation and decline, as detailed in a 2020 IPBES workshop report. 
For example, unsustainable wildlife trade affects 72% (6,241 species) of threatened or near-threatened vertebrates, with regional hunting risking 13% of Southeast Asian threatened mammals (113 species) and 8% in Africa (91 species).[82] African elephant populations have declined 30-fold to ~400,000 over the past century, exacerbated by poaching exceeding 100,000 individuals between 2010 and 2012, serving as key metrics for modeling overexploitation drivers. More recent assessments, such as a 2024 PNAS study, confirm ongoing declines, with savanna elephant populations decreasing by an average of 70% at surveyed sites over the past 50 years (1964–2016) and forest elephants by 90%.[82][83] Land use change contributes over 30% to emerging infectious diseases since 1960, linking biodiversity erosion to health risks through enhanced human-wildlife interfaces.[82] These parameters underscore the economic scale of loss, with illegal trade valued at US$7-23 billion annually and prevention costs estimated at US$17.7-31.2 billion yearly.[82]
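The Shannon and (inverse) Simpson indices defined earlier are direct computations over species proportions; a sketch with made-up abundance counts:

```python
import math

def shannon(counts):
    """Shannon index H = -sum p_i ln p_i over species proportions."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def inverse_simpson(counts):
    """Inverse Simpson index D = 1 / sum p_i^2."""
    total = sum(counts)
    return 1.0 / sum((c / total) ** 2 for c in counts)

# A perfectly even 4-species community maximizes both indices:
even = [25, 25, 25, 25]
print(round(shannon(even), 4))          # 1.3863, i.e. ln(4)
print(round(inverse_simpson(even), 4))  # 4.0

skewed = [97, 1, 1, 1]
print(round(inverse_simpson(skewed), 4))  # close to 1: one species dominates
```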

Other Disciplines

Linguistics

In linguistics, the concept of parameters forms a core component of generative grammar, particularly within Noam Chomsky's Principles and Parameters (P&P) theory developed in the 1980s. This framework posits that Universal Grammar (UG) consists of invariant principles shared across all human languages, alongside a finite set of parameters that account for cross-linguistic variation. Parameters are binary switches that children "set" during language acquisition based on environmental input, allowing the grammar to generate language-specific structures while adhering to universal constraints. The theory emerged as an evolution of earlier generative models, aiming to explain both the uniformity and diversity of natural languages through a biologically endowed language faculty.[84] A key example of a parameter is the head-directionality parameter, which determines whether the head of a phrase (such as a verb in a verb phrase or a preposition in a prepositional phrase) precedes or follows its complements. In head-initial languages like English, heads typically come first (e.g., "eat the apple"), whereas in head-final languages like Japanese, they follow (e.g., "apple eat"). Setting this parameter influences broader syntactic patterns, such as word order in clauses. Another prominent parameter is the pro-drop parameter, which licenses the null realization of subjects in finite clauses. Pro-drop languages like Spanish allow sentences such as "Habla inglés" (meaning "He/She speaks English"), where the subject pronoun is omitted due to rich verbal morphology, unlike non-pro-drop languages like English, which require overt subjects ("He/She speaks English"). These parameters illustrate how subtle settings can yield significant typological differences. In language acquisition, parameter setting explains how children rapidly converge on their target grammar despite limited and noisy input, guided by UG. 
The process involves hypothesizing parameter values that match the linguistic evidence, with mechanisms like the subset principle ensuring learnability. Formulated by Wexler and Manzini (1987), the subset principle states that if one parameter value generates a proper subset of structures compared to another (the superset), learners initially adopt the subset value to avoid overgeneralization, only resetting to the superset upon unambiguous evidence. For instance, in acquiring the pro-drop parameter, children initially adopt the non-pro-drop setting (subset, as in English). Learners of pro-drop languages like Spanish shift to the superset value upon encountering unambiguous evidence of null subjects in the input. This principle resolves potential learnability paradoxes in P&P theory, preventing erroneous generalizations from impoverished data.[85][86][87] The P&P framework evolved into Chomsky's Minimalist Program (MP) in the 1990s, where parameters are refined to minimize the computational burden on the language faculty, focusing on core operations like Merge and Agree. In MP, parameters are increasingly localized to the lexicon or functional features, reducing their number and shifting emphasis from syntax to interfaces with other cognitive systems. Post-2010 developments have explored how these parameters interface with computational linguistics, such as in probabilistic models of parameter optimization that simulate acquisition via Bayesian inference, bridging theoretical syntax with statistical learning algorithms. This extension addresses gaps in earlier models by integrating minimalist assumptions into computational simulations of language variation and change.[88]

Logic

In formal logic, parameters often manifest as free variables within proofs and formulas, serving as placeholders for arbitrary but fixed elements from the domain of discourse. A free variable in a logical formula is one that is not bound by a quantifier, allowing it to function as a parameter that can be instantiated with specific terms during proof construction or interpretation.[89] This treatment enables the generalization of proofs: for instance, a proof of a formula $\phi(x)$ with free variable $x$ (as a parameter) can be universally quantified to $\forall x\, \phi(x)$ if $x$ does not occur free in the assumptions.[90] In automated reasoning and resolution-based theorem proving, free variables as parameters facilitate unification and substitution, ensuring that proofs remain schematic and applicable across instances. The Herbrand universe plays a central role in this context, providing a domain constructed solely from the function symbols of the language without requiring non-logical constants or variables. Defined as the set of all ground terms (variable-free terms) generated by applying function symbols to each other, the Herbrand universe allows proofs to be analyzed over a countable, term-generated structure, where free variables in open formulas are effectively parameterized by these ground terms.[91] Herbrand's theorem, which states that a set of first-order clauses is satisfiable if and only if it has a Herbrand model (an interpretation over the Herbrand universe), relies on this parameterization to reduce satisfiability to propositional logic over ground instances, eliminating the need for infinite domains in proof search. In model theory, parameters refer to specific elements of a structure incorporated into the defining formulas of sets or relations, enabling more expressive definitions than those without parameters.
A set $A$ in a structure $\mathcal{M}$ is definable with parameters if there exists a first-order formula $\phi(x, \bar{a})$ with free variable $x$ and parameters $\bar{a}$ from $|\mathcal{M}|$ such that $A = \{ b \in |\mathcal{M}| \mid \mathcal{M} \models \phi(b, \bar{a}) \}$.[92] This contrasts with parameter-free definability, where $\phi(x)$ mentions no elements of $\mathcal{M}$, and allows structure-specific properties to be captured: in $(\mathbb{Z}, +)$, the even integers are definable without parameters by $\exists y\,(x = y + y)$, while the coset $a + 2\mathbb{Z}$ requires the formula $\exists y\,(x = a + y + y)$ with parameter $a$. Historically, model theory's emphasis on parameters emerged in the 1940s through the work of Alfred Tarski and Abraham Robinson, who used them to study definable subsets and stability in structures, influencing the field's shift toward algebraic and geometric applications.[93] Examples in first-order logic illustrate the utility of parameters: consider a sentence such as $\forall x\,(P(x, c) \rightarrow Q(x))$, where $c$ is a constant parameter naming an element of the model; this asserts a property relative to $c$, generalizing to arbitrary interpretations when $c$ is treated as a free variable in proofs.
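Definability with parameters can be made concrete over a small finite structure. The sketch below, using the hypothetical structure $(\mathbb{Z}_{12}, +)$, computes the set defined by the formula $\exists y\,(x = a + y + y)$ for different choices of the parameter $a$:

```python
# Illustration of definability with parameters in the finite structure
# (Z_12, +): the formula phi(x, a) = "exists y: x = a + y + y" defines,
# for each choice of parameter a, the coset a + 2*Z_12.

n = 12
universe = range(n)

def definable_set(phi, universe, *params):
    """Return {b in universe : the structure satisfies phi(b, params)}."""
    return {b for b in universe if phi(b, *params)}

def phi(x, a):
    # exists y (x = a + y + y), interpreted with addition modulo n
    return any((a + y + y) % n == x for y in universe)

evens = definable_set(phi, universe, 0)  # parameter a = 0
odds = definable_set(phi, universe, 1)   # parameter a = 1
print(sorted(evens))  # [0, 2, 4, 6, 8, 10]
print(sorted(odds))   # [1, 3, 5, 7, 9, 11]
```

Varying the parameter sweeps out different definable sets from a single formula, which is exactly the extra expressive power that parameters contribute.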
Skolem functions extend this by replacing existentially quantified variables with functions of universal parameters, as in Skolemization: the formula $\forall x\, \exists y\, P(x, y)$ becomes $\forall x\, P(x, f(x))$, where $f$ is a Skolem function depending on the parameter $x$.[94] Introduced by Thoralf Skolem in the 1920s as part of his work on the Löwenheim-Skolem theorem, these functions preserve satisfiability while eliminating existential quantifiers, facilitating resolution and model construction in first-order logic.[95] Kurt Gödel's completeness theorem of 1930 established that every consistent first-order theory has a model, with proofs involving the systematic instantiation of free variables (parameters) to construct Henkin-style witnesses or canonical models.[96] Gödel's original proof reduces the problem to sentences in prenex form, using parameters to build a countable model from the theory's axioms and ensuring that free variables are adequately substituted to satisfy all instances.[97] In intuitionistic logic, free variables function similarly as parameters in proof rules (e.g., for quantifier introduction, provided they do not occur free in the assumptions), but a gap arises in completeness: Gödel's semantic completeness over classical models does not extend directly, and an analogous theorem requires Kripke or Heyting semantics instead, since intuitionistic provability demands constructive witnesses rather than classical truth preservation.[89] This distinction highlights how parameters in intuitionistic systems emphasize realizability over mere existence, reflecting foundational differences in logical deduction.[98]
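The Herbrand universe construction described above can be sketched directly: starting from the constants, repeatedly apply the function symbols to terms already built. The signature here (one constant `a`, one unary symbol `f`) is purely illustrative.

```python
from itertools import product

# Sketch of the Herbrand universe for a hypothetical signature: the set of
# all ground (variable-free) terms built from the constants and function
# symbols. Constants can be viewed as arity-0 function symbols; if a
# language has none, a fresh constant is customarily added so the
# universe is nonempty.

def herbrand_universe(constants, functions, depth):
    """Ground terms of nesting depth <= depth; `functions` maps symbol -> arity."""
    terms = set(constants)
    for _ in range(depth):
        new_terms = set(terms)
        for sym, arity in functions.items():
            for args in product(terms, repeat=arity):
                new_terms.add(f"{sym}({', '.join(args)})")
        terms = new_terms
    return terms

h0 = herbrand_universe({"a"}, {"f": 1}, 0)  # {'a'}
h2 = herbrand_universe({"a"}, {"f": 1}, 2)  # {'a', 'f(a)', 'f(f(a))'}
print(sorted(h2))
```

With any function symbol of positive arity the full universe is infinite but countable, which is why Herbrand's theorem can reduce first-order satisfiability to propositional reasoning over ground instances enumerated in this way.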

Music

In music, acoustic parameters describe the fundamental properties of sound waves that contribute to auditory perception. Frequency, denoted $f$, determines the pitch of a sound, corresponding to the number of vibrations per second in hertz (Hz), with higher frequencies producing higher pitches. Amplitude, denoted $A$, governs the loudness or intensity of the sound, directly related to the magnitude of pressure variations in the wave. Timbre, often described as the "color" or quality of a sound, arises from the complex interplay of harmonic overtones and their relative amplitudes, distinguishing, for example, a violin from a trumpet even at the same pitch and volume.[99][100][101] In musical composition, parameters such as tempo and key signature provide structural guidelines within scores. Tempo, measured in beats per minute (BPM), dictates the overall pace and influences the emotional character; for instance, a tempo of 60 BPM evokes a steady, deliberate feel, while 120 BPM suggests vivacity. Key signatures specify the tonal center by indicating the sharps or flats applied throughout a piece, establishing whether it is in a major (typically brighter) or minor (often somber) mode, and facilitating modulation between sections. These elements allow composers to manipulate time and harmony systematically, as seen in classical scores where tempo markings and key changes guide performer interpretation.[102][103] Sound synthesis employs adjustable parameters to generate and shape tones electronically. In frequency modulation (FM) synthesis, the modulation index, denoted $I$, controls the depth of frequency deviation applied by a modulator to a carrier wave, producing rich timbres through sidebands; values of $I$ around 5 can yield bell-like sounds, while lower values approximate simpler tones.
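The FM relationship can be written as $y(t) = A \sin(2\pi f_c t + I \sin(2\pi f_m t))$, and the sketch below generates samples of it directly. The specific carrier/modulator frequencies are illustrative, not taken from any particular instrument patch.

```python
import math

# Minimal FM synthesis sketch: a modulator at f_m deviates the phase of a
# carrier at f_c; the modulation index I sets the deviation depth and hence
# the sideband energy (timbre). Parameter values here are illustrative.

def fm_samples(f_c, f_m, index, amplitude=1.0, sr=44100, duration=0.01):
    """Samples of y(t) = A * sin(2*pi*f_c*t + I * sin(2*pi*f_m*t))."""
    n = int(sr * duration)
    return [
        amplitude * math.sin(2 * math.pi * f_c * t / sr
                             + index * math.sin(2 * math.pi * f_m * t / sr))
        for t in range(n)
    ]

# index 0 reduces to a plain sine carrier; larger indices add sidebands.
plain = fm_samples(440.0, 110.0, 0.0)
bright = fm_samples(440.0, 110.0, 5.0)
```

Setting the index to zero recovers a pure sine at the carrier frequency, which makes the role of $I$ as the single timbre-shaping parameter easy to verify by ear or by spectrum analysis.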
MIDI (Musical Instrument Digital Interface) extends this by transmitting parameters such as note velocity (for dynamics), aftertouch (for expressive modulation), and continuous controllers for real-time adjustment of settings such as filter cutoff or volume, enabling performers to interact with synthesizers intuitively.[104][105] Historically, the adoption of 12-tone equal temperament in the 18th century standardized tuning parameters, dividing the octave into 12 equal semitones (each exactly 100 cents), which facilitated chromatic harmony and modulation across all keys, as promoted in works by Johann Sebastian Bach. This system, in which the frequency ratio between consecutive semitones is $2^{1/12}$, became the foundation of Western music, enabling keyboard instruments to play in any key without retuning. In post-2000 algorithmic composition, parameters such as mapping functions extract and transform gestural data (e.g., pitch density or rhythmic variance) into musical outputs, allowing composers to define probabilistic rules for structure, as in tools that map input gestures to polyphonic textures for experimental pieces.[106][107]
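The equal-temperament arithmetic above is a two-line computation, shown here with the conventional A4 = 440 Hz reference:

```python
# 12-tone equal temperament: each semitone multiplies frequency by 2**(1/12),
# so twelve semitones (an octave) exactly double it. A4 = 440 Hz is the
# conventional modern reference pitch.

SEMITONE = 2 ** (1 / 12)

def et_frequency(semitones_from_a4, a4=440.0):
    """Frequency n equal-tempered semitones above (negative: below) A4."""
    return a4 * SEMITONE ** semitones_from_a4

print(round(et_frequency(12), 1))  # octave above A4: 880.0
print(round(et_frequency(3), 2))   # C5, three semitones up: 523.25
```

Because every semitone uses the same ratio, transposing a piece by a fixed number of semitones preserves all interval ratios, which is precisely what lets an equal-tempered keyboard play in any key without retuning.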

References
