M-estimator
In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimators. The definition of M-estimators was motivated by robust statistics, which contributed new types of M-estimators. However, M-estimators are not inherently robust, as is clear from the fact that they include maximum likelihood estimators, which are in general not robust. The statistical procedure of evaluating an M-estimator on a data set is called M-estimation. The "M" initial stands for "maximum likelihood-type".
More generally, an M-estimator may be defined to be a zero of an estimating function. This estimating function is often the derivative of another statistical function. For example, a maximum-likelihood estimate is the point where the derivative of the likelihood function with respect to the parameter is zero; thus, a maximum-likelihood estimator is a critical point of the score function. In many applications, such M-estimators can be thought of as estimating characteristics of the population.
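Written out, the two equivalent characterizations look as follows (a minimal sketch using the standard ρ/ψ notation, which is assumed here rather than quoted from the text above). An M-estimator minimizes a sum over the sample,

$$\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{n} \rho(x_i, \theta),$$

and, when ρ is differentiable in θ with ψ = ∂ρ/∂θ, it solves the estimating equation

$$\sum_{i=1}^{n} \psi(x_i, \hat{\theta}) = 0,$$

which is the sense in which a maximum-likelihood estimator is a zero of the score function (take ρ = −log f).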
Although the main concepts of robust statistics have been formally developed only in recent decades, precursors of robust M-estimators can be traced back to the early history of statistics. Galileo Galilei (1632) was among the first to argue that measurement errors required systematic treatment. Later, Roger Joseph Boscovich (1757) proposed an estimator based on absolute deviations, Daniel Bernoulli (1785) suggested iterative reweighting schemes, and Simon Newcomb (1886) experimented with mixtures of distributions for regression. By the late 19th century, Smith (1888) introduced what is now recognized as the first robust M-estimator, already resembling the modern formulation. A recent review by De Menezes (2021) collected, organized, classified, and reported tuning constants for an extensive set of M-estimators, providing a systematic overview of their properties and applications.
The method of least squares is a prototypical M-estimator, since the estimator is defined as a minimum of the sum of squares of the residuals.
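As a concrete illustration, here is a minimal numerical sketch (the data and names are hypothetical, not from the article) of least squares as an M-estimator with ρ(r) = r², applied to a simple location model xᵢ = θ + eᵢ:

```python
# Minimal sketch: least squares as an M-estimator for a location parameter.
# The objective is a sum (equivalently, a sample average) of squared residuals.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([2.1, 1.9, 2.4, 2.0, 8.0])  # hypothetical data with one gross outlier

def sum_of_squares(theta):
    # rho(r) = r^2 applied to each residual x_i - theta, then summed
    return np.sum((x - theta) ** 2)

res = minimize_scalar(sum_of_squares)
print(res.x)  # coincides with the sample mean, x.mean()
```

The minimizer coincides with the sample mean, which also makes the non-robustness visible: the single outlier at 8.0 pulls the estimate well away from the bulk of the data.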
Another popular M-estimator is maximum-likelihood estimation. For a family of probability density functions f parameterized by θ, a maximum likelihood estimator of θ is computed for each set of data by maximizing the likelihood function over the parameter space { θ }. When the observations are independent and identically distributed, an ML estimate satisfies

$$\hat{\theta} = \arg\max_{\theta} \, \prod_{i=1}^{n} f(x_i, \theta)$$

or, equivalently,

$$\hat{\theta} = \arg\max_{\theta} \, \sum_{i=1}^{n} \log f(x_i, \theta).$$
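A short numerical sketch of this equivalence (the model and names here are assumed for illustration, not taken from the article): maximizing the log-likelihood of a Normal(θ, 1) model by minimizing the negative sum of log-densities recovers the sample mean, the known MLE for this model.

```python
# Minimal sketch: maximum likelihood as an M-estimator.
# Minimizing the negative sum of log-densities maximizes the likelihood product.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=200)  # hypothetical i.i.d. sample

def neg_log_likelihood(theta):
    # sum over i of -log f(x_i; theta); a zero of its derivative is the MLE
    return -np.sum(norm.logpdf(x, loc=theta[0], scale=1.0))

fit = minimize(neg_log_likelihood, x0=[0.0])
print(fit.x[0])  # close to x.mean()
```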