Berndt–Hall–Hall–Hausman algorithm

The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product of the gradient. This approximation is based on the information matrix equality and is therefore valid only when maximizing a likelihood function.[1] The algorithm is named after its four originators: Ernst R. Berndt, Bronwyn Hall, Robert Hall, and Jerry Hausman.[2]
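
The identity behind this substitution can be stated as follows (a standard textbook form of the information matrix equality; the notation ℓi for the i-th observation's likelihood contribution is introduced here only for illustration). For a correctly specified model, at the true parameter value,

$$\operatorname{E}\!\left[\frac{\partial^{2} \ln \ell_i}{\partial \beta \, \partial \beta'}\right] = -\operatorname{E}\!\left[\frac{\partial \ln \ell_i}{\partial \beta}\,\frac{\partial \ln \ell_i}{\partial \beta'}\right],$$

so the sample outer product of the per-observation gradients can stand in for the negative Hessian when the objective is a log-likelihood.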

Usage

If a nonlinear model is fitted to the data, one often needs to estimate its coefficients through optimization. A number of optimization algorithms have the following general structure. Suppose that the function to be optimized is Q(β). Then the algorithms are iterative, defining a sequence of approximations, βk, given by

$$\beta_{k+1} = \beta_k + \lambda_k A_k \frac{\partial Q}{\partial \beta}(\beta_k),$$

where βk is the parameter estimate at step k, and λk is a parameter (called the step size) which partly determines the particular algorithm. For the BHHH algorithm, λk is determined by calculations within a given iterative step, involving a line search until a point βk+1 is found satisfying certain criteria. In addition, for the BHHH algorithm, Q has the form

$$Q(\beta) = \sum_{i=1}^{N} Q_i(\beta),$$

where each Qi is the log-likelihood contribution of the i-th observation, and A is calculated using

$$A_k = \left[\sum_{i=1}^{N} \frac{\partial Q_i}{\partial \beta}(\beta_k)\,\frac{\partial Q_i}{\partial \beta}(\beta_k)'\right]^{-1}.$$

In other cases, e.g. Newton–Raphson, Ak can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.[citation needed]
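
As a concrete illustration, the following Python/NumPy sketch implements the update above under stated assumptions: the function names bhhh, loglik_i, and score_i are hypothetical, the backtracking line search is one simple choice among many, and the Poisson-regression likelihood is used purely for demonstration.

    import numpy as np

    def bhhh(loglik_i, score_i, beta0, max_iter=200, tol=1e-8):
        """Illustrative BHHH maximizer; a sketch, not a production routine.

        loglik_i(beta) -> (N,) per-observation log-likelihood terms Q_i
        score_i(beta)  -> (N, K) per-observation gradients dQ_i/dbeta
        """
        beta = np.asarray(beta0, dtype=float)
        for _ in range(max_iter):
            G = score_i(beta)             # rows are dQ_i/dbeta at beta_k
            g = G.sum(axis=0)             # total gradient dQ/dbeta(beta_k)
            A = np.linalg.inv(G.T @ G)    # inverse outer product of gradients
            direction = A @ g             # A_k times the gradient
            # Backtracking line search for the step size lambda_k:
            # halve lambda until the objective actually increases.
            lam, Q_old = 1.0, loglik_i(beta).sum()
            while loglik_i(beta + lam * direction).sum() <= Q_old and lam > 1e-10:
                lam /= 2.0
            beta_new = beta + lam * direction
            if np.max(np.abs(beta_new - beta)) < tol:
                return beta_new
            beta = beta_new
        return beta

    # Usage: Poisson regression, Q_i = y_i x_i'b - exp(x_i'b) (constant dropped)
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(500), rng.normal(size=500)])
    y = rng.poisson(np.exp(X @ np.array([0.5, -0.3])))

    loglik_i = lambda b: y * (X @ b) - np.exp(X @ b)
    score_i = lambda b: (y - np.exp(X @ b))[:, None] * X
    print(bhhh(loglik_i, score_i, np.zeros(2)))  # roughly recovers (0.5, -0.3)

Note that only the per-observation scores are required: the matrix inverted at each step is built from first derivatives alone, which is the practical appeal of BHHH over methods that need the Hessian.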


References
  1. Henningsen, A.; Toomet, O. (2011). "maxLik: A package for maximum likelihood estimation in R". Computational Statistics. 26 (3): 443–458 [p. 450]. doi:10.1007/s00180-010-0217-1.
  2. Berndt, E.; Hall, B.; Hall, R.; Hausman, J. (1974). "Estimation and Inference in Nonlinear Structural Models". Annals of Economic and Social Measurement. 3 (4): 653–665.

Further reading
  • Martin, V.; Hurn, S.; Harris, D. (2015). Econometric Modelling with Time Series. Cambridge University Press. Chapter 3, "Numerical Estimation Methods".
  • Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 137–138. ISBN 0-674-00560-0.
  • Gill, P.; Murray, W.; Wright, M. (1981). Practical Optimization. London: Harcourt Brace.
  • Gourieroux, Christian; Monfort, Alain (1995). "Gradient Methods and ML Estimation". Statistics and Econometric Models. New York: Cambridge University Press. pp. 452–458. ISBN 0-521-40551-3.
  • Harvey, A. C. (1990). The Econometric Analysis of Time Series (Second ed.). Cambridge: MIT Press. pp. 137–138. ISBN 0-262-08189-X.