Stochastic approximation

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.

In a nutshell, stochastic approximation algorithms deal with a function of the form f(θ) = E_ξ[F(θ, ξ)], that is, the expected value of a function F depending on a random variable ξ. The goal is to recover properties of such a function f without evaluating it directly. Instead, stochastic approximation algorithms use random samples of F(θ, ξ) to efficiently approximate properties of f, such as zeros or extrema.
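The setting above can be illustrated with a small sketch. Here the integrand F and the Gaussian noise distribution are hypothetical choices for illustration; the point is only that f(θ) = E_ξ[F(θ, ξ)] can be approximated from random samples of F without ever evaluating f in closed form.

```python
import random

def F(theta, xi):
    # Hypothetical integrand: a quadratic in theta, perturbed by the noise xi.
    return (theta - 2.0) ** 2 + xi

def estimate_f(theta, n_samples=100_000, seed=0):
    """Monte Carlo estimate of f(theta) = E_xi[F(theta, xi)], with xi ~ N(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += F(theta, rng.gauss(0.0, 1.0))
    return total / n_samples

# Since E[xi] = 0 here, the true function is f(theta) = (theta - 2)^2,
# so the estimates below should be close to 0 and 4 respectively.
print(estimate_f(2.0))
print(estimate_f(4.0))
```

Averaging many noisy samples recovers f pointwise; stochastic approximation methods go further by locating zeros or extrema of f while drawing only one noisy sample per iterate.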

Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with big data. These applications range from stochastic optimization methods and algorithms, to online forms of the EM algorithm, reinforcement learning via temporal differences, and deep learning, among others. Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory.

The earliest, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms introduced respectively in 1951 and 1952.

The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, presented a methodology for solving a root-finding problem, where the function is represented as an expected value. Assume that we have a function M(θ) and a constant α such that the equation M(θ) = α has a unique root at θ*. It is assumed that while we cannot directly observe the function M(θ), we can instead obtain measurements of the random variable N(θ), where E[N(θ)] = M(θ). The structure of the algorithm is then to generate iterates of the form:

θ_{n+1} = θ_n − a_n (N(θ_n) − α)

Here, a_1, a_2, … is a sequence of positive step sizes. Robbins and Monro proved (Theorem 2) that θ_n converges in L² (and hence also in probability) to θ*, and Blum later proved that the convergence actually holds with probability one, provided that:

- N(θ) is uniformly bounded,
- M(θ) is nondecreasing,
- M′(θ*) exists and is positive, and
- the step sizes satisfy Σ_n a_n = ∞ and Σ_n a_n² < ∞.
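A minimal sketch of the Robbins–Monro iteration follows, assuming a hypothetical linear target M(θ) = 3θ + 1 observed through additive Gaussian noise; solving M(θ) = 7 should recover the root θ* = 2.

```python
import random

def robbins_monro(noisy_measurement, alpha, theta0=0.0, n_iters=50_000, a=1.0, seed=0):
    """Robbins-Monro iteration theta_{n+1} = theta_n - a_n * (N(theta_n) - alpha),
    using step sizes a_n = a / n."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_iters + 1):
        a_n = a / n
        theta -= a_n * (noisy_measurement(theta, rng) - alpha)
    return theta

# Hypothetical measurement model: M(theta) = 3*theta + 1 plus N(0, 1) noise.
def noisy_M(theta, rng):
    return 3.0 * theta + 1.0 + rng.gauss(0.0, 1.0)

# The equation M(theta) = 7 has the unique root theta* = 2.
print(robbins_monro(noisy_M, alpha=7.0))
```

Note that the iteration never averages repeated measurements at a fixed θ; the decaying step sizes a_n = a/n do the averaging implicitly across iterates.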

A particular sequence of steps which satisfies these conditions, and was suggested by Robbins and Monro, has the form a_n = a/n for some a > 0. Other sequences are possible, but in order to average out the noise in N(θ), the above conditions must be met.
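The two step-size conditions can be checked numerically for a_n = 1/n: the partial sums of a_n grow without bound (roughly like ln N), so the iterates can travel arbitrarily far, while the partial sums of a_n² stay bounded (approaching π²/6 ≈ 1.645), so the accumulated noise variance remains finite.

```python
# Partial sums illustrating the Robbins-Monro step-size conditions for a_n = 1/n:
# sum(a_n) must diverge, while sum(a_n^2) must converge.
for N in (100, 10_000, 1_000_000):
    s = sum(1.0 / n for n in range(1, N + 1))        # grows like ln(N)
    s2 = sum(1.0 / n ** 2 for n in range(1, N + 1))  # bounded by pi^2 / 6
    print(f"N={N:>9}: sum a_n = {s:8.3f}, sum a_n^2 = {s2:.5f}")
```

By contrast, a summable sequence such as a_n = 1/n² would freeze the iterates before the root is reached, and a non-square-summable one such as a_n = 1/√n fails the classical noise-averaging condition.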
