Landweber iteration
The Landweber iteration or Landweber algorithm is an algorithm to solve ill-posed linear inverse problems, and it has been extended to solve non-linear problems that involve constraints. The method was first proposed in the 1950s by Louis Landweber, and it can now be viewed as a special case of many other more general methods.
The original Landweber algorithm attempts to recover a signal x from (noisy) measurements y. The linear version assumes that y = Ax for a linear operator A. When the problem is in finite dimensions, A is just a matrix.
When A is nonsingular, an explicit solution is x = A⁻¹y. However, if A is ill-conditioned, the explicit solution is a poor choice since it is sensitive to any noise in the data y. If A is singular, this explicit solution doesn't even exist. The Landweber algorithm is an attempt to regularize the problem, and is one of the alternatives to Tikhonov regularization. We may view the Landweber algorithm as solving:

min_x (1/2) ‖Ax − y‖₂²
using an iterative method. The algorithm is given by the update

x_{k+1} = x_k + ω A*(y − A x_k),

where A* denotes the adjoint of A (the transpose Aᵀ in the real matrix case), and the relaxation factor ω satisfies 0 < ω < 2/σ₁(A)². Here σ₁(A) is the largest singular value of A. If we write f(x) = (1/2) ‖Ax − y‖₂², then the update can be written in terms of the gradient

x_{k+1} = x_k − ω ∇f(x_k),
and hence the algorithm is a special case of gradient descent.
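As an illustrative sketch (not part of the original article), the update above takes only a few lines of NumPy. The step size ω = 1/σ₁(A)² used as a default below is one admissible choice inside the stated interval (0, 2/σ₁(A)²):

```python
import numpy as np

def landweber(A, y, omega=None, iterations=100):
    """Landweber iteration: x_{k+1} = x_k + omega * A^T (y - A x_k)."""
    if omega is None:
        # sigma_1(A), the largest singular value, equals the spectral norm of A;
        # omega = 1 / sigma_1(A)^2 lies inside the admissible interval (0, 2/sigma_1^2).
        sigma1 = np.linalg.norm(A, 2)
        omega = 1.0 / sigma1**2
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + omega * A.T @ (y - A @ x)
    return x

# Small well-conditioned example: the iterates approach the least-squares solution.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
y = np.array([4.0, 3.0])
x = landweber(A, y, iterations=500)   # converges toward [2, 3]
```

For well-posed problems the iterates converge to the minimizer of (1/2)‖Ax − y‖₂²; the interesting (ill-posed) case is governed by the stopping rule discussed next.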
For ill-posed problems, the iterative method needs to be stopped at a suitable iteration index, because it semi-converges. This means that the iterates approach a regularized solution during the first iterations, but become unstable in further iterations. The reciprocal of the iteration index acts as a regularization parameter. A suitable stopping index is found when the data misfit ‖A x_k − y‖ approaches the noise level.
Using the Landweber iteration as a regularization algorithm has been discussed in the literature.