Optional stopping theorem
In probability theory, the optional stopping theorem (or sometimes Doob's optional sampling theorem, for American probabilist Joseph Doob) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future). Certain conditions are necessary for this result to hold true. In particular, these conditions rule out doubling strategies, which can guarantee a profit in a fair game only by permitting unbounded stakes or an unbounded playing time.
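The doubling strategy can be checked numerically. The sketch below (function names are illustrative) plays a fair coin game, doubling the stake after each loss and stopping at the first win; because the stakes are unbounded, the theorem's conditions fail and the average stopped wealth is 1, not the initial wealth 0.

```python
import random

def doubling_play(rng, max_rounds=10_000):
    """Play a fair coin game, doubling the stake after each loss,
    and stop at the first win. Returns net wealth at the stop."""
    wealth, stake = 0, 1
    for _ in range(max_rounds):
        if rng.random() < 0.5:  # win: fair game, probability 1/2
            return wealth + stake
        wealth -= stake         # loss: pay the current stake
        stake *= 2              # double the stake for the next round
    return wealth               # safety cap, practically never reached

rng = random.Random(0)
n = 100_000
avg = sum(doubling_play(rng) for _ in range(n)) / n
print(avg)  # ≈ 1, not 0: unbounded stakes violate the theorem's conditions
```

After k losses the wealth is -(2^k - 1) and the next stake is 2^k, so every completed play nets exactly +1; the "free" profit is bought with unbounded intermediate losses, which is precisely what the theorem's conditions exclude.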
The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing.
A discrete-time version of the theorem is given below, with $\mathbb{N}_0$ denoting the set of natural numbers, including zero.
Let $X = (X_t)_{t \in \mathbb{N}_0}$ be a discrete-time martingale and $\tau$ a stopping time with values in $\mathbb{N}_0 \cup \{\infty\}$, both with respect to a filtration $(\mathcal{F}_t)_{t \in \mathbb{N}_0}$. Assume that one of the following three conditions holds:
(a) The stopping time $\tau$ is almost surely bounded, i.e., there exists a constant $c \in \mathbb{N}$ such that $\tau \le c$ almost surely.
(b) The stopping time $\tau$ has finite expectation, and the conditional expectations of the absolute values of the martingale increments are almost surely bounded; more precisely, $\mathbb{E}[\tau] < \infty$ and there exists a constant $c$ such that $\mathbb{E}\bigl[\,|X_{t+1} - X_t|\,\big|\,\mathcal{F}_t\bigr] \le c$ almost surely on the event $\{\tau > t\}$ for all $t \in \mathbb{N}_0$.
(c) There exists a constant $c$ such that $|X_{t \wedge \tau}| \le c$ almost surely for all $t \in \mathbb{N}_0$, where $\wedge$ denotes the minimum operator.
Then $X_\tau$ is an almost surely well-defined random variable and $\mathbb{E}[X_\tau] = \mathbb{E}[X_0]$.
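The conclusion can be sanity-checked by simulation. The sketch below (names are illustrative) runs a symmetric $\pm 1$ random walk started at 0 and stops it either on first hitting $+10$ or at time 50, whichever comes first; this stopping time is bounded, so condition (a) holds and the empirical mean of $X_\tau$ should be close to $\mathbb{E}[X_0] = 0$.

```python
import random

def stopped_walk(rng, target=10, bound=50):
    """Symmetric +/-1 random walk from 0, stopped at the first hit of
    `target` or at time `bound` (a bounded stopping time, condition (a))."""
    x = 0
    for _ in range(bound):
        if x == target:
            return x
        x += 1 if rng.random() < 0.5 else -1
    return x

rng = random.Random(1)
n = 200_000
avg = sum(stopped_walk(rng) for _ in range(n)) / n
print(avg)  # ≈ 0 = E[X_0], as the optional stopping theorem predicts
```

Intuitively, the occasional large profit of $+10$ is exactly offset by the walks that drift negative and are cut off at time 50.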
Similarly, if the stochastic process $X = (X_t)_{t \in \mathbb{N}_0}$ is a submartingale or a supermartingale and one of the above conditions holds, then $\mathbb{E}[X_\tau] \ge \mathbb{E}[X_0]$ for a submartingale, and $\mathbb{E}[X_\tau] \le \mathbb{E}[X_0]$ for a supermartingale.
Under condition (c) it is possible that $\tau = \infty$ happens with positive probability. On this event $X_\tau$ is defined as the almost surely existing pointwise limit of $(X_t)_{t \in \mathbb{N}_0}$. See the proof below for details.
Let $X^\tau = (X^\tau_t)_{t \in \mathbb{N}_0}$ denote the stopped process, defined by $X^\tau_t := X_{t \wedge \tau}$; it is also a martingale (or a submartingale or supermartingale, respectively). Under condition (a) or (b), the random variable $X_\tau$ is well defined. Under condition (c), the stopped process $X^\tau$ is bounded, hence by Doob's martingale convergence theorem it converges almost surely pointwise to a random variable which we call $X_\tau$.
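Condition (c) is illustrated by the classic gambler's-ruin setup: a symmetric $\pm 1$ walk from 0 stopped on first hitting $+a$ or $-a$. The stopped process is bounded by $a$, so the theorem gives $\mathbb{E}[X_\tau] = 0$, which forces the two exit points to be equally likely. A quick sketch (names are illustrative):

```python
import random

def exit_point(rng, a=5):
    """Run a symmetric +/-1 walk from 0 until it first hits +a or -a.
    The stopped process is bounded by a, so condition (c) holds."""
    x = 0
    while abs(x) < a:
        x += 1 if rng.random() < 0.5 else -1
    return x

rng = random.Random(2)
n = 100_000
results = [exit_point(rng) for _ in range(n)]
avg = sum(results) / n
p_up = sum(1 for r in results if r > 0) / n
print(avg, p_up)  # avg ≈ 0 and p_up ≈ 0.5
```

Writing $p$ for the probability of exiting at $+a$, the theorem's identity $p \cdot a + (1 - p)(-a) = 0$ immediately yields $p = 1/2$, matching the simulation.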
