Markov decision process
Markov decision process (MDP), also called a stochastic dynamic program or stochastic control problem, is a model for sequential decision making when outcomes are uncertain.[1]
Originating from operations research in the 1950s,[2][3] MDPs have since gained recognition in a variety of fields, including ecology, economics, healthcare, telecommunications and reinforcement learning.[4] Reinforcement learning utilizes the MDP framework to model the interaction between a learning agent and its environment. In this framework, the interaction is characterized by states, actions, and rewards. The MDP framework is designed to provide a simplified representation of key elements of artificial intelligence challenges. These elements encompass the understanding of cause and effect, the management of uncertainty and nondeterminism, and the pursuit of explicit goals.[4]
The name comes from its connection to Markov chains, a concept developed by the Russian mathematician Andrey Markov. The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty.
Definition
A Markov decision process is a 4-tuple $(S, A, P_a, R_a)$, where:
- $S$ is a set of states called the state space. The state space may be discrete or continuous, like the set of real numbers.
- $A$ is a set of actions called the action space (alternatively, $A_s$ is the set of actions available from state $s$). As for state, this set may be discrete or continuous.
- $P_a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is, on an intuitive level, the probability that action $a$ in state $s$ at time $t$ will lead to state $s'$ at time $t+1$. In general, this transition probability is defined to satisfy $\Pr(s_{t+1} \in S' \mid s_t = s, a_t = a) = \int_{S'} P_a(s, s')\, ds'$ for every measurable $S' \subseteq S$. In case the state space is discrete, the integral is intended with respect to the counting measure, so that the latter simplifies as $\Pr(s_{t+1} \in S' \mid s_t = s, a_t = a) = \sum_{s' \in S'} P_a(s, s')$; in case $S \subseteq \mathbb{R}^d$, the integral is usually intended with respect to the Lebesgue measure.
- $R_a(s, s')$ is the immediate reward (or expected immediate reward) received after transitioning from state $s$ to state $s'$, due to action $a$.
A policy function $\pi$ is a (potentially probabilistic) mapping from the state space ($S$) to the action space ($A$).
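To make the tuple concrete, the sketch below stores the components of a small, hypothetical two-state, two-action MDP as plain Python dictionaries and samples one transition from it. The states, actions, probabilities, and rewards here are made up purely for illustration, not taken from the article.

```python
import random

# Hypothetical finite MDP: S = {"s0", "s1"}, A = {"stay", "go"} (illustrative values).
states = ["s0", "s1"]
actions = ["stay", "go"]

# P[a][s][s'] plays the role of P_a(s, s'); each inner distribution sums to 1.
P = {
    "stay": {"s0": {"s0": 0.9, "s1": 0.1}, "s1": {"s0": 0.2, "s1": 0.8}},
    "go":   {"s0": {"s0": 0.1, "s1": 0.9}, "s1": {"s0": 0.7, "s1": 0.3}},
}

# R[a][s][s'] plays the role of R_a(s, s'), the immediate reward of a transition.
R = {
    "stay": {"s0": {"s0": 0.0, "s1": 1.0}, "s1": {"s0": 0.0, "s1": 2.0}},
    "go":   {"s0": {"s0": 0.0, "s1": 5.0}, "s1": {"s0": -1.0, "s1": 0.0}},
}

def step(s, a, rng=random):
    """Sample s' ~ P_a(s, .) and return (s', R_a(s, s'))."""
    next_states = list(P[a][s].keys())
    probs = list(P[a][s].values())
    s_next = rng.choices(next_states, weights=probs, k=1)[0]
    return s_next, R[a][s][s_next]

s_next, r = step("s0", "go")
```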
Optimization objective
The goal in a Markov decision process is to find a good "policy" for the decision maker: a function $\pi$ that specifies the action $\pi(s)$ that the decision maker will choose when in state $s$. Once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain (since the action chosen in state $s$ is completely determined by $\pi(s)$).
The objective is to choose a policy $\pi$ that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:
- $E\!\left[\sum_{t=0}^{\infty} \gamma^{t} R_{a_t}(s_t, s_{t+1})\right]$ (where we choose $a_t = \pi(s_t)$, i.e. actions given by the policy), and the expectation is taken over $s_{t+1} \sim P_{a_t}(s_t, s_{t+1})$,
where $\gamma$ is the discount factor satisfying $0 \le \gamma \le 1$, which is usually close to $1$ (for example, $\gamma = 1/(1+r)$ for some discount rate $r$). A lower discount factor makes the decision maker more short-sighted, in that it comparatively disregards the effect that following its current policy has at times lying further in the future.
Another possible, but strictly related, objective that is commonly used is the $H$-step return. This time, instead of using a discount factor $\gamma$, the agent is interested only in the first $H$ steps of the process, with each reward having the same weight.
- $E\!\left[\sum_{t=0}^{H-1} R_{a_t}(s_t, s_{t+1})\right]$ (where we choose $a_t = \pi(s_t)$, i.e. actions given by the policy), and the expectation is taken over $s_{t+1} \sim P_{a_t}(s_t, s_{t+1})$,
where $H$ is the time horizon. Compared to the previous objective, the latter one is more used in learning theory.
A policy that maximizes the function above is called an optimal policy and is usually denoted $\pi^{*}$. A particular MDP may have multiple distinct optimal policies. Because of the Markov property, it can be shown that the optimal policy is a function of the current state only, as assumed above.
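The two return definitions above are easy to compute for a sampled trajectory of rewards. The following minimal sketch assumes a hypothetical list of sampled rewards and an illustrative discount factor; it is only meant to show the bookkeeping, not any particular MDP.

```python
def discounted_return(rewards, gamma):
    """Discounted return of one sampled reward trajectory r_0, r_1, ...

    Truncating at T steps changes the result by at most
    gamma**T * r_max / (1 - gamma), which motivates finite rollouts.
    """
    return sum(gamma**t * r for t, r in enumerate(rewards))

def h_step_return(rewards, H):
    """H-step return: only the first H rewards, each with equal weight."""
    return sum(rewards[:H])

rewards = [1.0, 0.0, 2.0, 1.0]          # illustrative sampled rewards
print(discounted_return(rewards, 0.9))  # 1.0 + 0 + 0.9**2 * 2 + 0.9**3 * 1
print(h_step_return(rewards, 3))        # 1.0 + 0 + 2.0
```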
Simulator models
In many cases, it is difficult to represent the transition probability distributions, $P_a(s, s')$, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often called episodes, may be produced.
Another form of simulator is a generative model, a single step simulator that can generate samples of the next state and reward given any state and action.[5] (Note that this is a different meaning from the term generative model in the context of statistical classification.) In algorithms that are expressed using pseudocode, $G$ is often used to represent a generative model. For example, the expression $s', r \leftarrow G(s, a)$ might denote the action of sampling from the generative model, where $s$ and $a$ are the current state and action, and $s'$ and $r$ are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory.
These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models through regression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, the dynamic programming algorithms described in the next section require an explicit model, and Monte Carlo tree search requires a generative model (or an episodic simulator that can be copied at any state), whereas most reinforcement learning algorithms require only an episodic simulator.
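A generative model in the sense above is just a one-step sampler that can be queried at any state-action pair. The sketch below shows what such an interface might look like for a toy two-state MDP; the function name `G`, the states, and the numbers are illustrative assumptions rather than anything specified in the article.

```python
import random

def G(s, a, rng=random):
    """One-step generative model: sample (s', r) given any (s, a).

    Toy dynamics (assumed for illustration): action 1 tends to move the agent
    from state 0 to state 1, and landing in state 1 pays a reward of 1.
    """
    if s == 0 and a == 1:
        s_next = 1 if rng.random() < 0.8 else 0
    else:
        s_next = s if rng.random() < 0.9 else 1 - s
    r = 1.0 if s_next == 1 else 0.0
    return s_next, r

# Usage mirroring the pseudocode convention s', r <- G(s, a):
s_next, r = G(0, 1)
```

An episodic simulator, by contrast, would only expose something like `reset()` and `step(a)`, so it can be queried only along the states of a running trajectory.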
Example
An example of an MDP is the Pole-Balancing model, which comes from classic control theory.
In this example, we have
- $S$ is the set of ordered tuples $(\theta, \dot{\theta}, x, \dot{x})$ given by pole angle, angular velocity, position of the cart and its speed.
- $A$ is $\{-1, 1\}$, corresponding to applying a force to the left (right) on the cart.
- $P_a(s, s')$ is the transition of the system, which in this case is going to be deterministic and driven by the laws of mechanics.
- $R_a(s, s')$ is $1$ if the pole is up after the transition, and zero otherwise. Therefore, this function only depends on $s'$ in this specific case.
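The reward structure of this example is easy to express in code. The sketch below assumes a state tuple of (pole angle, angular velocity, cart position, cart velocity) and an illustrative "pole is up" threshold; the deterministic dynamics given by the equations of motion are omitted.

```python
from typing import NamedTuple

class CartPoleState(NamedTuple):
    theta: float      # pole angle (rad)
    theta_dot: float  # angular velocity
    x: float          # cart position
    x_dot: float      # cart velocity

ACTIONS = (-1, +1)    # push left / push right

def reward(s_next: CartPoleState, upright_limit: float = 0.2) -> float:
    """Reward 1 if the pole is still up after the transition, else 0.

    As noted above, this reward depends only on the next state s'.
    The upright threshold is an assumption of this sketch.
    """
    return 1.0 if abs(s_next.theta) < upright_limit else 0.0
```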
Algorithms
Solutions for MDPs with finite state and action spaces may be found through a variety of methods such as dynamic programming. The algorithms in this section apply to MDPs with finite state and action spaces and explicitly given transition probabilities and reward functions, but the basic concepts may be extended to handle other problem classes, for example using function approximation. Also, some processes with countably infinite state and action spaces can be exactly reduced to ones with finite state and action spaces.[6]
The standard family of algorithms to calculate optimal policies for finite state and action MDPs requires storage for two arrays indexed by state: value $V$, which contains real values, and policy $\pi$, which contains actions. At the end of the algorithm, $\pi$ will contain the solution and $V(s)$ will contain the discounted sum of the rewards to be earned (on average) by following that solution from state $s$.
The algorithm has two steps, (1) a value update and (2) a policy update, which are repeated in some order for all the states until no further changes take place. Both recursively update a new estimation of the optimal policy and state value using an older estimation of those values:

$V(s) := \sum_{s'} P_{\pi(s)}(s, s')\left(R_{\pi(s)}(s, s') + \gamma V(s')\right)$

$\pi(s) := \operatorname*{arg\,max}_{a} \left\{ \sum_{s'} P_a(s, s')\left(R_a(s, s') + \gamma V(s')\right) \right\}$

Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than to others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.[7]
Notable variants
Value iteration
In value iteration (Bellman 1957), which is also called backward induction, the $\pi$ function is not used; instead, the value of $\pi(s)$ is calculated within $V(s)$ whenever it is needed. Substituting the calculation of $\pi(s)$ into the calculation of $V(s)$ gives the combined step:

$V_{i+1}(s) := \max_{a} \left\{ \sum_{s'} P_a(s, s')\left(R_a(s, s') + \gamma V_i(s')\right) \right\},$

where $i$ is the iteration number. Value iteration starts at $i = 0$ with $V_0$ as an initial guess of the value function. It then iterates, repeatedly computing $V_{i+1}$ for all states $s$, until $V$ converges with the left-hand side equal to the right-hand side (which is the "Bellman equation" for this problem). Lloyd Shapley's 1953 paper on stochastic games included as a special case the value iteration method for MDPs,[8] but this was recognized only later on.[9]
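A direct translation of the combined step into Python might look as follows. It assumes the finite MDP is given as nested dictionaries `P[a][s][s']` and `R[a][s][s']` in the notation of the definition above, with a probability entry (possibly zero) for every successor state; this data layout is an assumption of the sketch, not something fixed by the algorithm.

```python
def value_iteration(states, actions, P, R, gamma=0.95, eps=1e-6):
    """Iterate V_{i+1}(s) = max_a sum_{s'} P_a(s,s') (R_a(s,s') + gamma V_i(s'))."""
    V = {s: 0.0 for s in states}  # V_0: initial guess
    while True:
        delta, V_new = 0.0, {}
        for s in states:
            V_new[s] = max(
                sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in states)
                for a in actions
            )
            delta = max(delta, abs(V_new[s] - V[s]))
        V = V_new
        if delta < eps:
            return V
```

The greedy policy can then be read off by taking, in each state, the action achieving the maximum in the update.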
Policy iteration
In policy iteration (Howard 1960), the value update is repeated for the current policy until the values converge (policy evaluation), and then the policy update is performed once (policy improvement); the two phases alternate until the policy itself no longer changes. (Policy iteration was invented by Howard to optimize Sears catalogue mailing, which he had been optimizing using value iteration.[10])
Instead of repeating the value update to convergence, the policy-evaluation phase may be formulated and solved as a set of linear equations, obtained by treating the value-update assignment as an equality in the unknowns $V(s)$ for the fixed policy $\pi$. Thus, repeating the value update to convergence can be interpreted as solving these linear equations by relaxation.
This variant has the advantage that there is a definite stopping condition: when the array $\pi$ does not change in the course of applying the policy-update step to all states, the algorithm is completed.
Policy iteration is usually slower than value iteration for a large number of possible states.
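For comparison, here is a sketch of policy iteration under the same assumed data layout as the value-iteration sketch above. The policy-evaluation phase is done exactly by solving the linear system mentioned above (with NumPy), and the policy-improvement phase is the greedy policy update; this is an illustrative implementation under those assumptions.

```python
import numpy as np

def policy_iteration(states, actions, P, R, gamma=0.95):
    idx = {s: i for i, s in enumerate(states)}
    pi = {s: actions[0] for s in states}  # arbitrary initial policy
    while True:
        # Policy evaluation: solve V = r_pi + gamma * P_pi V exactly.
        P_pi = np.array([[P[pi[s]][s][s2] for s2 in states] for s in states])
        r_pi = np.array([
            sum(P[pi[s]][s][s2] * R[pi[s]][s][s2] for s2 in states) for s in states
        ])
        V = np.linalg.solve(np.eye(len(states)) - gamma * P_pi, r_pi)
        # Policy improvement: greedy update; stop when the policy is stable.
        stable = True
        for s in states:
            best = max(
                actions,
                key=lambda a: sum(
                    P[a][s][s2] * (R[a][s][s2] + gamma * V[idx[s2]]) for s2 in states
                ),
            )
            if best != pi[s]:
                pi[s], stable = best, False
        if stable:
            return pi, {s: V[idx[s]] for s in states}
```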
Modified policy iteration
In modified policy iteration (van Nunen 1976; Puterman & Shin 1978), the value update is repeated only a fixed, small number of times for the current policy instead of to convergence, and then the policy update is performed once; the two phases then alternate as in policy iteration.[11][12]
Prioritized sweeping
In this variant, the steps are preferentially applied to states which are in some way important – whether based on the algorithm (there were large changes in $V$ or $\pi$ around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
Computational complexity
Algorithms for finding optimal policies with time complexity polynomial in the size of the problem representation exist for finite MDPs. Thus, decision problems based on MDPs are in computational complexity class P.[13] However, due to the curse of dimensionality, the size of the problem representation is often exponential in the number of state and action variables, limiting exact solution techniques to problems that have a compact representation. In practice, online planning techniques such as Monte Carlo tree search can find useful solutions in larger problems, and, in theory, it is possible to construct online planning algorithms that can find an arbitrarily near-optimal policy with no computational complexity dependence on the size of the state space.[14]
Extensions and generalizations
A Markov decision process is a stochastic game with only one player.
Partial observability
The solution above assumes that the state $s$ is known when action is to be taken; otherwise $\pi(s)$ cannot be calculated. When this assumption is not true, the problem is called a partially observable Markov decision process or POMDP.
Constrained Markov decision processes
Constrained Markov decision processes (CMDPs) are extensions to Markov decision processes (MDPs). There are three fundamental differences between MDPs and CMDPs.[15]
- There are multiple costs incurred after applying an action instead of one.
- CMDPs are solved with linear programs only, and dynamic programming does not work.
- The final policy depends on the starting state.
The method of Lagrange multipliers applies to CMDPs. Many Lagrangian-based algorithms have been developed.
- Natural policy gradient primal-dual method.[16]
There are a number of applications for CMDPs; they have recently been used in motion planning scenarios in robotics.[17]
Continuous-time Markov decision process
In discrete-time Markov decision processes, decisions are made at discrete time intervals. However, for continuous-time Markov decision processes, decisions can be made at any time the decision maker chooses. In comparison to discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., system dynamics defined by ordinary differential equations (ODEs). These kinds of applications arise in queueing systems, epidemic processes, and population processes.
Like the discrete-time Markov decision processes, in continuous-time Markov decision processes the agent aims at finding the optimal policy which could maximize the expected cumulated reward. The only difference from the standard case is that, due to the continuous nature of the time variable, the sum is replaced by an integral:

$\mathbb{E}\!\left[\int_0^{\infty} \gamma^{t}\, r\big(s(t), \pi(s(t))\big)\, dt\right],$

where $0 \le \gamma < 1$ is the discount factor and $r(s, a)$ is the reward rate received when action $a$ is taken in state $s$.
Discrete space: Linear programming formulation
If the state space and action space are finite, we could use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking multiple actions. It is better to take an action only at the time when the system is transitioning from the current state to another state. Under some conditions,[18] if our optimal value function is independent of the state, we will have the following inequality:
If there exists a function , then will be the smallest satisfying the above equation. In order to find , we could use the following linear programming model:
- Primal linear program (P-LP)
- Dual linear program (D-LP)
A solution to the D-LP is feasible if it is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution to the D-LP is said to be an optimal solution if it maximizes the D-LP objective over all feasible solutions to the D-LP. Once we have found the optimal solution, we can use it to establish the optimal policies.
Continuous space: Hamilton–Jacobi–Bellman equation
In continuous-time MDPs, if the state space and action space are continuous, the optimal criterion can be found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem as an optimal control problem: maximize $\int_0^T r(t, s(t), a(t))\, dt + D[s(T)]$ subject to the system dynamics $\dot{s}(t) = f[t, s(t), a(t)]$,
where $D(\cdot)$ is the terminal reward function, $s(t)$ is the system state vector, $a(t)$ is the system control vector we try to find, and $f(\cdot)$ shows how the state vector changes over time. The Hamilton–Jacobi–Bellman equation is as follows:

$-\frac{\partial V(t, s)}{\partial t} = \max_{a}\left\{ r(t, s, a) + \frac{\partial V(t, s)}{\partial s}\, f(t, s, a) \right\}, \qquad V(T, s) = D(s).$

We could solve the equation to find the optimal value function $V^{*}$, which in turn yields the optimal control $a^{*}(t)$ at any time $t$, through
$a^{*}(t) = \arg\max_{a}\left\{ r(t, s, a) + \frac{\partial V^{*}(t, s)}{\partial s}\, f(t, s, a) \right\}.$
Reinforcement learning
Reinforcement learning is an interdisciplinary area of machine learning and optimal control whose main objective is finding an approximately optimal policy for MDPs where transition probabilities and rewards are unknown.[19]
Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities, which are instead needed to perform policy iteration. In this setting, transition probabilities and rewards must be learned from experience, i.e. by letting an agent interact with the MDP for a given number of steps. Both on a theoretical and on a practical level, effort is put into maximizing the sample efficiency, i.e. minimizing the number of samples needed to learn a policy whose performance is close to the optimal one (due to the stochastic nature of the process, learning the optimal policy with a finite number of samples is, in general, impossible).
Reinforcement Learning for discrete MDPs
For the purpose of this section, it is useful to define a further function, which corresponds to taking the action $a$ and then continuing optimally (or according to whatever policy one currently has):

$Q(s, a) = \sum_{s'} P_a(s, s')\left(R_a(s, s') + \gamma V(s')\right).$

While this function is also unknown, experience during learning is based on $(s, a)$ pairs (together with the outcome $s'$; that is, "I was in state $s$ and I tried doing $a$, and $s'$ happened"). Thus, one has an array $Q$ and uses experience to update it directly. This is known as Q-learning.
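A minimal tabular Q-learning sketch is shown below. It uses only sampled transitions (s, a, r, s') and never the transition probabilities; the environment interface (`env_reset`, `env_step`), learning rate, and epsilon-greedy exploration scheme are illustrative assumptions of this sketch.

```python
import random
from collections import defaultdict

def q_learning(env_step, env_reset, actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Assumed interface: env_reset() -> s;  env_step(s, a) -> (s_next, r, done)."""
    Q = defaultdict(lambda: {a: 0.0 for a in actions})
    for _ in range(episodes):
        s, done = env_reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(Q[s], key=Q[s].get)
            s_next, r, done = env_step(s, a)
            # Move Q(s, a) toward the sampled Bellman target r + gamma * max_a' Q(s', a').
            target = r + (0.0 if done else gamma * max(Q[s_next].values()))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s_next
    return Q
```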
Other scopes
Learning automata
Another application of MDPs in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. The first detailed learning automata paper is the survey by Narendra and Thathachar (1974), in which learning automata were originally described explicitly as finite-state automata.[20] Similar to reinforcement learning, a learning automata algorithm also has the advantage of solving the problem when probabilities or rewards are unknown. The difference between learning automata and Q-learning is that the former technique omits the memory of Q-values, but updates the action probability directly to find the learning result. Learning automata is a learning scheme with a rigorous proof of convergence.[21]
In learning automata theory, a stochastic automaton consists of:
- a set x of possible inputs,
- a set Φ = { Φ1, ..., Φs } of possible internal states,
- a set α = { α1, ..., αr } of possible outputs, or actions, with r ≤ s,
- an initial state probability vector p(0) = ≪ p1(0), ..., ps(0) ≫,
- a computable function A which after each time step t generates p(t + 1) from p(t), the current input, and the current state, and
- a function G: Φ → α which generates the output at each time step.
The states of such an automaton correspond to the states of a "discrete-state discrete-parameter Markov process".[22] At each time step t = 0, 1, 2, 3, ..., the automaton reads an input from its environment, updates p(t) to p(t + 1) by A, randomly chooses a successor state according to the probabilities p(t + 1) and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton.[21]
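To illustrate how a learning automaton updates action probabilities directly (rather than Q-values), here is a sketch of the classical linear reward–inaction (L_R-I) update scheme; the environment's reward probabilities and the step size are assumptions of the sketch, not values from the article.

```python
import random

def l_r_i_step(p, chosen, rewarded, step_size=0.05):
    """Linear reward-inaction update of the action-probability vector p.

    On a rewarded play of action `chosen`, probability mass moves toward it;
    on a penalized play, p is left unchanged (the "inaction" part).
    """
    if not rewarded:
        return p
    return [
        q + step_size * (1.0 - q) if i == chosen else q * (1.0 - step_size)
        for i, q in enumerate(p)
    ]

# One interaction with a hypothetical two-action stochastic environment.
p = [0.5, 0.5]
reward_prob = [0.2, 0.8]                       # assumed environment feedback
a = random.choices(range(len(p)), weights=p, k=1)[0]
p = l_r_i_step(p, a, rewarded=random.random() < reward_prob[a])
```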
Category theoretic interpretation
Other than the rewards, a Markov decision process can be understood in terms of category theory. Namely, let $\mathcal{A}$ denote the free monoid with generating set $A$. Let $\mathbf{Dist}$ denote the Kleisli category of the Giry monad. Then a functor $\mathcal{A} \to \mathbf{Dist}$ encodes both the set $S$ of states and the probability function $P$.
In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result a context-dependent Markov decision process, because moving from one object to another in the category changes the set of available actions and the set of possible states.
Alternative notations
The terminology and notation for MDPs are not entirely settled. There are two main streams: one focuses on maximization problems from contexts like economics, using the terms action, reward, and value, and calling the discount factor β or γ, while the other focuses on minimization problems from engineering and navigation, using the terms control, cost, and cost-to-go, and calling the discount factor α. In addition, the notation for the transition probability varies.
| in this article | alternative | comment |
|---|---|---|
| action a | control u | |
| reward R | cost g | g is the negative of R |
| value V | cost-to-go J | J is the negative of V |
| policy π | policy μ | |
| discounting factor γ | discounting factor α | |
| transition probability $P_a(s, s')$ | transition probability $p_{s's}(a)$ | |
In addition, the transition probability is sometimes written $\Pr(s, a, s')$, $\Pr(s' \mid s, a)$ or, rarely, $p_{s's}(a)$.
See also
- Probabilistic automata
- Odds algorithm
- Quantum finite automata
- Partially observable Markov decision process
- Dynamic programming
- Bellman equation for applications to economics.
- Hamilton–Jacobi–Bellman equation
- Optimal control
- Recursive economics
- Mabinogion sheep problem
- Stochastic games
- Q-learning
- Markov chain
References
- ^ Puterman, Martin L. (1994). Markov decision processes: discrete stochastic dynamic programming. Wiley series in probability and mathematical statistics. Applied probability and statistics section. New York: Wiley. ISBN 978-0-471-61977-2.
- ^ Schneider, S.; Wagner, D. H. (1957-02-26). "Error detection in redundant systems". Papers presented at the February 26-28, 1957, western joint computer conference: Techniques for reliability on - IRE-AIEE-ACM '57 (Western). New York, NY, USA: Association for Computing Machinery. pp. 115–121. doi:10.1145/1455567.1455587. ISBN 978-1-4503-7861-1.
- ^ Bellman, Richard (1958-09-01). "Dynamic programming and stochastic control processes". Information and Control. 1 (3): 228–239. doi:10.1016/S0019-9958(58)80003-0. ISSN 0019-9958.
- ^ a b Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2nd ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6.
- ^ Kearns, Michael; Mansour, Yishay; Ng, Andrew (2002). "A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes". Machine Learning. 49 (193–208): 193–208. doi:10.1023/A:1017932429737.
- ^ Wrobel, A. (1984). "On Markovian decision models with a finite skeleton". Zeitschrift für Operations Research. 28 (1): 17–27. doi:10.1007/bf01919083. S2CID 2545336.
- ^ Reinforcement Learning: Theory and Python Implementation. Beijing: China Machine Press. 2019. p. 44. ISBN 9787111631774.
- ^ Shapley, Lloyd (1953). "Stochastic Games". Proceedings of the National Academy of Sciences of the United States of America. 39 (10): 1095–1100. Bibcode:1953PNAS...39.1095S. doi:10.1073/pnas.39.10.1095. PMC 1063912. PMID 16589380.
- ^ Kallenberg, Lodewijk (2002). "Finite state and action MDPs". In Feinberg, Eugene A.; Shwartz, Adam (eds.). Handbook of Markov decision processes: methods and applications. Springer. ISBN 978-0-7923-7459-6.
- ^ Howard 2002, "Comments on the Origin and Application of Markov Decision Processes"
- ^ Puterman, M. L.; Shin, M. C. (1978). "Modified Policy Iteration Algorithms for Discounted Markov Decision Problems". Management Science. 24 (11): 1127–1137. doi:10.1287/mnsc.24.11.1127.
- ^ van Nunen, J.A. E. E (1976). "A set of successive approximation methods for discounted Markovian decision problems". Zeitschrift für Operations Research. 20 (5): 203–208. doi:10.1007/bf01920264. S2CID 5167748.
- ^ Papadimitriou, Christos; Tsitsiklis, John (1987). "The Complexity of Markov Decision Processes". Mathematics of Operations Research. 12 (3): 441–450. doi:10.1287/moor.12.3.441. hdl:1721.1/2893. Retrieved November 2, 2023.
- ^ Kearns, Michael; Mansour, Yishay; Ng, Andrew (November 2002). "A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes". Machine Learning. 49 (2/3): 193–208. doi:10.1023/A:1017932429737.
- ^ Altman, Eitan (1999). Constrained Markov decision processes. Vol. 7. CRC Press.
- ^ Ding, Dongsheng; Zhang, Kaiqing; Jovanovic, Mihailo; Basar, Tamer (2020). Natural policy gradient primal-dual method for constrained Markov decision processes. Advances in Neural Information Processing Systems.
- ^ Feyzabadi, S.; Carpin, S. (18–22 Aug 2014). "Risk-aware path planning using hierarchical constrained Markov Decision Processes". Automation Science and Engineering (CASE). IEEE International Conference. pp. 297, 303.
- ^ Continuous-Time Markov Decision Processes. Stochastic Modelling and Applied Probability. Vol. 62. 2009. doi:10.1007/978-3-642-02547-1. ISBN 978-3-642-02546-4.
- ^ Shoham, Y.; Powers, R.; Grenager, T. (2003). "Multi-agent reinforcement learning: a critical survey" (PDF). Technical Report, Stanford University: 1–13. Retrieved 2018-12-12.
- ^ Narendra, K. S.; Thathachar, M. A. L. (1974). "Learning Automata – A Survey". IEEE Transactions on Systems, Man, and Cybernetics. SMC-4 (4): 323–334. CiteSeerX 10.1.1.295.2280. doi:10.1109/TSMC.1974.5408453. ISSN 0018-9472.
- ^ a b Narendra, Kumpati S.; Thathachar, Mandayam A. L. (1989). Learning automata: An introduction. Prentice Hall. ISBN 9780134855585.
- ^ Narendra & Thathachar 1974, p.325 left.
Sources
- Bellman, R. (1957), Dynamic Programming, Princeton University Press, ISBN 978-0-486-42809-3. Dover paperback edition (2003).
Further reading
- Bellman, R. E. (2003) [1957]. Dynamic Programming (Dover paperback ed.). Princeton, NJ: Princeton University Press. ISBN 978-0-486-42809-3.
- Bertsekas, D. (1995). Dynamic Programming and Optimal Control. Vol. 2. MA: Athena.
- Derman, C. (1970). Finite state Markovian decision processes. Academic Press.
- Feinberg, E.A.; Shwartz, A., eds. (2002). Handbook of Markov Decision Processes. Boston, MA: Kluwer. ISBN 9781461508052.
- Guo, X.; Hernández-Lerma, O. (2009). Continuous-Time Markov Decision Processes. Stochastic Modelling and Applied Probability. Springer. ISBN 9783642025464.
- Meyn, S. P. (2007). Control Techniques for Complex Networks. Cambridge University Press. ISBN 978-0-521-88441-9. Archived from the original on 19 June 2010. Appendix contains abridged "Meyn & Tweedie". Archived from the original on 18 December 2012.
- Puterman, M. L. (1994). Markov Decision Processes. Wiley.
- Ross, S. M. (1983). Introduction to stochastic dynamic programming (PDF). Academic press.
- Sutton, R. S.; Barto, A. G. (2017). Reinforcement Learning: An Introduction. Cambridge, MA: The MIT Press.
- Tijms, H. C. (2003). A First Course in Stochastic Models. Wiley. ISBN 9780470864289.
Markov decision process
Fundamentals
Definition
A Markov decision process (MDP) is a mathematical framework used to model decision-making in dynamic environments where outcomes are influenced by both deliberate choices and inherent randomness. It formalizes sequential decision problems as a stochastic process, enabling the analysis of optimal policies under uncertainty. MDPs assume basic knowledge of probability theory, particularly stochastic processes, which are sequences of random variables representing system states evolving over time.
Formally, an MDP is defined as a tuple $(S, A, P, R, \gamma)$, where $S$ denotes the state space, encompassing all possible configurations of the system; $A$ represents the action space, the set of available decisions at each state; $P$ is the transition probability function, specifying the likelihood of moving to a next state given the current state and action; $R$ is the reward function, assigning immediate payoffs for taking an action in a state; and $\gamma$ is the discount factor, which weights the importance of future rewards relative to immediate ones. This structure captures the essence of problems where an agent interacts with an environment over discrete time steps, selecting actions to maximize cumulative rewards.
Central to the MDP framework is the Markov property, which posits that the future state of the system depends solely on the current state and the action taken, independent of any prior history. Mathematically, this is expressed as $\Pr(s_{t+1} \mid s_t, a_t) = \Pr(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0, a_0)$, ensuring that the process is memoryless and simplifying computation by focusing decisions on the present. This property distinguishes MDPs from more general decision processes that might require tracking full histories.
The concept of MDPs originated in the 1950s, coined by Richard Bellman as part of his development of dynamic programming techniques for addressing sequential decision problems in uncertain settings. Bellman's foundational work introduced the Markovian structure to model asymptotic behaviors in controlled stochastic systems, laying the groundwork for modern applications in optimization and control.[9]
Components
A Markov decision process (MDP) is formally defined by a tuple consisting of several key components that model the environment for sequential decision-making under uncertainty.[10]
The state space $S$ represents the set of all possible states in which the decision-maker can find itself, which may be discrete or continuous, finite or infinite in size.[10] For example, in a grid world navigation task, states could correspond to the agent's position as coordinates on a 2D lattice.[10] This component encapsulates all relevant information about the system's configuration at any time, adhering to the Markov property where future states depend only on the current state.[10]
The action space $A$ denotes the set of actions available to the decision-maker, which can vary depending on the current state, leading to state-dependent action spaces $A(s)$.[10] Actions may be deterministic, where selecting an action leads to a fixed outcome, or part of stochastic policies that randomize over actions to explore or balance risk.[10] In practice, actions often represent choices like moving in a specific direction or adjusting a control parameter in a robotic system.[10]
The transition function $P(s' \mid s, a)$ specifies the probability distribution over next states $s'$ given the current state $s$ and action $a$, capturing the dynamics of the environment.[10] This function assumes stationarity, meaning the transition probabilities do not change over time, which simplifies modeling long-term behaviors.[11] Stochastic transitions are common in real-world applications, such as weather-dependent movements in environmental planning.[10]
The reward function $R(s, a)$, or alternatively $R(s, a, s')$, provides the immediate reward or cost received after transitioning from state $s$ to $s'$ via action $a$, which can be deterministic or stochastic to reflect uncertainty in outcomes.[10] Rewards quantify the desirability of state-action transitions, guiding the decision-maker toward beneficial behaviors, such as positive scores for reaching goals in inventory management problems.[10] Stochastic rewards allow modeling noisy feedback, as seen in financial trading scenarios where payoffs vary.[11]
The discount factor $\gamma \in [0, 1]$ is a parameter that weights the importance of future rewards relative to immediate ones, ensuring convergence in infinite-horizon settings by prioritizing short-term gains when $\gamma$ is low.[10] A value of $\gamma = 0$ implies myopic decision-making focused solely on the current reward, while values closer to 1 emphasize long-term planning, as in sustainable resource allocation tasks.[10]
MDPs can operate over finite or infinite horizons, where a finite horizon limits the number of decision steps, often leading to time-dependent policies, whereas infinite horizons assume ongoing interactions without a fixed endpoint.[10] Additionally, tasks may be episodic, consisting of distinct sequences starting from initial states and ending in terminal states, or continuing, involving perpetual interactions without natural terminations, such as in ongoing network routing.[10]
Mathematical Formulation
Discrete-Time Model
In discrete-time Markov decision processes (MDPs), time progresses in discrete steps $t = 0, 1, 2, \ldots$, where at each step the decision-maker observes the current state $s_t$, selects an action $a_t$, receives a reward $r_{t+1}$, and transitions to the next state $s_{t+1}$.[12] The evolution of the system is governed by the state transition probabilities, which specify the probability distribution over the next state and reward given the current state and action. Specifically, the transition kernel is defined as $p(s', r \mid s, a) = \Pr(S_{t+1} = s', R_{t+1} = r \mid S_t = s, A_t = a)$, the joint probability of transitioning to state $s'$ and receiving reward $r$ after taking action $a$ in state $s$. This can be marginalized to yield the state transition probabilities $p(s' \mid s, a) = \sum_r p(s', r \mid s, a)$ and the expected reward function $r(s, a) = \sum_{s', r} r\, p(s', r \mid s, a)$.[12] These components ensure the Markov property: the future state and reward depend only on the current state and action, not on prior history.[9]
The performance of a policy $\pi$, which maps states to action distributions $\pi(a \mid s)$, is evaluated using the value function $V^{\pi}(s)$, defined as the expected discounted cumulative reward starting from state $s$ and following $\pi$ thereafter. Formally,

$V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\middle|\, S_t = s\right],$

where $\gamma$ is the discount factor that weights future rewards.[12] Similarly, the action-value function assesses the expected discounted return when starting in state $s$, taking action $a$, and then adhering to $\pi$:

$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\!\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\middle|\, S_t = s, A_t = a\right].$

This relates to the value function via $V^{\pi}(s) = \sum_a \pi(a \mid s)\, Q^{\pi}(s, a)$, providing a way to decompose policy evaluation.[12]
The Bellman expectation equation provides a recursive characterization of $V^{\pi}$ by conditioning on the immediate action, next state, and reward under $\pi$. Starting from the definition of $V^{\pi}$, expand the expectation:

$V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[R_{t+1} + \gamma V^{\pi}(S_{t+1}) \,\middle|\, S_t = s\right],$

since the remaining sum from $t+1$ onward is $\gamma$ times $V^{\pi}(S_{t+1})$ by the law of total expectation and the Markov property. Substituting the policy and transition probabilities yields

$V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma V^{\pi}(s')\right].$

This equation expresses the value as the expected immediate reward plus the discounted value of the successor state, averaged over actions and outcomes.[12] A similar recursion holds for $Q^{\pi}$.[12]
The optimal value function is defined as the supremum over all policies: $V^{*}(s) = \sup_{\pi} V^{\pi}(s)$, representing the best possible expected discounted return from state $s$. It satisfies $V^{*}(s) \ge V^{\pi}(s)$ for any $\pi$, with equality for optimal policies.
Optimization Objective
The primary optimization objective in a Markov decision process (MDP) is to select a policy $\pi$ that maximizes the expected discounted cumulative reward over an infinite horizon, formally expressed as

$\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R_{t+1}\right],$

where $\gamma \in [0, 1)$ is the discount factor ensuring convergence. This expectation is taken with respect to the state-action trajectories induced by $\pi$ and the transition dynamics $p(s', r \mid s, a)$. An optimal policy $\pi^{*}$ achieves this maximum and satisfies $\pi^{*}(s) \in \arg\max_{a} Q^{*}(s, a)$ for all states $s$, where $Q^{*}$ denotes the optimal action-value function representing the maximum expected return starting from state $s$ and taking action $a$.
The Bellman optimality equation characterizes the optimal value function $V^{*}$, defined as the maximum expected discounted reward from state $s$, via

$V^{*}(s) = \max_{a} \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma V^{*}(s')\right]$

for all $s$.[9] Similarly, the optimal action-value function obeys

$Q^{*}(s, a) = \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma \max_{a'} Q^{*}(s', a')\right].$[9]

These equations embody the principle of optimality, stating that an optimal policy divides the problem into subproblems where the remaining decisions are also optimal.
The existence and uniqueness of $V^{*}$ as the fixed point of the Bellman optimality operator $T$ follow from the Banach fixed-point theorem applied in the complete metric space of bounded value functions equipped with the supremum norm $\|V\|_{\infty} = \max_{s} |V(s)|$. Specifically, $T$ is a $\gamma$-contraction mapping because $\|T V_1 - T V_2\|_{\infty} \le \gamma \|V_1 - V_2\|_{\infty}$ for any value functions $V_1, V_2$, ensuring a unique fixed point that is approached iteratively by applying $T$ to any initial $V$. This contraction property holds under the finite-state, finite-action assumption and bounded rewards, guaranteeing the operator's well-definedness on the space of bounded functions.
The policy improvement theorem provides a constructive way to approach optimality: for any policy $\pi$, the greedy policy $\pi'$ (with respect to the value function under $\pi$) satisfies $V^{\pi'}(s) \ge V^{\pi}(s)$ for all $s$, with strict inequality for some $s$ unless $\pi$ is already optimal. Equality holds if and only if $\pi$ is optimal, as the improvement is zero everywhere precisely when $V^{\pi}$ satisfies the Bellman optimality equation. This theorem underpins methods for refining policies toward $\pi^{*}$.
In finite MDPs (with finite state and action spaces), there always exists an optimal policy that is stationary (time-independent) and deterministic, meaning $\pi^{*}$ selects a single action for each $s$ without randomization or dependence on time or history. Such policies suffice to achieve $V^{*}$ from any starting state, leveraging the contraction properties and the finite structure to ensure the existence of pure-strategy optima.
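The γ-contraction property stated above can be checked numerically. The sketch below builds a small random MDP (the sizes, seed, and random rewards are arbitrary assumptions of the sketch) and verifies that one application of the Bellman optimality operator shrinks the sup-norm distance between two value functions by at least a factor of γ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Random transition kernel P[a, s, s'] (rows sum to 1) and rewards R[a, s].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_actions, n_states))

def bellman_optimality(V):
    # (T V)(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
    return np.max(R + gamma * P @ V, axis=0)

V1, V2 = rng.random(n_states), rng.random(n_states)
lhs = np.max(np.abs(bellman_optimality(V1) - bellman_optimality(V2)))
rhs = gamma * np.max(np.abs(V1 - V2))
assert lhs <= rhs + 1e-12   # ||T V1 - T V2||_inf <= gamma * ||V1 - V2||_inf
```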
Examples and Applications
Illustrative Example
A simple yet insightful example of a Markov decision process is a gridworld environment, where an agent navigates a 4×4 grid representing the state space $S$, with positions labeled $(i, j)$ for rows $i = 1$ to 4 (top to bottom) and columns $j = 1$ to 4 (left to right). The agent's actions are up, down, left, and right. The goal state is the absorbing position at $(4, 4)$, which provides a reward upon entry and nothing thereafter. Each transition in non-absorbing states yields a small negative reward, encouraging efficient paths. Two pitfall states, at $(2, 2)$ and $(3, 3)$, are also absorbing, delivering a large negative reward upon entry to penalize dangerous areas. This configuration highlights risk-reward trade-offs in stochastic environments, akin to examples in standard reinforcement learning literature.[10]
The dynamics introduce stochasticity: from state $s$ and action $a$, the agent moves to the intended neighbor with probability 0.8; with probability 0.1 each, it slips perpendicularly left or right relative to the intended direction (treating up/down as cycling to left/right slips). If a slip or move would exit the grid, the agent remains in $s$. Absorbing states transition to themselves with probability 1 and reward 0. The reward function is deterministic and depends on the next state: one value for entering the goal, another for entering a pitfall, and the step cost otherwise. (A code sketch of these slip dynamics follows the table below.)[10]
Consider a uniform random policy $\pi$, where each action is chosen with probability $1/4$. The state-value function satisfies the Bellman expectation equation

$V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} p(s' \mid s, a)\left[r(s, a, s') + \gamma V^{\pi}(s')\right]$

for a given discount factor $\gamma$, with value zero for absorbing states. To illustrate the computation, focus on state $(4, 3)$, adjacent to the goal. Under the random policy, the expected update is the average over the four actions. For action right, the agent enters the goal with probability 0.8, slips up into the pit at $(3, 3)$ with probability 0.1, and slips down and stays in place with probability 0.1 (the move would leave the grid). Similar calculations for the other actions yield lower values, because they move away from the goal and risk slipping toward the pit. Solving the full linear system gives the random-policy values for every state; these reflect the policy's inefficiency caused by random slips toward the pits, and a distant state such as $(1, 1)$ has a much lower value, as random walks prolong the episode and increase the risk of entering a pitfall.[10]
The optimal value function $V^{*}$ and policy $\pi^{*}$ maximize expected returns and can be obtained by methods such as value iteration (detailed elsewhere). In this gridworld, $V^{*}$ increases toward the goal while dropping sharply near the pitfalls, and the optimal policy directs actions toward the goal while detouring around the pitfalls, visualized as arrows that route the agent around the pits toward the goal. A table summarizes approximate values:

| | (1,1) | (1,2) | (1,3) | (1,4) |
|---|---|---|---|---|
| Row 1 | -3.5 | -2.9 | -2.4 | -2.0 |
| Row 2 | -4.1 | -10.0* | -1.9 | -3.2 |
| Row 3 | -2.8 | -3.4 | -10.0* | 0.1 |
| Row 4 | -1.8 | -1.2 | 0.62 | 0.0* |
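The slip dynamics described above (0.8 for the intended move, 0.1 for each perpendicular slip, staying put when a move would leave the grid) can be written down directly. The sketch below builds the transition distribution for a single state-action pair of the 4×4 grid; the coordinate convention and the choice of perpendicular directions follow the example, while the absorbing goal and pit states, which self-loop, are not handled here.

```python
# Actions as (row, col) displacements on the 4x4 grid, rows/cols numbered 1..4.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
PERPENDICULAR = {"up": ("left", "right"), "down": ("left", "right"),
                 "left": ("up", "down"), "right": ("up", "down")}

def clamp(state, move):
    """Apply a move, staying in place if it would leave the 4x4 grid."""
    r, c = state[0] + MOVES[move][0], state[1] + MOVES[move][1]
    return (r, c) if 1 <= r <= 4 and 1 <= c <= 4 else state

def transition(state, action):
    """Return {next_state: probability} under the 0.8 / 0.1 / 0.1 slip model."""
    probs = {}
    for move, p in [(action, 0.8),
                    (PERPENDICULAR[action][0], 0.1),
                    (PERPENDICULAR[action][1], 0.1)]:
        nxt = clamp(state, move)
        probs[nxt] = probs.get(nxt, 0.0) + p
    return probs

# From (3,4), "right" would leave the grid, so the agent stays put w.p. 0.8
# and slips to (2,4) or (4,4) w.p. 0.1 each.
print(transition((3, 4), "right"))
```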
Real-World Applications
Markov decision processes (MDPs) have found extensive application in robotics, particularly for path planning and control in uncertain environments. In robot navigation, MDPs model states as the robot's position and sensor readings, actions as movement directions, and rewards based on progress toward goals while penalizing collisions or inefficiencies. Post-2000 advancements in approximate methods, such as focussed processing of MDPs, have enabled scalable solutions for large state spaces in dynamic settings, allowing robots to compute near-optimal policies efficiently.[13] For instance, real-time path planning algorithms using MDPs handle stochastic obstacles by iteratively updating value functions to balance exploration and exploitation.[14]
In inventory management, MDPs optimize dynamic stocking policies under demand uncertainty, where states represent current inventory levels, actions involve ordering quantities, and rewards are defined as sales profits minus holding and shortage costs. This formulation allows for periodic review systems that minimize long-term costs by solving the Bellman equation for optimal ordering thresholds. Seminal work has demonstrated how value iteration on MDPs yields policies superior to traditional economic order quantity models in stochastic environments.[15] Recent extensions incorporate supply chain disruptions, showing up to 15-20% cost reductions in multi-echelon inventories through MDP-based decision rules.[16]
Financial portfolio optimization leverages MDPs to manage asset allocation over time, with states capturing current portfolio values and market conditions, actions as rebalancing decisions, and rewards tied to risk-adjusted returns. This sequential framework accounts for transaction costs and market volatility, enabling dynamic strategies that outperform static buy-and-hold approaches. An influential book on MDPs in finance details how discounted infinite-horizon models derive optimal policies via linear programming, applied to equity-bond portfolios with empirical backtests showing improved Sharpe ratios.[17]
In healthcare, MDPs facilitate treatment sequencing, such as dosing regimens for chronic diseases like diabetes, where states include patient health metrics and time since last dose, actions are dosage adjustments, and rewards reflect improved outcomes minus side effects. Recent 2020s applications use MDPs along with electronic health records to develop personalized treatment recommendations that enhance glycemic control.[18] For depression management, MDP models optimize sequential therapy choices by reformulating state-transition models, showing potential for dynamic treatment comparisons superior to static guidelines.[19]
MDPs underpin AI in games like backgammon, modeling board positions as states, legal moves as actions, and win/loss outcomes as rewards to learn optimal strategies through reinforcement learning. The seminal TD-Gammon program, using temporal-difference methods on the MDP framework, achieved superhuman performance by self-play, revolutionizing game AI with over 1.5 million training games.[20] Modern applications include autonomous driving, where MDPs guide decision-making in uncertain traffic, as seen in systems like Waymo's since the 2010s, modeling vehicle states and actions to minimize collision risks while maximizing efficiency.
In climate policy modeling, MDPs support sequential decisions on emissions reductions and adaptation, aligning with IPCC frameworks for socio-economic impacts; a 2020 model formulates government actions in response to climate states, optimizing welfare under uncertainty.[21]
Solution Algorithms
Value Iteration
Value iteration is a dynamic programming algorithm used to solve finite Markov decision processes (MDPs) by iteratively approximating the optimal value function until convergence. It directly computes the value of each state under the optimal policy by successive approximations based on the Bellman optimality equation, which relates the value of a state to the maximum expected reward plus discounted future value over possible actions.
The algorithm begins by initializing a value function $V_0(s) = 0$ for all states $s$. In each iteration $k$, the value function is updated for every state $s$ according to the Bellman optimality operator:

$V_{k+1}(s) = \max_{a \in A} \sum_{s', r} P(s', r \mid s, a)\left[r + \gamma V_k(s')\right],$

where $A$ is the action set, $P(s', r \mid s, a)$ is the transition probability and reward distribution, and $\gamma$ is the discount factor. This update is performed synchronously across all states in each iteration, repeating until the maximum change in value across states falls below a small threshold $\epsilon$, indicating convergence to the optimal value function $V^{*}$. Here is pseudocode for the value iteration algorithm:

Initialize V(s) = 0 for all s in S
Repeat:
    Δ = 0
    For each s in S:
        V_old = V(s)
        V(s) = max_a ∑_{s',r} P(s',r|s,a) [r + γ V(s')]
        Δ = max(Δ, |V(s) - V_old|)
Until Δ < ε (some small threshold)
Return V
