Markov Decision Process (MDP) in Reinforcement Learning
A Markov Decision Process (MDP) is a mathematical framework for describing an environment in which outcomes are partly random and partly under the control of a decision-maker. Because MDPs formalize sequential decision-making under uncertainty, they form the theoretical foundation of reinforcement learning.
Components of an MDP
An MDP is defined by a tuple (S, A, P, R, \gamma) where:
- S (State Space): A finite or infinite set of states representing the environment.
- A (Action Space): A set of actions available to the agent in each state.
- P (Transition Probability): A probability function P(s' | s, a) that defines the likelihood of transitioning from state s to s' after taking action a.
- R (Reward Function): A function R(s, a, s') that assigns a reward for moving from state s to s' via action a.
- γ (Discount Factor): A value in the range [0,1] that determines the importance of future rewards.
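As a concrete illustration, these components can be written down directly for a small, hypothetical two-state MDP. Every name and number in the Python sketch below is invented purely for the example:

```python
# A tiny, hypothetical two-state MDP. The states, actions, transition
# probabilities P(s' | s, a), rewards R(s, a, s'), and discount factor
# are all assumptions chosen for illustration only.

states = ["s0", "s1"]
actions = ["stay", "move"]
gamma = 0.9  # discount factor in [0, 1]

# P[(s, a)] maps each possible next state s' to its probability.
P = {
    ("s0", "stay"): {"s0": 1.0},
    ("s0", "move"): {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s1": 1.0},
    ("s1", "move"): {"s0": 0.8, "s1": 0.2},
}

# R[(s, a, s')] is the reward for taking action a in s and landing in s'.
R = {
    ("s0", "stay", "s0"): 0.0,
    ("s0", "move", "s0"): 0.0,
    ("s0", "move", "s1"): 1.0,
    ("s1", "stay", "s1"): 2.0,
    ("s1", "move", "s0"): 0.0,
    ("s1", "move", "s1"): 2.0,
}
```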
Markov Property
An MDP satisfies the Markov property: the next state depends only on the current state and action, not on the sequence of states and actions that preceded them.
Mathematically,
P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots) = P(s_{t+1} \mid s_t, a_t)
This property ensures that MDPs can be efficiently solved using mathematical techniques like dynamic programming.
Policy
A policy defines the agent’s strategy for selecting actions in each state. It can be:
- Deterministic (\pi(s)): Always selects the same action for a given state.
- Stochastic (\pi(a \mid s)): Assigns a probability to each action in a given state.
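Both kinds of policy are easy to write down for the toy MDP sketched above (again, the numbers are made up):

```python
# Deterministic policy: exactly one action per state.
deterministic_policy = {"s0": "move", "s1": "stay"}

# Stochastic policy: a probability distribution over actions for each state.
stochastic_policy = {
    "s0": {"stay": 0.1, "move": 0.9},
    "s1": {"stay": 0.7, "move": 0.3},
}
```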
Value Functions and Optimality
The goal in an MDP is to find an optimal policy that maximizes the expected cumulative (discounted) reward. Two key functions are used to evaluate policies:
1. State Value Function (V^{\pi}(s)):
V^{\pi}(s) = \mathbb{E}_{\pi} \left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \,\middle|\, s_0 = s \right]
It represents the expected cumulative reward from state s under policy \pi .
2. Action Value Function (Q^{\pi}(s, a)):
Q^{\pi}(s, a) = \mathbb{E}_{s' \sim P(\cdot \mid s, a)} \left[ R(s, a, s') + \gamma V^{\pi}(s') \right]
It evaluates the expected reward of taking action a in state s and following \pi thereafter.
The Optimal Value Function is defined as:
- Optimal State Value Function (V^*(s)):
V^*(s) = \max_{a} Q^*(s, a)
- Optimal Action Value Function (Q^*(s, a)):
Q^*(s, a) = \sum_{s'} P(s' \mid s, a) \left[ R(s, a, s') + \gamma \max_{a'} Q^*(s', a') \right]
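The relationship V^*(s) = \max_{a} Q^*(s, a) translates directly into code. The Q-values in this sketch are placeholders, not the solution of any particular MDP:

```python
states = ["s0", "s1"]
actions = ["stay", "move"]

# Hypothetical optimal action values Q*(s, a), used only to illustrate
# how V* and the greedy (optimal) policy are read off from them.
q_star = {
    ("s0", "stay"): 1.2, ("s0", "move"): 3.4,
    ("s1", "stay"): 5.0, ("s1", "move"): 4.1,
}

# V*(s) = max_a Q*(s, a); the optimal policy picks the arg max action.
v_star = {s: max(q_star[(s, a)] for a in actions) for s in states}
optimal_policy = {s: max(actions, key=lambda a: q_star[(s, a)]) for s in states}
```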
Solving MDPs in Reinforcement Learning
Several algorithms have been developed to solve MDPs within the RL framework. Here are a few key approaches:
1. Dynamic Programming
Dynamic programming methods, such as Value Iteration and Policy Iteration, are used to solve MDPs when the model of the environment (transition probabilities and rewards) is known.
- Value Iteration: Iteratively applies the Bellman optimality backup until the value function converges to the optimal value function (a code sketch follows this list).
V_{k+1}(s) = \max_{a \in A} \sum_{s'} P(s' \mid s, a) \left[ R(s, a, s') + \gamma V_k(s') \right]
- Policy Iteration: Alternates between policy evaluation and policy improvement until the policy converges to the optimal policy.
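Below is a minimal sketch of Value Iteration, assuming the dictionary-based P and R representation from the toy MDP above; it is an illustration, not a production implementation:

```python
def value_iteration(states, actions, P, R, gamma, theta=1e-8):
    """Repeatedly apply the Bellman optimality backup until V stops changing.

    Assumes P[(s, a)] is a dict {s': probability} and R[(s, a, s')] is a
    scalar reward, as in the toy MDP sketched earlier.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Max over actions of the expected one-step reward plus the
            # discounted value of the successor state.
            best = max(
                sum(p * (R[(s, a, s_next)] + gamma * V[s_next])
                    for s_next, p in P[(s, a)].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:  # stop once the largest update is negligible
            return V
```

Called as value_iteration(states, actions, P, R, gamma) on the toy MDP above, this returns a dictionary of approximately optimal state values; the greedy policy with respect to those values is then optimal.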
2. Monte Carlo Methods
Monte Carlo methods are used when the model of the environment is unknown. These methods rely on sampling to estimate value functions and optimize policies.
- First-Visit MC: Estimates the value of a state as the average return following the first visit to that state.
- Every-Visit MC: Estimates the value of a state as the average return following all visits to that state.
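Both variants can be sketched with a single function. Here episodes are assumed to be lists of (state, reward) pairs collected under a fixed policy, where each reward is the one received after leaving that state:

```python
from collections import defaultdict

def mc_prediction(episodes, gamma, first_visit=True):
    """Estimate V(s) by averaging sampled returns (first-visit or every-visit)."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    for episode in episodes:
        # Walk backwards so G accumulates the discounted return from each step.
        G = 0.0
        returns_at = [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            state, reward = episode[t]
            G = reward + gamma * G
            returns_at[t] = G
        seen = set()
        for t, (state, _) in enumerate(episode):
            if first_visit and state in seen:
                continue  # first-visit MC: count only the first occurrence
            seen.add(state)
            returns_sum[state] += returns_at[t]
            returns_count[state] += 1
    return {s: returns_sum[s] / returns_count[s] for s in returns_sum}
```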
3. Temporal-Difference Learning
Temporal-Difference (TD) learning methods combine ideas from dynamic programming and Monte Carlo methods. TD learning updates value estimates based on the difference (temporal difference) between consecutive value estimates.
- SARSA (State-Action-Reward-State-Action): Updates the action-value function based on the action taken by the current policy.
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ R_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right]
- Q-Learning: An off-policy TD control algorithm that updates the action-value function using the maximum estimated action value in the next state, regardless of the action the current policy actually takes.
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ R_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]
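Both update rules amount to a single line on a tabular Q function. The sketch below assumes Q is a dictionary keyed by (state, action), for example a collections.defaultdict(float):

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    # On-policy target: uses the action a_next actually chosen by the policy.
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def q_learning_update(Q, s, a, r, s_next, actions, alpha, gamma):
    # Off-policy target: uses the greedy (maximum) action value in s_next.
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```

In a full agent, these updates sit inside a loop that selects actions (for example, ε-greedily), steps the environment, and repeats until the estimates converge.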
Markov Decision Processes provide a powerful and flexible framework for modeling decision-making problems in uncertain environments. Their relevance to Reinforcement Learning cannot be overstated, as MDPs underpin the theoretical foundation of RL algorithms. By understanding MDPs, researchers and practitioners can develop more effective RL solutions, unlocking new possibilities in artificial intelligence and beyond.