Steady-state probabilities in Markov Chains
A Markov chain is a statistical model that explains how a system transitions from one state to another, with the next state depending only on the present state and not on previous states. This Markov property makes it easier to study complex systems using probabilities.
An important component of a Markov chain is the transition matrix, which holds the probabilities of moving from one state to another. In the long run, the system can settle into a steady state, in which these probabilities remain constant over time.
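To make the transition matrix concrete, here is a minimal sketch using a hypothetical two-state weather chain (the states and probabilities are illustrative assumptions, not from any dataset). Each row of the matrix sums to 1, and multiplying a distribution by the matrix advances the chain one step:
Python
import numpy as np

# Hypothetical two-state weather chain: state 0 = "Sunny", state 1 = "Rainy".
# Each row is a probability distribution over the next state,
# so every row must sum to 1.
P = np.array([[0.9, 0.1],   # Sunny -> Sunny: 0.9, Sunny -> Rainy: 0.1
              [0.5, 0.5]])  # Rainy -> Sunny: 0.5, Rainy -> Rainy: 0.5

# Starting from "Sunny" with certainty, one step of the chain is a
# vector-matrix product of the current distribution with P.
pi0 = np.array([1.0, 0.0])
print("After one step :", pi0 @ P)       # [0.9 0.1]
print("After two steps:", pi0 @ P @ P)   # [0.86 0.14]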
Understanding Steady-State Probabilities
A Markov chain models a stochastic process in which the transition from one state to another is governed by a fixed probability distribution. The steady-state probabilities of a Markov chain are the long-run probabilities of the system being in a specific state. They do not change with time, that is, after enough transitions, the system settles into a constant distribution.
Mathematically, the vector of steady-state probabilities satisfies:
\pi P = \pi
where P is the transition probability matrix. Also, the sum of all steady-state probabilities should be 1:
\sum_{i} \pi_i = 1
This constraint, together with the balance equation, guarantees that the system reaches an equilibrium in which the probability of being in each state no longer changes over time.
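As a quick sanity check, both conditions can be verified numerically. The sketch below reuses the hypothetical two-state chain from above, for which \pi = [5/6, 1/6] can be derived by hand (0.1 \pi_0 = 0.5 \pi_1 together with \pi_0 + \pi_1 = 1):
Python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Hand-derived steady state for this particular chain.
pi = np.array([5/6, 1/6])

print("pi P        :", pi @ P)                      # reproduces pi
print("pi P == pi ?:", np.allclose(pi @ P, pi))     # True
print("sums to 1  ?:", np.isclose(pi.sum(), 1.0))   # True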
Markov Chains and Transition Matrices
A Markov chain consists of:
- A state space (set of possible states).
- A transition probability matrix P, where P_{ij} represents the probability of moving from state i to state j.
The system reaches a steady state when the probability of being in each state remains unchanged over time.
For ergodic Markov chains (both irreducible and aperiodic), a unique steady-state distribution exists, and the chain converges to it from any initial distribution.
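One practical sufficient check for ergodicity of a finite chain is regularity: if some power of P has all strictly positive entries, the chain is both irreducible and aperiodic. A minimal sketch of this check follows (the helper name is_regular and the power cap are our assumptions, not a library API):
Python
import numpy as np

def is_regular(P, max_power=100):
    # A finite chain is regular (hence ergodic) if some power of P
    # is strictly positive in every entry. The cap on the exponent
    # is an arbitrary practical limit for this sketch.
    Pk = P.copy()
    for _ in range(max_power):
        if np.all(Pk > 0):
            return True
        Pk = Pk @ P
    return False

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
print("Regular (ergodic):", is_regular(P))  # True: P itself is already positive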
Methods for Calculating Steady-State Probabilities
There are several methods to calculate steady-state probabilities:
1. Solving the Linear System
Rewriting \pi P = \pi (treating \pi as a column vector after transposing) gives:
(P^T - I) \pi = 0
Because these equations are linearly dependent, one of them is replaced by the normalization constraint \sum_{i} \pi_i = 1, and the resulting system is solved for \pi.
Computing Steady-State Probabilities of a Markov Chain Using Linear Algebra
Python
import numpy as np

def steady_state_linear(P):
    n = P.shape[0]
    A = np.transpose(P) - np.eye(n)
    A[-1] = np.ones(n)  # Constraint: sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1
    return np.linalg.solve(A, b)

# Example Transition Matrix
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

steady_probs = steady_state_linear(P)
print("Steady-State Probabilities:", steady_probs)
Output
Steady-State Probabilities: [0.46341463 0.31707317 0.2195122 ]
2. Power Method (Iterative Approach)
This method iteratively updates \pi until convergence:
- Begin with an initial probability vector.
- Update iteratively using \pi^{(t+1)} = \pi^{(t)} P.
- Stop when \| \pi^{(t+1)} - \pi^{(t)} \| < \epsilon.
Computing Steady-State Probabilities Using the Power Method
Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def steady_state_power_method(P, tol=1e-6, max_iter=1000):
    n = P.shape[0]
    pi = np.ones(n) / n  # Start with uniform distribution
    for _ in range(max_iter):
        new_pi = np.dot(pi, P)
        if np.linalg.norm(new_pi - pi) < tol:
            break
        pi = new_pi
    return pi

# Call the function after defining P
steady_probs = steady_state_power_method(P)
print("Steady-State Probabilities (Power Method):", steady_probs)
Output
Steady-State Probabilities (Power Method): [0.52727197 0.30909138 0.16363665]
3. Eigenvector Method
The steady-state distribution is the left eigenvector of P associated with eigenvalue 1, i.e., the solution of P^T \pi = \pi. It can therefore be extracted from the eigendecomposition of P^T and normalized to sum to 1.
Computing Steady-State Probabilities Using the Eigenvector Method
Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def steady_state_eigenvector(P):
    eigenvalues, eigenvectors = np.linalg.eig(P.T)
    # Find the index of the eigenvalue closest to 1
    index = np.abs(eigenvalues - 1).argmin()
    steady_state = eigenvectors[:, index].real
    # Normalize the vector to sum to 1
    return steady_state / np.sum(steady_state)

steady_probs = steady_state_eigenvector(P)
print("Steady-State Probabilities (Eigenvector Method):", steady_probs)
Output
Steady-State Probabilities (Eigenvector Method): [0.52727273 0.30909091 0.16363636]
Applications in Machine Learning
1. Reinforcement Learning (RL)- In RL, the transition probability matrix characterizes state-action transitions, and the steady-state distribution helps analyze the long-run behavior of a policy.
2. Hidden Markov Models (HMMs)- HMMs use steady-state probabilities to model long-run state distributions, supporting tasks such as speech recognition and sequence prediction.
3. PageRank Algorithm- Google's PageRank algorithm ranks web pages by the steady-state probabilities of a web-link transition matrix (see the sketch after this list).
4. Queuing Systems and Customer Behavior Modeling- Steady-state probabilities are used to model customer waiting times, balance load across a system, and schedule resources optimally.
5. Markov Decision Processes (MDPs)- MDPs use steady-state probabilities to analyze the long-run stability of policies in decision-making models.
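To make the PageRank connection concrete, here is a minimal sketch (the three-page link matrix, the pagerank helper, and the damping factor value are illustrative assumptions, not Google's implementation). Damping mixes the link-following chain with uniform teleportation, which makes the resulting chain ergodic, so power iteration converges to a unique steady state:
Python
import numpy as np

def pagerank(P, d=0.85, tol=1e-8, max_iter=1000):
    # Steady state of the damped chain G = d*P + (1 - d)/n, found by
    # power iteration. P is a row-stochastic link-transition matrix
    # (no dangling pages in this sketch); d is the damping factor.
    n = P.shape[0]
    G = d * P + (1 - d) / n   # teleportation term makes G strictly positive
    pi = np.ones(n) / n
    for _ in range(max_iter):
        new_pi = pi @ G
        if np.linalg.norm(new_pi - pi, 1) < tol:
            break
        pi = new_pi
    return pi

# Hypothetical 3-page web: each row spreads a page's probability
# evenly over the pages it links to.
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print("PageRank scores:", pagerank(P))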