Steady-state probabilities in Markov Chains

Last Updated : 29 Apr, 2025

A Markov chain is a statistical model that explains how a system transitions from one state to another, with the next state depending only on the present state and not on previous states. This Markov property makes it easier to study complex systems using probabilities.

An important component of a Markov chain is the transition matrix, which holds the probabilities of moving from one state to another. In the long run, the system can reach a steady state in which these probabilities no longer change.

Understanding Steady-State Probabilities

A Markov chain models a stochastic process in which the transition from one state to another is governed by a fixed probability distribution. The steady-state probabilities of a Markov chain are the long-run probabilities of the system being in a specific state. They do not change with time; that is, after enough transitions, the system settles into a constant distribution.

Mathematically, the vector of steady-state probabilities satisfies:

\pi P = \pi

where P is the transition probability matrix. Also, the sum of all steady-state probabilities should be 1:

\sum_{i} \pi_i = 1

These conditions describe an equilibrium in which the probability of being in each state no longer changes from one step to the next.
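For example, consider a small two-state chain (chosen here only for illustration) with

P = \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix}

Solving \pi P = \pi together with \pi_1 + \pi_2 = 1 gives 0.1\pi_1 = 0.5\pi_2, so \pi = (5/6, 1/6) \approx (0.833, 0.167). Multiplying this vector by P leaves it unchanged, which is exactly the steady-state property.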

Markov Chains and Transition Matrices

A Markov chain consists of:

  • A state space (set of possible states).
  • A transition probability matrix P, where P_{ij} represents the probability of moving from state i to state j.

The system reaches a steady state when the probability of being in each state remains unchanged over time.

For ergodic Markov chains (both irreducible and aperiodic), a unique steady-state distribution always exists.
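As a quick, illustrative check (not part of the original derivation): a finite chain is ergodic exactly when some power of P has all strictly positive entries (such a chain is called regular). A minimal sketch, where the helper name is_regular and the cutoff max_power are assumptions made here:

Python
import numpy as np

def is_regular(P, max_power=50):
    # A finite Markov chain is ergodic (irreducible and aperiodic)
    # if some power of P is strictly positive in every entry.
    Pk = np.eye(P.shape[0])
    for _ in range(max_power):
        Pk = Pk @ P
        if np.all(Pk > 0):
            return True
    return False

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
print(is_regular(P))  # True: every entry of P is already positive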

Methods for Calculating Steady-State Probabilities

There are several methods to calculate steady-state probabilities:

1. Solving the Linear System

Rewriting \pi P = \pi by taking the transpose (so that \pi is treated as a column vector) gives:

(P^T - I) \pi = 0

We then add the constraint \sum_{i} \pi_i = 1 and solve the resulting linear system for \pi.

Computing Steady-State Probabilities of a Markov Chain Using Linear Algebra

Python
import numpy as np

def steady_state_linear(P):
    n = P.shape[0]
    A = np.transpose(P) - np.eye(n)
    A[-1] = np.ones(n)  # Replace last equation with constraint: sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1
    return np.linalg.solve(A, b)

# Example Transition Matrix
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

steady_probs = steady_state_linear(P)
print("Steady-State Probabilities:", steady_probs)

Output
Steady-State Probabilities: [0.46341463 0.31707317 0.2195122 ] 
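As a quick sanity check (assuming steady_probs and P from the snippet above are still in scope), the result should satisfy \pi P = \pi and sum to 1:

Python
print(np.allclose(steady_probs @ P, steady_probs))  # True: pi P = pi
print(np.isclose(steady_probs.sum(), 1.0))          # True: probabilities sum to 1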

2. Power Method (Iterative Approach)

This method iteratively updates \pi until convergence:

  1. Begin with an initial probability vector.
  2. Update iteratively using \pi^{(t+1)} = \pi^{(t)} P.
  3. Stop when \| \pi^{(t+1)} - \pi^{(t)} \| < \epsilon.

Computing Steady-State Probabilities Using the Power Method

Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def steady_state_power_method(P, tol=1e-6, max_iter=1000):
    n = P.shape[0]
    pi = np.ones(n) / n  # Start with uniform distribution

    for _ in range(max_iter):
        new_pi = np.dot(pi, P)
        if np.linalg.norm(new_pi - pi) < tol:
            break
        pi = new_pi

    return pi

# Call the function after defining P
steady_probs = steady_state_power_method(P)
print("Steady-State Probabilities (Power Method):", steady_probs)

Output
Steady-State Probabilities (Power Method): [0.52727197 0.30909138 0.16363665] 

3. Eigenvector Method

The steady-state distribution is the dominant eigenvector of P^T corresponding to eigenvalue 1, i.e., it satisfies P^T \pi = \pi.

Computing Steady-State Probabilities Using the Eigenvector Method

Python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def steady_state_eigenvector(P):
    eigenvalues, eigenvectors = np.linalg.eig(P.T)

    # Find the index of the eigenvalue closest to 1
    index = np.abs(eigenvalues - 1).argmin()
    steady_state = eigenvectors[:, index].real

    # Normalize the vector to sum to 1
    return steady_state / np.sum(steady_state)

steady_probs = steady_state_eigenvector(P)
print("Steady-State Probabilities (Eigenvector Method):", steady_probs)

Output
Steady-State Probabilities (Eigenvector Method): [0.52727273 0.30909091 0.16363636] 
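Note that the power-method and eigenvector examples use a transition matrix whose last row is [0.4, 0.3, 0.3], rather than the [0.2, 0.3, 0.5] used in the linear-algebra example, which is why their results differ from the first output. On the same matrix, the methods agree; a quick check, assuming both functions defined above are available in the same session:

Python
pi_power = steady_state_power_method(P)
pi_eig = steady_state_eigenvector(P)
print(np.allclose(pi_power, pi_eig, atol=1e-5))  # True: both methods give the same distribution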

Applications in Machine Learning

1. Reinforcement Learning (RL) - In RL, the transition probability matrix characterizes state-action transitions, and the steady-state distribution helps analyze the long-run behavior of policies.

2. Hidden Markov Models (HMMs)- HMMs apply steady-state probabilities to represent long-run state distributions and assist in speech recognition and sequence prediction.

3. PageRank Algorithm - Google's PageRank algorithm relies on the steady-state probabilities of a web-link transition matrix (a small sketch follows this list).

4. Queuing Systems and Customer Behavior Modeling- Steady-state probabilities can be employed for modeling customer waiting times, load balancing in a system, and optimal resource scheduling.

5. Markov Decision Processes (MDPs) - MDPs use steady-state probabilities to analyze the long-run stability of policies in decision-making models.
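To make the PageRank application concrete, below is a minimal sketch (the tiny three-page link matrix and the damping factor 0.85 are illustrative assumptions, not from this article): the ranking vector is the steady-state distribution of the damped link-transition matrix, computed with the same power iteration used earlier.

Python
import numpy as np

# Hypothetical link structure: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0
links = np.array([[0.0, 0.5, 0.5],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])

d = 0.85  # damping factor (a common choice; assumed here)
n = links.shape[0]
G = d * links + (1 - d) * np.ones((n, n)) / n  # damped, row-stochastic transition matrix

pi = np.ones(n) / n  # start from the uniform distribution
for _ in range(1000):
    new_pi = pi @ G
    if np.linalg.norm(new_pi - pi) < 1e-8:
        break
    pi = new_pi

print("PageRank scores:", pi)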

