Deadlock, Starvation, and Livelock
Last Updated : 27 May, 2025
Deadlock, starvation, and livelock are problems that can occur in computer systems when multiple processes compete for resources. Deadlock happens when processes get stuck waiting for each other indefinitely, so none can proceed. Starvation occurs when a process is repeatedly denied access to resources because others with higher priority keep getting them first. Livelock is when processes keep changing their states to avoid conflict but still fail to make progress, much like two people who repeatedly step aside for each other in a hallway without ever passing. Understanding these issues is important for designing systems that run smoothly without getting stuck.
Deadlock
A deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process. The sections below discuss deadlock and its necessary conditions in detail.
- Deadlock is a situation in computing where two or more processes are unable to proceed because each is waiting for the other to release resources.
- The four necessary conditions are mutual exclusion, hold and wait, no preemption, and circular wait.
Figure (Deadlock): Process 1 is holding Resource 1 and waiting for Resource 2, which is held by Process 2, while Process 2 is waiting for Resource 1.
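The circular wait in the figure can be broken by imposing a global lock order: if every process acquires Resource 1 before Resource 2, no cycle can form. A minimal sketch of this prevention technique (the lock names and printed messages are illustrative, not part of any standard API):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockOrdering {
    // A fixed global order: every thread must take resource1 before resource2.
    static final Lock resource1 = new ReentrantLock();
    static final Lock resource2 = new ReentrantLock();

    static void useBoth(String name) {
        resource1.lock();          // always acquired first
        try {
            resource2.lock();      // always acquired second
            try {
                System.out.println(name + ": using both resources");
            } finally {
                resource2.unlock();
            }
        } finally {
            resource1.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> useBoth("Process A"));
        Thread b = new Thread(() -> useBoth("Process B"));
        a.start(); b.start();
        a.join(); b.join();        // both terminate: no circular wait can form
        System.out.println("done");
    }
}
```

Because neither thread can ever hold Resource 2 while waiting for Resource 1, the circular-wait condition is eliminated and the program always terminates.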
Starvation
Starvation is the problem that occurs when high-priority processes keep executing while low-priority processes remain blocked for an indefinite time. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
Causes of Starvation:
- Priority Scheduling: If there are always higher-priority processes available, then the lower-priority processes may never be allowed to run.
- Resource Utilization: Resources are consistently allocated to higher-priority processes, leaving lower-priority processes starved.
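A standard remedy for starvation is aging: the effective priority of a waiting process rises the longer it waits, so even a low-priority process is eventually scheduled. The simplified simulation below sketches the idea; the priority values, the boost of one unit per round, and the class names are illustrative choices, not a real scheduler:

```java
import java.util.ArrayList;
import java.util.List;

public class AgingDemo {
    static class Task {
        final String name;
        int priority;          // higher value = scheduled first
        Task(String name, int priority) { this.name = name; this.priority = priority; }
    }

    // Each round, run the highest-priority task, then age every task
    // that had to wait by bumping its priority.
    static List<String> schedule(List<Task> ready, int rounds) {
        List<String> order = new ArrayList<>();
        for (int r = 0; r < rounds; r++) {
            Task next = ready.get(0);
            for (Task t : ready) if (t.priority > next.priority) next = t;
            order.add(next.name);
            for (Task t : ready) if (t != next) t.priority++;   // aging step
        }
        return order;
    }

    public static void main(String[] args) {
        List<Task> ready = new ArrayList<>();
        ready.add(new Task("high", 10));
        ready.add(new Task("low", 1));
        // Without the aging step, "low" would never appear in the output;
        // with it, "low" runs once its boosted priority overtakes "high".
        System.out.println(schedule(ready, 12));
    }
}
```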
Livelock
Livelock occurs when two or more processes continually repeat the same interaction in response to changes in the other processes without doing any useful work. These processes are not in the waiting state, and they are running concurrently. This is different from a deadlock because in a deadlock all processes are in the waiting state.
Figure (Livelock): Two processes (Process A and Process B) actively try to perform an action (fork()), but repeatedly fail because the process table is full.
Example: Imagine a pair of processes using two resources below:
Java

```java
void processA() {
    enterReg(resource1);   // poll until resource 1 is acquired
    enterReg(resource2);   // poll until resource 2 is acquired
    useBothResources();
    leaveReg(resource2);
    leaveReg(resource1);
}

void processB() {
    enterReg(resource2);   // note the reverse order: resource 2 first
    enterReg(resource1);
    useBothResources();
    leaveReg(resource1);
    leaveReg(resource2);
}
```
C++

```cpp
void process_A(void) {
    enter_reg(&resource_1);   // poll until resource 1 is acquired
    enter_reg(&resource_2);   // poll until resource 2 is acquired
    use_both_resources();
    leave_reg(&resource_2);
    leave_reg(&resource_1);
}

void process_B(void) {
    enter_reg(&resource_2);   // note the reverse order: resource 2 first
    enter_reg(&resource_1);
    use_both_resources();
    leave_reg(&resource_1);
    leave_reg(&resource_2);
}
```
Each of the two processes needs both resources, and each uses the polling primitive enterReg to try to acquire the necessary locks. If the attempt fails, the process simply tries again. If process A runs first and acquires resource 1, and then process B runs and acquires resource 2, then no matter which one runs next, it will make no further progress, yet neither process blocks. Each uses its CPU quantum over and over without making any progress, but also without blocking. This situation is therefore not a deadlock (no process is blocked), but it is functionally equivalent to one: a livelock.
What leads to Livelock
Livelock occurs when processes continuously change their state in response to each other but make no actual progress, unlike a deadlock, where they are stuck waiting.
A classic cause is competition for finite resources, such as process table entries in a UNIX system. For example, imagine a system with 100 process slots. Ten programs each need to create 12 subprocesses. After each has spawned 9, all 100 slots are in use (10 parents + 90 children). When the programs try to create more processes, fork() fails because the table is full.
If each program then waits a random time and tries again, they may all fail and retry endlessly, reacting to one another but never succeeding. This is livelock: the system is active but makes no progress. Such situations are rare but possible, and they are difficult to detect.
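Randomized exponential backoff reduces the chance of such lockstep retries: each failed attempt waits a random delay drawn from a window that doubles each time, so competing programs desynchronize. A sketch of the idea, where retryWithBackoff, the window sizes, and the simulated resource are all illustrative:

```java
import java.util.Random;
import java.util.function.Supplier;

public class Backoff {
    static final Random rng = new Random();

    // Retry an operation that can fail transiently (like fork() on a full
    // process table), sleeping a random, doubling delay between attempts.
    static boolean retryWithBackoff(Supplier<Boolean> op, int maxAttempts)
            throws InterruptedException {
        int window = 10;                        // initial backoff window (ms)
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (op.get()) {
                return true;                    // success
            }
            Thread.sleep(rng.nextInt(window));  // random jitter breaks lockstep
            window *= 2;                        // exponential growth
        }
        return false;                           // gave up
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated resource that frees up after three failed attempts.
        int[] remainingFailures = {3};
        boolean ok = retryWithBackoff(() -> remainingFailures[0]-- <= 0, 10);
        System.out.println(ok ? "acquired" : "gave up");
    }
}
```

The random jitter is what matters: if every program slept for the same fixed interval, they would all retry at the same instant and fail together again.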
Difference between Deadlock, Starvation and Livelock
A livelock is similar to a deadlock, except that the states of the processes involved constantly change with respect to one another, with none progressing. Livelock can be viewed as a special case of resource starvation, whose general definition is that a specific process is not progressing.
| Feature | Deadlock | Starvation | Livelock |
| --- | --- | --- | --- |
| Definition | Processes are blocked forever, each waiting for a resource held by another. | A process waits indefinitely because it is always bypassed by others. | Processes keep executing but fail to make progress. |
| Cause | Circular wait and resource holding. | Unfair resource allocation or scheduling. | Processes continuously respond to each other, preventing progress. |
| Process State | Blocked (not executing). | Ready but not scheduled/executed. | Actively executing but not making progress. |
| System Progress | No progress at all. | System progresses, but some processes do not. | System is busy, but no real work is done. |
| Example | A waits for B's resource; B waits for A's resource. | A low-priority task never gets CPU time. | Two processes constantly yielding to each other. |
| Resolution | Requires deadlock detection and recovery. | Use of fair scheduling (e.g., aging). | Needs better coordination or back-off strategies. |
Livelock:
C# (pseudocode; Lock/Unlock stand in for a timed lock primitive)

```csharp
var l1 = ...; // lock object such as a semaphore or mutex
var l2 = ...; // lock object such as a semaphore or mutex

// Thread 1 acquires l1 then l2
Thread.Start(() => {
    while (true) {
        if (!l1.Lock(1000)) { continue; }  // retry if l1 not acquired in time
        if (!l2.Lock(1000)) {              // l2 busy: release l1 and retry
            l1.Unlock();
            continue;
        }
        // do some work
        l2.Unlock();
        l1.Unlock();
    }
});

// Thread 2 acquires the locks in the opposite order: l2 then l1
Thread.Start(() => {
    while (true) {
        if (!l2.Lock(1000)) { continue; }
        if (!l1.Lock(1000)) {              // l1 busy: release l2 and retry
            l2.Unlock();
            continue;
        }
        // do some work
        l1.Unlock();
        l2.Unlock();
    }
});
```
Java

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Main {
    private static final Lock l1 = new ReentrantLock();
    private static final Lock l2 = new ReentrantLock();

    public static void main(String[] args) {
        // Thread 1 acquires l1 then l2
        new Thread(() -> {
            while (true) {
                try {
                    if (!l1.tryLock(1000, TimeUnit.MILLISECONDS)) { continue; }
                    if (!l2.tryLock(1000, TimeUnit.MILLISECONDS)) {
                        l1.unlock();   // back off: release l1 and retry
                        continue;
                    }
                    // do some work
                    l2.unlock();
                    l1.unlock();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        // Thread 2 acquires the locks in the opposite order: l2 then l1
        new Thread(() -> {
            while (true) {
                try {
                    if (!l2.tryLock(1000, TimeUnit.MILLISECONDS)) { continue; }
                    if (!l1.tryLock(1000, TimeUnit.MILLISECONDS)) {
                        l2.unlock();   // back off: release l2 and retry
                        continue;
                    }
                    // do some work
                    l1.unlock();
                    l2.unlock();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
    }
}
```
Deadlock:
C#

```csharp
// Note: the C# lock statement is reentrant, so nested lock(p) on the
// same object does NOT deadlock. A non-reentrant primitive such as
// SemaphoreSlim does:
var s = new SemaphoreSlim(1);
s.Wait();
s.Wait(); // deadlock: the semaphore is already held by this thread,
          // so the second Wait() blocks forever
```
Java

```java
public class Main {
    static final Object lock1 = new Object();
    static final Object lock2 = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (lock1) {
                System.out.println("Thread 1: Holding lock1...");
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                System.out.println("Thread 1: Waiting for lock2...");
                synchronized (lock2) {
                    System.out.println("Thread 1: Acquired lock2!");
                }
            }
        });

        Thread t2 = new Thread(() -> {
            synchronized (lock2) {
                System.out.println("Thread 2: Holding lock2...");
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
                System.out.println("Thread 2: Waiting for lock1...");
                synchronized (lock1) {
                    System.out.println("Thread 2: Acquired lock1!");
                }
            }
        });

        t1.start();
        t2.start();
    }
}
```
Starvation:
C# (pseudocode)

```csharp
Queue q = ...;
while (q.Count > 0) {
    var c = q.Dequeue();
    // ... process c ...

    // Some method in a different thread accidentally puts c back
    // in the queue twice within the same time frame:
    q.Enqueue(c);
    q.Enqueue(c);
    // The queue now grows twice as fast as it can be consumed,
    // starving the consumer of any chance to finish.
}
```
Java

```java
import java.util.LinkedList;
import java.util.Queue;

public class Main {
    public static void main(String[] args) {
        Queue<Object> q = new LinkedList<>();
        q.offer(new Object());   // seed the queue with one work item

        while (!q.isEmpty()) {
            Object c;
            synchronized (q) {
                c = q.poll();    // dequeue
                // do something with c
            }
            // Some method in a different thread accidentally puts c
            // back in the queue twice within the same time frame:
            synchronized (q) {
                q.offer(c);      // enqueue once
                q.offer(c);      // enqueue again
            }
            // The queue grows twice as fast as it can be consumed,
            // so the loop never ends: the consumer is starved.
        }
    }
}
```
Starvation happens when "greedy" threads make shared resources unavailable for long periods.
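In Java, one mitigation is a fair lock: constructing a ReentrantLock with the fairness flag set to true grants the lock to the longest-waiting thread (roughly FIFO order) instead of letting a greedy thread reacquire it repeatedly. A minimal sketch (the thread names and iteration counts are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true = fair mode: waiting threads acquire the lock roughly in
    // arrival order, which bounds how long any one thread can starve.
    static final ReentrantLock lock = new ReentrantLock(true);

    static void worker(String name, int iterations) {
        for (int i = 0; i < iterations; i++) {
            lock.lock();
            try {
                System.out.println(name + " got the lock");
            } finally {
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread greedy = new Thread(() -> worker("greedy", 5));
        Thread modest = new Thread(() -> worker("modest", 5));
        greedy.start(); modest.start();
        greedy.join(); modest.join();
        // With an unfair lock, "greedy" could reacquire repeatedly while
        // "modest" waits; fair mode interleaves the two more evenly.
    }
}
```

Fair mode trades some throughput for this guarantee, which is why ReentrantLock defaults to unfair behavior.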