Critical Section in Synchronization
A critical section is a segment of a program where shared resources, such as memory, files, or ports, are accessed by multiple processes or threads. To prevent issues like data inconsistency and race conditions, synchronization techniques ensure that only one process or thread accesses the critical section at a time.
The critical section includes operations on shared variables or resources that must be executed atomically to maintain data consistency. For example, reading from or writing to a shared file or modifying a global variable requires exclusive access.
In concurrent programming, if one process modifies shared data while another reads it simultaneously, the outcome can be unpredictable. Therefore, access to shared resources must be synchronized to ensure correct program behavior.
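To see this concretely, here is a minimal sketch in C using POSIX threads (the iteration count and names are illustrative): two threads increment a shared counter with no synchronization, and updates are lost.

#include <pthread.h>
#include <stdio.h>

long counter = 0;  /* shared data, accessed with no synchronization */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;  /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* expected 2000000, usually less */
    return 0;
}

The unprotected counter++ is a read-modify-write sequence; this is exactly the kind of code that must be placed in a critical section.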
Structure of a Critical Section
Entry Section
- The process requests permission to enter the critical section.
- Synchronization tools (e.g., mutex, semaphore) are used to control access.
Critical Section
- The actual code where shared resources are accessed or modified.
Exit Section
- The process releases the lock or semaphore, allowing other processes to enter the critical section.
Remainder Section
- The rest of the program that does not involve shared resource access.
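Using a POSIX mutex as the synchronization tool, the four parts map onto code roughly as follows (a structural sketch, not a complete program):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    pthread_mutex_lock(&m);     /* entry section: request access    */
    /* ... access shared data ...   critical section                */
    pthread_mutex_unlock(&m);   /* exit section: release the lock   */
    /* ... independent work ...     remainder section               */
    return NULL;
}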
Characteristics of a Critical Section
Any code executing in a critical section should satisfy the following properties:
1. Mutual Exclusion
Only one process or thread can execute in the critical section at a given time. If two or more processes access shared resources (like variables or files) at the same time without control, data inconsistency or corruption may occur. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
Example:
If two threads update the same bank account balance at the same time, the final result may be incorrect due to race conditions.
We can achieve this by using synchronization tools such as mutexes, locks, or semaphores to guarantee exclusive access, as in the sketch below.
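Here is a minimal sketch of the bank-account example in C with a POSIX mutex (balance and withdraw are illustrative names, not from any particular system). The mutex makes the check-and-update run as one indivisible step:

#include <pthread.h>

long balance = 1000;                        /* shared account balance */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Only one thread at a time can run the read-modify-write below, so
   two concurrent withdrawals can no longer interleave and corrupt
   the balance. */
int withdraw(long amount) {
    int ok = 0;
    pthread_mutex_lock(&lock);              /* entry section */
    if (balance >= amount) {                /* critical section */
        balance -= amount;
        ok = 1;
    }
    pthread_mutex_unlock(&lock);            /* exit section */
    return ok;
}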
2. Progress
If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
This avoids idle CPU time and deadlock situations in which every process waits even though the critical section is free and some process is ready to enter.
Example:
If process A finishes its work and leaves the critical section, process B (waiting to enter) should not be delayed unnecessarily due to faulty design or logic in the algorithm.
Goal: To ensure that the system continues to make progress and doesn't freeze or hang.
3. Bounded Waiting
There must be a limit on how many times other processes are allowed to enter the critical section before a waiting process gets its turn. It is important to prevent starvation, where one process waits indefinitely while others repeatedly enter the critical section.
Example:
If process A is always skipped in favor of processes B and C, then A might never enter, even if it's ready.
Solution: Implement fair scheduling policies such as FIFO queues or ticket-based systems, as in the sketch below.
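One such ticket-based scheme can be sketched with C11 atomics (next_ticket and now_serving are illustrative names). Each waiter draws an increasing ticket number and enters strictly in FIFO order, so no process can be overtaken indefinitely:

#include <stdatomic.h>

atomic_uint next_ticket = 0;   /* ticket dispenser               */
atomic_uint now_serving = 0;   /* ticket currently being served  */

void ticket_lock_acquire(void) {
    unsigned my = atomic_fetch_add(&next_ticket, 1);  /* draw a ticket */
    while (atomic_load(&now_serving) != my)
        ;  /* busy-wait until our number is called */
}

void ticket_lock_release(void) {
    atomic_fetch_add(&now_serving, 1);  /* call the next ticket */
}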
Handling Critical Section
Two general approaches are used to handle critical sections:
1. Preemptive Kernels: A preemptive kernel allows the operating system to interrupt or preempt a process even when it is running in kernel mode.
- The OS can forcibly switch from one running process to another.
- Even if a process is performing a system call or executing kernel code, it can be paused, and another process can be scheduled.
Advantages:
- Better responsiveness, especially for real-time systems.
- Higher CPU utilization and fairness among processes.
Disadvantages:
- Increased complexity due to the need to manage race conditions and data consistency when kernel data structures are accessed by multiple processes.
Example Use Case: Modern desktop and server operating systems like Linux, Windows, and macOS use preemptive kernels for better multitasking.
2. Non-Preemptive Kernels: A non-preemptive kernel does not allow a process running in kernel mode to be interrupted; CPU control is released only when the process gives it up explicitly. The kernel thus ensures that only one process is active in kernel mode at any given time. The process continues until it:
- Exits the kernel,
- Blocks (e.g., waits for I/O), or
- Voluntarily yields the CPU.
Advantages:
- Simplicity: Easier to program and maintain.
- No race conditions on kernel data since access is automatically serialized.
Disadvantages:
- Poor responsiveness, especially if a long-running kernel operation delays other processes.
- Not suitable for real-time or interactive systems.
Example Use Case: Older operating systems or embedded systems where simplicity and reliability outweigh responsiveness.
Critical Section Problem
The use of critical sections in a program can cause a number of issues, including:
- Deadlock: When two or more threads or processes wait for each other to release a critical section, none of them can make progress. Deadlocks can be difficult to detect and resolve, and they can have a significant impact on a program's performance and reliability (a common remedy is sketched after this list).
- Starvation: When a thread or process is repeatedly prevented from entering a critical section, it can result in starvation, in which the thread or process is unable to progress. This can happen if the critical section is held for an unusually long period of time, or if a high-priority thread or process is always given priority when entering the critical section.
- Overhead: When using critical sections, threads or processes must acquire and release locks or semaphores, which can take time and resources. This may reduce the program's overall performance.
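The classic deadlock recipe is two threads taking two locks in opposite orders; a common remedy is to impose a single global lock order. A sketch assuming two POSIX mutexes m1 and m2 (illustrative names):

#include <pthread.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

/* If thread A locked m1 then m2 while thread B locked m2 then m1,
   each could end up holding one lock and waiting forever for the
   other. Acquiring the locks in the same order in every thread
   makes that circular wait impossible. */
void update_both(void) {
    pthread_mutex_lock(&m1);   /* always m1 first ...          */
    pthread_mutex_lock(&m2);   /* ... then m2, in every thread */
    /* ... critical section touching both shared resources ... */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
}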
The general structure of a process with a critical section can be visualized using the pseudo-code below:

do {
    while (flag);    // entry section: busy-wait while another process holds the lock
    flag = 1;        // entry section: take the lock

    // critical section

    flag = 0;        // exit section: release the lock

    // remainder section
} while (true);

Note that testing flag and setting flag = 1 must happen as a single atomic step; otherwise two processes could both observe flag clear, enter together, and break mutual exclusion.
Solution to the Critical Section Problem: A simple solution can be sketched as shown below:

acquireLock();
// process the critical section
releaseLock();

A thread must acquire the lock before executing a critical section, and the lock can be held by only one thread at a time. There are various ways to implement the locks used in the pseudo-code above; one is sketched next.
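As one illustration, acquireLock and releaseLock could be backed by a simple test-and-set spinlock built on C11 atomics. This is a sketch only; production locks also address fairness and let waiters sleep instead of spinning:

#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquireLock(void) {
    /* test-and-set is atomic: exactly one thread finds the flag clear
       and sets it; all others spin until the holder releases it. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* busy-wait */
}

void releaseLock(void) {
    atomic_flag_clear(&lock_flag);
}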
To read about the solutions in detail, see Solution to Critical Section Problem.
Examples of Critical Sections in Real-World Applications
Banking System (ATM or Online Banking)
- Critical Section: Updating an account balance during a deposit or withdrawal.
- Issue if not handled: Two simultaneous withdrawals could result in an incorrect final balance due to race conditions.
Ticket Booking System (Airlines, Movies, Trains)
- Critical Section: Reserving the last available seat.
- Issue if not handled: Two users may be shown the same available seat and both may book it, leading to overbooking.
Print Spooler in a Networked Printer
- Critical Section: Sending print jobs to the printer queue.
- Issue if not handled: Print jobs may get mixed up or skipped if multiple users send jobs simultaneously.
File Editing in Shared Documents (e.g., Google Docs, MS Word with shared access)
- Critical Section: Saving or writing to the shared document.
- Issue if not handled: Simultaneous edits could lead to conflicting versions or data loss.
Online Multiplayer Gaming Servers
- Critical Section: Updating a player's score, health, or game state in real time.
- Issue if not handled: Game logic becomes inconsistent; players may see outdated or incorrect data.
Inventory Management in E-Commerce
- Critical Section: Reducing stock quantity when a product is purchased.
- Issue if not handled: Items can be oversold, leading to customer dissatisfaction (an atomic fix is sketched below).
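For the inventory case, the stock decrement can even be protected without a lock, using an atomic compare-and-swap (stock and buy_one are illustrative names; a C11 sketch):

#include <stdatomic.h>
#include <stdbool.h>

atomic_int stock = 10;  /* shared inventory count */

/* Atomically reserve one unit; returns false once sold out. */
bool buy_one(void) {
    int current = atomic_load(&stock);
    while (current > 0) {
        /* Succeeds only if no other buyer changed stock in between,
           so the count can never drop below zero (no overselling). */
        if (atomic_compare_exchange_weak(&stock, &current, current - 1))
            return true;
        /* On failure, current is refreshed with the latest value; retry. */
    }
    return false;
}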
Advantages of Critical Section
- Prevents race conditions: By ensuring that only one process can execute the critical section at a time, race conditions are prevented, ensuring consistency of shared data.
- Provides mutual exclusion: Critical sections provide mutual exclusion to shared resources, preventing multiple processes from accessing the same resource simultaneously and causing synchronization-related issues.
- Avoids wasted CPU cycles: With blocking primitives such as mutexes and semaphores, processes waiting to enter the critical section can sleep rather than busy-wait, improving overall system efficiency.
- Simplifies synchronization: Critical sections simplify the synchronization of shared resources, as only one process can access the resource at a time, eliminating the need for more complex synchronization mechanisms.
Disadvantages of Critical Section
- Overhead: Implementing critical sections using synchronization mechanisms like semaphores and mutexes can introduce additional overhead, slowing down program execution.
- Deadlocks: Poorly implemented critical sections can lead to deadlocks, where multiple processes are waiting indefinitely for each other to release resources.
- Can limit parallelism: If critical sections are too large or are executed frequently, they can limit the degree of parallelism in a program, reducing its overall performance.
- Can cause contention: If multiple processes frequently access the same critical section, contention for the critical section can occur, reducing performance.