CPU scheduling is essential for the system's performance and ensures that processes are executed correctly and on time. Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm depends on various factors. Many criteria have been suggested for comparing CPU scheduling algorithms.
What is CPU scheduling?
CPU scheduling is the process of letting one process use the CPU while another is delayed (for example, waiting for I/O), so the CPU is kept fully utilized. In short, CPU scheduling decides the order and priority in which processes run and allocates CPU time based on criteria such as CPU utilization, throughput, turnaround time, waiting time, and response time. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Criteria of CPU Scheduling
CPU scheduling algorithms are compared against several criteria. Some of them are mentioned below.
1. CPU Utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically varies from about 40 percent (lightly loaded) to 90 percent (heavily loaded), depending on the load on the system.
2. Throughput
One measure of the work done by the CPU is the number of processes completed per unit of time. This is called throughput. Throughput may vary depending on the length or duration of the processes.
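For example, a system that completes 10 processes in 5 seconds has a throughput of 2 processes per second.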
3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Turnaround Time = Completion Time - Arrival Time
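For example, if a process is submitted at time 2 ms and completes at time 10 ms, its turnaround time is 10 - 2 = 8 ms.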
4. Waiting Time
A scheduling algorithm does not affect the time required to complete a process once it starts execution. It only affects the waiting time of a process, i.e., the time the process spends waiting in the ready queue.
Waiting Time = Turnaround Time - Burst Time.
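Continuing the example above, if that process has a burst time of 5 ms, its waiting time is 8 - 5 = 3 ms.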
5. Response Time
In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus another criterion is the time from the submission of a request until the first response is produced. This measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time
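To make these formulas concrete, here is a minimal Python sketch, assuming a hypothetical three-process workload scheduled in non-preemptive first-come-first-serve order; the metric names mirror the definitions above:

```python
# Minimal sketch (hypothetical workload): completion, turnaround, waiting and
# response times under non-preemptive first-come-first-serve scheduling.
processes = [
    {"pid": "P1", "arrival": 0, "burst": 5},
    {"pid": "P2", "arrival": 1, "burst": 3},
    {"pid": "P3", "arrival": 2, "burst": 8},
]

time = 0
for p in sorted(processes, key=lambda proc: proc["arrival"]):
    time = max(time, p["arrival"])                   # CPU may sit idle until the process arrives
    p["response"] = time - p["arrival"]              # first CPU allocation - arrival
    time += p["burst"]                               # runs to completion, no preemption
    p["completion"] = time
    p["turnaround"] = p["completion"] - p["arrival"]
    p["waiting"] = p["turnaround"] - p["burst"]

for p in processes:
    print(p["pid"], p["completion"], p["turnaround"], p["waiting"], p["response"])
```

Note that under a non-preemptive policy such as FCFS the response time equals the waiting time; preemptive algorithms such as Round Robin typically reduce response time at the cost of extra context switches.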
6. Completion Time
The completion time is the time at which a process stops executing, i.e., it has used its full burst time and finished.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor the higher-priority processes.
8. Predictability
A given process should always run in about the same amount of time under a similar system load.
Importance of Selecting the Right CPU Scheduling Algorithm for Specific Situations
It is important to choose the correct CPU scheduling algorithm because different algorithms weight the scheduling criteria differently. Each algorithm has its own strengths and weaknesses, and choosing the wrong algorithm in a given situation can result in suboptimal system performance.
Example: Here are some examples of CPU scheduling algorithms that work well in different situations.
- Round Robin works well in a time-sharing system where tasks must be completed within a short period of time.
- SJF (Shortest Job First) works best in a batch processing system where shorter jobs are completed first to increase throughput (see the sketch below).
- Priority scheduling works better in a real-time system where certain tasks must be prioritized so that they can be completed in a timely manner.
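To see why this choice matters, here is a minimal sketch, assuming a hypothetical workload in which all jobs arrive at time 0, comparing the average waiting time when the same burst times are served in FCFS order versus SJF order:

```python
# Illustrative sketch (hypothetical burst times, all jobs arriving at time 0):
# average waiting time when the same workload is served in FCFS vs. SJF order.
bursts = [24, 3, 3]  # burst times in milliseconds

def avg_waiting(order):
    waiting, elapsed = 0, 0
    for burst in order:
        waiting += elapsed      # each job waits for everything scheduled before it
        elapsed += burst
    return waiting / len(order)

print("FCFS:", avg_waiting(bursts))           # (0 + 24 + 27) / 3 = 17.0 ms
print("SJF :", avg_waiting(sorted(bursts)))   # (0 + 3 + 6)  / 3 = 3.0 ms
```

With these assumed bursts, FCFS gives an average waiting time of 17 ms while SJF gives 3 ms, which is why shortest-job-first policies are favoured when waiting time and throughput dominate.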
Factors Influencing CPU Scheduling Algorithms
There are many factors that influence the choice of CPU scheduling algorithm. Some of them are listed below.
- The number of processes.
- The processing time required.
- The urgency of tasks.
- The system requirements.
Selecting the correct algorithm ensures that the system uses its resources efficiently, increases productivity, and improves user satisfaction.
CPU Scheduling Algorithms
There are several CPU scheduling algorithms, including First Come First Serve (FCFS), Shortest Job First (SJF), Shortest Remaining Time First (SRTF), Round Robin, Priority Scheduling, and Multilevel Queue Scheduling.
Conclusion
In conclusion, CPU scheduling criteria play an important role in improving system performance. CPU scheduling techniques encourage efficient use of system resources and effective task processing by analysing and prioritising criteria such as CPU utilization, throughput, turnaround time, waiting time, and response time. Selecting the appropriate algorithm for a given situation is crucial for increasing system efficiency and productivity.
Question for Practice
Which of the following process scheduling algorithms may lead to starvation?
(A) FIFO
(B) Round Robin
(C) Shortest Job Next
(D) None of the above
The correct option is (C).
Explanation: Shortest Job Next may lead to starvation of processes that require a long time to complete if short processes are continually added.