Difference between Preemptive and Non-Preemptive Scheduling in Operating Systems

Processor Scheduling (or CPU Scheduling) determines which processes are assigned to, and removed from, the CPU, based on scheduling models such as Preemptive and Non-Preemptive Scheduling (the latter also known as Co-operative Scheduling).

Older systems could operate in simple, stand-alone modes, but the growing need for responsive, flexible systems, along with virtualization, means that multi-processing must be managed efficiently to provide a rapid response to every task processing request.

A scheduling unit is often referred to as a task, and it is the Scheduler’s job to run and manage these tasks whenever required; according to the scheduling model in use, the Scheduler selects which task is removed from, and which is assigned to, the CPU for processing.

How does the Scheduler know which tasks take priority?

The Scheduler needs to run a fair and efficient selection process, taking into account variable, dynamic processing requests, and making the most of the CPU cycles.

A task can be in one of two states while being processed:

  1. In a CPU Burst, where the CPU is performing calculations to process the task (the length of a CPU Burst varies from task to task, and program to program).
  2. In an Input/Output (I/O) Burst, waiting for data to be sent to or received from the system.

When the CPU is idle, the Scheduler reads the Ready queue, and selects the next task to be run.  Then, it is the Dispatcher that gives the selected task control of the CPU, so it needs to be fast!  Any time taken up by the Dispatcher is known as Dispatch Latency.
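
To make these moving parts a little more concrete, here is a minimal sketch in Python; the Task fields, the burst values, and the deque-based Ready queue are illustrative assumptions rather than any real operating system’s structures:

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        bursts: list = field(default_factory=list)  # alternating CPU / I/O burst lengths in ms (made up)
        state: str = "ready"

    # The Ready queue holds tasks waiting for the CPU; a plain FIFO in this sketch.
    ready_queue = deque([
        Task("editor", bursts=[4, 10, 3]),    # CPU burst, I/O burst, CPU burst
        Task("backup", bursts=[12, 50, 12]),
    ])

    def dispatch(queue):
        """The Scheduler picks the next task; the Dispatcher hands it the CPU.
        Time spent in here is the Dispatch Latency, so it has to be short."""
        task = queue.popleft()   # scheduling decision: first in, first out here
        task.state = "running"   # the Dispatcher gives the task control of the CPU
        return task

    current = dispatch(ready_queue)
    print(f"Dispatched {current.name}; its first CPU burst lasts {current.bursts[0]} ms")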

There are different structures and custom parameters to define the Ready queue, as well as several methods that can be used to manage the complexities of the scheduling process.

Generally, it’s about optimizing CPU utilization and maximizing throughput, while keeping waiting and response times low.

The Scheduler has to make a decision during one of the following stages:

  1. When the Task changes from a Running to a Waiting state (for example, waiting on an I/O request).
  2. When the Task changes from Running to Ready (for example, in response to an interrupt).
  3. When the Task changes from Waiting to Ready (for example, when an I/O request is completed).
  4. When the Task terminates.

A new task must be selected when stage 1 or 4 occurs, to ensure full utilization of the CPU; in stages 2 and 3, the current task can either continue running or a new one can be selected.
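
As a compact illustration of this rule (the stage numbers match the list above; the function is purely a sketch, not part of any real scheduler):

    def scheduling_action(stage, preemptive):
        """Illustrative mapping of the four decision points to what the Scheduler does."""
        if stage in (1, 4):
            # Running -> Waiting, or the task terminates: the CPU is free either way,
            # so a new task must be selected.
            return "select a new task"
        if preemptive:
            # Running -> Ready (2) or Waiting -> Ready (3): a preemptive scheduler may
            # hand the CPU to a higher-priority task that has just become ready.
            return "keep the current task or switch to a higher-priority one"
        # A non-preemptive scheduler never takes the CPU away at stages 2 and 3.
        return "keep the current task"

    for stage in (1, 2, 3, 4):
        print(stage, "->", scheduling_action(stage, preemptive=True))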

After understanding how a task is processed, let’s look at two scheduling models that deal with CPU interrupts.

Both share similar features, such as tasks, task states, queues, and priorities (static or dynamic):

  • Non-Preemptive Scheduling is when a task runs until it stops (voluntarily) or finishes. Windows® used Non-Preemptive Scheduling up to and including Windows 3.x, and switched to Preemptive Scheduling from Windows 95 onwards.
  • Preemptive Scheduling is where a task can be forcibly suspended by a CPU interrupt, unlike Non-Preemptive Scheduling, where the task runs until it releases control of the CPU.

Non-Preemptive Scheduling

Tasks within a Non-Preemptive system will run until completed.

The Scheduler then checks all tasks’ states and schedules the next highest priority task with a Ready state.

With Non-Preemptive Scheduling, once a task has been assigned to the CPU, it cannot be taken away, even if short tasks end up waiting for longer tasks to complete.

Scheduling across all tasks is “fair”, and response times are predictable, because high-priority tasks cannot bump waiting tasks further down the queue.

The Scheduler ensures each task gets its share of the CPU, so no task is delayed indefinitely. The amount of CPU time allocated is not necessarily equal, as it depends on how long each task takes to complete.
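
A minimal sketch of this run-to-completion behaviour, assuming made-up task names, priorities, and CPU burst lengths (lower number = higher priority):

    import heapq

    def run_non_preemptive(tasks):
        """Run each task to completion, always picking the highest-priority Ready task next.
        `tasks` is a list of (priority, name, cpu_burst_ms) tuples -- purely illustrative."""
        clock = 0
        ready = list(tasks)
        heapq.heapify(ready)                    # Ready queue ordered by priority
        while ready:
            priority, name, burst = heapq.heappop(ready)
            # Once dispatched, the task keeps the CPU until it finishes,
            # even if a shorter or higher-priority task is waiting.
            clock += burst
            print(f"{name} (priority {priority}) finished at t={clock} ms")

    run_non_preemptive([(2, "report", 30), (1, "heartbeat", 2), (3, "indexer", 50)])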

Preemptive Scheduling

This scheduling model allows tasks to be interrupted, in contrast to the “run-to-completion” approach of Non-Preemptive Scheduling.

An interrupt, which could be initiated by an external call, invokes the Scheduler to pause the running task so that a higher-priority task can be handled; in other words, control of the CPU can be preempted.

The highest priority task in a Ready state is executed, allowing rapid response to real-time events.

Preemptive Scheduling does have its downsides: interrupts add overhead on resources, and issues can occur when two tasks share data, as one may be interrupted while updating a shared data structure, which can negatively affect data integrity.

On the other hand, it is practical to be able to pause a task to manage another one that could be critical.
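
The sketch below illustrates the preemption itself, again with made-up task names, priorities, and burst lengths, and a deliberately simplified millisecond-by-millisecond loop (lower number = higher priority):

    def run_preemptive(arrivals):
        """Tiny simulation of preemptive priority scheduling.
        `arrivals` maps arrival time (ms) -> (priority, name, cpu_burst_ms); all values are made up."""
        ready = []                                    # each entry: [priority, name, remaining_ms]
        clock = 0
        while ready or arrivals:
            if clock in arrivals:
                # A newly arrived task joins the Ready list and may preempt the running one.
                ready.append(list(arrivals.pop(clock)))
            if ready:
                ready.sort()                          # highest priority (lowest number) first
                current = ready[0]                    # the running task; may change after an arrival
                current[2] -= 1                       # consume one millisecond of CPU time
                if current[2] == 0:
                    print(f"{current[1]} finished at t={clock + 1} ms")
                    ready.pop(0)
            clock += 1

    run_preemptive({
        0: (2, "video-encode", 6),      # starts first...
        2: (1, "keypress-handler", 1),  # ...but is preempted by this higher-priority task
    })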

In Summary

Many variations and dependencies can be defined within different policies, such as a “Round Robin Policy[i]”, where every task (of equal priority) runs for a time slice and is then placed at the end of the queue for the next cycle.
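
For example, a bare-bones Round Robin loop might look like the sketch below (the task names, burst lengths, and 4 ms time slice are illustrative):

    from collections import deque

    def round_robin(tasks, quantum_ms=4):
        """Each equal-priority task runs for at most one time slice (quantum),
        then goes to the back of the queue for the next cycle if it still has work left."""
        queue = deque(tasks)                          # entries are (name, remaining_ms)
        clock = 0
        while queue:
            name, remaining = queue.popleft()
            time_slice = min(quantum_ms, remaining)
            clock += time_slice
            remaining -= time_slice
            if remaining > 0:
                queue.append((name, remaining))       # back of the queue for the next cycle
            else:
                print(f"{name} finished at t={clock} ms")

    round_robin([("shell", 5), ("compiler", 9), ("spellcheck", 3)])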

Other policies include First-In-First-Out, Shortest-Job-First (also known as Shortest-Job-Next), Shortest-Remaining-Time, and so on.

Analysis of historical data can provide information on aspects such as the rate at which new tasks arrive and the lengths of CPU and I/O Bursts, so that probability distributions can be used to estimate characteristics such as tasks’ waiting times, arming administrators with relevant data for defining scheduling models.
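
As a tiny worked example of the kind of calculation such analysis feeds into, the sketch below computes per-task waiting times under First-In-First-Out from a made-up sample of observed CPU bursts:

    def fcfs_waiting_times(burst_samples_ms):
        """Waiting time of each task under First-In-First-Out:
        each task waits for the sum of the bursts that ran before it."""
        waits, elapsed = [], 0
        for burst in burst_samples_ms:
            waits.append(elapsed)
            elapsed += burst
        return waits

    samples = [8, 3, 12, 4]                                  # observed CPU bursts in ms (illustrative)
    waits = fcfs_waiting_times(samples)
    print("waiting times:", waits)                           # [0, 8, 11, 23]
    print("average wait:", sum(waits) / len(waits), "ms")    # 10.5 ms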