What is Process Scheduling?
Process Scheduling is the act of determining which process in the ready state should be moved to the running state.
The prime aim of the process scheduling system is to keep the CPU busy at all times and to deliver minimum response time for all programs. To achieve this, the scheduler must apply appropriate rules for swapping processes IN and OUT of the CPU.
Scheduling falls into one of two general categories:
- Non Pre-emptive Scheduling: When the currently executing process gives up the CPU voluntarily.
- Pre-emptive Scheduling: When the operating system decides to favour another process, pre-empting the currently executing process.
What are Scheduling Queues?
- All processes, upon entering into the system, are stored in the Job Queue.
- Processes in the Ready state are placed in the Ready Queue.
- Processes waiting for a device to become available are placed in Device Queues. There are unique device queues available for each I/O device.
A new process is initially put in the Ready Queue. It waits in the ready queue until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of several events can occur:
- The process could issue an I/O request, and then be placed in the I/O queue.
- The process could create a new subprocess and wait for its termination.
- The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state, and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
Types of Schedulers
There are three types of schedulers available:
- Long Term Scheduler
- Short Term Scheduler
- Medium Term Scheduler
Let's discuss each type of scheduler in detail:
Long Term Scheduler
The long term scheduler runs less frequently than the other schedulers. It decides which programs are admitted to the job queue; from the job queue, the long term scheduler (also called the Job Scheduler) selects processes and loads them into memory for execution. The primary aim of the Job Scheduler is to maintain a good degree of multiprogramming. An optimal degree of multiprogramming means the average rate of process creation is equal to the average departure rate of processes from the execution memory.
Short Term Scheduler
This is also known as CPU Scheduler and runs very frequently. The primary aim of this scheduler is to enhance CPU performance and increase process execution rate.
Medium Term Scheduler
This scheduler removes processes from memory (and from active contention for the CPU), and thus reduces the degree of multiprogramming. At some later time, the process can be reintroduced into memory and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium term scheduler.
Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up. This complete process is depicted in the diagram below:
Addition of Medium-term scheduling to the queueing diagram.
What is Context Switch?
- Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a Context Switch.
- The context of a process is represented in the Process Control Block (PCB) of a process; it includes the value of the CPU registers, the process state and memory-management information. When a context switch occurs, the Kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
- Context switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
- Context Switching has become such a performance bottleneck that programmers are using new structures(threads) to avoid it whenever and wherever possible.
Operations on Process
Below we discuss the two major operations: Process Creation and Process Termination.
Through appropriate system calls, such as fork or spawn, processes may create other processes. The process which creates another process is termed the parent, while the created sub-process is termed its child.
Each process is given an integer identifier, termed as process identifier, or PID. The parent PID (PPID) is also stored for each process.
On a typical UNIX system the process scheduler is termed sched, and is given PID 0. The first thing it does at system start-up time is to launch init, which gives that process PID 1. init then launches all the system daemons and user logins, and becomes the ultimate parent of all other processes.
A child process may receive some share of its parent's resources, depending on the system implementation. To prevent runaway children from consuming all of a given system resource, child processes may be limited to a subset of the resources originally allocated to the parent.
There are two options for the parent process after creating the child:
- Wait for the child process to terminate before proceeding. The parent makes a wait() system call, for either a specific child process or for any child process, which causes the parent process to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
- Run concurrently with the child, continuing to process without waiting. When a UNIX shell runs a process as a background task, this is the operation seen. It is also possible for the parent to run for a while, and then wait for the child later, which might occur in a sort of a parallel processing operation.
There are also two possibilities in terms of the address space of the new process:
- The child process is a duplicate of the parent process.
- The child process has a program loaded into it.
To illustrate these different implementations, let us consider the UNIX operating system. In UNIX, each process is identified by its process identifier, which is a unique integer. A new process is created by the fork system call. The new process consists of a copy of the address space of the original process. This mechanism allows the parent process to communicate easily with its child process. Both processes (the parent and the child) continue execution at the instruction after the fork system call, with one difference: the return code for the fork system call is zero for the new (child) process, whereas the (non-zero) process identifier of the child is returned to the parent.
Typically, the execlp system call is used after the fork system call by one of the two processes to replace the process memory space with a new program. The execlp system call loads a binary file into memory (destroying the memory image of the program containing the execlp system call) and starts its execution. In this manner the two processes are able to communicate, and then to go their separate ways.
Below is a C program to illustrate forking a separate process using UNIX (tested on Ubuntu):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();               /* Fork another process */

    if (pid < 0) {                    /* fork failed */
        fprintf(stderr, "Fork Failed\n");
        return 1;
    } else if (pid == 0) {            /* child: replace its image with ls */
        execlp("/bin/ls", "ls", (char *)NULL);
    } else {                          /* parent waits for the child to complete */
        wait(NULL);
        printf("Child Complete\n");
    }
    return 0;
}
```
GATE Numerical Tip: If fork is called n times, the number of child processes (new processes) created will be 2^n - 1.
By making the exit() system call, typically returning an int status, processes may request their own termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including:
- The inability of the system to deliver the necessary system resources.
- In response to a KILL command or other unhandled process interrupts.
- A parent may kill its children if the task assigned to them is no longer needed, i.e. if the need for the child has ended.
- If the parent exits, the system may or may not allow the child to continue without a parent. (In UNIX systems, orphaned processes are generally inherited by init, which then waits on them and reaps them when they terminate.)
When a process ends, all of its system resources are freed up, open files flushed and closed, etc. The process termination status and execution times are returned to the parent if the parent is waiting for the child to terminate, or eventually returned to init if the process already became an orphan.
Processes which have terminated but whose parent has not yet called wait() on them are termed zombies. If the parent itself exits, these are inherited by init as orphans and reaped.