If you are struggling to find the right track and want a 10-minute read on Operating Systems from beginning to end, you're at the right place. Even I am not sure where to start, but we will figure it out, and if you are reading this, I have successfully completed these notes, or at least am on the right track.
Let's start our struggle with OS:
The Track
I found many resources to learn from; let me just list them all:
[Galvin Concise PPTs](https://www.os-book.com/OS9/slide-dir/index.html): These were great, but I felt they cover a bit too much, so here are the chapters we will do:
- Processes
- Threads
- Process Synchronization
- CPU Scheduling Algorithms
- Deadlocks
- Main memory
- Virtual memory
- Virtual Machines
I am skipping the introduction to OS for now, as it is not that important; this is going to be a fast article that works like a last-night dose.
Processes
A process is a program in execution. It has its own memory, which is divided into four parts: text, data, heap and stack.
A brief explanation of these (a small code sketch follows):
Text: stores the compiled program code.
Data: stores global and static variables.
Heap: stores dynamically allocated data.
Stack: stores local variables and function call frames.
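To make the mapping concrete, here is a tiny C++ sketch (my own, with made-up names; only the comments matter):
~~~
#include <iostream>

int global_counter = 42;               // data segment: global and static variables

void work() {
    static int calls = 0;              // data segment: static locals live here too
    int local = 7;                     // stack: local variables and call frames
    int *dynamic = new int(local);     // heap: memory allocated at run time
    std::cout << *dynamic + global_counter + ++calls << "\n";
    delete dynamic;
}

int main() {                           // the compiled instructions of main() and
    work();                            // work() sit in the text segment
    return 0;
}
~~~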
A process, as the name suggests, moves through different states (just like making a joint): new, ready, running, waiting and terminated. Here is a diagram which says it all:
Process Control Block(PCB)
It is a data structure that stores all information about a process.
It contains these:
Process state: one of the states we just saw (new, ready, running, waiting, terminated).
Process ID: self-explanatory.
CPU registers and program counter: the saved register contents and the program counter, which points to the next instruction to be executed.
CPU scheduling information: priority and pointers to scheduling queues (we will see these later).
Memory management information: self-explanatory.
Accounting information: CPU time consumed, limits, etc.
I/O status information: devices allocated, open file tables, etc.
Process Scheduling Queues
This handles the order of execution of the processes. We have different queues for different states. When the state of a process changes, it is simply detached from its current queue and added to its new state's queue.
Please note that we are not studying the scheduling algorithms themselves yet; we just assume some given order and look at how the processes move between the queues.
Schedulers
There are three types of schedulers: long-term, short-term and medium-term.
Brief explanation
Long-term scheduler: responsible for moving a process from the new state into the ready queue. It has to maintain the degree of multiprogramming (the number of processes in memory), which it keeps stable by making the average rate of incoming processes equal to the average rate of departing ones.
Short-term scheduler / CPU scheduler: picks a process from the ready queue and gives it the CPU, i.e. moves it from ready to running. (I think it's clear.)
Medium-term scheduler: a special type of scheduler with a unique functionality: when a process is paused (maybe it is waiting for some input), the medium-term scheduler swaps it out of memory and gives its place to some other process; when the process is ready again, it is swapped back in and resumed.
Scheduling Algorithms
Terminology
Arrival time: time at which a process arrives in the ready queue.
Completion time: time at which a process completes its execution.
Burst time: time required by a process for CPU execution.
Turnaround time: time difference between completion time and arrival time (unlike burst time, it also includes the time spent waiting).
Waiting time: turnaround time minus burst time.
Throughput: total number of processes completed per unit time.
Non-preemptive algorithms: once a process starts running on the CPU, we cannot pause its execution until it has finished completely.
Preemptive algorithms: opposite of non-preemptive, i.e. the running process can be paused and put back in the ready queue. Period.
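A tiny worked example: if a process arrives at time 2, has a burst time of 5 and completes at time 10, its turnaround time is 10 - 2 = 8 and its waiting time is 8 - 5 = 3.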
Let's start with algorithms.
First Come First Serve(FCFS)
Widely used in college admissions, this algorithm is just what it says: whichever process arrives first is executed first, and only then do we jump to the next process. (Non-preemptive algorithm.)
Here is a simple C++ code
You seriously opened this? Please try it yourself; it is very simple, just a sort (by arrival time) will do the work.
This Gantt chart will help you understand FCFS.
Processes:

| Process | Burst Time |
| ------- | ---------- |
| P1 | 7 |
| P2 | 3 |
| P3 | 10 |
| P4 | 5 |

Gantt chart:
Assuming every process arrives at time 0 and is served in the order P1, P2, P3, P4:
Average waiting time: (0 + 7 + 10 + 20) / 4 = 9.25
Average turnaround time: (7 + 10 + 20 + 25) / 4 = 15.5
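If you did try it yourself and want something to compare against, here is a minimal sketch of mine, assuming every process arrives at time 0 and the input order is the arrival order:
~~~
#include <bits/stdc++.h>
using namespace std;

// FCFS: run the processes in the given (arrival) order. With all arrivals at
// time 0, each completion time is just a prefix sum of the burst times.
int main() {
    int n;
    cin >> n;
    vector<long long> burst(n);
    for (auto &b : burst) cin >> b;

    long long t = 0, total_wait = 0, total_turnaround = 0;
    for (int i = 0; i < n; i++) {
        total_wait += t;           // process i waited until time t
        t += burst[i];             // runs to completion, no preemption
        total_turnaround += t;     // arrival is 0, so turnaround = completion time
    }
    cout << "Average waiting time: " << (double)total_wait / n << "\n";
    cout << "Average turnaround time: " << (double)total_turnaround / n << "\n";
    return 0;
}
~~~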
Why is FCFS not that good?
It's kind of obvious: if a process that needs a lot more time than the others arrives first, every process behind it has to wait for it, which drags the whole system down (this is known as the convoy effect).
Shortest Job First (SJF)
Just sort the jobs according to their burst times and schedule them in that order. It is a non-preemptive algorithm, but a preemptive version of SJF exists as well.
C++ code for non-preemptive SJF
~~~
#include <bits/stdc++.h>
using namespace std;

// Input: n, then the arrival time and burst time of each process.
// Non-preemptive SJF as described above: jobs are sorted by burst time and
// executed in that order.
int main()
{
    int n;
    cin >> n;
    vector<pair<long long, long long>> times(n); // {arrival time, burst time}
    vector<pair<long long, int>> duration(n);    // {burst time, process index}
    for (int i = 0; i < n; i++)
    {
        cin >> times[i].first >> times[i].second;
        duration[i].first = times[i].second;
        duration[i].second = i;
    }
    sort(duration.begin(), duration.end());      // shortest burst first

    // Now find the turnaround time and waiting time of every process.
    long long total_waiting_time = 0;
    long long total_turnaround_time = 0;
    long long t = 0;                             // current time
    for (int i = 0; i < n; i++)
    {
        int p = duration[i].second;
        cout << "Process " << p + 1 << " in ready queue now!\n";
        t = max(times[p].first, t);              // CPU may idle until this job arrives
        t += duration[i].first;                  // run the whole burst, no preemption
        long long turnaround_time = t - times[p].first;
        total_turnaround_time += turnaround_time;
        cout << "Turn Around Time: " << turnaround_time << "\n";
        long long waiting_time = turnaround_time - duration[i].first;
        total_waiting_time += waiting_time;
        cout << "Waiting Time: " << waiting_time << "\n";
    }
    double avg_turnaround_time = (double)total_turnaround_time / n;
    double avg_waiting_time = (double)total_waiting_time / n;
    cout << "Average Waiting Time: " << avg_waiting_time << "\n";
    cout << "Average Turn Around Time: " << avg_turnaround_time << "\n";
    return 0;
}
~~~~~
This Gantt chart will help you understand SJF.
Processes:

| Process | Burst Time |
| ------- | ---------- |
| P1 | 7 |
| P2 | 3 |
| P3 | 10 |
| P4 | 5 |

Gantt chart:
Assuming every process arrives at time 0, they run in the order P2, P4, P1, P3:
Average waiting time: (0 + 3 + 8 + 15) / 4 = 6.5
Average turnaround time: (3 + 8 + 15 + 25) / 4 = 12.75
Disadvantages of SJF
Because we fix the order purely by burst time, there can be stretches where the CPU sits idle, waiting for the next-shortest job to arrive even though other jobs are already in the ready queue. Being idle is the worst thing we can have! (In general as well; that's why I am writing this.)
SJF has a preemptive version too, usually called Shortest Remaining Time First (SRTF). The trick is to fill those gaps: at every moment we run whichever arrived process has the least remaining burst time, and if a shorter job shows up while another is running, we preempt and switch to it.
C++ code for the preemptive version
To be updated
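In the meantime, here is a minimal sketch of what the preemptive version (Shortest Remaining Time First) could look like. This is my own sketch, not the article's final code, and it assumes the input is n followed by an arrival time and a burst time per process:
~~~
#include <bits/stdc++.h>
using namespace std;

// SRTF simulated one time unit at a time: at every tick, run the arrived,
// unfinished process with the least remaining burst time.
int main() {
    int n;
    cin >> n;
    vector<long long> arrival(n), burst(n), remaining(n), completion(n);
    for (int i = 0; i < n; i++) {
        cin >> arrival[i] >> burst[i];
        remaining[i] = burst[i];
    }

    long long t = 0;
    int done = 0;
    while (done < n) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (best == -1 || remaining[i] < remaining[best]))
                best = i;
        if (best == -1) { t++; continue; }   // nothing has arrived yet: CPU idles
        remaining[best]--;                   // run the chosen process for one tick
        t++;
        if (remaining[best] == 0) { completion[best] = t; done++; }
    }

    long long total_wait = 0, total_turnaround = 0;
    for (int i = 0; i < n; i++) {
        long long turnaround = completion[i] - arrival[i];
        total_turnaround += turnaround;
        total_wait += turnaround - burst[i];
    }
    cout << "Average waiting time: " << (double)total_wait / n << "\n";
    cout << "Average turnaround time: " << (double)total_turnaround / n << "\n";
    return 0;
}
~~~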
This Gantt chart will help you understand the preemptive version of SJF.
Processes:

| Process | Burst Time |
| ------- | ---------- |
| P1 | 7 |
| P2 | 3 |
| P3 | 10 |
| P4 | 5 |

Gantt chart:
Note that when every process arrives at time 0, the preemptive version behaves exactly like the non-preemptive one, so the averages stay the same as above; the difference only shows up when processes arrive at different times.
Round Robin Algorithm
In this algorithm, we have a fixed time slice (the time quantum). We execute the processes in the given order and switch to the next process after every time slice, putting the interrupted one at the back of the queue. This Gantt chart will help you understand:
Processes:

| Process | Burst Time |
| ------- | ---------- |
| P1 | 7 |
| P2 | 3 |
| P3 | 10 |
| P4 | 5 |

Gantt chart:
The average waiting time and average turnaround time depend on the time quantum you pick; try computing them for a quantum of your choice, or run the sketch below.
C++ Code
TBU
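Until that lands, here is a minimal sketch of mine, assuming every process arrives at time 0 and the input is n, the time quantum, and then the burst times:
~~~
#include <bits/stdc++.h>
using namespace std;

// Round robin: cycle through the ready queue, giving each process at most one
// time quantum before moving on to the next.
int main() {
    int n;
    long long quantum;
    cin >> n >> quantum;
    vector<long long> burst(n), remaining(n), completion(n);
    for (int i = 0; i < n; i++) { cin >> burst[i]; remaining[i] = burst[i]; }

    queue<int> ready;
    for (int i = 0; i < n; i++) ready.push(i);

    long long t = 0;
    while (!ready.empty()) {
        int p = ready.front(); ready.pop();
        long long run = min(quantum, remaining[p]);  // run one time slice at most
        t += run;
        remaining[p] -= run;
        if (remaining[p] > 0) ready.push(p);         // not finished: back of the queue
        else completion[p] = t;
    }

    long long total_wait = 0, total_turnaround = 0;
    for (int i = 0; i < n; i++) {
        total_turnaround += completion[i];           // arrival time is 0
        total_wait += completion[i] - burst[i];
    }
    cout << "Average waiting time: " << (double)total_wait / n << "\n";
    cout << "Average turnaround time: " << (double)total_turnaround / n << "\n";
    return 0;
}
~~~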
Process synchronization
Processes categorized on the basis of synchronization
Independent process: the execution of one process does not affect the execution of other processes.
Cooperative process: the execution of one process affects the execution of other processes.
The process synchronization problem arises with cooperative processes, because cooperative processes share resources.
Race Condition
When more than one process executes the same code, or accesses the same memory or some shared variable, there is a possibility that the output or the value of the shared variable ends up wrong; the processes are effectively racing each other, each claiming that its result is the correct one. This is known as a race condition: several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place.
Example
Suppose we have two operations, cnt++ and cnt--, from two different processes acting on a global variable cnt, which starts at 5.
++ operation:
~~~
int reg=cnt;
reg=reg+1;
cnt=reg;
~~~~~
-- operation:
~~~
int reg2=cnt;
reg2=reg2-1;
cnt=reg2;
~~~
Now, for the result to be correct (cnt ends back at 5), these operations need to run one after the other, for example in this order:
~~~
int reg=cnt;
reg=reg+1;
cnt=reg;
int reg2=cnt;
reg2=reg2-1;
cnt=reg2;
~~~~~
But as the resource is shared and nothing enforces that order, the instructions can interleave in any way, maybe like this:
~~~
int reg=cnt;
reg=reg+1;
int reg2=cnt;
cnt=reg;
reg2=reg2-1;
cnt=reg2;
~~~
This interleaving leaves cnt with a final value of 4, even though one increment and one decrement starting from 5 should leave it at 5.
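To watch this happen on a real machine, here is a small demo of mine using two threads in place of two processes. Strictly speaking the unsynchronized access is a data race (undefined behaviour in C++), which is exactly the point:
~~~
#include <iostream>
#include <thread>

// One thread increments cnt a million times, another decrements it a million
// times. Because ++ and -- are really load/modify/store sequences, updates get
// lost and cnt usually does not end up back at 5.
int cnt = 5;

int main() {
    std::thread inc([] { for (int i = 0; i < 1000000; ++i) ++cnt; });
    std::thread dec([] { for (int i = 0; i < 1000000; ++i) --cnt; });
    inc.join();
    dec.join();
    std::cout << "cnt = " << cnt << " (expected 5)\n";
    return 0;
}
~~~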
Critical Section
A critical section is a code segment that accesses shared resources and must therefore be executed by only one process at a time. This code segment is common to many processes, and if several of them ran it simultaneously we would get exactly the race conditions described above, and a hard time figuring out which process produced the wrong result.
Any solution to the critical section problem must satisfy these rules.
Mutual Exclusion : If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
Progress : If no process is executing in the critical section and other processes are waiting outside the critical section, then only those processes that are not executing in their remainder section can participate in deciding which will enter in the critical section next, and the selection can not be postponed indefinitely.
Bounded Waiting : A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Here is what a critical section looks like:
Peterson's Solution
As we saw earlier, we need a solution for the critical section problem, since unsynchronized access can lead to anomalies. The solution should satisfy the rules we mentioned before.
Simple pseudocode (for process i, where j is the other process)
~~~
int turn;
bool flag[2];

do {
    flag[i] = 1;                  // process i wants to enter
    turn = j;                     // but offers the turn to process j first
    while (flag[j] && turn == j)  // wait while j wants in and it is j's turn
        ;

    /* critical section */

    flag[i] = 0;                  // i is done, let j in

    /* remainder section */

} while (1);
~~~~~
How it works
Before actually seeing how it works, please have another look at the things we want it to satisfy.
These 3 rules:
1. Mutual exclusion
2. Progress
3. Bounded Waiting
Peterson's solution simply gives each process a flag and shares a turn variable between them. Initially both flags are 0 and turn is either 0 or 1. Setting flag[i] to 1 means process i wants to enter the critical section, and setting turn = j means it politely lets the other process go first. A process then spins in the while loop only as long as the other process also wants to enter and it is the other's turn, so at most one of them can be inside the critical section at a time. Please have a look at the code now, and if you still find any difficulty, please post in the comments.
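For the curious, here is a runnable C++ translation of the same idea. This is my own sketch (not from the slides); it needs C++17 and uses std::atomic because modern compilers and CPUs may reorder plain loads and stores, which would break the textbook pseudocode:
~~~
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> flag[2] = {false, false};  // flag[i]: process i wants to enter
std::atomic<int> turn{0};                    // whose turn it is to wait
long long counter = 0;                       // shared data guarded by the lock

void worker(int i) {
    int j = 1 - i;
    for (int k = 0; k < 100000; ++k) {
        flag[i] = true;                  // i wants to enter
        turn = j;                        // but lets j go first if j also wants in
        while (flag[j] && turn == j)     // spin while the other one has priority
            ;
        ++counter;                       // critical section
        flag[i] = false;                 // leave and let the other thread in
    }
}

int main() {
    std::thread t0(worker, 0), t1(worker, 1);
    t0.join();
    t1.join();
    std::cout << counter << "\n";        // always 200000: no lost updates
    return 0;
}
~~~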
**Semaphores**
A semaphore is nothing but an integer variable, and it can be changed only by using these two atomic operations:
1. Wait: it is like a decrement operation.
~~~
wait(S) {
    while (S <= 0)
        ;      // busy wait until the semaphore becomes positive
    S--;
}
~~~
2. Signal: it is an increment operation.
~~~
signal(S) {
    S++;
}
~~~~~
Semaphores can be counting or binary (0 or 1).
Binary Semaphores are generally called **mutex locks** as they provide mutual exclusion.
Counting Semaphores are used to control access to given resource for multiple processes.
Using semaphores to handle the critical sections of N processes
Shared data: a semaphore mutex, initially mutex = 1.
Process P[i]:
~~~
do {
    wait(mutex);      // enter: held up while another process is inside

    /* critical section */

    signal(mutex);    // leave: let the next process in

    /* remainder section */

} while (1);
~~~
How does this work?
Whenever a process arrives, it waits for the semaphore to become 1. Initially the semaphore is already 1, so process P1 does not wait and executes its critical section. If a second process arrives now, wait() keeps running its while loop, which effectively halts that process. When the first process finishes and calls signal(), the second process continues, and so on.
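If you want to play with this on a real machine, C++20 ships semaphores in <semaphore>. Here is a small demo of mine in which threads stand in for processes and a binary semaphore initialized to 1 plays the role of mutex:
~~~
#include <iostream>
#include <semaphore>
#include <thread>
#include <vector>

std::binary_semaphore mutex_sem(1);   // 1 = critical section is free
long long shared_counter = 0;

void worker() {
    for (int i = 0; i < 100000; ++i) {
        mutex_sem.acquire();          // wait(mutex)
        ++shared_counter;             // critical section
        mutex_sem.release();          // signal(mutex)
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker);
    for (auto &t : threads) t.join();
    std::cout << shared_counter << "\n";   // always 400000
    return 0;
}
~~~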
The busy waiting problem
Problem with these semaphores
The main problem here is busy waiting: while one process is inside its critical section, every other process that calls wait() just spins in the while loop, wasting CPU cycles.
How we solve busy waiting
What we do is simple. Suppose n processes are currently held up because one process is running in its critical section. Instead of letting them spin, we block each waiting process and keep it on a list attached to the semaphore; when the running process calls signal(), one blocked process is taken off the list and woken up. Note that blocking on its own does not give bounded waiting; for that, the waiting list has to be served in FIFO order, so that no process can be overtaken forever. A sketch of this idea follows.
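Here is a sketch of that idea in C++ rather than OS pseudocode (my own illustration, using a mutex and a condition variable): wait() puts the caller to sleep instead of spinning, and signal() wakes one sleeper up. Note that std::condition_variable does not promise FIFO wake-up, so this removes busy waiting but, on its own, still does not guarantee bounded waiting; the textbook's block()/wakeup() version keeps an explicit FIFO list of waiting processes for exactly that reason.
~~~
#include <condition_variable>
#include <mutex>

// A counting semaphore whose wait() blocks the calling thread instead of
// busy waiting. Sleepers are woken by signal() via the condition variable.
class BlockingSemaphore {
    int value;
    std::mutex m;
    std::condition_variable cv;
public:
    explicit BlockingSemaphore(int initial) : value(initial) {}

    void wait() {                                     // the "P" operation
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return value > 0; });  // sleep, do not spin
        --value;
    }

    void signal() {                                   // the "V" operation
        std::lock_guard<std::mutex> lock(m);
        ++value;
        cv.notify_one();                              // wake one blocked thread
    }
};
~~~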
Deadlocks
Perfect example: process A is holding resource R1 and waiting for resource R2, while process B is holding R2 and waiting for R1. Neither can move forward; see the sketch below.
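A minimal sketch of how this happens in code (the thread names and the sleep are mine; the sleep only makes the bad interleaving very likely):
~~~
#include <chrono>
#include <mutex>
#include <thread>

// Thread A locks m1 then wants m2; thread B locks m2 then wants m1.
// Each holds one resource and waits for the other's: mutual exclusion,
// hold-and-wait, no preemption and circular wait, i.e. a deadlock.
std::mutex m1, m2;

void thread_a() {
    std::lock_guard<std::mutex> hold(m1);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> want(m2);   // waits for B forever
}

void thread_b() {
    std::lock_guard<std::mutex> hold(m2);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> want(m1);   // waits for A forever
}

int main() {
    std::thread a(thread_a), b(thread_b);
    a.join();   // in the deadlocked case these joins never return
    b.join();
    return 0;
}
~~~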
Threads
To be updated
Memory Management
To be updated