These are just for revision and are not exhaustive. You should learn the concepts in depth before following the notes! Do let me know if I have missed any topics :)
Overview
Kernel v/s OS
- Kernel: The core component of the OS; it acts as a bridge between the software and the hardware of the system.
- OS: A system program that provides an interface so that users can easily operate the computer; it includes the kernel along with other system software.
Types of OS
- Batch Operating System
  - An operator takes similar jobs and groups them into batches.
  - The next job is assigned only after the previous job has finished executing.
- Multiprogramming
  - When the current job is waiting for I/O, another process is assigned to the CPU.
  - The CPU is never idle.
- Multitasking / Time-Sharing
  - Each process gets a fixed time slice of the CPU.
  - Quick response time.
- Distributed Operating System
  - Remote access is enabled within the network.
  - Users can access files that are not on their own system.
- Real-Time Operating System
  - Tasks must be completed within strict deadlines.
  - e.g. air traffic control systems
Process Management
Overview
Types of Scheduling
- Preemptive: The CPU is allocated to a process for a limited time and can be taken away from it.
- Non-Preemptive: The CPU is allocated to a process until it terminates or switches to waiting for I/O.
Types of Schedulers
- Long Term Scheduler
  - Loads processes from secondary memory into the ready queue.
- Short Term Scheduler
  - Assigns a process from the ready queue to the CPU.
  - Selects processes according to a scheduling algorithm.
- Mid Term Scheduler
  - Swaps a process out of main memory into secondary memory.
  - Can later swap the process back in so that it resumes execution.
System call: fork()
- Creates a child process.
- For n fork() calls, 2^n - 1 child processes are created.
- Returns an integer:
  - 0: returned in the child process
  - +ve: returned in the parent process (the value is the child's PID)
  - -ve: error; no child process was created
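A minimal sketch of how the return values above play out in C; the printed messages are only illustrative:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              // one fork() call: the parent continues here, and so does the new child

    if (pid < 0) {
        perror("fork failed");       // -ve return: no child process was created
        return 1;
    } else if (pid == 0) {
        printf("Child: my PID is %d\n", getpid());            // 0 is returned in the child
    } else {
        printf("Parent: created child with PID %d\n", pid);   // the child's PID is returned in the parent
        wait(NULL);                  // reap the child so it does not linger as a zombie
    }
    return 0;
}
```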
Context Switching
- Saving the context of the currently running process and loading the context of another process.
- Allows multiple processes to share a single CPU.
- An interrupt causes the CPU to switch from the current task to a kernel routine (the switch is performed by the OS).
Process v/s Threads
Processes | User Level Threads |
---|---|
Require system calls | No system calls required |
Have their own memory space | Use the memory of the parent process |
Do not share data or code | Share the same copy of code and data |
Can be divided into multiple threads | Smallest unit of execution |
Blocking one process does not affect others | Blocking a thread blocks the entire process |
Slow context switching | Fast context switching |
User Level v/s Kernel Level Threads
User Level Threads | Kernel Level Threads |
---|---|
Managed by a user-level library | Managed by the OS |
Fast context switching | Slow context switching |
Blocking a thread blocks the entire process | Blocking a thread has no effect on other threads |
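A minimal sketch in C (POSIX threads, compile with `-pthread`) of the point that threads share their process's data, which is also why access to that data must be synchronized; the names `worker` and `shared_counter` are just illustrative:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 // lives in the process's memory, visible to every thread
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;                          // unused
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      // threads share data, so updates must be synchronized
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   // 200000: both threads updated the same variable
    return 0;
}
```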
CPU Scheduling
Key terms
- Throughput: The number of processes that are completed per time unit.
- Turnaround Time: Sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
- Arrival Time: Time at which a process arrives in the ready queue.
- Waiting Time: Sum of periods spent waiting in the ready queue.
- Burst Time: Time required for a process to get executed on the CPU.
- Completion Time: Time at which a process completes its execution.
- Response Time: Time from submission of a request till the first response is produced.
Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
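A quick check of the two formulas with hypothetical numbers: a process arrives at t = 0, has a CPU burst of 4 units, and completes at t = 7.
Turn Around Time = 7 - 0 = 7
Waiting Time = 7 - 4 = 3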
Scheduling Algorithms
- First Come First Serve (FCFS)
  - The process that requests the CPU first is allocated the CPU first (FIFO).
  - Non-preemptive.
  - Average waiting time varies substantially with the burst times of the processes (a small FCFS sketch follows this list).
  - Not ideal for time-sharing systems.
- Shortest Job First Scheduling (SJFS)
  - Shortest Next CPU Burst algorithm.
  - When the CPU is available, the process with the smallest next CPU burst is assigned first.
  - Can be preemptive or non-preemptive.
  - Preemptive SJFS is also called Shortest Remaining Time First Scheduling.
  - Problems:
    - Knowing the length of the next CPU request.
    - Although optimal, it cannot be implemented exactly at the level of short-term scheduling, because there is no way to know the length of the next CPU burst.
    - In practice, the next CPU burst is predicted by assuming it will be similar in length to the previous ones.
- Priority Scheduling
  - A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
  - Equal-priority processes are scheduled in FCFS order.
  - Can be preemptive or non-preemptive.
  - Problem: indefinite blocking (starvation).
  - Solution: aging (gradually increasing the priority of processes that wait in the system for a long time).
- Round Robin Scheduling (RR)
  - Similar to FCFS scheduling, but preemption is added to switch between processes.
  - The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of up to one time quantum.
- Multilevel Queue Scheduling
  - Partitions the ready queue into several separate queues.
  - Processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
  - Each queue has its own scheduling algorithm.
  - In addition, there must be scheduling among the queues, commonly implemented as fixed-priority preemptive scheduling.
- Multilevel Feedback Queue Scheduling
  - Allows a process to move between queues.
  - Separates processes according to the characteristics of their CPU bursts.
  - If a process uses too much CPU time, it is moved to a lower-priority queue.
  - This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
  - In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
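A minimal sketch of how FCFS waiting times add up, using hypothetical burst times and assuming all processes arrive at t = 0:

```c
#include <stdio.h>

// FCFS: each process waits for the total burst time of everything ahead of it in the queue.
int main(void) {
    int burst[] = {24, 3, 3};                       // hypothetical CPU bursts, in arrival order
    int n = sizeof(burst) / sizeof(burst[0]);
    int waiting = 0, total_waiting = 0;

    for (int i = 0; i < n; i++) {
        total_waiting += waiting;                   // waiting time of process i
        waiting += burst[i];                        // the next process waits this much longer
    }
    printf("Average waiting time = %.2f\n", (double)total_waiting / n);   // 17.00 for these bursts
    return 0;
}
```

Reordering the same bursts as {3, 3, 24} (i.e. what SJFS would do) drops the average waiting time to 3, which is the point of the comparison between FCFS and SJFS above.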
Process Synchronization
Interprocess Communication (IPC)
Types of Processes
- Independent Process: Does not affect and is not affected by other processes.
- Cooperating Process: Affects and is affected by other processes.
Shared Memory
- Processes exchange information by reading and writing to a shared memory region.
- The region resides in the address space of the process that creates it.
- Other processes that want to communicate must attach this region to their own address space.
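A minimal sketch of the writer side using POSIX shared memory (shm_open + mmap); the name `/demo_shm` and the 4 KB size are arbitrary choices for illustration, and a reader process would map the same name:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";                  // hypothetical name of the shared region
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666); // create/open the region
    ftruncate(fd, 4096);                             // set the size of the region

    // Attach (map) the region into this process's address space.
    char *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(ptr, "hello from the writer");            // a process mapping /demo_shm sees this data

    munmap(ptr, 4096);
    close(fd);
    shm_unlink(name);                                // a real writer/reader pair would unlink only once both are done
    return 0;
}
```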
Message Passing
- Communication takes place by exchanging messages.
- Types of Messages
  - Fixed size: easy system-level implementation.
  - Variable size: complex system-level implementation.
- Direct Communication
  - Symmetric addressing: both the sender and the receiver explicitly name each other.
    - send(P, message): send a message to process P
    - receive(Q, message): receive a message from process Q
  - Asymmetric addressing: only the sender explicitly names the receiver.
    - send(P, message): send a message to process P
    - receive(id, message): id is set to the name of the process with which communication has taken place
- Indirect Communication
  - Messages are sent to and received from a mailbox.
  - A mailbox may be owned by a process or by the OS.
  - send(A, message): send a message to mailbox A
  - receive(A, message): receive a message from mailbox A
- Synchronous Communication
  - Blocking send: the sending process is blocked until the message is received.
  - Blocking receive: the receiver is blocked until a message is available.
- Asynchronous Communication
  - Non-blocking send: the sending process sends the message and resumes operation.
  - Non-blocking receive: the receiver retrieves either a valid message or NULL.
- Buffers (message queues) can be implemented in 3 ways:
  - Zero capacity: blocking send (the sender must wait for the receiver).
  - Bounded capacity: blocking send only if the queue is full.
  - Unbounded capacity: the sender is never blocked.
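A minimal sketch of message passing between a parent and a child process using an ordinary pipe; this illustrates the blocking-receive idea with a pipe rather than a mailbox API:

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

// The parent writes a message into the pipe; the child reads it.
int main(void) {
    int fd[2];
    char buf[64];

    pipe(fd);                                    // fd[0]: read end, fd[1]: write end
    if (fork() == 0) {                           // child
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));           // blocks until a message is available
        printf("Child received: %s\n", buf);
        close(fd[0]);
    } else {                                     // parent
        close(fd[0]);
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}
```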
Synchronization
Terms
- Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place.
Critical Section Problem
A critical section is a code segment in which shared data is accessed.
Requirements for solution:
- Mutual exclusion: Only one process can access the critical section at any time.
- Progress: If a process is not using the critical section, then it should not stop any other process from accessing it.
- Bounded waiting: Each process must have a limited waiting time. It should not wait endlessly to access the critical section.
Solutions:
- Lock Variable
  - Executed in user mode.
  - Also applicable to more than two processes.
  - No guarantee of mutual exclusion, because the check and the set are not atomic.
while (lock == 1);   // busy-wait while another process holds the lock
lock = 1;            // a context switch between the check and this line breaks mutual exclusion
// Critical section
lock = 0;            // release the lock
- Test and Set Lock
  - Satisfies mutual exclusion (TestAndSet executes atomically).
  - Does not satisfy bounded waiting.
// Executed atomically by the hardware: returns the old value and sets the lock.
boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = true;
    return rv;
}

while (TestAndSet(&lock));   // spin until the previous value was false (lock was free)
// Critical section
lock = false;                // release the lock
- Turn Variable
  - Runs in user mode.
  - Applicable only to two processes.
  - Strict alternation (the processes take alternate turns to execute the critical section).
// Process P0
while (turn != 0);   // wait for our turn
// Critical section
turn = 1;            // hand the turn over to P1

// Process P1
while (turn != 1);   // wait for our turn
// Critical section
turn = 0;            // hand the turn over to P0
- Semaphores
  An integer variable used by cooperating processes for synchronization.
  - Counting Semaphore
    - Value ranges from -∞ to +∞.
    - Value of semaphore < 0: its magnitude is the number of processes in the suspend list.
    - Value of semaphore ≥ 0: number of processes that can still enter the critical section.
// Entry Code: P / Wait / Down
Wait(Semaphore S) {
    --S;
    if (S < 0) {
        // Put the process in the suspend list
    }
}

// Exit Code: V / Release / Up / Signal
Signal(Semaphore S) {
    ++S;
    if (S <= 0) {
        // Wake one process from the suspend list
    }
}
- Binary Semaphore (Mutex)
  - Takes only the values 0 and 1.
// Entry Code: P / Wait / Down
Wait(Semaphore S) {
    if (S == 1) {
        S = 0;
    } else {
        // Put the process in the suspend list
    }
}

// Exit Code: V / Release / Up / Signal
Signal(Semaphore S) {
    if (<suspend list is empty>) {
        S = 1;
    } else {
        // Wake one process from the suspend list
    }
}
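For comparison with the pseudocode above, a minimal sketch using POSIX semaphores (sem_wait/sem_post) as a binary semaphore around a critical section; the names `worker` and `counter` are illustrative, compile with `-pthread`:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                    // binary semaphore guarding the critical section
int counter = 0;                // shared data

void *worker(void *arg) {
    (void)arg;                  // unused
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);       // P / Wait / Down: enter the critical section
        counter++;              // critical section
        sem_post(&mutex);       // V / Signal / Up: leave the critical section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);     // initial value 1 -> behaves as a binary semaphore
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   // 200000 with the semaphore in place
    sem_destroy(&mutex);
    return 0;
}
```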
Deadlock
Necessary Conditions
- Mutual Exclusion - At least one resource must be in a non sharable mode; that is only one process at a time can use the resource.
- No preemption - A resource can be released only voluntarily by the process holding it, after that process has completed its task.
- Hold & Wait - A process holding at least one resource is waiting to acquire additional resources held by other processes.
- Circular Wait
- In a single instance RAG (Resource Allocation Graph):
- Circular wait ⇒ Deadlock.
- No cycle ⇒ No deadlock.
Methods of handling deadlocks
- Deadlock Ignorance (Ostrich Method) - Ignore the problem and pretend that deadlocks never occur in the system.
- Deadlock Prevention - Ensure that at least one of the 4 necessary conditions cannot hold.
  - Mutual Exclusion: make resources sharable.
  - No Preemption: allow preemption.
  - Hold and Wait:
    - Guarantee that whenever a process requests a resource, it does not hold any other resources.
    - Or require a process to request and be allocated all its resources before it begins execution.
  - Circular Wait: impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
    e.g. a process cannot request resource no. 2 after acquiring resource no. 3.
- Deadlock Avoidance - A decision is made dynamically whether the current resource-allocation request, if granted, could potentially lead to a deadlock.
  Disadvantages:
  - Low device utilisation
  - Reduced system throughput
- Deadlock Detection - A deadlock situation may occur. The system may provide:
  - An algorithm that examines the state of the system to determine whether a deadlock has occurred.
  - An algorithm to recover from the deadlock.
  - Banker's Algorithm (strictly a deadlock-avoidance algorithm) works on a table with columns: Allocated, Maximum, Remaining (Need).
Deadlock-free condition (single resource type, where di is the maximum demand of process i):
Total Resources + Total Processes > Total Demand
R > (d1 - 1) + (d2 - 1) + ... + (dn - 1)
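A quick check with hypothetical numbers: n = 3 processes, each with a maximum demand of 3 instances of a resource that has R = 7 instances in total.
7 > (3 - 1) + (3 - 1) + (3 - 1) = 6, so the system can never deadlock.
With R = 6 the condition fails: every process could hold 2 instances and wait forever for a third.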
Memory Management
Terms
- Degree of Multiprogramming - The number of processes currently in memory.
- Internal Fragmentation - The memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation - unused memory that is internal to a partition.
- External fragmentation - It exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous: storage is fragmented into a large number of small holes.
Memory Management Techniques
Contiguous Partitioning
- Fixed Partitioning (Static)
  - The number of partitions is fixed.
  - The sizes of the partitions may or may not be equal.
  - Spanning is not allowed (a process must fit within a single partition).
  - Problems:
    - Limit on process size.
    - Limit on the degree of multiprogramming.
    - Internal fragmentation: difference between allocated and requested memory.
    - External fragmentation: a process's memory request cannot be fulfilled because the free memory is non-contiguous.
- Variable Partitioning (Dynamic)
  - No internal fragmentation.
  - No limit on the degree of multiprogramming.
  - No limit on process size.
  - Problems:
    - External fragmentation - compaction must be used to shift processes so they are contiguous and all free memory is in one block.
    - Allocation/deallocation is complex.
- Dynamic Partitioning Placement Algorithms
  - First Fit - Allocate the first hole that is big enough.
  - Next Fit - Same as first fit, but start searching from the last allocated hole.
  - Best Fit - Allocate the smallest hole that is big enough.
  - Worst Fit - Allocate the biggest hole.
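A minimal sketch of the First Fit rule over a hypothetical list of free holes (sizes in KB are made up for illustration):

```c
#include <stdio.h>

// First Fit: return the index of the first hole large enough for the request, or -1 if none fits.
int first_fit(int holes[], int n, int request) {
    for (int i = 0; i < n; i++) {
        if (holes[i] >= request) {
            return i;
        }
    }
    return -1;
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};   // hypothetical free holes
    int idx = first_fit(holes, 5, 212);
    printf("212 KB request placed in hole %d (%d KB)\n", idx, holes[idx]);   // hole 1 (500 KB)
    return 0;
}
```

Best Fit would instead scan all holes and pick the 300 KB one, and Worst Fit the 600 KB one.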
Paging
A memory-management technique in which a process's memory is divided into fixed-size pages that are loaded into frames of main memory, so a process's physical address space need not be contiguous.
Logical Address = Page Number + Page Offset (a worked example follows the list below)
Physical Address = Frame Number + Frame Offset
- Frame Size = Page Size
- No. of entries in page table = No. of Pages
- Page Table Entry:
  Frame No. | Valid(1) / Invalid(0) | Protection (RWX) | Reference (0/1) | Caching | Dirty (Modified)
- Inverted Paging (one entry per frame):
  Frame No. | Page No. | Process ID
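A quick hypothetical example of the logical-to-physical translation above, assuming a 32-bit logical address and a 4 KB (2^12-byte) page size:
- Offset bits = 12, page-number bits = 32 - 12 = 20.
- Logical address 0x00403ABC → page number = 0x00403, offset = 0xABC.
- If the page table maps page 0x00403 to frame 0x00052, the physical address = (0x00052 << 12) | 0xABC = 0x00052ABC.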
Some Important Topics
Thrashing
The CPU spends more time on paging and swapping activity than on executing processes.
Segmentation v/s Paging
Paging | Segmentation |
---|---|
Invisible to the programmer | Visible to the programmer |
Fixed page size | Segment sizes may vary |
Faster | Slower |
OS needs to maintain a list of free frames | OS needs to maintain a list of free holes |
Internal fragmentation | External fragmentation |
Overlays
- Running a program that is bigger than physical memory by keeping in memory only the instructions and data that are needed at any given time.
- The program is divided into modules in such a way that not all modules need to be in memory at the same time.
- Advantages
  - Reduced memory requirement
  - Reduced time requirement
- Disadvantages
  - The overlay map must be specified by the programmer
  - Overlaid modules must be completely disjoint
Important Terms
- Zombie Process
  - A process whose execution has completed but which still has an entry in the process table.
  - A child process becomes a zombie before its entry is removed from the process table (i.e. until the parent reaps it with wait()).
  - Orphan Process: a process whose parent no longer exists, i.e. the parent finished or terminated without waiting for its child to terminate.
  - Cascading Termination: if the parent process exits or terminates, its child processes are also terminated.
- Starvation
  - Occurs when a process is unable to get the resources it needs to make progress for a long period of time.
  - Low-priority processes get blocked while only high-priority processes are executed.
  - Aging: to overcome starvation, gradually increase the priority of processes that have been waiting in the system for resources for a long time.
- Virtual Memory: A storage-allocation scheme in which secondary memory can be addressed as though it were part of main memory.
References
- GeeksForGeeks
- Gate Smashers - YouTube
- Neso Academy - YouTube
- InterviewBit
- TutorialsPoint