Hey Dev.to! 👋
I'm Rajat, a 3rd-year CS student, and placement season is scary. 😰
Two months ago, I panicked when a senior told me: "They'll ask you to calculate average waiting time for Round Robin with quantum 2 in the interview."
Me: "What's Round Robin again?" 🤦
I realized I'd forgotten 80% of what we "studied" in OS class. You know the drill - cram for exams, pass, forget everything.
But interviews are different. They actually TEST if you understand!
So I re-learned Operating Systems from scratch. This time, I made sure I actually got it (not just "it'll somehow get me through the exam" mode).
This is my complete study guide - written by a student who's learning with you, not a professor.
🤔 Why This Guide?
Full honesty:
- I'm NOT placed yet (still preparing!)
- I don't have interview experience (yet!)
- These are my actual study notes
- I'm learning this alongside you
- If something's confusing, comment - we'll figure it out!
What makes this different:
- Written in student language (no textbook jargon)
- Every concept explained like explaining to a friend
- Real problems seniors got in interviews
- Code you can actually run on your laptop
- No topic skipped - everything covered!
Who needs this:
- 3rd/4th year preparing for placements
- Anyone who "passed OS" but doesn't remember it
- Students who panic when asked "explain deadlock"
- People who want to UNDERSTAND, not memorize
📚 Complete Coverage (Everything!)
Week 1: Process Management
- Process vs Thread (memory layout explained)
- Context Switching & PCB structure
- CPU Scheduling (FCFS, SJF, SRTF, Round Robin, Priority, Multilevel)
- Practice problems with solutions
- Zombie & Orphan processes
Week 2: Concurrency
- Critical Section Problem
- Mutex vs Semaphore (with code!)
- Deadlocks (4 conditions, prevention, Banker's Algorithm)
- Producer-Consumer Problem
- Dining Philosophers Problem
Week 3: Memory Management
- Paging vs Segmentation
- Virtual Memory & Demand Paging
- Page Faults & Thrashing
- Page Replacement (FIFO, LRU, Optimal)
- TLB (Translation Lookaside Buffer)
Week 4: Linux Practicals
- File Permissions (chmod, chown, chgrp)
- Process Monitoring (top, htop, ps, kill)
- Networking (netstat, lsof, ping, curl)
- Text Processing (grep, awk, sed)
- Disk Management (df, du)
Time: 2-3 hours daily for 4 weeks
Prerequisites: Basic C programming
My promise: After this, you'll be able to face OS interview questions with confidence!
🔄 PART 1: Process Management
Process vs Thread
My confusion: "Aren't they the same thing?"
What clicked: Think process = house, thread = people in house.
What is a Process?
Simple: A running program.
Memory Layout:
High Address → ┌───────────────┐
               │     Stack     │ ← Function calls, local vars
               │       ↓       │   (grows down)
               │               │
               │       ↑       │
               │     Heap      │ ← malloc(), new
               │               │   (grows up)
               ├───────────────┤
               │     Data      │ ← Global variables
               ├───────────────┤
               │     Code      │ ← Program instructions
Low Address  → └───────────────┘
Each section:
// CODE section - your compiled program
int main() { ... }
// DATA section - global/static variables
int globalVar = 100;
// HEAP section - dynamic memory
int* ptr = malloc(sizeof(int));
// STACK section - local variables
void function() {
int local = 10; // On stack
}
Key: Each process has separate memory. Process 1 can't access Process 2's variables!
What is a Thread?
Simple: Mini-process inside a process.
What threads SHARE:
- ✅ Code (same program)
- ✅ Data (same globals)
- ✅ Heap (same malloc memory)
- ✅ Files (same file descriptors)
What threads DON'T share:
- ❌ Stack (each has its own!)
- ❌ Program Counter (where in code)
- ❌ Registers (CPU state)
- ❌ Thread ID
Visual:
ONE PROCESS, MULTIPLE THREADS:
┌──────────────────────────┐
│         Process          │
│  ┌────────────────────┐  │
│  │  Shared Resources  │  │
│  │  - Code            │  │
│  │  - Data (globals)  │  │
│  │  - Heap            │  │
│  │  - Files           │  │
│  └────────────────────┘  │
│                          │
│  Thread 1     Thread 2   │
│  ┌───────┐   ┌───────┐   │
│  │ Stack │   │ Stack │ ← Each separate!
│  │  PC   │   │  PC   │   │
│  └───────┘   └───────┘   │
└──────────────────────────┘
Real Code Example:
#include <stdio.h>
#include <pthread.h>
int counter = 0; // Shared by all threads!
void* thread_function(void* arg) {
int local = 10; // Each thread has own local
counter++; // All threads modify SAME counter
// β οΈ Race condition! (we'll fix with mutex later)
printf("Thread %lu: counter=%d, local=%d\n",
(unsigned long)pthread_self(), counter, local);
return NULL;
}
int main() {
pthread_t t1, t2;
pthread_create(&t1, NULL, thread_function, NULL);
pthread_create(&t2, NULL, thread_function, NULL);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
printf("Final counter: %d\n", counter);
// Expected: 2
// Actual: Sometimes 1! (race condition)
return 0;
}
To compile and run:
gcc -pthread example.c -o example
./example
When to Use What?
| Feature | Process | Thread |
|---|---|---|
| Memory | Separate | Shared |
| Speed | Slow creation | Fast creation |
| Communication | Hard (IPC needed) | Easy (shared vars) |
| Isolation | High | Low |
| Crash | One dies, others ok | One dies, all die |
Use PROCESSES:
- Browser tabs (isolation - one crash ≠ all crash)
- Different applications
- Need security/isolation
Use THREADS:
- Web server (many requests)
- Video encoding (parallel tasks)
- Need speed + shared data
Interview Question: "Why are threads faster than processes?"
My answer: "Three reasons:
- Creation: No memory copy needed
- Context switch: Less state to save (shared memory stays)
- Communication: Direct shared memory access
But trade-off: Need synchronization (mutex/semaphore) for shared data!"
Context Switching
What happens: OS saves everything about current process and loads everything about next.
Step-by-Step:
1. Interrupt occurs (timer / I/O / system call)
        ↓
2. Save current process to PCB:
   - Program Counter (where in code)
   - All CPU registers (32-64 of them!)
   - Stack Pointer
   - Process state (running → ready)
        ↓
3. Scheduler picks next process
        ↓
4. Load next process from PCB:
   - Restore Program Counter
   - Restore registers
   - Restore Stack Pointer
   - Change state (ready → running)
        ↓
5. Resume execution
Process Control Block (PCB):
struct PCB {
int pid; // Process ID
int state; // NEW, READY, RUNNING, WAITING, TERMINATED
int program_counter; // Next instruction address
int registers[32]; // All CPU register values
int priority; // For scheduling
int* page_table; // Memory mappings
int* open_files; // File descriptors
int parent_pid; // Parent process ID
// ... more
};
Why It's Expensive:
Context switch time: 1-10 microseconds
What takes time:
1. Save state : ~1 µs
2. Schedule   : ~0.5 µs
3. Load state : ~1 µs
4. Cache miss : ~5 µs ← BIGGEST COST!
5. TLB flush  : ~2 µs
Total: ~10 µs
Cache miss is killer:
- Old process had data in cache
- New process needs different data
- Cache miss rate HIGH
- Memory access 100x slower!
Thread vs Process Switch:
Process Switch (~10 µs):
  ↓ Save all registers
  ↓ Flush TLB (new memory space)
  ↓ Flush cache (different memory)
  ↓ Switch page tables

Thread Switch (~1 µs):
  ↓ Save registers
  ✅ TLB stays the same (shared memory)
  ✅ Cache mostly valid (shared data)
  ✅ No page table switch

Thread switch is ~10x faster!
CPU Scheduling Algorithms
This ALWAYS comes with calculation problems!
Setup for All Examples:
Process | Arrival | Burst
--------|---------|------
P1 | 0 | 4
P2 | 1 | 3
P3 | 2 | 1
P4 | 3 | 2
What we calculate:
- Completion Time: When process finishes
- Turnaround Time: Completion - Arrival
- Waiting Time: Turnaround - Burst
- Response Time: First CPU time - Arrival
1. FCFS (First Come First Serve)
Rule: Execute in arrival order. Non-preemptive.
Solution:
Gantt Chart:
0 4 7 8 10
|--P1--|--P2--|P3|---P4---|
Process | Arrival | Burst | Completion | TAT | WT
--------|---------|-------|------------|-----|----
P1 | 0 | 4 | 4 | 4 | 0
P2 | 1 | 3 | 7 | 6 | 3
P3 | 2 | 1 | 8 | 6 | 5
P4 | 3 | 2 | 10 | 7 | 5
Avg TAT = (4+6+6+7)/4 = 5.75
Avg WT = (0+3+5+5)/4 = 3.25
Pros: Simple, no starvation
Cons: Convoy effect (short jobs stuck waiting behind one long job)
2. SJF (Shortest Job First)
Rule: Shortest burst first. Non-preemptive.
Solution:
Time 0: Only P1 → run P1
Time 4: P2, P3, P4 available
        Shortest = P3 (burst 1) → run P3
Time 5: P2, P4 available
        Shortest = P4 (burst 2) → run P4
Time 7: Only P2 → run P2
Gantt:
0 4 5 7 10
|--P1--|P3|P4|---P2---|
Process | Arrival | Burst | Completion | TAT | WT
--------|---------|-------|------------|-----|----
P1 | 0 | 4 | 4 | 4 | 0
P2 | 1 | 3 | 10 | 9 | 6
P3 | 2 | 1 | 5 | 3 | 2
P4 | 3 | 2 | 7 | 4 | 2
Avg WT = (0+6+2+2)/4 = 2.5 ✅ Better!
Pros: Minimum avg waiting time (proven!)
Cons: Starvation of long processes
Interview Q: "How do you know the burst time in advance?"
Answer: "Estimate it from past bursts:
Predicted = α × LastBurst + (1-α) × LastPrediction
α is usually 0.5 (exponential averaging)"
3. SRTF (Shortest Remaining Time First)
Rule: Like SJF but preemptive.
Solution:
Time 0: P1 starts (rem=4)
Time 1: P2 arrives (rem=3), P1 rem=3, continue P1
Time 2: P3 arrives (rem=1), P1 rem=2
Shortest remaining = P3 → PREEMPT, run P3
Time 3: P3 done, P4 arrives
P1 rem=2, P4 rem=2, run P1
Time 5: P1 done, run P4
Time 7: P4 done, run P2
Gantt:
0 2 3 5 7 10
|P1|P3|P1|P4|----P2----|
Avg WT = (1+6+0+2)/4 = 2.25 ✅ Even better!
4. Round Robin (Real World!)
Rule: Each gets time quantum. Preemptive.
Given: Quantum = 2
Solution:
Queue initially: []
Time 0: P1 arrives → [P1]; run P1 for 2 → rem=2
Time 2: P2, P3 arrived during P1's slice → queue [P2, P3, P1]
        Run P2 for 2 → rem=1
Time 3: P4 arrives (joins the queue before the preempted P2)
Time 4: Queue [P3, P1, P4, P2] → run P3 for 1 (done!)
Time 5: Run P1 for 2 (done!)
Time 7: Run P4 for 2 (done!)
Time 9: Run P2 for 1 (done!)
Gantt:
0    2    4   5    7    9   10
|-P1-|-P2-|P3|-P1-|-P4-|P2|
WT: P1 = 3, P2 = 6, P3 = 2, P4 = 4
Avg WT = (3+6+2+4)/4 = 3.75
(Convention used: a process arriving during a time slice joins the ready queue before the process preempted at the end of that slice.)
Choosing Quantum:
Too small (1ms): Many context switches, high overhead
Too large (100ms): Behaves like FCFS, poor response
Sweet spot: 10-100ms
Rule: 80% processes finish within quantum
Interview Q: "What if quantum very small?"
Answer: "System spends more time switching than working! If quantum=1ms, switch=0.1ms β 10% overhead. With quantum=10ms β only 1% overhead."
5. Priority Scheduling
Rule: Highest priority first.
Given (lower number = higher priority):
Process | Arrival | Burst | Priority
--------|---------|-------|----------
P1      | 0       | 4     | 2
P2      | 1       | 3     | 1  ← Highest
P3      | 2       | 1     | 3
P4      | 3       | 2     | 2
Solution (Preemptive):
Time 0: P1 (pri 2) starts
Time 1: P2 (pri 1) arrives → PREEMPT P1
Time 4: P2 done; P1 and P4 are both pri 2 → run P1 (earlier arrival)
Time 7: P1 done; run P4 (pri 2 beats P3's pri 3)
Time 9: P4 done; run P3
Gantt:
0 1 4 7 9 10
|P1|P2|--P1--|P4|P3|
Problem: Starvation! Low priority never runs.
Solution: Aging - increase priority of waiting processes.
// Pseudocode: run periodically inside the scheduler
for (each process p in ready_queue) {
    if (p.wait_time > AGING_THRESHOLD) {  // e.g. 5 minutes
        p.priority += 1;                  // boost priority
    }
}
6. Multilevel Queue
Real systems use this!
Priority 0: ┌──────────────────┐
            │      System      │ ← FCFS
            └──────────────────┘
Priority 1: ┌──────────────────┐
            │   Interactive    │ ← Round Robin (q=5ms)
            └──────────────────┘
Priority 2: ┌──────────────────┐
            │      Batch       │ ← Round Robin (q=10ms)
            └──────────────────┘
Priority 3: ┌──────────────────┐
            │     Student      │ ← FCFS
            └──────────────────┘
Rules:
- Higher queue runs first
- Lower queue only if all higher empty
- Preemption between queues
Multilevel Feedback Queue (Better!):
Processes move between queues:
New → Queue 0 (high priority, small quantum)
        ↓ (uses full quantum)
      Queue 1 (medium priority, larger quantum)
        ↓ (uses full quantum)
      Queue 2 (low priority, FCFS)
Benefits:
- Short processes finish fast in Q0
- Long processes move down
- Aging prevents starvation
Zombie & Orphan Processes
Zombie (The Undead 🧟)
What: Process finished but still in process table.
How it happens:
int main() {
pid_t pid = fork();
if (pid == 0) {
// CHILD
printf("Child working...\n");
exit(0); // ← Child exits, BECOMES ZOMBIE
}
else {
// PARENT doesn't call wait()!
sleep(10); // Child zombie for 10 seconds!
}
}
Why: OS keeps the exit status for the parent to read. If the parent doesn't wait() → zombie!
Identify:
$ ps aux | grep Z
john 1234 0.0 0.0 0 0 Z+ <defunct>   ← Zombie!
How to "kill":
WRONG: kill -9 1234 (already dead!)
RIGHT:
Method 1: Kill parent
ps -o ppid= -p 1234 # Find parent
kill -9 <parent_pid>
# Parent dies → init adopts → init cleans zombie
Method 2: Fix code
wait(&status); // Parent reads exit status
Method 3: Signal handler
signal(SIGCHLD, handler); // Auto-clean zombies
Orphan (The Adopted)
What: Parent died, child still running.
How:
if (pid == 0) {
sleep(60); // Child works long time
}
else {
exit(0); // Parent dies immediately!
}
What happens:
- Parent dies
- Child becomes orphan
- init (PID 1) adopts child
- Child finishes
- init cleans up automatically
NOT A PROBLEM! init handles cleanup.
Identify:
$ ps -ef | grep myprocess
john 1234 1 ... # PPID=1 means orphan
Comparison:
|          | Zombie                          | Orphan      |
|---|---|---|
| State    | Finished                        | Running     |
| Parent   | Alive                           | Dead        |
| Problem? | YES (wastes process-table slots) | NO          |
| Solution | Kill parent / fix code          | None needed |
🔒 PART 2: Concurrency & Synchronization
Critical Section Problem
The Race Condition:
int counter = 0;
void* increment(void* arg) {
for (int i = 0; i < 100000; i++) {
counter++; // NOT atomic!
}
return NULL;
}
// Run 2 threads
// Expected: 200000
// Actual: 150000??? 😱
Why:
counter++ // Actually THREE operations:
LOAD counter, R1 # 1. Load from memory
ADD R1, 1 # 2. Increment
STORE R1, counter # 3. Store back
Timeline:
T1: LOAD 0 T2: LOAD 0
T1: ADD 1
T2: ADD 1
T1: STORE 1
T2: STORE 1
Result: counter = 1 (should be 2!)
Solution Requirements:
- Mutual Exclusion: Only one in critical section
- Progress: If none inside, one waiting should enter
- Bounded Waiting: No infinite wait (no starvation)
Mutex vs Semaphore
Mutex (Lock)
Think: Bathroom key. One person at a time.
pthread_mutex_t lock;
int counter = 0;
void* increment(void* arg) {
for (int i = 0; i < 100000; i++) {
pthread_mutex_lock(&lock); // 🔒 Lock
counter++; // Critical section
pthread_mutex_unlock(&lock); // 🔓 Unlock
}
return NULL;
}
int main() {
pthread_mutex_init(&lock, NULL);
pthread_t t1, t2;
pthread_create(&t1, NULL, increment, NULL);
pthread_create(&t2, NULL, increment, NULL);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
printf("Counter: %d\n", counter); // Now 200000! β
pthread_mutex_destroy(&lock);
}
Properties:
- Binary (locked/unlocked)
- Locker must unlock
- Same thread can't lock twice (deadlock!)
Semaphore (Counter)
Think: Parking lot with N spaces.
Binary Semaphore:
sem_t sem;
void* worker(void* arg) {
sem_wait(&sem); // Decrement (block if 0)
// critical section
sem_post(&sem); // Increment
return NULL;
}
int main() {
sem_init(&sem, 0, 1); // Initial value = 1 (binary)
}
Counting Semaphore (Real Power!):
Example: Connection pool (max 5)
sem_t pool;
void* worker(void* arg) {
int id = *(int*)arg;
printf("Thread %d: Requesting connection...\n", id);
sem_wait(&pool); // Get connection
// If 0, BLOCKS until available
printf("Thread %d: Got connection!\n", id);
sleep(2); // Use connection
sem_post(&pool); // Release
printf("Thread %d: Released\n", id);
}
int main() {
sem_init(&pool, 0, 5); // 5 connections
pthread_t threads[10]; // 10 threads, only 5 connections!
int ids[10];
for (int i = 0; i < 10; i++) {
ids[i] = i;
pthread_create(&threads[i], NULL, worker, &ids[i]);
}
// Output:
// Threads 0-4: Get connections immediately
// Threads 5-9: BLOCK until connection available
}
Visual:
Semaphore = 5
T1: wait() → 4 ✅
T2: wait() → 3 ✅
T3: wait() → 2 ✅
T4: wait() → 1 ✅
T5: wait() → 0 ✅
T6: wait() → BLOCKS ❌
T7: wait() → BLOCKS ❌
T1: post() → 1
T6: UNBLOCKS ✅
Comparison
| Feature | Mutex | Semaphore |
|---|---|---|
| Type | Lock | Counter |
| Value | 0 or 1 | 0 to N |
| Owner | Yes | No |
| Use | Protect data | Count resources |
| Example | Shared variable | Connection pool |
Interview Q: "When do you use which?"
Answer:
"Mutex: Protect shared data (one at a time)
Binary Semaphore: Event signaling
Counting Semaphore: Limit resources (pools, slots)"
Deadlocks
What: Processes waiting for each other forever.
Real-world: Two cars at an intersection, both waiting for the other to move. Forever! 🚗🚗
The Four Conditions (ALL must exist):
1. Mutual Exclusion: Resource can't be shared
2. Hold and Wait: Holding one, waiting for another
3. No Preemption: Can't forcibly take resource
4. Circular Wait: P1→P2→P3→P1 (a circle!)
Memory trick: "MH-NP-C" 😄
Classic Deadlock:
pthread_mutex_t lock_A, lock_B;
void* thread1() {
pthread_mutex_lock(&lock_A); // T1 gets A ✅
sleep(1);
pthread_mutex_lock(&lock_B); // T1 wants B (T2 has it) ❌
// BLOCKED FOREVER!
}
void* thread2() {
pthread_mutex_lock(&lock_B); // T2 gets B ✅
sleep(1);
pthread_mutex_lock(&lock_A); // T2 wants A (T1 has it) ❌
// BLOCKED FOREVER!
}
// DEADLOCK:
// T1 has A, wants B
// T2 has B, wants A
// Both wait forever! 💀
Timeline:
Time 0: T1 locks A        T2 locks B
Time 1: T1 wants B        T2 wants A
        (T2 has B)        (T1 has A)
        BLOCKS ❌          BLOCKS ❌
Circular Wait:
T1 → waits for B (held by T2)
T2 → waits for A (held by T1)
→ CIRCLE! DEADLOCK!
Prevention (Break One Condition)
Best: Break Circular Wait with Lock Ordering:
// GOOD - No deadlock
void* thread1() {
pthread_mutex_lock(&lock_A); // Always A first ✅
pthread_mutex_lock(&lock_B); // Then B ✅
// work
pthread_mutex_unlock(&lock_B);
pthread_mutex_unlock(&lock_A);
}
void* thread2() {
pthread_mutex_lock(&lock_A); // Also A first ✅
pthread_mutex_lock(&lock_B); // Then B ✅
// work
pthread_mutex_unlock(&lock_B);
pthread_mutex_unlock(&lock_A);
}
// No circular wait possible!
// Both acquire in same order A→B
Why it works: If T1 has A, T2 must wait for A (can't get B first).
Banker's Algorithm (Avoidance)
Given:
Available: A=3, B=3, C=2 (total instances: A=10, B=5, C=7)
Process | Max | Allocated | Need
--------|------|-----------|------
P0 | 7,5,3| 0,1,0 | 7,4,3
P1 | 3,2,2| 2,0,0 | 1,2,2
P2 | 9,0,2| 3,0,2 | 6,0,0
P3 | 2,2,2| 2,1,1 | 0,1,1
P4 | 4,3,3| 0,0,2 | 4,3,1
Question: Safe state?
Solution:
Available = (3,3,2)
Step 1: Find a process whose Need ≤ Available
  P1: Need (1,2,2) ≤ Available (3,3,2) ✅
  Run P1 → Available = (3,3,2) + (2,0,0) = (5,3,2)
Step 2:
  P3: Need (0,1,1) ≤ (5,3,2) ✅
  Run P3 → Available = (7,4,3)
Step 3:
  P0: Need (7,4,3) ≤ (7,4,3) ✅
  Run P0 → Available = (7,5,3)
Step 4:
  P2: Need (6,0,0) ≤ (7,5,3) ✅
  Run P2 → Available = (10,5,5)
Step 5:
  P4: Need (4,3,1) ≤ (10,5,5) ✅
  Run P4 → Available = (10,5,7)
Safe sequence: P1 → P3 → P0 → P2 → P4 ✅
System is SAFE!
If no such process can be found → UNSAFE (may deadlock)!
Classic Problems
1. Producer-Consumer
Setup:
- Producers create items
- Consumers consume items
- Shared buffer (size N)
- Producer waits if buffer FULL
- Consumer waits if buffer EMPTY
Solution:
#define SIZE 5
int buffer[SIZE];
int in = 0, out = 0;
sem_t empty; // Count empty slots
sem_t full; // Count filled slots
pthread_mutex_t mutex; // Protect buffer
void* producer(void* arg) {
int id = *(int*)arg;
for (int i = 0; i < 10; i++) {
int item = id * 100 + i;
sem_wait(&empty); // Wait for empty slot
// Blocks if buffer full!
pthread_mutex_lock(&mutex); // Get buffer access
buffer[in] = item;
printf("Prod %d: item %d at %d\n", id, item, in);
in = (in + 1) % SIZE;
pthread_mutex_unlock(&mutex);
sem_post(&full); // Signal: item available
sleep(1);
}
}
void* consumer(void* arg) {
int id = *(int*)arg;
for (int i = 0; i < 10; i++) {
sem_wait(&full); // Wait for item
// Blocks if buffer empty!
pthread_mutex_lock(&mutex);
int item = buffer[out];
printf("Cons %d: got %d from %d\n", id, item, out);
out = (out + 1) % SIZE;
pthread_mutex_unlock(&mutex);
sem_post(&empty); // Signal: slot available
sleep(2);
}
}
int main() {
sem_init(&empty, 0, SIZE); // SIZE empty slots
sem_init(&full, 0, 0); // 0 items initially
pthread_mutex_init(&mutex, NULL);
pthread_t prod, cons;
int prod_id = 1, cons_id = 1;
pthread_create(&prod, NULL, producer, &prod_id);
pthread_create(&cons, NULL, consumer, &cons_id);
pthread_join(prod, NULL);
pthread_join(cons, NULL);
}
Why need BOTH semaphores AND mutex?
Semaphores (empty/full): Handle capacity
- empty: Counts free slots
- full: Counts available items
- Blocks when full/empty
Mutex: Handle mutual exclusion
- Protects buffer from simultaneous access
2. Dining Philosophers
Setup:
- 5 philosophers at round table
- 5 forks (one between each pair)
- Need 2 forks to eat
- All pick left fork → DEADLOCK! 😱
WRONG (Deadlocks):
sem_t fork[5];
void* philosopher(void* arg) {
int id = *(int*)arg;
int left = id;
int right = (id + 1) % 5;
while (1) {
printf("Phil %d: Thinking\n", id);
sem_wait(&fork[left]); // Pick left
// If all pick left now β DEADLOCK!
sem_wait(&fork[right]); // Pick right
printf("Phil %d: EATING\n", id);
sleep(2);
sem_post(&fork[right]);
sem_post(&fork[left]);
}
}
Solution 1: Asymmetric (BEST!):
void* philosopher(void* arg) {
int id = *(int*)arg;
int left = id;
int right = (id + 1) % 5;
while (1) {
printf("Phil %d: Thinking\n", id);
if (id % 2 == 0) {
// Even: left then right
sem_wait(&fork[left]);
sem_wait(&fork[right]);
} else {
// Odd: right then left
sem_wait(&fork[right]);
sem_wait(&fork[left]);
}
printf("Phil %d: EATING\n", id);
sem_post(&fork[right]);
sem_post(&fork[left]);
}
}
// No circular wait! Different order for even/odd
Solution 2: Limit eaters:
sem_t room; // Only 4 in room at once
void* philosopher(void* arg) {
while (1) {
sem_wait(&room); // Enter (max 4)
// Pick forks
sem_wait(&fork[left]);
sem_wait(&fork[right]);
// Eat
// Put down
sem_post(&fork[right]);
sem_post(&fork[left]);
sem_post(&room); // Leave
}
}
int main() {
sem_init(&room, 0, 4); // Only 4 allowed
// With 4 at table, at least one can get both forks!
}
💾 PART 3: Memory Management
Paging vs Segmentation
My confusion: "Both divide memory, what's different?"
What clicked: Paging = fixed pieces. Segmentation = logical pieces.
Paging (Fixed-size)
Concept: Divide into fixed-size pages (usually 4KB).
Logical Memory:           Physical Memory:
┌──────────┐  4KB         ┌──────────┐
│  Page 0  │ ───────────→ │ Frame 5  │
├──────────┤              ├──────────┤
│  Page 1  │ ───────────→ │ Frame 2  │
├──────────┤              ├──────────┤
│  Page 2  │ ───────────→ │ Frame 7  │
└──────────┘              └──────────┘
Page Table maps pages to frames
Address Translation:
Logical Address = (Page Number, Offset)
Physical Address = (Frame Number, Offset)
Example:
Address = 8196
Page size = 4096 bytes
Page Number = 8196 / 4096 = 2
Offset = 8196 % 4096 = 4
Look up Page Table:
Page 2 β Frame 7
Physical = Frame 7, Offset 4
= 7 Γ 4096 + 4
= 28676
Pros: No external fragmentation
Cons: Internal fragmentation (last page not full)
Segmentation (Variable-size)
Concept: Divide by logical units.
Segments:
┌─────────────┐
│ Code (5KB)  │ ← Segment 0
├─────────────┤
│ Data (3KB)  │ ← Segment 1
├─────────────┤
│ Stack (2KB) │ ← Segment 2
├─────────────┤
│ Heap (10KB) │ ← Segment 3
└─────────────┘
Segment Table:
Seg | Base | Limit
----|---------|-------
0 | 0x4000 | 5KB
1 | 0x8000 | 3KB
2 | 0x6000 | 2KB
3 | 0x1000 | 10KB
Address Translation:
Logical = (Segment, Offset)
Example: Access byte 100 of the code segment
Logical = (0, 100)
Check: 100 < 5120 (limit)? ✅
Physical = 0x4000 + 100 = 0x4064
Pros: Logical units, easy sharing
Cons: External fragmentation (gaps)
Comparison
|                 | Paging      | Segmentation         |
|---|---|---|
| Size            | Fixed (4KB) | Variable             |
| Fragmentation   | Internal    | External             |
| Visible to user | No          | Yes (logical)        |
| Protection      | Per page    | Per segment (better) |
| Sharing         | Hard        | Easy                 |
Modern systems: Use both! Paged segmentation - segments divided into pages.
Virtual Memory
The magic: Run 10GB program on 4GB RAM!
How: Not all pages need to be in RAM. Store some on disk!
Virtual Memory (10GB):
┌──────────┐
│  Page 0  │ ← In RAM ✅
├──────────┤
│  Page 1  │ ← On Disk 💽
├──────────┤
│  Page 2  │ ← In RAM ✅
├──────────┤
│  Page 3  │ ← On Disk 💽
└──────────┘

Physical RAM (4GB):
┌──────────┐
│ Frame 0  │ ← Page 0
├──────────┤
│ Frame 1  │ ← Page 2
└──────────┘

Disk:
┌──────────┐
│  Page 1  │
├──────────┤
│  Page 3  │
└──────────┘
Page Table with valid bit:
Page | Frame | Valid | Disk Address
-----|-------|-------|-------------
0 | 0 | 1 (RAM)| -
1 | - | 0 (disk)| 0x5000
2 | 1 | 1 (RAM)| -
3 | - | 0 (disk)| 0x7000
Page Fault (The Key!)
What happens when accessing page not in RAM:
1. CPU tries to access Page 1
        ↓
2. MMU checks page table
        ↓
3. Valid bit = 0 → PAGE FAULT interrupt
        ↓
4. OS handler:
   a. Find free frame (or evict a page)
   b. Load Page 1 from disk to RAM
   c. Update page table (valid=1, frame number)
        ↓
5. Restart instruction
        ↓
6. Now access succeeds!
Cost:
Memory access: 100 nanoseconds
Disk access: 10 milliseconds = 10,000,000 nanoseconds
Page fault is 100,000× slower! 😱
If page fault rate = 1%:
Effective time = 0.99 × 100ns + 0.01 × 10,000,000ns
               = 100,099ns
               ≈ 1000× slower!
This is why page replacement matters!
Demand Paging
Concept: Load pages only when needed.
Program starts:
- Don't load entire program!
- Mark all pages as invalid
Execute:
- Try to run → PAGE FAULT → Load page
- Access new page → PAGE FAULT → Load it
- Access loaded page → No fault ✅
Eventually:
- Frequently-used pages in RAM ("working set")
Thrashing (Death Spiral!)
What: System spends more time swapping than executing.
How it happens:
1. Too many processes
        ↓
2. Each needs many pages
        ↓
3. Not enough RAM for all
        ↓
4. Constant page faults
        ↓
5. CPU busy loading/evicting pages
        ↓
6. Little actual work! 😞
Detection:
CPU utilization < 20%
Disk utilization > 90%
→ THRASHING! 😱
Solutions:
- Add RAM ($$)
- Reduce processes
- Better page replacement
- Working set model
Page Replacement Algorithms
Interview favorite: "Calculate page faults for each algorithm."
Setup:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2
Frames: 3
1. FIFO (First In First Out)
Rule: Replace oldest page.
Solution:
Ref | Frames | Fault?
----|-------------|-------
7 | 7 _ _ | Yes (1)
0 | 7 0 _ | Yes (2)
1 | 7 0 1 | Yes (3)
2 | 2 0 1 | Yes (4) [7 out]
0 | 2 0 1 | No
3 | 2 3 1 | Yes (5) [0 out]
0 | 2 3 0 | Yes (6) [1 out]
4 | 4 3 0 | Yes (7) [2 out]
2 | 4 2 0 | Yes (8) [3 out]
3 | 4 2 3 | Yes (9) [0 out]
0 | 0 2 3 | Yes (10) [4 out]
3 | 0 2 3 | No
2 | 0 2 3 | No
Total: 10 page faults
Belady's Anomaly: More frames can cause MORE faults!
Pros: Simple
Cons: Doesn't consider usage
2. Optimal (Theoretical)
Rule: Replace page not used for longest in FUTURE.
Solution:
Ref | Frames  | Fault?  | Reasoning
----|---------|---------|----------
 7  | 7 _ _   | Yes (1) |
 0  | 7 0 _   | Yes (2) |
 1  | 7 0 1   | Yes (3) |
 2  | 2 0 1   | Yes (4) | 7 never used again
 0  | 2 0 1   | No      |
 3  | 2 0 3   | Yes (5) | 1 never used again
 0  | 2 0 3   | No      |
 4  | 2 4 3   | Yes (6) | 0's next use is farthest (pos 10)
 2  | 2 4 3   | No      |
 3  | 2 4 3   | No      |
 0  | 2 0 3   | Yes (7) | 4 never used again
 3  | 2 0 3   | No      |
 2  | 2 0 3   | No      |
Total: 7 (minimum possible!)
Note: Can't implement (can't predict future)! Used as benchmark.
3. LRU (Least Recently Used) ⭐
Rule: Replace page not used for longest in PAST.
Solution:
Ref | Frames  | Fault?  | Reasoning
----|---------|---------|----------
 7  | 7 _ _   | Yes (1) |
 0  | 7 0 _   | Yes (2) |
 1  | 7 0 1   | Yes (3) |
 2  | 2 0 1   | Yes (4) | 7 least recently used
 0  | 2 0 1   | No      |
 3  | 2 0 3   | Yes (5) | 1 least recently used
 0  | 2 0 3   | No      |
 4  | 4 0 3   | Yes (6) | 2 least recently used
 2  | 4 0 2   | Yes (7) | 3 least recently used
 3  | 4 3 2   | Yes (8) | 0 least recently used
 0  | 0 3 2   | Yes (9) | 4 least recently used
 3  | 0 3 2   | No      |
 2  | 0 3 2   | No      |
Total: 9 (only 2 more than Optimal's 7!)
Implementation:
Method 1: Counter (simple but expensive)
struct Page {
int counter; // Increment on each access
};
// To find LRU: Find smallest counter
Method 2: Stack (efficient!)
// Stack of page numbers
// Top = most recently used
// Bottom = least recently used
Stack: [3, 0, 2, 4]
On access to 0:
1. Remove 0
2. Push 0 to top
Stack: [0, 3, 2, 4]
Pros: Good performance (approximates Optimal)
Cons: Expensive to implement
Comparison
| Algorithm | Faults | Complexity | Belady's Anomaly |
|---|---|---|---|
| FIFO | 10 | Simple | Yes 😱 |
| Optimal | 7 (best) | Impossible | No |
| LRU | 9 | Expensive | No |
Real systems: Use Clock algorithm (cheap LRU approximation).
TLB (Translation Lookaside Buffer)
Problem: Every memory access needs TWO accesses!
- Page table (get frame)
- Actual memory
Solution: TLB - cache for page table!
Structure:
TLB (tiny, fast):
┌──────┬───────┬──────┐
│ Page │ Frame │ Prot │
├──────┼───────┼──────┤
│  2   │   7   │ RW   │
│  5   │   3   │ R    │
│  8   │   1   │ RWX  │
└──────┴───────┴──────┘
Size: 64-1024 entries
Speed: 1 cycle

Page Table (large, in RAM):
┌──────┬───────┐
│ Page │ Frame │
├──────┼───────┤
│  0   │   5   │
│  1   │   2   │
│  2   │   7   │ ← Same mapping
│ ...  │  ...  │
└──────┴───────┘
Size: 1M+ entries
Speed: 100 cycles
Access with TLB:
1. CPU: logical address (page 2, offset 100)
        ↓
2. Check TLB for page 2

TLB Hit ✅
├─ Get frame 7 (1 cycle)
├─ Physical address = (7, 100)
└─ Access memory (100 cycles)
   Total: 101 cycles

TLB Miss ❌
├─ Access page table (100 cycles)
├─ Get frame number
├─ Update TLB
└─ Access memory (100 cycles)
   Total: 201 cycles (≈2× slower)
Performance:
TLB hit rate = 90%
Memory access = 100ns
TLB lookup = 1ns
Effective time:
= 0.9 × 101ns + 0.1 × 201ns
= 90.9ns + 20.1ns
= 111ns
Without TLB: 200ns always (page table + memory)
With TLB: 111ns (44% faster!)
Why effective: Locality! Programs access same pages repeatedly. Small TLB covers working set.
💻 PART 4: Linux Commands
These ALWAYS come in interviews!
File Permissions
Understanding Permissions:
$ ls -l file.txt
-rw-r--r-- 1 john devs 1024 Jan 1 12:00 file.txt
- rw- r-- r--
│  │   │   │
│  │   │   └── Others: read only
│  │   └────── Group: read only
│  └────────── Owner: read + write
└───────────── File type (- = file, d = directory)
rwx = read, write, execute
r = 4, w = 2, x = 1
chmod (Change Mode):
Numeric:
# 755 = rwxr-xr-x
# Owner: 7 = 4+2+1 = rwx
# Group: 5 = 4+0+1 = r-x
# Others: 5 = 4+0+1 = r-x
chmod 755 script.sh
# 777 = rwxrwxrwx (DANGEROUS!)
# Everyone can do everything
chmod 777 file.txt # ⚠️ Never in production!
# 644 = rw-r--r-- (common for files)
chmod 644 document.txt
# 600 = rw------- (private)
chmod 600 ~/.ssh/id_rsa # SSH key
Symbolic:
# Add execute for owner
chmod u+x script.sh
# Remove write from group
chmod g-w file.txt
# Add read for everyone
chmod a+r doc.txt
# Set exact
chmod u=rwx,g=rx,o=r file.txt
# u=user(owner), g=group, o=others, a=all
# +=add, -=remove, ==set
Interview Q: "777 vs 755?"
Answer:
"755 (rwxr-xr-x):
- Owner: Full control
- Group/Others: Read and execute only
- Safe for scripts/directories
777 (rwxrwxrwx):
- Everyone: Full control
- Security risk! Anyone can modify/delete
- Never use in production!"
chown (Change Owner):
# Change owner
chown john file.txt
# Change owner and group
chown john:devs file.txt
# Change only group
chown :devs file.txt
# Or
chgrp devs file.txt
# Recursive (all files in directory)
chown -R john:devs /var/www/site
Process Monitoring
ps (Process Status):
# All processes
ps aux
# My processes
ps -u $USER
# Process tree
ps auxf
# Specific process
ps aux | grep chrome
# Full command
ps auxww
# By PID
ps -p 1234
# Output columns:
# USER: Owner
# PID: Process ID
# %CPU: CPU usage
# %MEM: Memory usage
# VSZ: Virtual memory (KB)
# RSS: Physical RAM (KB)
# STAT: State (R=running, S=sleeping, Z=zombie)
# START: Start time
# TIME: CPU time used
# COMMAND: Command
top (Real-time):
$ top
# Interactive commands:
# P - Sort by CPU
# M - Sort by memory
# k - Kill process (enter PID)
# h - Help
# q - Quit
# One-time snapshot
top -n 1
# Specific user
top -u john
htop (Better top):
$ htop
# Features:
# - Colored output
# - Mouse support
# - Easy navigation
# - F9 to kill
# - F6 to sort
# Install if not present:
sudo apt install htop # Ubuntu/Debian
sudo yum install htop # CentOS/RHEL
kill (Send Signals):
# SIGTERM (polite - allows cleanup)
kill 1234
# or
kill -15 1234
kill -SIGTERM 1234
# SIGKILL (forceful - immediate)
kill -9 1234
kill -SIGKILL 1234
# SIGHUP (reload config)
kill -HUP 1234
# SIGSTOP (pause)
kill -STOP 1234
# SIGCONT (resume)
kill -CONT 1234
Interview Q: "SIGTERM vs SIGKILL?"
Answer:
"SIGTERM (kill or kill -15):
- Polite request
- Process can catch and cleanup
- Saves files, closes connections
- Use FIRST
SIGKILL (kill -9):
- Forceful termination
- Process CAN'T catch
- No cleanup possible
- Use if SIGTERM fails
- Can cause data corruption"
Proper way:
# 1. Try SIGTERM
kill 1234
sleep 5
# 2. Check if alive
ps -p 1234
# 3. If still alive, SIGKILL
kill -9 1234
Networking Commands
netstat (Network Statistics):
# All connections
netstat -a
# Listening ports
netstat -l
# TCP only
netstat -t
# UDP only
netstat -u
# Show process names
netstat -p
# Numerical (don't resolve names)
netstat -n
# Common combination
netstat -tulpn
# t=TCP, u=UDP, l=listening, p=program, n=numerical
$ netstat -tulpn
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  1234/sshd
tcp        0      0 127.0.0.1:3306   0.0.0.0:*        LISTEN  5678/mysqld
lsof (List Open Files):
# All open files
lsof
# By user
lsof -u john
# By process
lsof -p 1234
# Specific file
lsof /var/log/syslog
# Network connections
lsof -i
# Specific port
lsof -i :8080
# TCP connections
lsof -i tcp
# Kill all using file
lsof -t /path/file | xargs kill
# Example: Port in use?
$ lsof -i :3000
COMMAND  PID USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
node    1234 john  20u IPv4  12345      0t0  TCP *:3000 (LISTEN)
# Kill it:
kill 1234
ping (Test Connectivity):
# Ping host
ping google.com
# 5 packets then stop
ping -c 5 8.8.8.8
# Interval (seconds)
ping -i 2 google.com
# Timeout (seconds)
ping -W 5 google.com
# Flood (root only, testing)
ping -f localhost
curl (HTTP Requests):
# Simple GET
curl https://api.example.com/users
# GET with headers
curl -H "Authorization: Bearer token" \
https://api.example.com/users
# POST JSON
curl -X POST \
-H "Content-Type: application/json" \
-d '{"name":"John","email":"john@example.com"}' \
https://api.example.com/users
# Show response headers
curl -i https://example.com
# Verbose (debug)
curl -v https://example.com
# Follow redirects
curl -L https://bit.ly/short
# Save to file
curl -o file.html https://example.com
# Download
curl -O https://example.com/file.zip
# Upload
curl -F "file=@document.pdf" \
https://example.com/upload
traceroute (Network Path):
# Trace route
traceroute google.com
# Output shows each hop:
1 192.168.1.1 1.2 ms # Router
2 10.0.0.1 5.6 ms # ISP
3 72.14.215.1 12.3 ms # ISP backbone
...
15 142.250.185.46 45.6 ms # google.com
# Helps debug:
# - Where is delay?
# - Where does connection fail?
Text Processing
grep (Search):
# Search in file
grep "error" /var/log/syslog
# Case-insensitive
grep -i "ERROR" log.txt
# Line numbers
grep -n "error" file.txt
# Recursive
grep -r "function" /path/to/code
# Invert (lines NOT matching)
grep -v "debug" log.txt
# Count matches
grep -c "error" log.txt
# Context (2 before, 3 after)
grep -B 2 -A 3 "error" log.txt
# Regex
grep -E "error|warning" log.txt
# Multiple files
grep "TODO" *.js
# Show only filename
grep -l "error" *.log
# Show only match
grep -o "error.*" file.txt
awk (Column Processing):
# Print columns
awk '{print $1, $3}' file.txt
# ps output (user and PID)
ps aux | awk '{print $1, $2}'
# Sum column
awk '{sum += $3} END {print sum}' file.txt
# Conditional
awk '$3 > 50 {print $1, $3}' file.txt
# Custom delimiter
awk -F: '{print $1, $3}' /etc/passwd
# Last column
awk '{print $NF}' file.txt
# Count lines
awk 'END {print NR}' file.txt
# Real example: High CPU processes
ps aux | awk '$3 > 5.0 {print $1, $2, $3}'
sed (Stream Editor):
# Replace first occurrence
sed 's/old/new/' file.txt
# Replace all (global)
sed 's/old/new/g' file.txt
# In-place edit
sed -i 's/old/new/g' file.txt
# Delete lines matching
sed '/pattern/d' file.txt
# Delete lines 5-10
sed '5,10d' file.txt
# Print only matches
sed -n '/pattern/p' file.txt
# Replace on matching lines
sed '/pattern/s/old/new/g' file.txt
# Multiple replacements
sed -e 's/old1/new1/g' -e 's/old2/new2/g' file.txt
# Real example: Change config
sed -i 's/localhost/0.0.0.0/g' config.txt
Disk Management
df (Disk Free):
# Show disk usage
df
# Human-readable
df -h
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 35G 15G 70% /
/dev/sda2 100G 80G 20G 80% /home
/dev/sdb1 500G 450G 50G 90% /data
# Specific type
df -h -t ext4
# Inodes
df -i
du (Disk Usage):
# Directory size
du /var/log
# Human-readable
du -h /var/log
# Summary (total only)
du -sh /var/log
# Depth 1 (immediate subdirs)
du -h --max-depth=1 /var
# Sort by size
du -h /var | sort -h
# Top 10 largest
du -h / 2>/dev/null | sort -h | tail -10
# Real use: Find space hogs
du -sh /* | sort -h | tail -5
PART 5: Interview Preparation
Common Questions
Q1: "Process vs Thread?"
My answer:
"Process is program in execution with own memory:
- Code, Data, Heap, Stack
- Isolated (can't access other process memory)
- Creation slow (~10ms)
Thread is lightweight process:
- Shares code, data, heap, files
- Own stack, PC, registers
- Creation fast (~1ms)
- Can communicate via shared memory
Use processes: Isolation needed (browser tabs)
Use threads: Speed + shared data (web server)"
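You can actually see this distinction from the shell (a quick sketch; the NLWP/LWP columns assume procps `ps` on Linux):

```shell
# Every process has at least one thread; NLWP is the thread count.
ps -o pid,nlwp,comm -p $$        # the current shell: normally 1 thread

# List the individual threads (LWPs) of a process, e.g. PID 1:
ps -L -o pid,lwp,comm -p 1
```

A multithreaded program (a browser, a JVM) will show NLWP well above 1, while each browser tab that runs as a separate process gets its own PID.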
Q2: "Calculate average waiting time - Round Robin (quantum=2)"
Careful: a reference string like 7,0,1,2,0,3,... with 3 frames is a page replacement problem, not Round Robin! For RR you get burst times, e.g.:
P1=4, P2=3, P3=2 (all arrive at 0), quantum=2
Gantt: P1(0-2) P2(2-4) P3(4-6) P1(6-8) P2(8-9)
Waiting time = Turnaround - Burst: P1 = 8-4 = 4, P2 = 9-3 = 6, P3 = 6-2 = 4
Average waiting time = (4+6+4)/3 ≈ 4.67
[Draw the Gantt chart on paper and show your work step-by-step]
Q3: "What causes deadlock?"
My answer:
"ALL FOUR conditions must exist:
- Mutual Exclusion: Resource can't be shared
- Hold and Wait: Holding one, waiting for another
- No Preemption: Can't forcibly take
- Circular Wait: P1 → P2 → P3 → P1
Prevent: Break circular wait with lock ordering.
Example: Always acquire locks in ascending order (Lock A before Lock B). No circular dependency possible."
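The lock-ordering idea can be sketched in plain shell, using `mkdir` as an atomic lock primitive (the lock file names and `worker` helper are my own; a demo, not production code):

```shell
#!/bin/sh
# Lock-ordering demo: mkdir either creates the directory (lock acquired)
# or fails (someone holds it), atomically.
lock()   { while ! mkdir "/tmp/demo_$1.lock" 2>/dev/null; do sleep 1; done; }
unlock() { rmdir "/tmp/demo_$1.lock"; }

worker() {
    lock A                      # always acquire A first...
    lock B                      # ...then B, never the reverse
    echo "worker $1 in critical section"
    unlock B
    unlock A
}

rmdir /tmp/demo_A.lock /tmp/demo_B.lock 2>/dev/null || true  # clear stale locks

worker 1 &
worker 2 &
wait
```

Because both workers follow the same global order (A before B), neither can end up holding B while waiting for A, so the circular-wait condition can never form.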
Q4: "Explain page fault"
My answer:
"When CPU accesses page not in RAM:
- MMU checks page table
- Valid bit = 0 → PAGE FAULT
- OS loads page from disk to RAM
- Updates page table
- Restarts instruction
Cost: Disk access is ~100,000× slower than RAM!
Why matters: High page fault rate = thrashing (system spends more time swapping than executing)."
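On Linux you can watch the fault counters yourself (a quick sketch; the `/proc` interface and these `ps` columns are Linux-specific):

```shell
# Per-process counts: minor faults (page in RAM, just not mapped)
# vs major faults (page had to be read from disk)
ps -o pid,min_flt,maj_flt,comm -p $$

# System-wide counters since boot
grep -E '^(pgfault|pgmajfault) ' /proc/vmstat
```

A healthy system shows major faults growing slowly; if `pgmajfault` climbs fast, pages are constantly being pulled from disk.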
Q5: "Mutex vs Semaphore?"
My answer:
"Mutex: Binary lock (0/1)
- Protects critical section
- Locker must unlock
- Use for: Shared variable protection
Binary Semaphore: like a mutex (0/1), but with no ownership - any thread can signal it
- Use for: Event notification
Counting Semaphore: Counter (0 to N)
- Tracks resources
- Use for: Connection pools, limiting access
Example: Database with 5 connections:
sem_init(&pool, 0, 5); // 5 available
sem_wait(&pool); // Get connection
// use connection
sem_post(&pool); // Release"
Q6: "How do you kill a zombie process?"
My answer:
"Zombie = finished process waiting for its parent to read its exit status.
You can't kill a zombie (it's already dead!)
Solutions:
- Kill the parent: parent dies → init adopts the zombie → init reaps it
- Fix the code: parent calls wait() to read the status
- Signal handler: register a SIGCHLD handler that calls wait()
Why it exists: the OS keeps the exit status for the parent. If the parent never calls wait(), the zombie stays."
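You can create a harmless, short-lived zombie and watch it yourself (Linux, assuming procps `ps`; the timings are my own):

```shell
# The inner 'sleep 1' exits after a second, but its parent was replaced
# by 'exec sleep 5' and never calls wait() -- so it becomes a zombie
# until the parent itself exits and init reaps it.
sh -c 'sleep 1 & exec sleep 5' &
parent=$!

sleep 2
ps -o pid,stat,comm --ppid "$parent"   # STAT column shows Z (defunct)

kill "$parent"   # killing the parent lets init adopt and reap the zombie
```

Try `kill -9` on the zombie's PID first: nothing happens, because there is no running process left to signal.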
Q7: "Explain thrashing"
My answer:
"System spends more time swapping pages than executing.
Cause:
- Too many processes
- Each needs many pages
- Not enough RAM
- Constant page faults
Detection:
- CPU utilization < 20%
- Disk utilization > 90%
Solution:
- Add RAM
- Reduce multiprogramming
- Better page replacement (LRU)"
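To check whether a box is actually swapping, you can read the kernel's own counters (a Linux sketch; `vmstat` ships with procps and may not be installed everywhere):

```shell
# Pages swapped in/out since boot; rapidly growing values suggest thrashing
grep -E '^(pswpin|pswpout) ' /proc/vmstat

# Live view if vmstat is available: watch the si/so columns
# (pages swapped in/out per second)
command -v vmstat >/dev/null && vmstat 1 3 || true
```

On a healthy machine `si`/`so` sit at or near zero; sustained nonzero values while CPU sits idle is the classic thrashing signature.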
Q8: "chmod 777 vs 755?"
My answer:
"755 (rwxr-xr-x):
- Owner: Full control
- Group: Read, execute (no write)
- Others: Read, execute
- Safe for scripts
777 (rwxrwxrwx):
- Everyone: Full control
- Dangerous! Anyone can modify
- Never use in production
- Security risk
Real example: Script should be 755, SSH key should be 600 (owner only)."
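A quick way to verify permission bits on a real file, using `stat` from GNU coreutils:

```shell
# Create a scratch file, set permissions, and read them back
f=$(mktemp)

chmod 755 "$f"
stat -c '%a %A' "$f"    # prints: 755 -rwxr-xr-x

chmod 600 "$f"
stat -c '%a %A' "$f"    # prints: 600 -rw-------

rm -f "$f"
```

`%a` gives the octal mode and `%A` the rwx string, so you can check both notations side by side.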
Study Tips
Week 1: Process Management
- Day 1-2: Process vs Thread, PCB
- Day 3-4: Scheduling (practice calculations!)
- Day 5: Zombie/Orphan, context switch
- Day 6: Practice problems
- Day 7: Review
Week 2: Concurrency
- Day 1-2: Mutex, Semaphore, code examples
- Day 3-4: Deadlocks (4 conditions, Banker's)
- Day 5-6: Producer-Consumer, Dining Philosophers
- Day 7: Code + review
Week 3: Memory
- Day 1-2: Paging, Segmentation
- Day 3-4: Virtual memory, page faults
- Day 5: Page replacement (practice!)
- Day 6: TLB
- Day 7: Review + practice
Week 4: Linux + Practice
- Day 1-3: Linux commands (hands-on!)
- Day 4-5: Mock interviews
- Day 6-7: Weak areas
Daily Routine:
- Morning: Theory (2 hours)
- Afternoon: Practice (2 hours)
- Evening: Code examples (1 hour)
Resources:
- Operating System Concepts (Dinosaur book)
- GeeksforGeeks OS section
- YouTube: Neso Academy
- Practice: Previous year interview questions
Final Words
Hey, you made it to the end!
Remember:
- I'm learning WITH you (not teaching FROM above)
- We all start confused
- Practice makes perfect
- Don't just memorize - UNDERSTAND
Next steps:
- Bookmark this guide
- Follow the 4-week plan
- Practice on paper (interviews = whiteboard!)
- Code the examples yourself
- Join study groups
Questions? Comment below! I check daily.
Found this helpful?
- ❤️ Like this post
- Share your progress
- Share with friends preparing
- Bookmark for revision
My journey: Still preparing, still learning, still making mistakes. But getting better every day!
You got this! See you in the placement season!
Quick Reference Card
Save this for last-minute revision:
Process vs Thread:
- Process: Separate memory, slow
- Thread: Shared memory, fast
Scheduling:
- FCFS: Simple, convoy effect
- SJF: Optimal WT, starvation
- RR: Fair, quantum matters
Deadlock Conditions: MH-NP-C
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Memory:
- Paging: Fixed 4KB
- Segmentation: Variable, logical
- Page Fault: ~100,000× slower
- TLB: Cache for page table
Linux:
- chmod 755: rwxr-xr-x
- kill -15: Polite (SIGTERM)
- kill -9: Forceful (SIGKILL)
- grep: Search
- awk: Columns
- sed: Replace
Common Faults:
- Race condition: Use mutex
- Deadlock: Lock ordering
- Thrashing: Too many processes
- Zombie: Kill parent or wait()
Made with ❤️ by a 3rd-year student learning alongside you
Final year approaching • Still preparing • Always learning • Never giving up
P.P.S. - Follow me for more placement prep content! Next up: DBMS!
P.P.P.S. - Good luck with placements! We're all in this together!