Unit 8 of 10 · 10 min read

Operating Systems

Processes, threads, IPC, concurrency, deadlock, scheduling, memory management, file systems.


Why this unit matters

Operating Systems is one of the highest-weighted units in GATE CS, reliably contributing 7-10 marks per paper. The questions span pure theory (deadlock conditions, scheduling algorithms, page-replacement policies) and numerical problems (turnaround time, page faults, disk seek time). Missing this unit is simply not an option if you are targeting a rank in the top 500. The good news is that the question types are well-defined and highly predictable.

Syllabus map

  • System calls, processes, threads, IPC (pipes, shared memory, message queues, signals)
  • Concurrency and synchronisation (mutex, semaphores, monitors, classic problems)
  • Deadlock (conditions, prevention, avoidance, Banker's algorithm, detection, recovery)
  • CPU scheduling (FCFS, SJF, SRTF, Round Robin, Priority, Multilevel queues)
  • I/O scheduling (FCFS, SSTF, SCAN, C-SCAN)
  • Memory management (contiguous allocation, segmentation, paging)
  • Virtual memory (demand paging, page replacement, OPT, FIFO, LRU, LFU, Clock)
  • File systems (directory structure, file allocation, contiguous, linked, indexed, inode)

Processes and threads

A process is a program in execution. It has its own address space (code, data, heap, stack) and OS resources (file descriptors, etc.).

A thread is a lightweight unit of execution within a process. Threads of the same process share code, data, and heap, but each has its own stack and registers.

// Rough sketch of fork() semantics
pid_t pid = fork();
if (pid == 0) {
    // child process: new address space copied from parent
    exec(...);
} else {
    // parent process: pid holds child's PID
    wait(NULL);
}

IPC mechanisms:

  • Pipes: unidirectional byte stream, parent-child only.
  • Named pipes (FIFOs): work between unrelated processes.
  • Shared memory: fastest IPC; needs explicit synchronisation.
  • Message queues: structured messages with priority.
  • Signals: asynchronous notification.

GATE trap: After fork(), both parent and child continue from the next instruction. The child does NOT restart from main.


Concurrency and synchronisation

Semaphores

A semaphore S is an integer with two atomic operations:

  • wait(S) (P): if S > 0, decrement; else block.
  • signal(S) (V): if threads are blocked, wake one; else increment.

Binary semaphore (mutex): S in {0, 1}. Counting semaphore: S can be any non-negative integer.

Classic synchronisation problems

Producer-Consumer (Bounded Buffer):

semaphore mutex = 1, empty = N, full = 0;

// Producer
wait(empty);
wait(mutex);
  // add item to buffer
signal(mutex);
signal(full);

// Consumer
wait(full);
wait(mutex);
  // remove item from buffer
signal(mutex);
signal(empty);

Readers-Writers: Multiple readers can read simultaneously; a writer needs exclusive access. First readers-writers problem favours readers (writers can starve).

Dining Philosophers: 5 philosophers, 5 forks. Deadlock if all pick up left fork simultaneously. Solutions: allow at most 4 to sit, or use an asymmetric rule (one picks right fork first).

GATE trap: The order of wait() calls matters. Swapping mutex and empty/full in the producer leads to deadlock.


Deadlock

Four necessary conditions (Coffman)

  1. Mutual exclusion: at least one resource is non-sharable.
  2. Hold and wait: a process holding resources waits for more.
  3. No preemption: resources cannot be forcibly taken away.
  4. Circular wait: a cycle of processes each waiting for a resource held by the next.

All four must hold simultaneously for deadlock to occur.

Banker's algorithm (deadlock avoidance)

Given: Allocation[i][j], Max[i][j], Available[j]. Need[i][j] = Max[i][j] - Allocation[i][j].

Safety algorithm: find a process i where Need[i] <= Work (initialised to Available). Add Allocation[i] to Work, mark i as finished. Repeat. If all processes finish, the state is safe.

Resource request: grant the request only if the resulting state is safe.

GATE trap: The Banker's algorithm assumes each process declares its maximum need in advance. It avoids deadlock but does not prevent starvation.


CPU scheduling

Key metrics:

  • Turnaround time (TAT) = Completion time - Arrival time.
  • Waiting time (WT) = TAT - Burst time.
  • Response time = First CPU time - Arrival time.
Algorithm    Preemptive?  Key property                     Starvation?
FCFS         No           Convoy effect                    No
SJF          No           Minimum average WT               Yes (long processes)
SRTF         Yes          Minimum average WT (preemptive)  Yes
Round Robin  Yes          Fair time sharing                No
Priority     Both         Priority-critical systems        Yes (low priority)

Round Robin example: Time quantum q = 2. Processes: P1 (burst 5), P2 (burst 3), P3 (burst 1), all arrive at t=0.

Gantt chart: P1(0-2), P2(2-4), P3(4-5), P1(5-7), P2(7-8), P1(8-9). TAT: P1=9, P2=8, P3=5. Average TAT = 22/3 ≈ 7.33.

GATE trap: In SRTF, preemption happens when a new process arrives with a shorter remaining burst, not at every unit of time automatically.


Memory management

Paging

Physical memory divided into frames; logical memory into pages of equal size. A page table maps page number to frame number.

Effective access time (EAT) with TLB: EAT = hit_ratio * (TLB_time + memory_time) + (1 - hit_ratio) * (TLB_time + 2 * memory_time)

The "+2" is because a TLB miss requires one extra memory access for the page table.

Multi-level paging: a 32-bit address space with 4KB pages uses a 20-bit page number and 12-bit offset. With two levels, the page number is split 10+10; with 4-byte entries, each 2^10-entry page table occupies 4 KB and so fits in one frame.

Segmentation

Logical address = (segment number, offset). Segment table stores base and limit. Segmentation allows variable-size regions (code, stack, heap) but causes external fragmentation.

GATE trap: Paging eliminates external fragmentation but causes internal fragmentation (last page may not be full). Segmentation is the reverse.


Virtual memory and page replacement

Demand paging: a page is loaded only when referenced (page fault occurs). If no free frame is available, a victim page is selected by the replacement policy.

Page replacement algorithms

OPT (Optimal): Replace the page that will not be used for the longest time in the future. Not implementable; used as a benchmark.

FIFO: Replace the page that has been in memory the longest. Simple but suffers from Belady's anomaly (more frames can cause more page faults).

LRU: Replace the page least recently used. Does not suffer Belady's anomaly. Implementation: counters or stack.

Clock (Second Chance): Approximate LRU using a reference bit. Pages are arranged in a circular list; the hand moves past pages with reference bit 1 (clearing the bit) until it finds a page with bit 0.

Thrashing: A process spends more time paging than executing because it does not have enough frames for its working set. Solution: working set model or page-fault frequency control.

GATE trap: Belady's anomaly affects FIFO, not LRU or OPT.


File systems

File allocation methods

Contiguous allocation: File occupies consecutive blocks. Fast sequential access; direct access by block number. Suffers external fragmentation. Requires knowing file size at creation.

Linked allocation: Each block contains a pointer to the next. No external fragmentation; supports dynamic growth. No efficient direct access; pointer overhead; reliability issue (broken link loses rest of file).

Indexed allocation (inode): A special index block holds pointers to all data blocks. Direct access; no external fragmentation. For large files, use multi-level indexing or extents.

Unix inode: 12 direct pointers, 1 single indirect, 1 double indirect, 1 triple indirect. If block size = 4KB and pointer size = 4 bytes, each indirect block holds 4096/4 = 1024 pointers.

Maximum file size with inode: 12 + 1024 + 1024^2 + 1024^3 blocks.

GATE trap: Contiguous allocation is best for sequential access speed. Indexed is best for random access. Linked is worst for direct access.


Worked examples

Example 1. Banker's algorithm safety check.

5 processes, 3 resource types. Available = [3, 3, 2].

Process Allocation Max Need
P0 [0,1,0] [7,5,3] [7,4,3]
P1 [2,0,0] [3,2,2] [1,2,2]
P2 [3,0,2] [9,0,2] [6,0,0]
P3 [2,1,1] [2,2,2] [0,1,1]
P4 [0,0,2] [4,3,3] [4,3,1]

Work = [3,3,2]. P1 needs [1,2,2] <= [3,3,2]. Grant; Work = [5,3,2]. P3 needs [0,1,1] <= [5,3,2]. Grant; Work = [7,4,3]. P0 needs [7,4,3] <= [7,4,3]. Grant; Work = [7,5,3]. P2 needs [6,0,0] <= [7,5,3]. Grant; Work = [10,5,5]. P4 needs [4,3,1] <= [10,5,5]. Grant. Safe sequence: P1, P3, P0, P2, P4. State is safe.

Example 2. Page replacement with LRU.

Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1. Frames = 3.

Simulate step by step (LRU evicts the resident page whose last use lies furthest in the past): 7: [7] fault. 0: [7,0] fault. 1: [7,0,1] fault. 2: [0,1,2] fault (evict 7). 0: hit. 3: [0,2,3] fault (evict 1). 0: hit. 4: [0,3,4] fault (evict 2). 2: [0,4,2] fault (evict 3). 3: [4,2,3] fault (evict 0). 0: [2,3,0] fault (evict 4). 3: hit. 2: hit. 1: [2,3,1] fault (evict 0). 2: hit. 0: [2,1,0] fault (evict 3). 1: hit. 7: [1,0,7] fault (evict 2). 0: hit. 1: hit.

Total page faults = 12.

Example 3. CPU scheduling with SJF (non-preemptive).

Processes: P1 (arrival 0, burst 8), P2 (arrival 1, burst 4), P3 (arrival 2, burst 2), P4 (arrival 3, burst 1).

At t=0: only P1 available; start P1. Runs until t=8. At t=8: P2, P3, P4 all available. Shortest burst is P4 (1). Run P4 (8-9). Then P3 (9-11). Then P2 (11-15). TAT: P1=8, P4=9-3=6, P3=11-2=9, P2=15-1=14. Average TAT = (8+6+9+14)/4 = 37/4 = 9.25.

Example 4. Effective access time with TLB.

TLB access time = 20 ns, memory access time = 100 ns, TLB hit ratio = 90%.

EAT = 0.9 * (20 + 100) + 0.1 * (20 + 200) = 0.9 * 120 + 0.1 * 220 = 108 + 22 = 130 ns.

Without TLB (two memory accesses): 200 ns. TLB saves 70 ns on average.

Example 5. Unix inode max file size.

Block size = 1 KB = 1024 bytes. Pointer size = 4 bytes. Pointers per block = 1024/4 = 256. 12 direct: 12 * 1KB = 12 KB. Single indirect: 256 * 1KB = 256 KB. Double indirect: 256^2 * 1KB = 64 MB. Triple indirect: 256^3 * 1KB = 16 GB. Total ~= 16 GB + 64 MB + 256 KB + 12 KB.


Quick-revision summary

  • Deadlock: needs all four Coffman conditions simultaneously.
  • Banker's algorithm: safety requires a safe sequence; check Need <= Work for each candidate.
  • SRTF gives minimum average WT; FCFS suffers the convoy effect.
  • TLB miss = 2 memory accesses (page table + data); TLB hit = TLB lookup + 1 memory access.
  • Belady's anomaly: FIFO only. OPT and LRU are stack algorithms (no anomaly).
  • Paging: internal fragmentation. Segmentation: external fragmentation.
  • Inode: 12 direct, 1 single, 1 double, 1 triple indirect pointer.

How to study this unit

  1. Solve at least 20 scheduling Gantt-chart problems: FCFS, SJF, SRTF, RR. Time yourself; you need to be fast.
  2. Run the Banker's algorithm on 5 different examples until it is mechanical.
  3. For page replacement, simulate OPT, FIFO, and LRU on the same reference string side by side. Compare fault counts.
  4. Draw the page table / TLB lookup steps for multi-level paging until address translation is completely fluent.
  5. Memorise inode structure and file allocation trade-offs with a simple comparison table.
  6. For synchronisation, always draw the semaphore value timeline when debugging a solution; it catches errors that logic alone misses.
