Unit 6 of 10 · 12 min read

Operating Systems

Processes, threads, synchronization, deadlock, scheduling, memory management, file systems.


Why this unit matters

Operating Systems is the most mark-dense unit in ISRO CS papers. Expect 10-14 marks, and expect at least half of those to be numerical: average waiting time for a scheduling algorithm on a given process set, number of page faults for a specific reference string with a given algorithm, or Banker's algorithm safety sequence computation. The rest are concept questions about synchronisation (semaphore values, deadlock conditions) and file systems.

The unit connects to Computer Architecture (TLB, paging, virtual memory) and Databases (transactions, concurrency control, which share conceptual overlap with OS synchronisation).

Syllabus map

  • Processes and threads

    • Process: PCB contents, process states (new, ready, running, waiting, terminated)
    • Context switch overhead
    • Threads: user-level vs kernel-level, hybrid (two-level) model
    • Inter-process communication (IPC): pipes (named, unnamed), message queues, shared memory, signals, sockets
  • Process synchronisation

    • Critical section problem: mutual exclusion, progress, bounded waiting
    • Peterson's solution (software), hardware solutions (TestAndSet, Swap)
    • Semaphores: counting, binary; P (wait) and V (signal) operations
    • Monitors: condition variables, wait and signal
    • Classical problems: producer-consumer, readers-writers, dining philosophers
  • Deadlock

    • Necessary conditions: mutual exclusion, hold and wait, no preemption, circular wait
    • Prevention: negate one of the four conditions
    • Avoidance: Banker's algorithm (safe state, safe sequence)
    • Detection: resource allocation graph, cycle detection
    • Recovery: process termination, resource preemption
  • CPU scheduling

    • FCFS, SJF (non-preemptive), SRTF (preemptive SJF), Round Robin, priority (preemptive and non-preemptive), multilevel queue, multilevel feedback queue
    • Metrics: waiting time, turnaround time, response time, throughput, CPU utilisation
  • Memory management

    • Contiguous allocation: fixed and variable partitions, fragmentation (internal, external), compaction
    • Paging: page table, page size, physical/logical address space
    • Segmentation: segment table, protection, sharing
    • Virtual memory: demand paging, page fault handling
    • Page replacement: FIFO, LRU, OPT (Belady's optimal), Clock (second chance)
    • Belady's anomaly: FIFO can have more faults with more frames
    • Thrashing: working set model, page fault frequency
  • File systems

    • File allocation: contiguous, linked, indexed (single, double, triple indirect)
    • Directory structures: single-level, two-level, tree-structured, acyclic graph
    • FAT (File Allocation Table)
    • Unix inode structure
    • Disk scheduling: FCFS, SSTF, SCAN (elevator), C-SCAN, LOOK, C-LOOK

Processes and threads

Process control block (PCB)

The PCB holds: process ID, process state, program counter, CPU registers, CPU scheduling information (priority, queue pointers), memory management information (page table base, segment table), I/O status, accounting information.

Context switch: save PC and registers of current process into its PCB, load PC and registers from PCB of next process. Context switch time is pure overhead.

Thread models

User-level threads: managed by a user-space library. The kernel sees one process. All threads in a process block if the kernel blocks the process (on a blocking system call). Fast context switch (no kernel involvement).

Kernel-level threads: managed by the kernel. Each thread is a schedulable entity. One thread blocking does not block others (kernel can schedule another thread of the same process). Slower context switch.

Hybrid (two-level) model: multiple user threads multiplexed on multiple kernel threads. Combines benefits of both.

Process synchronisation

Semaphore operations

A semaphore S has an integer value. Operations:

  • wait(S) / P(S): decrement S; if the result is negative, block the calling process.
  • signal(S) / V(S): increment S; if any process is blocked on S, wake one.

For a binary semaphore (mutex), S starts at 1. For a counting semaphore, S starts at the count of available resources.

ISRO trap: signal (V) never blocks the calling process. wait (P) blocks only if S is negative after the decrement; when S < 0, |S| equals the number of blocked processes.

Producer-consumer problem

Binary semaphore mutex = 1 (protects buffer). Counting semaphore empty = N (number of empty slots, starts at N). Counting semaphore full = 0 (number of filled slots).

Producer: wait(empty); wait(mutex); add item; signal(mutex); signal(full). Consumer: wait(full); wait(mutex); remove item; signal(mutex); signal(empty).

Common mistake: swapping the order of wait(empty) and wait(mutex) in the producer can cause deadlock.
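The producer-consumer pattern above can be sketched with Python's `threading.Semaphore`. This is a minimal single-producer, single-consumer sketch; the names `buffer` and `consumed`, and the capacity N = 4, are illustrative choices, not part of the original text.

```python
import threading

N = 4                              # buffer capacity (arbitrary for this sketch)
buffer = []
mutex = threading.Semaphore(1)     # binary semaphore: protects the buffer
empty = threading.Semaphore(N)     # counting semaphore: empty slots, starts at N
full = threading.Semaphore(0)      # counting semaphore: filled slots, starts at 0

consumed = []

def producer(items):
    for item in items:
        empty.acquire()            # wait(empty): block if the buffer is full
        mutex.acquire()            # wait(mutex): enter critical section
        buffer.append(item)        # add item
        mutex.release()            # signal(mutex)
        full.release()             # signal(full): one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()             # wait(full): block if the buffer is empty
        mutex.acquire()
        consumed.append(buffer.pop(0))   # remove item
        mutex.release()
        empty.release()            # signal(empty): one more empty slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)                    # all ten items, in FIFO order
```

Note the acquisition order: `empty` before `mutex` in the producer. Reversing it reproduces the deadlock described above, since a producer holding `mutex` while blocked on a full buffer prevents any consumer from entering its critical section.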

Dining philosophers

Five philosophers, five forks. Naive solution where each philosopher picks up left fork then right fork can deadlock.

Solutions: allow at most 4 philosophers to pick up forks simultaneously; use asymmetry (odd philosophers pick left first, even pick right first); use a monitor or a central arbiter.
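The asymmetry solution can be sketched with Python locks standing in for forks. A minimal sketch, assuming fork i is philosopher i's left fork; the meal count and the `eaten` counter are illustrative.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # fork i = left fork of philosopher i
eaten = [0] * N                                # meals eaten, per philosopher

def philosopher(i, meals):
    # Asymmetry: odd philosophers take the left fork first, even philosophers
    # take the right fork first. This breaks the circular-wait condition.
    first, second = (i, (i + 1) % N) if i % 2 == 1 else ((i + 1) % N, i)
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                eaten[i] += 1                  # "eat"

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(eaten)   # every philosopher finishes all 100 meals; no deadlock
```

The lock-ordering graph this induces (who grabs which fork first) is acyclic, which is exactly why deadlock cannot occur.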

Deadlock

Banker's algorithm

The Banker's algorithm decides whether allocating resources to a process will leave the system in a safe state.

Data structures (for n processes, m resource types):

  • Available[m]: available instances of each resource.
  • Max[n][m]: maximum demand of each process.
  • Allocation[n][m]: current allocation.
  • Need[n][m] = Max[n][m] - Allocation[n][m].

Safety algorithm: find a process i where Need[i] <= Available (element-wise). Simulate granting its request, add its Allocation to Available, mark it as done. Repeat. If all processes can be completed, the state is safe.

Resource request algorithm: when process i requests Request[i], check Request[i] <= Need[i], Request[i] <= Available, then tentatively grant, run safety algorithm. If safe, grant; else rollback.
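The safety algorithm can be sketched in a few lines. The function name `is_safe` is illustrative; it returns one safe sequence found by a greedy scan (lowest-index runnable process first), or None if the state is unsafe.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return a safe sequence of process indices, or None."""
    n, m = len(allocation), len(available)
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Simulate process i running to completion and releasing resources.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None   # no runnable process remains: unsafe state
    return sequence

# Worked example from this unit (Available = [3, 3, 2]):
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], alloc, need))
# [1, 3, 4, 0, 2], i.e. the safe sequence P1, P3, P4, P0, P2
```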

CPU scheduling

Metrics

For process i with arrival time a_i, burst time b_i, completion time c_i:

  • Turnaround time = c_i - a_i
  • Waiting time = turnaround time - burst time = c_i - a_i - b_i
  • Response time = time of first CPU access - a_i

FCFS

Simple FIFO. Non-preemptive. Can suffer from convoy effect: a long process makes short processes wait.

SJF and SRTF

SJF (non-preemptive): at each scheduling decision, choose the process with the shortest burst time. Minimises average waiting time among non-preemptive algorithms.

SRTF (preemptive SJF): when a new process arrives with a shorter remaining time than the current process, preempt. Minimises average waiting time overall (optimal for minimising average waiting time with known burst times).

Round Robin

Each process gets a time quantum Q. After Q units, if not finished, it is preempted and placed at the end of the ready queue.

Smaller Q: better response time, more context switches. Larger Q: approaches FCFS.

Average waiting time depends on the order and burst times. For ISRO numerical questions: draw a Gantt chart.
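Gantt-chart questions like these can be checked with a small simulator. A sketch under one common convention (which ISRO questions usually assume): a process arriving exactly when a quantum expires joins the ready queue ahead of the preempted process.

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    procs = sorted(procs, key=lambda p: p[1])
    arrival = {name: a for name, a, _ in procs}
    burst = {name: b for name, _, b in procs}
    remaining = dict(burst)
    queue, completion = deque(), {}
    time, i = 0, 0
    while len(completion) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:
            time = procs[i][1]                          # CPU idle: jump ahead
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during the slice
            queue.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            completion[name] = time
        else:
            queue.append(name)                          # re-queue after new arrivals
    # waiting time = completion - arrival - burst
    return {name: completion[name] - arrival[name] - burst[name]
            for name, _, _ in procs}

print(round_robin([("P1", 0, 6), ("P2", 1, 4), ("P3", 2, 2)], 2))
# {'P1': 6, 'P2': 5, 'P3': 2}: average waiting time 13/3, roughly 4.33
```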

Memory management

Paging

Logical address = page number (p) + offset (d). Physical address = frame number (f) + offset (d). The page table maps p to f.

If logical address space is 2^m and page size is 2^n, then page number is m-n bits and offset is n bits.

ISRO numerical: page table size = number of pages * entry size. If logical address is 32 bits, page size is 4 KB = 2^12, then number of pages = 2^20. Each entry is typically 4 bytes. Page table size = 2^20 * 4 = 4 MB.
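The bit-split and page table size arithmetic can be checked in a few lines, assuming the 32-bit address space and 4 KB pages from the text (the example address is arbitrary).

```python
ADDRESS_BITS = 32
PAGE_SIZE = 4096                                # 2^12 bytes
OFFSET_BITS = PAGE_SIZE.bit_length() - 1        # n = 12 offset bits
NUM_PAGES = 2 ** (ADDRESS_BITS - OFFSET_BITS)   # 2^20 pages

def split(addr):
    """Split a logical address into (page number p, offset d)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

print(split(0x12345678))             # (74565, 1656), i.e. (0x12345, 0x678)
print(NUM_PAGES * 4 // 2**20, "MB")  # page table size with 4-byte entries: 4 MB
```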

Page replacement algorithms

FIFO: replace the page that has been in memory the longest. Belady's anomaly: more frames can cause more page faults with FIFO.

LRU: replace the page least recently used. No Belady's anomaly. Cannot have more faults with more frames.

OPT (Belady's optimal): replace the page that will not be used for the longest time in the future. Not implementable online (requires future knowledge). Used as a benchmark.

Clock (second-chance): circular buffer of frames with a reference bit. When the clock hand points to a page with reference bit 1, clear the bit and advance. Replace the first page with reference bit 0.

ISRO trap: LRU approximation (Clock) is not the same as exact LRU. Exact LRU needs expensive hardware support (counter or stack update on every access).
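FIFO and LRU fault counting can be sketched with a single list: keep it ordered by insertion (FIFO) or by recency (LRU), so index 0 is always the victim. The function name is illustrative; the reference string is the one used in the worked examples below.

```python
def count_faults(refs, frames, policy):
    """Page faults for policy 'fifo' or 'lru' on a reference string."""
    memory = []                        # index 0 holds the next victim
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "lru":
                memory.remove(page)    # hit refreshes recency under LRU
                memory.append(page)
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)          # evict oldest (FIFO) / least recent (LRU)
            memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, "lru"))    # 9
print(count_faults(refs, 3, "fifo"))   # 10
```

Running the same reference string at several frame counts under FIFO is a quick way to hunt for Belady's anomaly.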

File systems

Unix inode

An inode holds file metadata (owner, permissions, timestamps, size) and block pointers:

  • 12 direct block pointers.
  • 1 single indirect pointer (points to a block of pointers).
  • 1 double indirect pointer.
  • 1 triple indirect pointer.

If block size is 4 KB and pointer size is 4 bytes, each indirect block holds 4096/4 = 1024 pointers.

Maximum file size: 12 + 1024 + 1024^2 + 1024^3 blocks. With 4 KB blocks: approximately 4 TB.

Disk scheduling

SSTF (Shortest Seek Time First): go to the nearest request. May cause starvation of far requests.

SCAN (elevator): move in one direction, serve requests, reverse at the end. C-SCAN: move in one direction, jump back to the beginning without serving requests on the return.

LOOK: same as SCAN but reverses at the last request in the current direction, not at the disk end.
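Total-seek-distance questions can be checked mechanically. A sketch of SSTF and LOOK (head initially moving toward higher cylinders), tried on a commonly used textbook request queue with the head at cylinder 53; function names are illustrative.

```python
def sstf(start, requests):
    """SSTF: repeatedly serve the nearest pending request; return total seek."""
    pending, head, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

def look_up(start, requests):
    """LOOK with the head initially moving toward higher cylinders."""
    above = [c for c in requests if c >= start]
    below = [c for c in requests if c < start]
    total = (max(above) - start) if above else 0
    if below:
        turn = max(above) if above else start   # reversal point (last request)
        total += turn - min(below)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf(53, queue))      # 236
print(look_up(53, queue))   # 299
```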

Worked examples

Example 1: Round Robin scheduling

Processes: P1 (arrival 0, burst 6), P2 (arrival 1, burst 4), P3 (arrival 2, burst 2). Time quantum = 2.

Gantt chart (convention: a process arriving at the instant a quantum expires joins the ready queue ahead of the preempted process):

0-2: P1 (remaining 4; P2 arrives at 1, P3 at 2, so the queue is P2, P3, P1)
2-4: P2 (remaining 2)
4-6: P3 (finishes at 6)
6-8: P1 (remaining 2)
8-10: P2 (finishes at 10)
10-12: P1 (finishes at 12)

Turnaround: P1 = 12-0 = 12, P2 = 10-1 = 9, P3 = 6-2 = 4. Waiting: P1 = 12-6 = 6, P2 = 9-4 = 5, P3 = 4-2 = 2. Average waiting = (6+5+2)/3 = 13/3, approximately 4.33.

Example 2: Banker's algorithm safety check

Resources: A, B, C with Available = [3, 3, 2]. Processes with Allocation and Need:

Process   Allocation (A B C)   Need (A B C)
P0        0 1 0                7 4 3
P1        2 0 0                1 2 2
P2        3 0 2                6 0 0
P3        2 1 1                0 1 1
P4        0 0 2                4 3 1

Check P1: Need[1] = [1,2,2] <= Available = [3,3,2]. Grant. Available = [3+2, 3+0, 2+0] = [5,3,2]. Mark P1 done.
Check P3: Need[3] = [0,1,1] <= [5,3,2]. Grant. Available = [5+2, 3+1, 2+1] = [7,4,3]. Mark P3 done.
Check P4: Need[4] = [4,3,1] <= [7,4,3]. Grant. Available = [7,4,5]. Mark P4 done.
Check P0: Need[0] = [7,4,3] <= [7,4,5]. Grant. Available = [7,5,5]. Mark P0 done.
Check P2: Need[2] = [6,0,0] <= [7,5,5]. Grant. Mark P2 done.

Safe sequence: P1, P3, P4, P0, P2.

Example 3: Page faults with LRU

Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2. Frames = 3.

Trace (LRU, 3 frames; F = fault, H = hit):

7(F), 0(F), 1(F): frames fill to {7, 0, 1}.
2(F): evict 7, the least recently used. Frames {0, 1, 2}.
0(H). 3(F): evict 1. Frames {2, 0, 3}. 0(H).
4(F): evict 2 (last used at step 4). Frames {0, 3, 4}.
2(F): evict 3 (last used at step 6). Frames {0, 4, 2}.
3(F): evict 0 (last used at step 7). Frames {4, 2, 3}.
0(F): evict 4 (last used at step 8). Frames {2, 3, 0}.
3(H), 2(H).

Total page faults: 9 (at steps 1-4, 6, 8, 9, 10, 11).

Example 4: Semaphore value after operations

Initial: S = 1. Operations by threads: T1: P(S), T2: P(S), T3: V(S), T4: V(S).

After T1's P(S): S = 0. After T2's P(S): S = -1 (T2 blocks). After T3's V(S): S = 0, T2 unblocked. After T4's V(S): S = 1.

Final value: 1.
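The trace can be checked mechanically under the decrement-first convention (P decrements and blocks if the result is negative; V increments and wakes one blocked process). The `blocked` counter here is an illustrative stand-in for the semaphore's wait queue.

```python
S = 1          # semaphore value
blocked = 0    # number of processes blocked on S

def P():
    global S, blocked
    S -= 1
    if S < 0:
        blocked += 1   # caller would block here

def V():
    global S, blocked
    S += 1
    if blocked:
        blocked -= 1   # one blocked caller wakes up

for op in (P, P, V, V):   # T1: P, T2: P, T3: V, T4: V
    op()
print(S, blocked)         # 1 0
```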

Example 5: Unix inode file size limit

Block size = 1 KB = 1024 bytes. Pointer size = 4 bytes. Pointers per block = 1024/4 = 256.

Direct blocks: 12 * 1 KB = 12 KB. Single indirect: 256 * 1 KB = 256 KB. Double indirect: 256 * 256 * 1 KB = 64 MB. Triple indirect: 256^3 * 1 KB = 16 GB.

Maximum file size = 12 KB + 256 KB + 64 MB + 16 GB approx 16.06 GB.
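The same arithmetic as a reusable sketch (the function name is illustrative), checked against both the 1 KB case above and the 4 KB case from the File systems section:

```python
def inode_max_size(block_size, ptr_size=4, direct=12):
    """Maximum file size in bytes for a classic Unix inode layout."""
    per_block = block_size // ptr_size          # pointers per indirect block
    blocks = direct + per_block + per_block**2 + per_block**3
    return blocks * block_size

print(round(inode_max_size(1024) / 2**30, 2))   # 16.06 (GB, 1 KB blocks)
print(round(inode_max_size(4096) / 2**40, 2))   # 4.0  (TB, 4 KB blocks)
```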

Quick-revision summary

  • PCB holds: PID, state, PC, registers, memory info, I/O info, accounting.
  • Semaphore: P decrements S (blocks the caller if the result is negative), V increments S (wakes a blocked process if any). V never blocks.
  • Producer-consumer: wait(empty) then wait(mutex) in the producer; wait(full) then wait(mutex) in the consumer.
  • Deadlock conditions: mutual exclusion, hold and wait, no preemption, circular wait. All four are necessary.
  • Banker's algorithm: need to find a safe sequence. If no safe sequence exists, the state is unsafe.
  • SRTF minimises average waiting time. Round Robin gives best response time.
  • FIFO replacement: Belady's anomaly possible. LRU: no anomaly, approximated by Clock.
  • Page table size = (logical address space / page size) * entry size.
  • Unix inode: 12 direct + 1 single indirect + 1 double indirect + 1 triple indirect.
  • Disk scheduling: SSTF can starve. SCAN is fairer but serves end cylinders less frequently than middle.

How to study this unit

  1. Draw Gantt charts for FCFS, SJF, SRTF, and Round Robin on the same set of 4-5 processes. Compute average waiting time and turnaround time. ISRO asks this every year.
  2. Practise the Banker's algorithm safety check and resource request algorithm on 2-3 different problem instances.
  3. Trace page replacement algorithms (FIFO, LRU, OPT) on the same reference string with different frame counts to understand Belady's anomaly.
  4. Simulate semaphore operations for producer-consumer with 2-3 producers and consumers to check for deadlock and starvation.
  5. Practise Unix inode file size calculations with different block sizes (512 B, 1 KB, 4 KB). These come up as straight numerical questions.
  6. Know the comparison table for scheduling algorithms: preemptive or not, optimal for what metric, drawbacks. ISRO asks "which algorithm minimises average waiting time" type questions.
