Module Review: Memory Management
Key Takeaways
- Virtual Memory is an Illusion: Every process sees a private, contiguous address space (256 TB with 48-bit addressing on x86-64), but the MMU translates these Virtual Addresses (VA) to Physical Addresses (PA) on the fly.
- Paging Solves Fragmentation: By dividing memory into fixed 4KB chunks (Pages and Frames), we eliminate External Fragmentation but suffer from Internal Fragmentation.
- The TLB is Critical: A page-table walk requires extra RAM accesses, so raw translation is slow. The TLB (Translation Lookaside Buffer) caches recent translations so that most accesses translate in about one cycle.
- Multi-Level Page Tables: To save space, modern CPUs use a 4-level tree (PML4 → PDPT → PD → PT). Unused memory regions don’t consume Page Table space.
- LRU is Optimal-ish: While we can’t predict the future (OPT), Least Recently Used (LRU) is a great predictor. The Clock Algorithm approximates LRU efficiently.
- Thrashing Kills Performance: When the sum of active Working Sets exceeds Physical RAM, the OS spends nearly all of its time swapping pages to and from disk instead of executing instructions.
- Malloc Manages the Heap: User-space allocators request large chunks from the OS (sbrk/mmap) and slice them up using strategies like First Fit, Best Fit, or Slab Allocation.
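The multi-level page-table walk from the takeaways is pure bit arithmetic. This minimal Python sketch (assuming x86-64's 48-bit addresses, 4 KB pages, and 9-bit indices per level; the function name `split_va` is illustrative) shows how a virtual address decomposes into the four table indices plus a page offset:

```python
def split_va(va):
    """Split a 48-bit virtual address into (PML4, PDPT, PD, PT)
    indices plus the offset within the 4 KB page."""
    offset = va & 0xFFF            # bits 0-11: offset inside the page
    pt     = (va >> 12) & 0x1FF    # bits 12-20: Page Table index
    pd     = (va >> 21) & 0x1FF    # bits 21-29: Page Directory index
    pdpt   = (va >> 30) & 0x1FF    # bits 30-38: PDPT index
    pml4   = (va >> 39) & 0x1FF    # bits 39-47: PML4 index
    return pml4, pdpt, pd, pt, offset

# A synthetic address built from known indices round-trips cleanly:
va = (1 << 39) | (2 << 30) | (3 << 21) | (4 << 12) | 5
print(split_va(va))  # (1, 2, 3, 4, 5)
```

Because each level consumes only 9 bits, a region of virtual memory that is never touched never forces allocation of the lower-level tables beneath it, which is why unused regions cost no page-table space.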
[!NOTE] This module derives memory-management mechanisms and policies from first principles and the hardware constraints that shape them.
1. Flashcards
What is the main purpose of the MMU?
To translate Virtual Addresses (VA) to Physical Addresses (PA) and enforce memory protection (Read/Write/Execute bits).
What is the difference between a Page and a Frame?
A Page is a fixed-size block of Virtual Memory. A Frame is a fixed-size block of Physical RAM.
What is the TLB?
Translation Lookaside Buffer. A hardware cache inside the MMU that stores recent VA-to-PA translations to speed up memory access.
What is Belady's Anomaly?
A phenomenon in FIFO page replacement where increasing the number of page frames results in more page faults.
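Belady's Anomaly is easy to demonstrate with a short simulation. This sketch (the function name is illustrative) counts FIFO page faults on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == nframes:        # all frames full: evict oldest
                resident.remove(order.popleft())
            resident.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet MORE faults
```

LRU does not exhibit this anomaly because it is a stack algorithm: the pages resident with N frames are always a subset of those resident with N+1 frames.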
What is Thrashing?
When the system spends more time swapping pages in/out of disk than executing instructions, causing CPU utilization to drop to near zero.
What is Internal Fragmentation?
Wasted space inside an allocated block (e.g., asking for 10 bytes but getting a 4KB page).
What is the Working Set?
The set of pages a process is actively using right now. If the combined working sets of all processes exceed physical RAM, thrashing occurs.
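The working-set definition and the thrashing condition from these flashcards can be stated directly in code. A minimal sketch (function names and the window-over-a-reference-string model are illustrative assumptions):

```python
def working_set(refs, t, delta):
    """W(t, delta): the distinct pages touched in the last `delta`
    references up to and including time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

def will_thrash(working_sets, ram_frames):
    """Thrashing condition: combined working sets exceed physical frames."""
    return sum(len(ws) for ws in working_sets) > ram_frames

refs = [1, 2, 3, 2, 1, 4]
print(working_set(refs, 5, 3))              # {1, 2, 4}
print(will_thrash([{1, 2}, {3, 4, 5}], 4))  # True
```

This is why admission control (suspending a whole process rather than stealing frames from everyone) is the standard cure for thrashing: it shrinks the sum on the left side of the inequality.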
2. Cheat Sheet
| Concept | Definition | Key Characteristic |
|---|---|---|
| Virtual Memory | Abstraction of storage resources | Isolates processes; illusion of infinite memory |
| MMU | Hardware translator | Uses Page Tables to map VA → PA |
| Page Table | Data structure in RAM | Stores the mapping. Multi-level (PML4) on x64 |
| TLB | Cache for Page Table | Critical for performance. Miss = Slow |
| Page Fault | Exception raised by MMU | “Page not in RAM”. OS must fetch from Disk (Swap) |
| LRU | Eviction Algorithm | Evicts Least Recently Used page. Uses Temporal Locality |
| Slab Allocation | Kernel Allocator | Caches objects of specific size (no fragmentation) |
| mmap / sbrk | System Calls | How malloc requests memory from the Kernel |
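The LRU row above is, in practice, approximated by the Clock (second-chance) algorithm from the takeaways, since true LRU bookkeeping on every memory access is too expensive in hardware. A minimal sketch (class and method names are illustrative; a real kernel reads the reference bit the MMU sets in the PTE):

```python
class ClockReplacer:
    """Second-chance (Clock) approximation of LRU."""
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page resident in each frame
        self.refbit = [0] * nframes      # "recently used" bit per frame
        self.pages  = {}                 # page -> frame index
        self.hand   = 0

    def access(self, page):
        """Touch a page; returns True if this access page-faulted."""
        if page in self.pages:
            self.refbit[self.pages[page]] = 1   # hardware would set this bit
            return False
        # Fault: sweep the hand, giving second chances (clearing ref bits)
        # until a frame with refbit == 0 is found.
        while self.refbit[self.hand]:
            self.refbit[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        victim = self.hand
        if self.frames[victim] is not None:
            del self.pages[self.frames[victim]]
        self.frames[victim] = page
        self.pages[page] = victim
        self.refbit[victim] = 1
        self.hand = (victim + 1) % len(self.frames)
        return True
```

A recently touched page survives one full sweep of the hand, which is what makes Clock a cheap stand-in for temporal locality.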
3. Next Steps
Now that you understand how the OS manages memory, dive into how it manages execution.
- Next Module: Concurrency
- Related Topic: File Systems (How Swap works on disk)
[!TIP] Interview Prep: Be ready to implement an LRU Cache or explain Thrashing in system design interviews.
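For that interview question, a minimal LRU cache sketch built on Python's `OrderedDict` (the class name, capacity-eviction behavior, and `-1` miss sentinel follow the common interview-problem interface, not any particular library):

```python
from collections import OrderedDict

class LRUCache:
    """O(1) get/put LRU cache: hash map plus recency ordering."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Be ready to explain what `OrderedDict` hides: a hash map for O(1) lookup plus a doubly linked list for O(1) recency updates, the same pairing you would build by hand in C.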