Module Review: Caching

[!NOTE] This module reviews the core principles of caching, deriving solutions from first principles and hardware constraints to build production-ready expertise.

1. Cheat Sheet: The Numbers You Must Know

| Operation | Where | Latency (approx.) | Analogy (1 ns ≈ 1 s) |
|---|---|---|---|
| L1/L2 Cache | CPU internal | < 10 ns | Heartbeat |
| RAM (Redis) | Memory access | ~100 ns | Brushing teeth |
| SSD Read | Fast disk | ~150,000 ns (150 µs) | Weekend trip |
| Network (LAN) | Data center | ~500,000 ns (0.5 ms) | 1-week vacation |
| Network (WAN) | Cross-region | ~150,000,000 ns (150 ms) | 4 years (university) |

2. Flashcards: Test Your Knowledge

What is the "Thundering Herd" problem?


Concurrency Spike

When a popular cache key expires, thousands of requests hit the database simultaneously to regenerate it. Solved by Mutex Locks or Refresh-Ahead.
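The mutex-lock mitigation can be sketched in a few lines. This is a minimal single-process illustration, not a production pattern (a real system would use a distributed lock, e.g. `SET key val NX PX`, plus a per-key lock rather than one global lock); `fetch_from_db` is a hypothetical stand-in for the expensive regeneration:

```python
import threading

_lock = threading.Lock()
cache = {}

def fetch_from_db(key):
    # hypothetical expensive regeneration (DB query, render, etc.)
    return f"value-for-{key}"

def get(key):
    val = cache.get(key)
    if val is not None:
        return val
    # Only one thread regenerates; the rest block here,
    # then find the value already cached on re-check.
    with _lock:
        val = cache.get(key)
        if val is None:
            val = fetch_from_db(key)
            cache[key] = val
        return val
```

The double-check inside the lock is what prevents the herd: late arrivals see the freshly cached value instead of each calling `fetch_from_db`.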

LRU vs LFU?


Recency vs Frequency

LRU: Evicts least recently used (Good for Recency bias).
LFU: Evicts least frequently used (Good for long-term popularity, resistant to scans).
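LRU is simple enough to sketch with an ordered map: recency is just position in the ordering. A minimal sketch (the class name and capacity are illustrative; Python's stdlib also ships `functools.lru_cache` for memoization):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order = recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Note the weakness the flashcard alludes to: a one-off scan over many keys flushes everything recent, which is exactly what LFU-family policies resist.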

Write-Through vs Write-Back?


Safety vs Speed

Write-Through: Writes to DB + Cache synchronously (Safe).
Write-Back: Writes to Cache, async to DB (Fast, risk of data loss).
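The trade-off is visible in a toy sketch, assuming the "DB" is just a dict and the write-back flush is triggered manually (in reality it runs on a timer or under memory pressure):

```python
class WriteThroughCache:
    """Safe: the DB write completes before write() returns."""
    def __init__(self, store):
        self.store = store  # hypothetical backing DB (a dict here)
        self.cache = {}

    def write(self, key, value):
        self.store[key] = value  # synchronous DB write
        self.cache[key] = value

class WriteBackCache:
    """Fast: writes touch only the cache; dirty keys flush later.
    A crash before flush() loses the un-flushed writes."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.dirty = set()

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)  # DB write deferred

    def flush(self):
        for key in self.dirty:
            self.store[key] = self.cache[key]
        self.dirty.clear()
```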

What is Consistent Hashing?


Distributed Scaling

A technique that maps keys to nodes arranged on a hash ring. When a node is added or removed, only ~1/N of the keys move (for N nodes), minimizing cache misses during scaling events.
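A hash ring can be sketched with a sorted list and binary search. This is a minimal illustration, assuming MD5 as the hash and 100 virtual nodes per physical node (vnodes smooth out the key distribution); it is not a production client:

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (hash, node) points on the ring
        for node in nodes:
            self.add(node, vnodes)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node, vnodes=100):
        # Each physical node owns many points on the ring.
        for i in range(vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def get(self, key):
        h = self._hash(key)
        # First vnode clockwise from the key's hash, wrapping at the end.
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

Adding a node only inserts its vnode points; keys keep mapping to the same clockwise successor unless a new point landed between them and their old node, which is how only ~1/N of keys move.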

Redis Persistence: RDB vs AOF?


Snapshot vs Log

RDB: Periodic snapshots (Compact, fast restart, data loss window).
AOF: Append-only log (Durable, large file, slower restart).
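For reference, an illustrative `redis.conf` fragment enabling both (the specific thresholds are examples, not recommendations):

```
# RDB: snapshot if at least 1 key changed in 900s, or 10 keys in 300s
save 900 1
save 300 10

# AOF: log every write, fsync the log once per second
appendonly yes
appendfsync everysec
```

`appendfsync everysec` is the common middle ground: at most ~1 second of writes is at risk, without fsyncing on every command. Many deployments run RDB and AOF together: AOF for durability, RDB for fast restarts and backups.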

What is Cache Penetration?


Querying Non-Existent Data

Users query keys that don't exist in the DB (e.g. id=-1), so every request misses the cache and hits the DB. Solved by Bloom Filters or by caching nulls with a short TTL.
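Null-caching is the simpler of the two fixes and can be sketched directly. A minimal illustration, assuming a hypothetical `db_lookup` where only numeric ids exist (a real version would give `NULL` entries a short TTL so legitimate new rows become visible):

```python
NULL = object()  # sentinel marking "key known to be absent"
cache = {}

def db_lookup(key):
    # hypothetical DB: only non-negative integer ids exist
    return f"row-{key}" if key.isdigit() else None

def get(key):
    if key in cache:
        hit = cache[key]
        return None if hit is NULL else hit
    value = db_lookup(key)
    # Cache the miss too: repeated bad keys now stop at the cache
    # instead of hammering the DB.
    cache[key] = NULL if value is None else value
    return value
```

A Bloom filter achieves the same goal probabilistically before the lookup: if the filter says the key was never inserted, the request is rejected without touching cache or DB.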


3. Decision Matrix: Redis vs Memcached

| Feature | Redis | Memcached |
|---|---|---|
| Data Types | Strings, Lists, Sets, Sorted Sets, Hashes, Bitmaps | Strings only (binary blobs) |
| Architecture | Single-threaded command execution (event loop) | Multi-threaded |
| Persistence | Yes (RDB + AOF) | No (pure in-memory) |
| Clustering | Native (hash slots) | Client-side only (consistent hashing) |
| Best For | Complex apps, queues, leaderboards | Simple KV caching, vertical scaling across cores |

4. Key Takeaways

  1. Cache = Speed: It bridges the gap between RAM (100ns) and Disk (10ms).
  2. Invalidation is Hard: Strong consistency (Write-Through) vs Eventual consistency (TTL).
  3. Eviction Matters: Use LRU for general cases, TinyLFU/ARC for scanning workloads.
  4. Protect the DB: Use Bloom Filters for Penetration and Mutex Locks for Thundering Herds.