Key Takeaways
- Cache-Aside is the most robust pattern for general-purpose applications. It lazily loads data into the cache.
- Write-Through ensures strong consistency but adds write latency.
- Write-Behind offers maximum write performance but risks data loss.
- LRU (Least Recently Used) is the default choice for caching, but LFU (Least Frequently Used) is better for stable access patterns because it resists scan pollution (one-off bulk reads evicting genuinely hot keys).
- Client-Side Caching (Redis 6+) eliminates network round-trips for hot keys by storing them in the application’s heap, using Tracking for invalidation.
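The Cache-Aside read and write paths above can be sketched as follows. This is a minimal illustration using plain dictionaries as stand-ins for the database and for Redis; the key name `user:1` and the helper names are hypothetical.

```python
db = {"user:1": "Alice"}   # stand-in for the database (hypothetical data)
cache = {}                 # stand-in for Redis

def read(key):
    """Cache-Aside read: try the cache first, lazily load on a miss."""
    if key in cache:
        return cache[key]
    value = db.get(key)    # cache miss: fall back to the database
    if value is not None:
        cache[key] = value # populate the cache for future reads
    return value

def write(key, value):
    """Cache-Aside write: update the database, then DELETE the cache entry.

    Deleting (rather than updating) the cached value avoids races where
    two concurrent writes leave the cache disagreeing with the database.
    """
    db[key] = value
    cache.pop(key, None)   # the next read reloads the fresh value

read("user:1")             # miss: loads "Alice" into the cache
write("user:1", "Alicia")  # updates the DB and invalidates the cache entry
```

Note that `write` deletes rather than sets the cached value, which is exactly the delete-on-write rule covered in the flashcards below.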
[!NOTE] This module reviews the core principles of caching, deriving solutions from first principles and hardware constraints to build production-ready expertise.
1. Flashcards
What is the "Cache Stampede"?
When many concurrent requests miss the cache simultaneously (due to expiration) and all hit the database at once.
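One common stampede mitigation is "single flight": only one caller rebuilds the missing value while the others wait and then reuse it. The sketch below uses a per-process lock for illustration; real deployments often use a distributed lock or probabilistic early expiration instead.

```python
import threading

db_loads = 0           # counts simulated database hits
cache = {}
rebuild_lock = threading.Lock()

def load_with_singleflight(key):
    """Stampede guard: one caller rebuilds the value; the rest wait on
    the lock and then find the fresh cache entry."""
    global db_loads
    if key in cache:
        return cache[key]
    with rebuild_lock:
        if key in cache:          # double-check after acquiring the lock
            return cache[key]
        db_loads += 1             # simulate the expensive database hit
        cache[key] = f"value-for-{key}"
        return cache[key]

# 50 concurrent misses on the same key trigger only one database load.
threads = [threading.Thread(target=load_with_singleflight, args=("hot",))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```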
Why does Redis use "Approximated LRU"?
To save memory. A true LRU requires a Doubly Linked List (2 pointers per key), which is too expensive for millions of keys.
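The sampling idea behind approximated LRU can be sketched in a few lines: instead of maintaining a list ordered by recency, pick a handful of random keys and evict the one that has been idle longest. The `idle_times` data below is hypothetical; Redis tracks idle time per key internally and samples a configurable number of keys (`maxmemory-samples`).

```python
import random

def evict_approx_lru(idle_times, sample_size=5):
    """Approximated LRU in the spirit of Redis: sample a few random keys
    and evict the one idle longest, instead of maintaining a full
    doubly linked list over every key."""
    sample = random.sample(list(idle_times), min(sample_size, len(idle_times)))
    return max(sample, key=lambda k: idle_times[k])

# idle_times maps key -> seconds since last access (hypothetical data)
idle_times = {"a": 5, "b": 120, "c": 30, "d": 900, "e": 2}
victim = evict_approx_lru(idle_times)
```

Larger sample sizes approach true LRU behavior at the cost of more CPU per eviction.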
Which eviction policy is best for "analytics" data?
allkeys-lfu. Analytics data often has stable "hot" keys that should persist even if not accessed in the last few seconds.
What is the main benefit of "Broadcasting Mode" in Tracking?
It drastically reduces memory usage on the Redis server by tracking prefixes instead of individual keys.
In Cache-Aside, why do we DELETE on write instead of UPDATE?
To prevent race conditions where two concurrent writes leave the cache with a different value than the database.
2. Cheat Sheet
| Feature | Description |
|---|---|
| maxmemory-policy | Configures eviction (e.g., allkeys-lru, volatile-lfu). |
| noeviction | Default policy. Returns errors on writes when memory is full. Safe for DB mode. |
| volatile-* | Only evicts keys with an expiration (TTL) set. |
| allkeys-* | Can evict any key to make space. |
| CLIENT TRACKING ON | Enables server-assisted client-side caching. |
| BCAST | Broadcasting mode for tracking (prefix-based). |
3. Next Steps
Now that you’ve mastered Caching, it’s time to look at real-time data flow.