Client-Side Caching
Standard Redis caching gives you ~1 ms latency. That's fast, but for high-frequency trading or real-time bidding, the network round-trip is still too slow.
Client-Side Caching (CSC) stores a subset of your hot data directly in your application's memory (the heap). A local read takes roughly 0.1 µs, about 10,000x faster than the network round-trip.
1. The Stale Data Problem
The challenge with local caching is invalidation. If Instance A updates a key, how does Instance B know to delete its local copy?
- Old Way (TTL): Set a short TTL (e.g., 5 seconds). Data is stale for up to 5 seconds.
- Old Way (Pub/Sub): Manually publish invalidation events. Complex and error-prone.
- New Way (Redis 6+): Server-Assisted Client-Side Caching.
2. Redis Tracking (The Solution)
Redis 6 introduced a feature where the server tracks which keys a client has read. When one of those keys is modified by any client, the server sends an Invalidation Message to the holding client.
How it Works (RESP3)
- Client A reads `user:1`. Redis notes: "Client A is interested in `user:1`."
- Client B updates `user:1`.
- Redis checks its tracking table and sees Client A.
- Redis pushes an invalidation message to Client A.
- Client A deletes `user:1` from its local memory.
3. Invalidation Flow
The "Fetch → Update → Invalidate" cycle is the heart of CSC: a read registers interest, a write triggers a push, and the holding client evicts its local copy.
4. Implementation in Java & Go
Modern clients handle the complexity of tracking and invalidation automatically.
Java (Lettuce)
Lettuce provides a CacheFrontend that wraps your Redis client. It uses a Map for local storage and automatically listens for invalidation messages.
```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.TrackingArgs;
import io.lettuce.core.support.caching.CacheAccessor;
import io.lettuce.core.support.caching.CacheFrontend;
import io.lettuce.core.support.caching.ClientSideCaching;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NearCacheExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost");
        Map<String, String> localCache = new ConcurrentHashMap<>();

        // Create the frontend: it wraps the connection and listens
        // for invalidation messages on your behalf.
        CacheFrontend<String, String> frontend = ClientSideCaching.enable(
                CacheAccessor.forMap(localCache),
                client.connect(),
                TrackingArgs.Builder.enabled().bcast()); // Broadcasting mode

        // 1st call: hits Redis, stores the value in localCache.
        String val1 = frontend.get("key1");
        // 2nd call: served from localCache (no network I/O).
        String val2 = frontend.get("key1");
    }
}
```
Go (Rueidis)

Rueidis has client-side caching built in: `DoCache` consults the local cache first and falls back to Redis, while the library listens for invalidation messages in the background.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/rueidis"
)

func main() {
	client, err := rueidis.NewClient(rueidis.ClientOption{
		InitAddress: []string{"127.0.0.1:6379"},
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Rueidis automatically handles Client-Side Caching via DoCache.
	// 1st call: hits Redis. Subsequent calls: hit local memory until
	// "my_key" changes. The TTL (5s) is a fallback safety net.
	cmd := client.B().Get().Key("my_key").Cache()
	val, err := client.DoCache(context.Background(), cmd, 5*time.Second).ToString()
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
```
5. Tracking Modes
- Default Mode:
  - Redis remembers every key each client has requested.
  - Pros: precision; only the keys you actually hold are invalidated.
  - Cons: high memory usage on the Redis server if clients access millions of keys.
- Broadcasting Mode (`BCAST`):
  - You subscribe to a prefix (e.g., `user:*`).
  - Redis sends an invalidation if any key matching the prefix changes.
  - Pros: very low memory on the server (it only stores prefixes).
  - Cons: more network noise. False positives: updating `user:2` invalidates `user:1` if you are subscribed to `user:*`.
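The `BCAST` false positive is easy to see in a toy prefix matcher (again a simulation, not real Redis): any write under a subscribed prefix notifies every subscriber, regardless of which keys they actually hold.

```go
package main

import (
	"fmt"
	"strings"
)

// bcastTable simulates BCAST mode: the server stores only prefixes,
// each mapped to the clients subscribed to that prefix.
type bcastTable struct {
	subs map[string][]string // prefix -> client IDs
}

func (b *bcastTable) Subscribe(client, prefix string) {
	b.subs[prefix] = append(b.subs[prefix], client)
}

// Write returns every client subscribed to a matching prefix,
// even clients that never read this particular key.
func (b *bcastTable) Write(key string) []string {
	var clients []string
	for prefix, subscribed := range b.subs {
		if strings.HasPrefix(key, prefix) {
			clients = append(clients, subscribed...)
		}
	}
	return clients
}

func main() {
	b := &bcastTable{subs: make(map[string][]string)}
	b.Subscribe("clientA", "user:") // clientA only ever reads user:1

	// Updating user:2 still invalidates clientA's local cache.
	fmt.Println(b.Write("user:2")) // [clientA]
}
```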
> [!TIP]
> Use Broadcasting Mode for most production scenarios to protect the Redis server's memory.