Caching Patterns

Caching is the art of storing expensive-to-fetch data in cheap-to-access locations. It is often the single most effective way to scale a read-heavy system. Implementing it correctly, however, requires understanding when to write to the cache and how to keep it consistent with your source of truth (the database).

1. First Principles: The Physics of Latency

Why do we cache? To understand this, we must look at the “distance” data travels in a computer system.

Storage Layer        Approx. Latency   Human-Scale Analogy
L1 Cache             ≈ 0.5 ns          Heartbeat
RAM (Redis)          ≈ 100 ns          Blinking your eye
Network (Intra-DC)   ≈ 0.5 ms          Brushing your teeth
SSD (Database)       ≈ 1-10 ms         Walking to the store

Fetching data from a database (SSD) over the network is orders of magnitude slower than reading from RAM.

The Amortized Latency Equation

The performance of a cache is mathematically defined by its Hit Rate (p).

Lavg = p × Lcache + (1 - p) × Ldb

Where:

  • Lavg is the average latency per request.
  • p is the probability of a cache hit (0.0 to 1.0).
  • Lcache is the cost of reading from Redis (~1ms including network).
  • Ldb is the cost of reading from the Database (~10ms).

[!NOTE] Even a modest hit rate of 80% (p = 0.8) dramatically reduces average latency: Lavg = 0.8 × 1 + 0.2 × 10 = 0.8 + 2 = 2.8 ms. That is a roughly 3.6x speedup compared to the raw DB latency of 10 ms.
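The note's arithmetic is easy to sanity-check in code. A throwaway calculation in plain Java, using the same numbers as above:

```java
public class AmortizedLatency {

    // Lavg = p * Lcache + (1 - p) * Ldb
    static double avgLatency(double p, double cacheMs, double dbMs) {
        return p * cacheMs + (1 - p) * dbMs;
    }

    public static void main(String[] args) {
        double cacheMs = 1.0;  // Redis read, incl. network
        double dbMs = 10.0;    // database read

        for (double p : new double[] {0.0, 0.5, 0.8, 0.99}) {
            System.out.printf("p=%.2f -> %.2f ms%n", p, avgLatency(p, cacheMs, dbMs));
        }
        // p=0.80 reproduces the 2.8 ms figure from the note above.
    }
}
```

Note how the payoff is non-linear: going from 80% to 99% hits cuts latency by another ~2.6x, which is why cache key design and TTL tuning matter so much at the high end.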


2. Interactive: Latency Simulator

Adjust the Hit Rate slider to see how it affects the Average Latency and the flow of requests.

(Simulator readouts: Requests, Cache Hits, Avg Latency. Request flow: Users → Cache (1 ms) → DB (10 ms).)

3. Pattern 1: Cache-Aside (Lazy Loading)

This is the industry standard. The application treats the cache as a separate data store and manages the data flow manually.

The Algorithm

  1. Read:
    • App checks Redis for key K.
    • Hit: Return value V.
    • Miss: App reads K from Database, writes K to Redis, then returns V.
  2. Write:
    • App writes K to Database.
    • App deletes (invalidates) K from Redis.

[!TIP] Why Delete instead of Update? If you update the cache during a write, you risk a race condition where two concurrent writes leave the cache with a different value than the database. Deleting forces the next read to fetch the latest data from the source of truth.
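The race in the tip is easiest to see by replaying one unlucky interleaving by hand. The HashMaps below are stand-ins for the database and the cache (not real clients), and the two "writers" are just manually ordered steps:

```java
import java.util.HashMap;
import java.util.Map;

public class UpdateRace {

    // Two writers, A (v1) and B (v2), both "update the cache" after
    // writing the DB. Returns {dbValue, cacheValue} after an unlucky
    // interleaving where A's cache update is delayed.
    static String[] raceWithUpdate() {
        Map<String, String> db = new HashMap<>();
        Map<String, String> cache = new HashMap<>();

        db.put("k", "v1");     // A: DB write
        db.put("k", "v2");     // B: DB write (B wins in the DB)
        cache.put("k", "v2");  // B: cache update
        cache.put("k", "v1");  // A: delayed cache update lands last

        return new String[] { db.get("k"), cache.get("k") };
    }

    public static void main(String[] args) {
        String[] r = raceWithUpdate();
        // DB holds v2, cache holds v1 -- and stays wrong until the TTL.
        System.out.println("db=" + r[0] + " cache=" + r[1]);
    }
}
```

With delete-on-write, both writers would simply remove "k", so the worst case is a cache miss on the next read, never a wrong value.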

Implementation

import redis.clients.jedis.Jedis;

public class UserRepository {
    private Jedis redis;
    private Database db;

    public User getUser(String userId) {
        String cacheKey = "user:" + userId;

        // 1. Check Cache
        String cachedData = redis.get(cacheKey);
        if (cachedData != null) {
            return deserialize(cachedData);
        }

        // 2. Cache Miss: Read from DB
        User user = db.findUserById(userId);

        // 3. Populate Cache (with TTL)
        if (user != null) {
            redis.setex(cacheKey, 3600, serialize(user));
        }

        return user;
    }

    public void updateUser(User user) {
        // 1. Write to DB
        db.save(user);

        // 2. Invalidate Cache
        redis.del("user:" + user.getId());
    }
}
The same repository in Go:

package repository

import (
    "context"
    "encoding/json"
    "time"
    "github.com/redis/go-redis/v9"
)

type UserRepository struct {
    rdb *redis.Client
    db  *Database
}

func (r *UserRepository) GetUser(ctx context.Context, id string) (*User, error) {
    key := "user:" + id

    // 1. Check Cache
    // 1. Check Cache (a miss surfaces as the sentinel error redis.Nil)
    val, err := r.rdb.Get(ctx, key).Result()
    if err == nil {
        var user User
        if json.Unmarshal([]byte(val), &user) == nil {
            return &user, nil
        }
        // Corrupt cache entry: fall through and reload from the DB.
    }
    // Any other Redis error also falls through, so a cache outage
    // degrades to slower reads rather than failed reads.

    // 2. Cache Miss: Read from DB
    user, err := r.db.FindUser(id)
    if err != nil {
        return nil, err
    }

    // 3. Populate Cache (best-effort: the DB read already succeeded)
    if jsonBytes, err := json.Marshal(user); err == nil {
        r.rdb.Set(ctx, key, jsonBytes, 1*time.Hour)
    }

    return user, nil
}

func (r *UserRepository) UpdateUser(ctx context.Context, user *User) error {
    // 1. Write to DB
    if err := r.db.Save(user); err != nil {
        return err
    }

    // 2. Invalidate Cache
    return r.rdb.Del(ctx, "user:"+user.ID).Err()
}

4. Pattern 2: Write-Through

In this pattern, the application treats the cache as the main data store. The cache is responsible for reading from and writing to the backing database synchronously.

The Algorithm

  1. Read: Same as Cache-Aside (but handled by the library/framework).
  2. Write:
    • App writes to Cache.
    • Cache synchronously writes to DB.
    • Both return success.

Pros: Strong consistency (cache and DB are always in sync).
Cons: Higher write latency (every write waits for both stores).
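There is no Redis listing here because write-through is typically provided by the caching layer itself rather than hand-rolled in the application. The shape of the write path can still be sketched; the in-memory maps below are stand-ins for the cache and the database, not real clients:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WriteThroughStore {
    // Stand-ins for Redis and the backing database.
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> db = new ConcurrentHashMap<>();

    // Write path: the caller blocks until BOTH writes succeed,
    // which is what keeps the cache and DB in lock-step.
    public void put(String key, String value) {
        cache.put(key, value); // 1. Write to cache
        db.put(key, value);    // 2. Synchronous write to DB
    }

    // Read path: same shape as Cache-Aside, but misses should be
    // rare because every write already populated the cache.
    public String get(String key) {
        String v = cache.get(key);
        if (v != null) return v;
        v = db.get(key);
        if (v != null) cache.put(key, v);
        return v;
    }
}
```

A production version would also need a failure policy for step 2 (roll back the cache write, or fail the whole call), which is exactly the complexity a write-through provider hides for you.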


5. Pattern 3: Write-Behind (Write-Back)

This is a high-performance, high-risk strategy. The application writes only to the cache, and the cache updates the database asynchronously.

The Algorithm

  1. Write:
    • App writes to Redis.
    • Redis returns “OK” immediately.
    • A background process (or Redis Queue) pushes the change to the DB later.

Pros: Extremely fast writes (sub-millisecond acknowledgement).
Cons: Data loss risk. If Redis crashes before the change is synced to the DB (and Redis persistence is disabled), the buffered writes are lost.

Implementation Logic

public void saveUserAsync(User user) {
    // 1. Write to Redis
    redis.set("user:" + user.getId(), serialize(user));

    // 2. Push ID to a processing queue (e.g., Redis List)
    redis.lpush("write_queue", user.getId());
}

// Background Worker
public void processQueue() {
    while (true) {
        // BRPOP blocks until an item arrives; the result is [key, value]
        String userId = redis.brpop(0, "write_queue").get(1);
        String userData = redis.get("user:" + userId);
        if (userData != null) { // the key may have expired in the meantime
            db.save(deserialize(userData));
        }
    }
}
The same flow in Go:

func (r *UserRepository) SaveUserAsync(ctx context.Context, user *User) error {
    // 1. Write to Redis (TTL 0 = keep until explicitly removed)
    jsonBytes, err := json.Marshal(user)
    if err != nil {
        return err
    }
    if err := r.rdb.Set(ctx, "user:"+user.ID, jsonBytes, 0).Err(); err != nil {
        return err
    }

    // 2. Push the ID onto the queue for the background worker
    return r.rdb.LPush(ctx, "write_queue", user.ID).Err()
}

// Background Worker
func (r *UserRepository) ProcessQueue(ctx context.Context) {
    for {
        // Block until an ID is available; res is [queue name, value]
        res, err := r.rdb.BRPop(ctx, 0, "write_queue").Result()
        if err != nil {
            return // context cancelled or Redis unreachable
        }
        id := res[1]

        // Get the latest data (later writes may have superseded this one)
        val, err := r.rdb.Get(ctx, "user:"+id).Result()
        if err != nil {
            continue // key expired or was removed; nothing to flush
        }

        // Save to DB
        r.db.SaveRaw(id, val)
    }
}

6. Summary

Pattern         Best For                             Consistency                      Write Speed
Cache-Aside     General purpose, read-heavy          Eventual (window of staleness)   Slow (DB write)
Write-Through   Critical data (financial)            Strong                           Slowest (2 writes)
Write-Behind    Analytics, counters, non-critical    Weak (data loss risk)            Fastest (RAM write)