Eviction Policies

Redis is an in-memory database. Since RAM is expensive and finite, you will eventually run out of it. When maxmemory is reached, Redis must decide what to delete to make room for new data. This decision process is governed by the Eviction Policy.

1. The Policies

You configure the policy in redis.conf using maxmemory-policy.
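For example, a typical cache setup might look like this in redis.conf (values here are illustrative, not recommendations):

```conf
# Cap Redis at 256 MB of data; evict least-recently-used keys when full.
maxmemory 256mb
maxmemory-policy allkeys-lru
```

The same settings can be changed at runtime with `CONFIG SET maxmemory-policy allkeys-lru`, without restarting the server.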

  • noeviction: Returns an error on writes when memory is full. Best for database mode, where data durability is critical.
  • allkeys-lru: Evicts the Least Recently Used keys across the whole keyspace. Best for general caching (hot/cold data).
  • volatile-lru: Evicts the Least Recently Used keys among those with a TTL set. Best for mixed workloads (cache + persistent data).
  • allkeys-lfu: Evicts the Least Frequently Used keys. Best for stable access patterns (analytics).
  • allkeys-random: Evicts random keys. Best when key access is uniformly distributed.

Redis also supports volatile-lfu, volatile-random, and volatile-ttl, which apply the same strategies (LFU, random, and shortest remaining TTL) only to keys that have a TTL set.

2. Deep Dive: LRU (Least Recently Used)

LRU assumes that if data was accessed recently, it is likely to be accessed again soon.

Interactive: LRU Visualization

[Interactive demo: click a box to “access” that key. It moves to the Head (Most Recently Used); when the cache (capacity 5) is full, the Tail (Least Recently Used) is evicted.]

3. How to Implement LRU (Code)

A true LRU cache is implemented using two data structures combined:

  1. HashMap: For O(1) access to keys.
  2. Doubly Linked List: For O(1) ordering updates (moving a node to the head).
In Java, LinkedHashMap combines exactly these two structures for you:

import java.util.LinkedHashMap;
import java.util.Map;

// Java's LinkedHashMap has built-in LRU support!
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // true = accessOrder (LRU), false = insertionOrder
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Automatically remove oldest when capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<>(2);
        cache.put("A", "1");
        cache.put("B", "2");
        System.out.println(cache); // {A=1, B=2}

        cache.get("A"); // Access A, moves to end (MRU)
        System.out.println(cache); // {B=2, A=1}

        cache.put("C", "3"); // Triggers eviction of B (LRU)
        System.out.println(cache); // {A=1, C=3}
    }
}
The same cache built explicitly in Go, using a map plus container/list (a doubly linked list):

package main

import (
	"container/list"
	"fmt"
)

type LRUCache struct {
	capacity int
	cache    map[string]*list.Element
	list     *list.List
}

type Pair struct {
	key   string
	value string
}

func NewLRUCache(capacity int) *LRUCache {
	return &LRUCache{
		capacity: capacity,
		cache:    make(map[string]*list.Element),
		list:     list.New(),
	}
}

func (c *LRUCache) Get(key string) (string, bool) {
	if elem, found := c.cache[key]; found {
		c.list.MoveToFront(elem) // Mark as MRU
		return elem.Value.(Pair).value, true
	}
	return "", false // distinguishes a miss from an empty stored value
}

func (c *LRUCache) Put(key, value string) {
	if elem, found := c.cache[key]; found {
		c.list.MoveToFront(elem)
		elem.Value = Pair{key, value}
		return
	}

	if c.list.Len() >= c.capacity {
		// Evict LRU (Back of list)
		elem := c.list.Back()
		if elem != nil {
			c.list.Remove(elem)
			delete(c.cache, elem.Value.(Pair).key)
		}
	}

	newElem := c.list.PushFront(Pair{key, value})
	c.cache[key] = newElem
}

4. Redis Internals: The Approximation

While the code above is academically correct, Redis does NOT use a Doubly Linked List.

Why?

  1. Memory Overhead: A DLL requires 2 extra pointers (prev, next) per key. In a 64-bit system, that’s 16 bytes per key overhead. For 100 million keys, that’s 1.6 GB of wasted RAM.
  2. Performance: Pointer chasing across heap-allocated list nodes causes CPU cache misses.

The Redis Solution: Sampled LRU

Redis uses an approximation algorithm:

  1. When memory is full, pick K random keys (configured by maxmemory-samples, default 5).
  2. Calculate the “idle time” (time since last access) for these K keys.
  3. Evict the one with the highest idle time.
  4. (Optional) Keep a “pool” of best candidates to improve accuracy over time.

This gets very close to the accuracy of true LRU (and closer still with maxmemory-samples 10) at essentially zero extra memory cost: Redis already stores a 24-bit last-access clock in every object header.
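The core of the sampling step can be sketched in Go (names are illustrative, not Redis's actual C internals; Go's randomized map iteration order stands in for Redis's random key sampling):

```go
package main

import "fmt"

// pickVictim examines up to k keys and returns the one with the
// highest idle time (seconds since last access), mimicking the
// sampled-LRU selection described above.
func pickVictim(idleTime map[string]int64, k int) string {
	victim := ""
	maxIdle := int64(-1)
	sampled := 0
	for key, idle := range idleTime {
		if sampled == k {
			break
		}
		sampled++
		if idle > maxIdle {
			victim, maxIdle = key, idle
		}
	}
	return victim
}

func main() {
	idleTime := map[string]int64{"user:1": 3, "user:2": 940, "user:3": 12}
	// Sampling k = len(idleTime) keys makes the choice deterministic here:
	fmt.Println(pickVictim(idleTime, len(idleTime))) // user:2
}
```

With a small k the chosen victim is only *probably* the globally idlest key, which is exactly the accuracy trade-off Redis makes.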


5. LFU (Least Frequently Used)

Available since Redis 4.0. LFU suits caches where the hottest keys are accessed frequently over time, even if not necessarily recently.

  • LRU Problem: If a bot scans your entire database once, the scan touches every key and flushes genuinely useful items out of the cache (Cache Pollution).
  • LFU Solution: Keys touched only by a one-time scan accumulate a low frequency count and are evicted quickly, preserving the truly “hot” keys.

Redis implements LFU using a probabilistic logarithmic counter (a Morris-counter variant) that stores frequency in just 8 bits (0-255), combined with a configurable decay time (lfu-decay-time) so counters fall back down when keys stop being accessed.
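The probabilistic increment can be sketched in Go (a hypothetical port of the C logic, not Redis's actual code): the higher the counter already is, the less likely a given access is to bump it, which is what lets 8 bits cover an enormous frequency range.

```go
package main

import (
	"fmt"
	"math/rand"
)

// lfuLogIncr bumps an 8-bit frequency counter probabilistically.
// logFactor corresponds to Redis's lfu-log-factor (default 10);
// 5 corresponds to LFU_INIT_VAL, the counter given to new keys.
func lfuLogIncr(counter uint8, logFactor float64) uint8 {
	if counter == 255 {
		return 255 // saturated
	}
	baseval := float64(counter) - 5
	if baseval < 0 {
		baseval = 0
	}
	p := 1.0 / (baseval*logFactor + 1) // shrinks as counter grows
	if rand.Float64() < p {
		counter++
	}
	return counter
}

func main() {
	// With logFactor 0 the probability is always 1, so this is deterministic:
	fmt.Println(lfuLogIncr(5, 0))    // 6
	fmt.Println(lfuLogIncr(255, 10)) // 255 (saturated)
}
```

With the default factor of 10, a counter of ~255 corresponds to millions of accesses, yet a key hit only once during a scan stays near the initial value and is evicted first.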