
LRU Algorithm

When Redis memory exceeds physical limits, it can lead to swapping, drastically reducing performance. To prevent this, Redis allows you to set a maximum memory limit with the maxmemory parameter.

Once the actual memory exceeds maxmemory, Redis offers several eviction policies (maxmemory-policy) to determine how to free up space for continued read/write operations:

  • noeviction: Rejects new write requests (DEL and other commands that free memory are still allowed), so no data is lost but writes stop succeeding. This is the default policy.
  • volatile-lru: Evicts the least recently used keys with expiration set. Keys without expiration remain untouched.
  • volatile-ttl: Similar to volatile-lru, but evicts based on the remaining TTL; keys with shorter TTLs are prioritized.
  • volatile-random: Randomly evicts keys from the set of keys with expiration.
  • allkeys-lru: Evicts the least recently used keys from all keys, including those without expiration.
  • allkeys-random: Randomly evicts keys from the entire key set.

Use the volatile-xxx policies when Redis also holds data that must not be evicted (keys without an expiration are left untouched); use the allkeys-xxx policies when Redis serves purely as a cache and setting expirations on every key is unnecessary.
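For a cache-only deployment, for example, the two settings might look like this in redis.conf (the values here are illustrative, not recommendations):

```
# Cap Redis memory usage at 512 MB (illustrative value)
maxmemory 512mb
# Evict the least recently used key from the whole keyspace
maxmemory-policy allkeys-lru
```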

LRU Algorithm Mechanics

The LRU algorithm requires a key/value dictionary and an additional linked list to maintain access order. When space runs out, the least recently used element (tail of the list) is removed. When a key is accessed, it is moved to the front of the list, keeping the most recently accessed keys at the front.

Here's a simple implementation of an LRU cache in Python using OrderedDict:

```python
from collections import OrderedDict

class LRUDict(OrderedDict):
    def __init__(self, capacity):
        super().__init__()
        self.capacity = capacity

    def __setitem__(self, key, value):
        if key in self:
            self.move_to_end(key)
        # Call the parent method; `self[key] = value` would recurse forever
        super().__setitem__(key, value)
        if len(self) > self.capacity:
            self.popitem(last=False)  # drop the least recently used entry

    def __getitem__(self, key):
        value = super().__getitem__(key)
        self.move_to_end(key)  # mark as most recently used
        return value

    def __repr__(self):
        return repr(list(self.items()))

d = LRUDict(10)
for i in range(15):
    d[i] = i
print(d)  # keys 0-4 have been evicted; only 5-14 remain
```

Approximate LRU in Redis

Redis employs an approximate LRU algorithm instead of a strict one to minimize memory overhead and structural changes. It uses a random sampling method to achieve results similar to strict LRU. Each key gets a timestamp (24 bits) for tracking its last access time.
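Because the clock field is only 24 bits wide, it wraps around when it overflows, and idle-time comparisons have to account for that. A minimal sketch of the idea in Python (the bit width comes from the text above; the helper names and the 1-second resolution are my assumptions, not Redis internals):

```python
LRU_BITS = 24
LRU_CLOCK_MAX = (1 << LRU_BITS) - 1  # 0xFFFFFF

def lru_clock(now_seconds):
    # Keep only the low 24 bits, so the clock wraps around on overflow
    return now_seconds & LRU_CLOCK_MAX

def idle_time(clock, key_lru):
    # Estimated time since the key was last touched, handling wraparound
    if clock >= key_lru:
        return clock - key_lru
    return clock + (LRU_CLOCK_MAX - key_lru)
```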

When Redis finds that memory has exceeded maxmemory during a write, it randomly samples a handful of keys (five by default, tunable via maxmemory-samples) and evicts the one with the oldest access timestamp. It repeats this until memory is back within the limit. Which keys are sampled depends on the maxmemory-policy configuration: either the whole keyspace or only keys with an expiration set.
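The sampling step can be modeled in a few lines. This is a toy sketch of the idea, not Redis's actual implementation; the function name and the dict-based bookkeeping are mine:

```python
import random

def evict_once(store, lru, sample_size=5):
    """Evict the key with the oldest access time among a random sample.

    store: dict of key -> value; lru: dict of key -> last-access timestamp.
    """
    sample = random.sample(list(store), min(sample_size, len(store)))
    oldest = min(sample, key=lambda k: lru[k])  # oldest timestamp in the sample
    del store[oldest], lru[oldest]
    return oldest
```

Calling `evict_once` in a loop until memory use drops below the limit mirrors the eviction cycle described above.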

Comparison and Enhancements

The effectiveness of approximate LRU improves with larger sample sizes, which can be configured. Redis 3.0 introduced an eviction pool that enhances this algorithm further by merging newly sampled keys with existing ones in the pool, retaining the oldest keys for future eviction cycles.
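The pool idea can be sketched as follows. This is a simplified model under my own assumptions (the pool size constant, the function name, and keeping the pool as a sorted list are all illustrative; Redis uses a small fixed-size C array sorted by idle time):

```python
import random

POOL_SIZE = 16  # illustrative; Redis keeps a small fixed-size candidate pool

def refill_pool(pool, lru, sample_size=5):
    """Merge newly sampled keys into the pool, keeping the oldest candidates.

    pool: list of (key, timestamp) pairs; lru: dict of key -> timestamp.
    """
    sample = random.sample(list(lru), min(sample_size, len(lru)))
    merged = {**{k: lru[k] for k in sample}, **dict(pool)}
    # Retain only the entries with the smallest (i.e. oldest) timestamps
    return sorted(merged.items(), key=lambda kv: kv[1])[:POOL_SIZE]
```

Because survivors carry over between cycles, each eviction effectively chooses from more than one round's worth of samples, bringing the result closer to strict LRU.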
