1. Problem Statement
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
- LRUCache(int capacity): Initialize the cache with a positive size capacity.
- int get(int key): Return the value of the key if it exists; otherwise return -1.
- void put(int key, int value): Update the value of the key if it exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity, evict the least recently used key.
Constraint: All operations must be in $O(1)$ average time complexity.
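Before building the structure by hand, the contract above can be pinned down with a few lines of Java. The class name here is illustrative; it leans on java.util.LinkedHashMap in access-order mode, which the JDK supports precisely for this pattern, so it doubles as a reference implementation to test the hand-rolled version against.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative reference implementation of the LRU contract using the JDK's
// LinkedHashMap. accessOrder = true makes iteration order run from least-
// to most-recently used, and removeEldestEntry handles eviction.
class LruViaLinkedHashMap {
    private final LinkedHashMap<Integer, Integer> map;

    LruViaLinkedHashMap(int capacity) {
        map = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity; // evict LRU entry once we exceed capacity
            }
        };
    }

    int get(int key) {
        return map.getOrDefault(key, -1); // also refreshes recency on a hit
    }

    void put(int key, int value) {
        map.put(key, value);
    }
}
```

This is a legitimate interview answer in its own right, but interviewers usually want the underlying mechanics, which is what the rest of this article builds.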
2. Approach: The Hybrid Solution
graph LR
subgraph "HashMap (O(1) Access)"
M[Map] --> K1[Key 1]
M --> K2[Key 2]
M --> K3[Key 3]
end
subgraph "Doubly Linked List (O(1) Order)"
Head[Head / MRU] <--> N1[Node 1]
N1 <--> N2[Node 2]
N2 <--> N3[Node 3]
N3 <--> Tail[Tail / LRU]
end
K1 -.-> N1
K2 -.-> N2
K3 -.-> N3
The Failed Options
- HashMap only: $O(1)$ access, but how do you track "recency"? Sorting the map takes $O(N \log N)$.
- List only: Easy to track recency (move to head), but finding a key takes $O(N)$.
The "Aha!" Moment: HashMap + Doubly Linked List
We use the HashMap for $O(1)$ lookups and the Doubly Linked List to maintain the order of usage.
- Head: Most Recently Used (MRU).
- Tail: Least Recently Used (LRU).
- The HashMap stores Key -> Node. This allows us to jump to any node in the list in $O(1)$ and move it to the head.
3. Java Implementation
class LRUCache {
class Node {
int key, val;
Node prev, next;
Node(int k, int v) { key = k; val = v; }
}
private Map<Integer, Node> map = new HashMap<>();
private Node head = new Node(0, 0), tail = new Node(0, 0);
private int capacity;
public LRUCache(int capacity) {
this.capacity = capacity;
head.next = tail;
tail.prev = head;
}
public int get(int key) {
if (map.containsKey(key)) {
Node node = map.get(key);
remove(node);
insert(node); // Move to head
return node.val;
}
return -1;
}
public void put(int key, int value) {
if (map.containsKey(key)) {
remove(map.get(key));
map.remove(key); // drop the stale entry, or the size check below would evict the wrong key
}
if (map.size() == capacity) {
map.remove(tail.prev.key);
remove(tail.prev);
}
insert(new Node(key, value));
}
private void remove(Node node) {
node.prev.next = node.next;
node.next.prev = node.prev;
}
private void insert(Node node) {
map.put(node.key, node);
node.next = head.next;
node.next.prev = node;
head.next = node;
node.prev = head;
}
}
4. 5-Minute "Video-Style" Walkthrough
- The Double-Link Advantage: Why a doubly linked list? Because when we find a node via the HashMap, we need to unlink it from its current position. In a singly linked list, we'd have to walk the list to find the prev node ($O(N)$). In a doubly linked list, node.prev is right there ($O(1)$).
- Sentinel Nodes: Notice the head and tail dummy nodes. They prevent us from having to check for null every time we insert or delete. This is the hallmark of a senior implementation.
- Eviction: Eviction always happens at tail.prev; the most recently used node always sits at head.next.
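The sentinel point is worth seeing in isolation. Below is a minimal sketch (class and method names are illustrative, not from the article's code) of a doubly linked list with head/tail dummies: remove() and addFirst() touch node.prev and node.next unconditionally, because a real node's neighbors are never null, at worst they are the sentinels.

```java
// Minimal sketch: sentinel head/tail nodes eliminate null checks
// in doubly-linked-list insertion and removal.
class SentinelList {
    static class Node {
        int val;
        Node prev, next;
        Node(int v) { val = v; }
    }

    private final Node head = new Node(0); // sentinel, never removed
    private final Node tail = new Node(0); // sentinel, never removed

    SentinelList() {
        head.next = tail;
        tail.prev = head;
    }

    void addFirst(Node node) {      // MRU position: head.next
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    void remove(Node node) {
        node.prev.next = node.next; // node.prev is never null: worst case it's head
        node.next.prev = node.prev; // node.next is never null: worst case it's tail
    }

    int firstVal() { return head.next.val; } // MRU
    int lastVal()  { return tail.prev.val; } // LRU
}
```

Without the sentinels, both methods would need special cases for "node is the head", "node is the tail", and "list is empty", which is exactly where interview implementations tend to break.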
5. Interview Discussion
- Interviewer: "How would you make this thread-safe?"
- You: "I would wrap the methods in synchronized blocks or use a ReentrantLock. A ConcurrentHashMap alone wouldn't be enough, because the linked-list updates still need to be guarded by the same lock."
- Interviewer: "What if the capacity is massive and doesn't fit in RAM?"
- You: "Then we transition to System Design. We would use a distributed cache like Redis, which implements an approximate LRU eviction policy internally."
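The thread-safety answer can be sketched concretely. This is one possible design, not the only one: a single coarse-grained ReentrantLock guarding the whole cache (here wrapped around a LinkedHashMap-based LRU for brevity; the hand-rolled version would be locked the same way). Note that even get() must take the lock, because a read mutates the recency order.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a thread-safe LRU cache: one coarse-grained lock serializes
// all operations. Simple and correct, at the cost of contention under load.
class ThreadSafeLruCache {
    private final LinkedHashMap<Integer, Integer> map;
    private final ReentrantLock lock = new ReentrantLock();

    ThreadSafeLruCache(int capacity) {
        map = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }
        };
    }

    int get(int key) {
        lock.lock();
        try {
            return map.getOrDefault(key, -1); // mutates recency order, so it needs the lock too
        } finally {
            lock.unlock();
        }
    }

    void put(int key, int value) {
        lock.lock();
        try {
            map.put(key, value);
        } finally {
            lock.unlock();
        }
    }
}
```

If the single lock becomes a bottleneck, the usual next step is sharding the cache into N independently locked segments, which is the idea behind segmented concurrent caches such as Guava's.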