
Deep Dive (java.util) - LinkedHashMap

Insertion Order Meets Hashing


"HashMap with a memory — it remembers the order you put things in."

LinkedHashMap extends HashMap and adds a doubly-linked list threaded through all entries. This gives you HashMap's O(1) performance plus predictable iteration order. It's also the foundation for building LRU caches.


What is LinkedHashMap, Really?

LinkedHashMap is a HashMap with an extra linked list connecting all entries in the order they were inserted (or accessed). It extends HashMap directly:

public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V>


The Dual Structure — Hash Table + Linked List

Each entry in LinkedHashMap is a LinkedHashMap.Entry that extends HashMap.Node with two extra pointers: before and after:

static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before; // Previous entry in insertion order
    Entry<K,V> after;  // Next entry in insertion order
}

Two separate structures, same nodes:

  • Hash table → O(1) lookup by key (via bucket chains, just like HashMap)

  • Linked list → predictable iteration order (insertion order by default)
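To see the difference the linked list makes, here's a small contrast sketch: the same keys go into a plain HashMap and a LinkedHashMap, and only the latter guarantees iteration order (the HashMap order shown in the comment is unspecified and may differ on your JVM):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();

        for (String key : new String[]{"Banana", "Apple", "Cherry"}) {
            hash.put(key, key.length());   // position depends on hash codes
            linked.put(key, key.length()); // position is insertion order
        }

        System.out.println(hash.keySet());   // unspecified order
        System.out.println(linked.keySet()); // [Banana, Apple, Cherry] — guaranteed
    }
}
```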


Insertion Order Mode

By default, LinkedHashMap maintains insertion order:

LinkedHashMap<String, Integer> map = new LinkedHashMap<>();
map.put("Banana", 2);
map.put("Apple", 1);
map.put("Cherry", 3);
map.put("Date", 4);

for (Map.Entry<String, Integer> entry : map.entrySet()) {
    System.out.println(entry.getKey() + " = " + entry.getValue());
}
// Output (guaranteed order):
// Banana = 2
// Apple = 1
// Cherry = 3
// Date = 4

// Updating an existing key does NOT change its position
map.put("Apple", 99);
// Order: Banana, Apple, Cherry, Date (Apple stays in place)

Access Order Mode — LRU Behavior

When created with accessOrder = true, the linked list reorders entries on access — every successful get() (and every put() of an existing key) moves the touched entry to the tail (most recently used):

LinkedHashMap<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
//                                                                  ^^^^
//                                                          accessOrder = true

map.put("A", 1);  // Order: A
map.put("B", 2);  // Order: A → B
map.put("C", 3);  // Order: A → B → C

map.get("A");     // Access A → moves to tail
                   // Order: B → C → A

map.get("B");     // Access B → moves to tail
                   // Order: C → A → B

map.put("D", 4);  // Insert D at tail
                   // Order: C → A → B → D

The head is always the LEAST recently used (LRU) entry. This is the foundation of an LRU cache.
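Because the head is the eldest entry, you can peek at the current LRU candidate directly: the first entry returned by the iterator is the head of the linked list. A quick sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruHeadDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("A", 1);
        map.put("B", 2);
        map.put("C", 3);

        map.get("A"); // A moves to tail; order is now B → C → A

        // The iterator starts at the head of the list = least recently used
        String lru = map.entrySet().iterator().next().getKey();
        System.out.println("Least recently used: " + lru); // B
    }
}
```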


Building an LRU Cache

Here's the classic LRU cache pattern using LinkedHashMap:

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LRUCache(int maxSize) {
        // Initial capacity sized so the map never needs to rehash below maxSize
        super(maxSize * 4 / 3 + 1, 0.75f, true); // accessOrder = true
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // Auto-evict when exceeding max size
    }
}

// Usage
LRUCache<String, byte[]> cache = new LRUCache<>(100);
cache.put("page1", loadPage("page1")); // Cached
cache.put("page2", loadPage("page2")); // Cached
// ... add 98 more entries ...
cache.put("page101", loadPage("page101")); // page1 evicted! (least recently used)

cache.get("page2"); // Moves page2 to tail (most recently used)
cache.put("page102", loadPage("page102")); // page3 evicted (page2 was accessed)

removeEldestEntry() — Auto-Eviction Hook

After inserting a new entry (via put() or putAll() — updating an existing key doesn't count), LinkedHashMap calls:

protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    return false; // Default: never auto-remove
}

Override this to implement auto-eviction:

// Evict when size exceeds limit
@Override
protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    return size() > maxSize;
}

// Evict based on age (assumes the value type carries a creation timestamp)
@Override
protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    long age = System.currentTimeMillis() - eldest.getValue().timestamp;
    return age > TimeUnit.HOURS.toMillis(1); // evict entries older than one hour
}
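The age-based variant assumes the values expose a timestamp. One self-contained way to sketch it, using a hypothetical Timestamped wrapper (the record name and helper are illustrative, not from the JDK):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class AgeEvictingCache {
    // Hypothetical wrapper — the age-based example assumes values carry a timestamp
    record Timestamped<V>(V value, long createdAtMillis) {}

    static <K, V> Map<K, Timestamped<V>> newCache(long maxAgeMillis) {
        return new LinkedHashMap<K, Timestamped<V>>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Timestamped<V>> eldest) {
                long age = System.currentTimeMillis() - eldest.getValue().createdAtMillis();
                return age > maxAgeMillis; // evict the eldest entry once it is too old
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Timestamped<String>> cache = newCache(TimeUnit.HOURS.toMillis(1));

        // The map is empty, so this stale entry is the eldest right after insertion
        // and gets evicted by the hook immediately
        cache.put("stale", new Timestamped<>("old",
                System.currentTimeMillis() - TimeUnit.HOURS.toMillis(2)));
        System.out.println(cache.containsKey("stale")); // false

        cache.put("fresh", new Timestamped<>("new", System.currentTimeMillis()));
        System.out.println(cache.size()); // 1 — fresh entry survives
    }
}
```

Note that removeEldestEntry() only ever considers the single eldest entry per insertion, so a burst of stale entries drains one per put.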

Performance Characteristics

Operation          LinkedHashMap   HashMap           Notes
get(key)           O(1)            O(1)              Same hash table lookup
put(key, value)    O(1)            O(1)              LHM adds linked-list maintenance
remove(key)        O(1)            O(1)              LHM also unlinks from the list
Iteration          O(n)            O(n + capacity)   LHM iterates only entries, not empty buckets!
Memory per entry   ~48 bytes       ~32 bytes         +16 bytes for before/after pointers

Key insight: LinkedHashMap iteration is O(n) where n = number of entries. Regular HashMap iteration is O(n + capacity) because it scans the entire bucket array, including empty buckets. For sparse HashMaps (few entries, large capacity), LinkedHashMap iterates significantly faster.


Common Pitfalls

Pitfall 1: ConcurrentModificationException in Access-Order Mode

// ❌ TRAP: Iterating in access-order mode + calling get()
LinkedHashMap<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
map.put("A", 1);
map.put("B", 2);

for (String key : map.keySet()) {
    map.get(key);  // 💥 ConcurrentModificationException!
    // get() modifies the linked list structure in access-order mode!
}
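One safe workaround is to snapshot the keys before touching the map — iterating a copied list detaches the loop from the live linked list, so get() is free to reorder:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;

public class SafeAccessOrderIteration {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("A", 1);
        map.put("B", 2);

        // ✅ Copy the keys first — get() can now reorder the map freely
        for (String key : new ArrayList<>(map.keySet())) {
            System.out.println(key + " = " + map.get(key));
        }
    }
}
```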

Pitfall 2: Forgetting It's Not Thread-Safe

// ❌ NOT thread-safe — just like HashMap
LinkedHashMap<String, Integer> map = new LinkedHashMap<>();

// ✅ Wrap if shared between threads (but consider ConcurrentHashMap instead)
Map<String, Integer> safeMap = Collections.synchronizedMap(new LinkedHashMap<>());

Pitfall 3: Extra Memory Cost

// Each entry costs ~16 bytes more than HashMap
// For 1M entries: ~16 MB extra overhead
// Consider if the ordering is worth the cost

Summary

The one-liner: LinkedHashMap = HashMap + insertion/access-order linked list, perfect for ordered iteration and LRU caches, at the cost of ~16 bytes extra per entry.