
Lock-Free Programming - Java (Part 1)


You've heard terms like lock-free, wait-free, and non-blocking thrown around in job interviews, conference talks, and library documentation. They sound intimidating. But underneath the jargon lies a beautifully simple idea: what if multiple threads could safely share data without anyone ever waiting for a lock?


What Problem Are We Actually Solving?

Modern computers have multiple CPU cores. To make programs faster, we split work across threads. But the moment two threads touch the same piece of memory, things can go wrong.

The Classic Disaster: A Shared Counter

public class UnsafeCounter {
    private int count = 0;

    public void increment() {
        count++;  // This is NOT a single operation!
    }

    public int getCount() {
        return count;
    }
}

When you write count++, the CPU actually does three things:

  1. Read the current value of count from memory

  2. Add 1 to that value

  3. Write the new value back to memory

Now imagine two threads doing this at the same time when count = 0:

Both threads read 0, both compute 1, and both write 1. We just lost an update. This is called a race condition.
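You can reproduce the lost update deterministically by simulating the worst-case interleaving by hand (a sketch; real threads interleave unpredictably, which is exactly why the bug is so hard to catch):

```java
public class LostUpdateDemo {
    // Simulates the worst-case interleaving of two increments by hand:
    // both "threads" read the old value before either writes back.
    static int simulateLostUpdate() {
        int count = 0;
        int threadARead = count;   // Thread A reads 0
        int threadBRead = count;   // Thread B reads 0
        count = threadARead + 1;   // Thread A writes 1
        count = threadBRead + 1;   // Thread B also writes 1 -- A's update is lost
        return count;
    }

    public static void main(String[] args) {
        System.out.println(simulateLostUpdate()); // prints 1, not the expected 2
    }
}
```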


The Traditional Fix: Locks

The classic solution? Put a fence around the shared data. Only one thread can enter at a time.

public class LockedCounter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() {
        synchronized (lock) {
            return count;
        }
    }
}

This works! Thread safety achieved. But locks come with baggage.
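You can verify the fix by hammering the counter from several threads: with the lock, every increment survives. A small harness (the LockedCounter from above, inlined so the sketch compiles on its own):

```java
public class LockedCounterDemo {
    // Same LockedCounter as above, inlined for self-containment.
    static class LockedCounter {
        private int count = 0;
        private final Object lock = new Object();
        void increment() { synchronized (lock) { count++; } }
        int getCount() { synchronized (lock) { return count; } }
    }

    // Runs `threads` threads, each incrementing `perThread` times,
    // and returns the final count.
    static int run(int threads, int perThread) {
        LockedCounter counter = new LockedCounter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return counter.getCount();
    }

    public static void main(String[] args) {
        System.out.println(run(4, 100_000)); // prints 400000 -- nothing lost
    }
}
```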

The Problems with Locks

Locks bring deadlock, priority inversion, and contention overhead. Let's look at the scariest one, deadlock: thread 1 holds lock A and waits for lock B, while thread 2 holds lock B and waits for lock A. Both wait forever.

When your server freezes at 3 AM, deadlocks are usually near the top of the suspect list.
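The classic deadlock shape is two threads acquiring two locks in opposite orders. The sketch below (hypothetical lockA/lockB names) shows the pattern in a comment, then applies the standard fix — a single global lock order — so the demo actually terminates:

```java
public class LockOrderingDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    // Deadlock-prone version (do NOT run both directions concurrently):
    //   Thread 1: synchronized (lockA) { synchronized (lockB) { ... } }
    //   Thread 2: synchronized (lockB) { synchronized (lockA) { ... } }
    // Each thread can end up holding one lock, waiting forever for the other.

    // The fix: every thread takes the locks in the same order (A, then B).
    static boolean runWithConsistentOrder() {
        Runnable task = () -> {
            synchronized (lockA) {
                synchronized (lockB) {
                    // critical section touching both resources
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return true; // both threads finished -- no deadlock
    }

    public static void main(String[] args) {
        System.out.println(runWithConsistentOrder()); // prints true
    }
}
```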


Enter Lock-Free Programming

Lock-free programming is an alternative approach to thread safety in which no thread ever blocks waiting for another thread. Instead of taking turns, threads proceed optimistically and then check whether anything went wrong.

The Core Idea: Optimistic Concurrency

Think of it like a polite game at a buffet:

  1. Look at what you want to do

  2. Prepare your action

  3. Try to do it atomically

  4. If someone else changed things while you were preparing, start over

No waiting. No locks. Just retrying.

This "try and retry" loop is the heartbeat of lock-free programming. It relies on a special hardware instruction called Compare-And-Swap (CAS), which we'll explore in Article 3.
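You don't need special hardware access to try a single CAS today — Java exposes it as compareAndSet on the atomic classes. A quick single-threaded taste of its succeed-or-fail behavior:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasBasics {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);

        // Expected 0, found 0 -> swap succeeds, value becomes 1.
        boolean first = value.compareAndSet(0, 1);

        // Expected 0, but value is now 1 -> swap fails, value unchanged.
        boolean second = value.compareAndSet(0, 5);

        System.out.println(first + " " + second + " " + value.get()); // true false 1
    }
}
```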


The Progress Guarantee Spectrum

Not all non-blocking algorithms are created equal. Computer scientists define a hierarchy of progress guarantees:

| Guarantee | What it Means | Real-World Analogy |
| --- | --- | --- |
| Wait-Free | Every thread finishes in a bounded number of steps, guaranteed | A highway with dedicated lanes — nobody ever slows down |
| Lock-Free | The system as a whole always makes progress; individual threads might retry | A roundabout — you might circle a few times, but traffic always flows |
| Obstruction-Free | Progress guaranteed if only one thread runs | A single-lane road — works great alone, jams with traffic |
| Lock-Based | A sleeping thread holding a lock can block everyone | A drawbridge — one stuck boat stops all traffic |

Most practical lock-free algorithms (and everything in this series) fall in the lock-free category: the system always makes forward progress, but an individual thread might need to retry its operation.


When Should You Care About Lock-Free?

Lock-free programming is NOT always better than locks. Here's a decision framework:

Good Use Cases for Lock-Free

| Scenario | Why Lock-Free Helps |
| --- | --- |
| High-frequency trading | Every microsecond counts; lock contention is unacceptable |
| Game engines | Thousands of objects are updating in parallel, and frame deadlines are strict |
| Logging frameworks | Loggers must never slow down the main application |
| Metrics/counters | Millions of increments per second across many threads |
| Message passing queues | Producers and consumers should never block each other |

When Locks Are Just Fine

  • Low contention (few threads, infrequent access)

  • Complex operations that span many steps (lock-free handles only simple atomic changes)

  • When correctness and readability matter more than raw throughput

  • When you're using java.util.concurrent — those classes already use lock-free techniques internally!
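For instance, LongAdder in java.util.concurrent.atomic is a counter built for exactly the "millions of increments across many threads" case. A small sketch:

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    // Increments a LongAdder from several threads; sum() returns the total.
    static long run(int threads, int perThread) {
        LongAdder total = new LongAdder();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) total.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return total.sum();
    }

    public static void main(String[] args) {
        System.out.println(run(8, 50_000)); // prints 400000
    }
}
```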


A Sneak Peek: Your First Lock-Free Code

Here's a taste of what lock-free code looks like in Java. Don't worry about understanding every detail — we'll break it all down in the coming articles.

import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        int current;
        int next;
        do {
            current = count.get();          // Step 1: Read
            next = current + 1;              // Step 2: Compute
        } while (!count.compareAndSet(current, next));  // Step 3: Try to swap
        // If compareAndSet fails (someone else changed it), loop and retry
    }

    public int getCount() {
        return count.get();
    }
}

This tiny loop is doing something remarkable:

  • No locks. No synchronized. No ReentrantLock.

  • No blocking. No thread ever waits for another thread.

  • Thread-safe. The compareAndSet call ensures only one thread's update "wins" per round, and losers retry.

Here's what compareAndSet (CAS) looks like in action: both threads read 0 and try to swap in 1. One CAS succeeds; the other fails, rereads the new value 1, computes 2, and succeeds on the retry.

No lost updates. No locks. Just atoms and retries.
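A quick harness backs this up — hammer the counter from several threads and no increment is lost (the LockFreeCounter from above, inlined so the sketch compiles on its own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounterDemo {
    // Same LockFreeCounter as above, inlined for self-containment.
    static class LockFreeCounter {
        private final AtomicInteger count = new AtomicInteger(0);
        void increment() {
            int current;
            int next;
            do {
                current = count.get();
                next = current + 1;
            } while (!count.compareAndSet(current, next));
        }
        int getCount() { return count.get(); }
    }

    // Runs `threads` threads, each incrementing `perThread` times.
    static int run(int threads, int perThread) {
        LockFreeCounter counter = new LockFreeCounter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return counter.getCount();
    }

    public static void main(String[] args) {
        System.out.println(run(4, 100_000)); // prints 400000 -- no locks, no lost updates
    }
}
```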


The Lock-Free Mental Model

Here's the key perspective shift for lock-free programming:

Lock-based: "I'm going to lock this door, do my work, and unlock it. Everyone else: wait."

Lock-free: "I'm going to try to update this. If nobody else touched it, great! If they did, I'll just try again with the new value."


Key Takeaways

  1. Race conditions occur when multiple threads read, modify, and write shared data without coordination.

  2. Locks solve race conditions but introduce deadlocks, priority inversion, and contention overhead.

  3. Lock-free algorithms use atomic operations (such as CAS) to coordinate threads without locks.

  4. Lock-free ≠ always faster — it shines in high-contention scenarios with simple operations.

  5. The core pattern is: read → compute → CAS → retry if failed.
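That core pattern generalizes beyond counters. As a parting sketch, here is the same read → compute → CAS → retry loop tracking a running maximum (AtomicMax is a hypothetical name for this illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicMax {
    private final AtomicInteger max = new AtomicInteger(Integer.MIN_VALUE);

    // Same pattern as the counter: read, compute, CAS, retry on failure.
    public void record(int sample) {
        int current;
        do {
            current = max.get();                 // read
            if (sample <= current) return;       // compute: nothing to update
        } while (!max.compareAndSet(current, sample)); // CAS; retry if we lost the race
    }

    public int get() { return max.get(); }

    public static void main(String[] args) {
        AtomicMax m = new AtomicMax();
        m.record(3);
        m.record(7);
        m.record(5);
        System.out.println(m.get()); // prints 7
    }
}
```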