
Java Multithreading Interview Questions

Test your Java concurrency knowledge with 20 interview questions on threads, synchronization, ExecutorService, CompletableFuture, locks, volatile, atomic classes, and thread safety patterns.

20 Questions
30 min
Mixed Difficulty
Start Quiz

Topics Covered

Thread Basics · Synchronization · Wait/Notify · Thread Safety · Volatile · Atomic Classes · ExecutorService · CompletableFuture · Locks · ThreadLocal · Deadlock · ForkJoinPool

Difficulty Breakdown

6
Junior
8
Mid-Level
6
Senior

What to Expect

  • Multiple choice questions with 4 options each
  • Instant score and topic-by-topic breakdown
  • Detailed explanations for every question
  • Personalized course recommendations based on your weak areas

Java Multithreading Interview Questions and Answers

Below are all 20 questions covered in this quiz, grouped by topic. Each question includes the correct answer and a detailed explanation to help you prepare for your next interview.

Thread Basics (2 questions)

Q

What is the difference between extending Thread and implementing Runnable in Java?

A

Implementing Runnable is preferred because Java supports single inheritance, so extending Thread prevents extending another class

Implementing Runnable is preferred because Java only supports single inheritance. If you extend Thread, you cannot extend any other class. Runnable also promotes better separation of concerns by decoupling the task from the thread mechanism. A Runnable can be passed to a Thread constructor or submitted to an ExecutorService, making it more flexible.
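The flexibility point can be shown in a minimal sketch (class and method names here are illustrative): the same Runnable is handed to a raw Thread and then reused with an ExecutorService.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RunnableDemo {
    // Runs the same Runnable twice: once on a dedicated Thread, once via a pool.
    public static int runTwice() throws InterruptedException {
        AtomicInteger runs = new AtomicInteger();
        Runnable task = runs::incrementAndGet;   // the task, decoupled from any thread

        Thread t = new Thread(task);             // option 1: dedicated thread
        t.start();
        t.join();

        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(task);                       // option 2: the exact same task in a pool
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task ran " + runTwice() + " times"); // task ran 2 times
    }
}
```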

Q

What is the output of the following code?

Thread t = new Thread(() -> System.out.print("A"));
t.start();
t.start();
A

A, then an IllegalThreadStateException is thrown

A thread in Java can only be started once. The first call to start() transitions the thread from NEW to RUNNABLE and prints "A". The second call to start() throws an IllegalThreadStateException because the thread is no longer in the NEW state. To run the same task again, you must create a new Thread instance.

Synchronization (1 question)

Q

What does the synchronized keyword do when applied to an instance method?

A

It acquires the intrinsic lock (monitor) of the current object (this), preventing other threads from entering any synchronized method on the same instance

When synchronized is applied to an instance method, the thread must acquire the intrinsic lock of the object (this) before executing the method. Other threads trying to enter any synchronized method on the same object instance will block until the lock is released. A synchronized static method, by contrast, locks the Class object.
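A minimal sketch of a counter whose synchronized instance methods serialize access through the monitor of `this` (the class name is illustrative):

```java
public class SyncCounter {
    private int count = 0;

    // Acquires the monitor of `this`; only one thread at a time may be
    // inside ANY synchronized instance method of this object.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable work = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c.get()); // always 20000 thanks to mutual exclusion
    }
}
```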

Wait/Notify (1 question)

Q

What is the difference between wait() and sleep() in Java?

A

wait() releases the monitor lock and must be called inside a synchronized block; sleep() holds the lock and can be called anywhere

wait() is defined in Object and must be called within a synchronized block. It releases the intrinsic lock so other threads can acquire it, and the thread remains waiting until notify()/notifyAll() is called. sleep() is defined in Thread, pauses execution for a specified time, but does NOT release any locks the thread holds.
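The wait/notify contract can be sketched as a tiny one-shot latch (an illustrative class, not a standard API): `wait()` is called while holding the monitor, releases it while waiting, and the loop guards against spurious wakeups.

```java
public class OneShotLatch {
    private boolean released = false;

    // Must hold the monitor to call wait(); wait() releases it while waiting.
    public synchronized void await() throws InterruptedException {
        while (!released) {          // loop guards against spurious wakeups
            wait();
        }
    }

    public synchronized void release() {
        released = true;
        notifyAll();                 // wake every waiting thread
    }

    public static void main(String[] args) throws InterruptedException {
        OneShotLatch latch = new OneShotLatch();
        Thread waiter = new Thread(() -> {
            try { latch.await(); } catch (InterruptedException ignored) {}
            System.out.println("released");
        });
        waiter.start();
        latch.release();
        waiter.join(); // prints "released"
    }
}
```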

Thread Safety (2 questions)

Q

What is a race condition?

A

A bug where the program outcome depends on the unpredictable timing of thread execution, leading to incorrect results when multiple threads access shared data without proper synchronization

A race condition occurs when two or more threads access shared mutable state concurrently and at least one thread modifies it without proper synchronization. The result depends on the relative timing of the threads, which is non-deterministic. Classic example: two threads incrementing a shared counter can lose updates because read-modify-write is not atomic.
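The lost-update example above can be demonstrated side by side (a sketch; the class name is illustrative): an unsynchronized `int` races while an AtomicInteger does not. The unsafe count is nondeterministic, so no exact value is claimed for it.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int plain = 0;                           // unsynchronized shared state
    static AtomicInteger safe = new AtomicInteger();

    public static void run() throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                            // read-modify-write: NOT atomic
                safe.incrementAndGet();             // atomic CAS increment
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run();
        // safe is always 200000; plain is typically LESS because updates were lost
        System.out.println("plain=" + plain + " safe=" + safe.get());
    }
}
```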

Q

What is the purpose of ConcurrentHashMap's compute() method, and why is it better than using a regular HashMap with external synchronization?

concurrentMap.compute(key, (k, v) -> v == null ? 1 : v + 1);
A

compute() atomically applies the remapping function for the given key, avoiding the race condition of a separate get-then-put sequence even without external synchronization

ConcurrentHashMap.compute() atomically executes the remapping function for a single key. With a regular HashMap, the pattern get-check-put would require external synchronization to prevent race conditions. ConcurrentHashMap uses fine-grained locking (per-bucket in Java 8+) so that compute() on one key doesn't block operations on other keys. The entire read-modify-write cycle happens atomically for that key, making it both safe and efficient for concurrent counters, accumulators, and merging operations.
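A concurrent-counter sketch using the compute() call from the snippet above (class name and pool size are illustrative): many threads increment the same key without any external lock, and no update is lost.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ComputeDemo {
    public static int countConcurrently() throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1_000; i++) {
            // Atomic per-key read-modify-write: no lost updates, no external lock
            pool.submit(() -> counts.compute("hits", (k, v) -> v == null ? 1 : v + 1));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return counts.get("hits");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countConcurrently()); // always 1000
    }
}
```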

Volatile (2 questions)

Q

What does the volatile keyword guarantee, and what does it NOT guarantee?

A

It guarantees visibility of changes across threads but does NOT guarantee atomicity of compound operations like i++

The volatile keyword establishes a happens-before relationship: any write to a volatile variable is immediately visible to all threads reading that variable. However, it does NOT make compound operations atomic. For example, volatile int count; count++ is still not thread-safe because it involves a read, increment, and write as separate steps. Use AtomicInteger or synchronization for compound operations.

Q

What problem does the following code have, and how would you fix it?

private boolean running = true;

// Thread 1:
public void stop() { running = false; }

// Thread 2:
public void run() {
    while (running) {
        // do work
    }
}
A

Thread 2 may never see the update to running because without volatile or synchronization, the JVM/CPU can cache the variable in a register or reorder reads, causing an infinite loop

Without the volatile keyword on the running field, the Java Memory Model does not guarantee that Thread 2 will ever see the write made by Thread 1. The JIT compiler may hoist the read of running outside the loop (loop-invariant code motion), or the CPU may serve the read from a local cache. The fix is to declare the field as volatile boolean running = true; which establishes a happens-before relationship, ensuring that Thread 2 sees the updated value. Alternatively, both methods could be synchronized, or an AtomicBoolean could be used.
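The corrected version of the snippet, sketched as a runnable class (the class name is illustrative), marks the flag volatile so the stop signal is guaranteed to become visible:

```java
public class Worker implements Runnable {
    // volatile: every read sees the most recent write by any thread
    private volatile boolean running = true;

    public void stop() { running = false; }

    @Override
    public void run() {
        while (running) {
            // do work; the volatile read cannot be hoisted out of the loop
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        Thread t = new Thread(w);
        t.start();
        w.stop();          // without volatile, t could spin forever
        t.join();          // terminates promptly because the write is visible
        System.out.println("stopped cleanly");
    }
}
```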

Atomic Classes (1 question)

Q

What is the output of the following code?

AtomicInteger counter = new AtomicInteger(0);
ExecutorService executor = Executors.newFixedThreadPool(3);
for (int i = 0; i < 1000; i++) {
    executor.submit(() -> counter.incrementAndGet());
}
executor.shutdown();
executor.awaitTermination(5, TimeUnit.SECONDS);
System.out.println(counter.get());
A

Exactly 1000

AtomicInteger.incrementAndGet() is an atomic operation implemented using CAS (Compare-And-Swap) at the hardware level. Even though 3 threads are executing concurrently, every increment is guaranteed to be atomic and no updates are lost. The final result is always exactly 1000. If a plain int were used instead, the result would likely be less than 1000 due to race conditions.

ExecutorService (3 questions)

Q

What is the primary advantage of using ExecutorService over manually creating Thread objects?

A

ExecutorService manages a pool of reusable threads, reducing the overhead of thread creation/destruction and providing a higher-level API for task submission, lifecycle management, and result retrieval via Future

Creating a new OS thread is expensive (typically ~1MB stack allocation plus kernel-level operations). ExecutorService maintains a pool of pre-created threads that are reused across tasks. It also provides a clean API for submitting tasks (submit/invokeAll), retrieving results (Future), and managing the lifecycle (shutdown/awaitTermination).

Q

What is the difference between Executors.newFixedThreadPool() and Executors.newCachedThreadPool()?

A

newFixedThreadPool maintains a fixed number of threads; newCachedThreadPool creates new threads as needed and reuses idle threads, terminating them after 60 seconds of inactivity

newFixedThreadPool(n) creates a pool with exactly n threads. If all threads are busy, new tasks queue up. newCachedThreadPool creates threads on demand with no upper bound. Idle threads are kept for 60 seconds before being terminated. CachedThreadPool is suitable for many short-lived tasks, but can be dangerous under heavy load since it may create an unbounded number of threads, potentially exhausting system resources.

Q

What happens if you call Future.get() and the task has not yet completed?

A

It blocks the calling thread until the result is available, or throws ExecutionException if the task threw an exception

Future.get() is a blocking call. If the task is still running, the calling thread will block until the task completes, is cancelled, or throws an exception. If the task threw an exception, get() wraps it in an ExecutionException. To avoid indefinite blocking, use the overloaded get(long timeout, TimeUnit unit) which throws TimeoutException if the result is not available within the specified time. This blocking nature is why CompletableFuture (with its callback-based API) is often preferred.
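The bounded-wait variant can be sketched like this (class and task names are illustrative): the timed get() caps how long the caller will block on a slow task.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureTimeoutDemo {
    public static String fetch() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> {
            Thread.sleep(100);                  // simulate slow work
            return "done";
        });
        try {
            // Bounded wait: throws TimeoutException instead of blocking forever
            return future.get(2, TimeUnit.SECONDS);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch()); // prints "done"
    }
}
```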

CompletableFuture (2 questions)

Q

What does CompletableFuture.supplyAsync() do?

CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
    return fetchDataFromApi();
});
A

It runs the supplier asynchronously on the common ForkJoinPool (by default) and returns a CompletableFuture that will hold the result when complete

supplyAsync() executes the given Supplier asynchronously. By default it uses the common ForkJoinPool, but you can pass a custom Executor as a second argument. It immediately returns a CompletableFuture that will be completed with the supplier's result. You can chain further stages using thenApply(), thenCompose(), thenAccept(), etc., enabling a non-blocking, declarative async pipeline.

Q

What is the difference between thenApply() and thenCompose() in CompletableFuture?

A

thenApply() transforms the result with a Function<T,R>; thenCompose() chains a Function that returns a CompletableFuture<R>, flattening the nested future (like flatMap)

thenApply(Function<T,R>) transforms the completed value and returns CompletableFuture<R>. thenCompose(Function<T,CompletableFuture<R>>) is used when the transformation itself is asynchronous — it flattens the result to avoid CompletableFuture<CompletableFuture<R>>. This is analogous to map() vs flatMap() in streams. Use thenApply for synchronous mapping and thenCompose for chaining async operations.
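The map-vs-flatMap analogy can be sketched as follows (the `priceOf` lookup is a hypothetical async operation invented for illustration):

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    // Hypothetical async lookup that itself returns a future
    static CompletableFuture<Integer> priceOf(String item) {
        return CompletableFuture.supplyAsync(() -> item.length() * 10);
    }

    public static void main(String[] args) {
        // thenApply: synchronous transformation of the completed value
        CompletableFuture<Integer> doubled =
            CompletableFuture.supplyAsync(() -> 21).thenApply(n -> n * 2);
        System.out.println(doubled.join()); // 42

        // thenCompose: the next step is itself async, so the nested
        // CompletableFuture<CompletableFuture<Integer>> is flattened
        CompletableFuture<Integer> price =
            CompletableFuture.supplyAsync(() -> "book").thenCompose(ComposeDemo::priceOf);
        System.out.println(price.join()); // 40
    }
}
```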

Locks (2 questions)

Q

What is ReentrantLock and how does it differ from the synchronized keyword?

A

ReentrantLock provides the same mutual exclusion as synchronized but adds features like tryLock(), timed locking, interruptible locking, and the ability to create multiple Condition objects

ReentrantLock (from java.util.concurrent.locks) offers the same mutual exclusion as synchronized but with greater flexibility. Key advantages: tryLock() for non-blocking lock attempts, tryLock(timeout) for timed waits, lockInterruptibly() for interruptible locking, and the ability to create multiple Condition objects for fine-grained wait/signal patterns. The lock must be explicitly released in a finally block to avoid leaks. Both synchronized and ReentrantLock are reentrant.
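A sketch of the tryLock-with-timeout and unlock-in-finally idioms mentioned above (class and method names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 0;

    // Timed acquisition: give up after 1 second instead of blocking forever
    public boolean depositIfFree(int amount) throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                balance += amount;
                return true;
            } finally {
                lock.unlock();       // always release in finally to avoid leaks
            }
        }
        return false;                // lock was busy; caller can retry or back off
    }

    public int getBalance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```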

Q

What happens when the following code runs?

ReadWriteLock rwLock = new ReentrantReadWriteLock();
// Thread 1:
rwLock.readLock().lock();
// Thread 2:
rwLock.readLock().lock();
// Thread 3:
rwLock.writeLock().lock();

// Assume Thread 1 and 2 start before Thread 3.
A

Thread 1 and Thread 2 acquire the read lock concurrently; Thread 3 blocks until both read locks are released

ReadWriteLock allows multiple concurrent readers but exclusive access for writers. Thread 1 and Thread 2 can both hold the read lock simultaneously since read operations don't conflict with each other. Thread 3, requesting the write lock, must wait until all read locks are released. This pattern is ideal for data structures that are read frequently but written to rarely, like caches or configuration stores.

ThreadLocal (1 question)

Q

What is a ThreadLocal variable and when would you use it?

A

A variable that provides each thread with its own independent copy, so each thread reads and writes to its own instance without synchronization

ThreadLocal<T> gives each thread its own isolated copy of a variable. Common use cases include per-thread database connections, SimpleDateFormat instances (which are not thread-safe), per-request user context in web servers, and transaction management. Be careful to remove() ThreadLocal values when a thread is returned to a pool; otherwise, stale data from a previous task can leak into the next task.
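A sketch of the per-thread-copy and remove() pattern (the class and helper are illustrative): each thread gets its own StringBuilder, and the value is removed after use so nothing leaks into the next pooled task.

```java
public class ThreadLocalDemo {
    // Each thread gets its own StringBuilder; no synchronization needed
    private static final ThreadLocal<StringBuilder> BUF =
        ThreadLocal.withInitial(StringBuilder::new);

    public static String tag(String name) {
        StringBuilder sb = BUF.get();      // this thread's private copy
        sb.setLength(0);
        sb.append(Thread.currentThread().getName()).append(':').append(name);
        String result = sb.toString();
        BUF.remove();                      // clean up: vital when threads are pooled
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> System.out.println(tag("task")), "worker-1");
        t.start();
        t.join(); // prints "worker-1:task"
    }
}
```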

Deadlock (2 questions)

Q

Which of the following correctly prevents a deadlock between two locks?

// Thread 1:
synchronized(lockA) {
    synchronized(lockB) { /* work */ }
}

// Thread 2:
synchronized(lockB) {
    synchronized(lockA) { /* work */ }
}
A

Always acquire locks in the same global order — both threads should lock A first, then B

The deadlock occurs because Thread 1 holds lockA and waits for lockB, while Thread 2 holds lockB and waits for lockA — a circular wait. The standard solution is lock ordering: establish a consistent global order (e.g., always acquire lockA before lockB). Other strategies include using tryLock() with a timeout (so a thread gives up rather than waiting forever) or using a single coarser lock, though that reduces concurrency.
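The lock-ordering fix can be sketched by giving both threads the same acquisition order (class and field names are illustrative); the circular wait is now impossible:

```java
public class LockOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    private static int work = 0;

    // Every thread acquires lockA first, then lockB: no circular wait can form
    static void doWork() {
        synchronized (lockA) {
            synchronized (lockB) {
                work++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::doWork);
        Thread t2 = new Thread(LockOrdering::doWork);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(work); // always 2; both threads finish without deadlock
    }
}
```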

Q

What is the difference between a deadlock and a livelock?

A

In a deadlock, threads are blocked waiting for each other. In a livelock, threads are active but keep changing state in response to each other without making progress

In a deadlock, threads are permanently blocked — none can proceed. In a livelock, threads are not blocked but continuously react to each other in a way that prevents progress (like two people stepping side-to-side trying to pass each other in a hallway). A livelock can be resolved by introducing randomness or backoff delays. Starvation is a related concept where a thread is repeatedly denied access to a resource because other threads keep acquiring it first.

ForkJoinPool (1 question)

Q

What is the purpose of ForkJoinPool and how does work-stealing work?

A

ForkJoinPool is designed for recursive divide-and-conquer tasks. Each thread has a local deque; idle threads steal tasks from the tail of busy threads' deques

ForkJoinPool is optimized for tasks that can be recursively split into smaller subtasks (fork) and whose results are combined (join). Each worker thread maintains a double-ended queue (deque). A thread pushes/pops its own tasks from the head of its deque, while idle threads steal work from the tail of other threads' deques. This work-stealing algorithm ensures high CPU utilization. The common ForkJoinPool is also used by parallel streams and CompletableFuture by default.
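A classic divide-and-conquer sketch with RecursiveTask (the class name and the 1,000-element threshold are illustrative choices): the range is split until small enough, forked subtasks land on the worker's deque, and join() combines the halves.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long lo, hi;                  // sums the half-open range [lo, hi)
    SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= 1_000) {                 // small enough: compute directly
            long sum = 0;
            for (long i = lo; i < hi; i++) sum += i;
            return sum;
        }
        long mid = (lo + hi) / 2;
        SumTask left = new SumTask(lo, mid);
        left.fork();                            // push subtask onto this worker's deque
        long right = new SumTask(mid, hi).compute(); // work on the other half ourselves
        return right + left.join();             // combine the two partial sums
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(0, 100_000));
        System.out.println(sum); // 4999950000
    }
}
```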

Ready to test yourself?

Take the interactive quiz and get your score with a personalized topic breakdown.

Start the Quiz

Your Career Transformation Starts Now

Join thousands of developers mastering in-demand skills with Amigoscode. Try it free today.