- What is multithreading in Java?
Multithreading in Java is a feature that allows for the concurrent execution of two or more parts of a program, known as threads, to maximize the utilization of the CPU. Each thread can run in parallel with others, sharing the same memory space.
Key Concepts
Thread: A thread is a lightweight sub-process, the smallest unit of processing. Multiple threads can exist within a single process.
Concurrency vs. Parallelism:
Concurrency: Multiple threads make progress in an interleaved manner on a single CPU core. The system switches between threads, giving the illusion of simultaneous execution.
Parallelism: Multiple threads run simultaneously on different CPU cores. This is true parallel execution.
Main Purpose:
Performance: To perform CPU-intensive tasks in parallel, speeding up execution time on multi-core processors.
Responsiveness: To keep an application responsive. For example, a long-running task (like a network request or file download) can run in a background thread without freezing the user interface.
- How to Create Threads in Java
There are two primary ways to create a thread:
Implementing the Runnable Interface (Preferred Method):
Create a class that implements the java.lang.Runnable interface.
Implement the run() method, which contains the code to be executed by the thread.
Create an instance of the Thread class, passing your Runnable object to its constructor, and call the start() method.
class MyRunnable implements Runnable {
public void run() {
System.out.println("Thread is running by implementing Runnable.");
}
}
// To start the thread:
MyRunnable myRunnable = new MyRunnable();
Thread thread = new Thread(myRunnable);
thread.start();
Extending the Thread Class:
Create a class that extends java.lang.Thread, override the run() method, then create an instance of your class and call start() on it.
class MyThread extends Thread {
public void run() {
System.out.println("Thread is running by extending Thread.");
}
}
// To start the thread:
MyThread thread = new MyThread();
thread.start();
- What are the different states of a thread?
In Java, a thread can be in one of the following states, which are defined in the java.lang.Thread.State enum:
NEW
A thread that has been created but has not yet started. It remains in this state until the start() method is invoked on it.
RUNNABLE
A thread that is executing in the Java Virtual Machine (JVM). A thread in this state is either currently running or is ready to run and waiting for its turn to be selected by the thread scheduler.
BLOCKED
A thread that is waiting to acquire a monitor lock to enter a synchronized block or method. It is blocked because another thread currently holds the lock.
WAITING
A thread that is waiting indefinitely for another thread to perform a particular action. A thread enters this state by calling one of the following methods:
Object.wait() (with no timeout)
Thread.join() (with no timeout)
LockSupport.park()
TIMED_WAITING
A thread that is waiting for a specified period of time. A thread enters this state by calling one of these methods with a timeout value:
Thread.sleep(long millis)
Object.wait(long timeout)
Thread.join(long millis)
LockSupport.parkNanos(long nanos)
LockSupport.parkUntil(long deadline)
TERMINATED
A thread that has completed its execution. This happens when its run() method has finished, either by completing normally or by throwing an unhandled exception.
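A minimal sketch (class name is illustrative) that observes several of these states via Thread.getState():
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // TIMED_WAITING while sleeping
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println(t.getState()); // NEW (created, not yet started)
        t.start();
        Thread.sleep(100);                // give the thread time to reach sleep()
        System.out.println(t.getState()); // TIMED_WAITING
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}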
- Explain Synchronization
Synchronization in Java is a mechanism to control access to shared resources by multiple threads to prevent race conditions and ensure thread safety.
Key Concepts
The Problem It Solves
Race Condition: When multiple threads access and modify shared data simultaneously, leading to unpredictable results
Memory Consistency Errors: When different threads have inconsistent views of the same data.
How It Works
Every object in Java has an intrinsic lock (monitor). When a thread acquires this lock, other threads must wait until it is released.
Two Main Approaches
Synchronized method:
public synchronized void updateData() {
// Only one thread can execute this at a time
}
Synchronized block:
public void updateData() {
synchronized(this) {
// Only this section is synchronized
}
}
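To make the race condition concrete, here is a minimal sketch (class name is illustrative) of a counter that stays correct under two threads because increment() and get() synchronize on the same instance:
public class SynchronizedCounter {
    private int count = 0;
    public synchronized void increment() { count++; } // one thread at a time per instance
    public synchronized int get() { return count; }
    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000; without synchronized it can be less
    }
}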
- What is a Deadlock?
A deadlock is a concurrency failure where two or more threads block forever because each is waiting for a resource (lock) held by another, creating a circular wait.
Example (Java): two threads acquire the same two locks in opposite order, so neither can proceed.
public class DeadlockExample {
private static final Object LOCK_A = new Object();
private static final Object LOCK_B = new Object();
public static void main(String[] args) {
Thread t1 = new Thread(() -> {
synchronized (LOCK_A) {
sleep(100);
synchronized (LOCK_B) {
System.out.println("t1 acquired both locks");
}
}
});
Thread t2 = new Thread(() -> {
synchronized (LOCK_B) {
sleep(100);
synchronized (LOCK_A) {
System.out.println("t2 acquired both locks");
}
}
});
t1.start();
t2.start();
}
private static void sleep(long ms) {
try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
}
}
- **Lock Use Cases in Java**
i. Method-Level Lock (synchronized method)
Use Case: When the entire method must execute as one mutually exclusive unit (no other thread can run it on the same lock concurrently).
public class DatabaseConnection {
private static DatabaseConnection instance;
public static synchronized DatabaseConnection getInstance() {
if (instance == null) {
instance = new DatabaseConnection();
}
return instance;
}
}
ii. Class-Level Lock:
Use Case: When all instances of a class share the same resource (static fields).
public class OrderIdGenerator {
private static long currentId = 0;
public static synchronized long getNextOrderId() {
return ++currentId;
}
}
iii. Object-Level Lock
Use Case: When multiple instances must guard the same shared resource by synchronizing on a common lock object (here, a lock passed to both accounts).
public class Account {
private double balance;
private final Object sharedLock;
public Account(Object lock) {
this.sharedLock = lock;
}
public void transfer(Account target, double amount) {
synchronized(sharedLock) { // Both accounts use same lock
this.balance -= amount;
target.balance += amount;
}
}
}
- **wait(), notify() and notifyAll()**
wait() / notify() / notifyAll() in Java (with example)
These are Object monitor methods used for thread coordination with synchronized.
wait() (+ optional timeout): releases the monitor lock and parks the current thread until it’s notified.
notify(): wakes one arbitrary thread waiting on the same monitor.
notifyAll(): wakes all threads waiting on the same monitor (they will re-contend for the lock).
Key rules
Must be called inside a synchronized(lock) block (otherwise IllegalMonitorStateException).
Always use wait() in a while loop (spurious wakeups + condition may no longer hold).
Prefer notifyAll() when multiple different conditions might be waited on, to avoid “wrong thread woke up” stalls.
public class WaitNotifyExample {
static class BoundedBuffer {
private final Object lock = new Object();
private final int capacity;
private int count = 0;
BoundedBuffer(int capacity) {
this.capacity = capacity;
}
public void put() throws InterruptedException {
synchronized (lock) {
while (count == capacity) {
lock.wait(); // releases lock; waits until someone notifies
}
count++;
System.out.println(Thread.currentThread().getName() + " put -> count=" + count);
lock.notifyAll(); // wake consumers (and possibly producers)
}
}
public void take() throws InterruptedException {
synchronized (lock) {
while (count == 0) {
lock.wait(); // releases lock; waits until someone notifies
}
count--;
System.out.println(Thread.currentThread().getName() + " took -> count=" + count);
lock.notifyAll(); // wake producers (and possibly consumers)
}
}
}
public static void main(String[] args) {
BoundedBuffer buffer = new BoundedBuffer(2);
Runnable producer = () -> {
try {
for (int i = 0; i < 5; i++) buffer.put();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
};
Runnable consumer = () -> {
try {
for (int i = 0; i < 5; i++) buffer.take();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
};
new Thread(producer, "Producer-1").start();
new Thread(producer, "Producer-2").start();
new Thread(consumer, "Consumer-1").start();
new Thread(consumer, "Consumer-2").start();
}
}
- Atomic Classes
Atomic classes (in java.util.concurrent.atomic) provide lock-free, thread-safe operations on single variables using CAS (compare-and-swap). They give atomicity + visibility without synchronized for many common cases.
Common atomic types
AtomicInteger, AtomicLong, AtomicBoolean, AtomicReference
AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray
AtomicStampedReference, AtomicMarkableReference (avoid ABA issues)
LongAdder, DoubleAdder, LongAccumulator (best under high contention counters)
- Many threads updating a counter: LongAdder
- Need exact atomic increment/sequence: AtomicLong / AtomicInteger
- Atomic swap / init of an object: AtomicReference
- Ensure an action runs once: AtomicBoolean.compareAndSet (see the sketch after this list)
- Building lock-free structures where ABA is a risk: AtomicStampedReference / AtomicMarkableReference
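A minimal sketch of the run-once case from the list above, using AtomicBoolean.compareAndSet (class and method names are illustrative):
import java.util.concurrent.atomic.AtomicBoolean;
public class RunOnce {
    private final AtomicBoolean done = new AtomicBoolean(false);
    public void initOnce() {
        // only the first thread to flip false -> true executes the body
        if (done.compareAndSet(false, true)) {
            System.out.println("initialized by " + Thread.currentThread().getName());
        }
    }
    public static void main(String[] args) {
        RunOnce r = new RunOnce();
        for (int i = 0; i < 4; i++) new Thread(r::initOnce).start(); // prints exactly once
    }
}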
- Volatile
volatile is a field modifier that provides visibility and ordering guarantees between threads, but not mutual exclusion.
What volatile guarantees
Visibility: when one thread writes a new value to a volatile variable, other threads reading it will see the latest value (no stale cached value).
Happens-before / ordering: a write to a volatile field happens-before every subsequent read of that same field. This also prevents certain instruction reordering around that read/write.
What volatile does not guarantee
Atomicity for compound actions: operations like count++, x = x + 1, or check-then-act are still race-prone even if count/x is volatile.
volatile makes the read/write visible, but ++ is multiple steps (read + modify + write).
Common use cases
Stop/cancel flag
One thread sets a flag, worker threads observe it and stop.
One-way publication of a reference
Publish a fully constructed object to other threads (when the reference is written once and then read many times).
State signaling
Simple state changes like READY/NOT_READY where only single reads/writes are needed.
public class VolatileExamples {
// Use case 1: stop flag (visibility)
private volatile boolean running = true;
public void stop() {
running = false; // other threads will see this
}
public void runLoop() {
while (running) {
// do work
}
}
// Use case 2: volatile reference publication
private volatile Config config;
public void loadConfigOnce() {
// build locally first (avoid publishing partially-initialized state)
Config loaded = new Config("v1");
config = loaded; // safely publish the reference
}
public Config getConfig() {
return config; // sees latest published reference
}
// Not safe: volatile does NOT make this atomic
private volatile int counter = 0;
public void incrementBroken() {
counter++; // race: not atomic
}
// Correct alternatives:
// - use AtomicInteger, LongAdder, or synchronized/ReentrantLock
static final class Config {
final String version;
Config(String version) { this.version = version; }
}
}
- Lock Interface
Lock lives in java.util.concurrent.locks and is an explicit alternative to synchronized. It gives more control: timed/interruptible acquisition, optional fairness, and multiple Conditions (like wait/notify but more flexible).
Key methods
lock() - acquire (blocks, not interruptible)
lockInterruptibly() - acquire but can be interrupted while waiting
tryLock() / tryLock(timeout, unit) - non-blocking / timed
unlock() - release (must be in finally)
newCondition() - create a Condition for await()/signal() coordination
Common implementations
ReentrantLock - most common (optionally fair)
ReentrantReadWriteLock - separate read/write locks (many readers, single writer)
StampedLock - advanced, includes optimistic reads (not reentrant)
Minimal example (ReentrantLock)
Uses tryLock and ensures unlock in finally.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class LockExample {
private final Lock lock = new ReentrantLock();
private int counter = 0;
public void increment() throws InterruptedException {
if (lock.tryLock(200, TimeUnit.MILLISECONDS)) {
try {
counter++; // critical section
} finally {
lock.unlock(); // always release the lock in finally
}
}
}
}
- Atomic Classes vs Volatile
- Volatile: Guarantees visibility (latest write becomes visible to other threads) and ordering (a write happens-before subsequent reads of the same variable).
Does not make compound operations atomic (e.g., count++, check-then-act).
Best for simple flags or publishing a reference.
public class VolatileFlag {
private volatile boolean running = true;
public void stop() { running = false; }
public void runLoop() {
while (running) {
// work
}
}
}
- Atomic classes: Provide atomic read-modify-write operations using CAS (lock-free in many cases).
Also provide visibility guarantees similar to volatile.
Best for counters, state transitions, and thread-safe updates based on the current value.
import java.util.concurrent.atomic.AtomicInteger;
public class AtomicCounter {
private final AtomicInteger count = new AtomicInteger();
public int incAndGet() { return count.incrementAndGet(); }
}
ScheduledExecutorService - common questions, answers, and examples
1) What is ScheduledExecutorService?
A Java concurrency API to run tasks after a delay or periodically, using a thread pool.
import java.time.LocalTime;
import java.util.concurrent.*;
public class ScheduledBasics {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
scheduler.schedule(() ->
System.out.println(LocalTime.now() + " - runs once after 1s"),
1, TimeUnit.SECONDS);
scheduler.shutdown();
scheduler.awaitTermination(5, TimeUnit.SECONDS);
}
}
2) How is it different from Timer?
ScheduledExecutorService is preferred because it:
supports multiple threads
handles exceptions better (a Timer can die if a task throws)
integrates cleanly with Future, cancellation, shutdown, etc.
3) How to run a task once after a delay?
Use schedule(...).
import java.util.concurrent.*;
public class ScheduleOnce {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ScheduledFuture<String> f = scheduler.schedule(() -> "done", 2, TimeUnit.SECONDS);
System.out.println("Result: " + f.get());
scheduler.shutdown();
}
}
4) How to run repeatedly at a fixed rate?
Use scheduleAtFixedRate(...). It tries to keep the rate (can “catch up” if a run is delayed).
import java.time.LocalTime;
import java.util.concurrent.*;
public class FixedRateExample {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
ScheduledFuture<?> handle = scheduler.scheduleAtFixedRate(() -> {
System.out.println(LocalTime.now() + " - tick");
sleep(700);
}, 0, 1, TimeUnit.SECONDS);
scheduler.schedule(() -> handle.cancel(false), 5, TimeUnit.SECONDS);
scheduler.schedule(scheduler::shutdown, 6, TimeUnit.SECONDS);
scheduler.awaitTermination(10, TimeUnit.SECONDS);
}
private static void sleep(long ms) {
try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}
}
5) How to run repeatedly with fixed delay?
Use scheduleWithFixedDelay(...). Next run starts after the previous finishes plus delay.
import java.time.LocalTime;
import java.util.concurrent.*;
public class FixedDelayExample {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
ScheduledFuture<?> handle = scheduler.scheduleWithFixedDelay(() -> {
System.out.println(LocalTime.now() + " - start");
sleep(1200);
System.out.println(LocalTime.now() + " - end");
}, 0, 1, TimeUnit.SECONDS);
scheduler.schedule(() -> handle.cancel(false), 6, TimeUnit.SECONDS);
scheduler.schedule(scheduler::shutdown, 7, TimeUnit.SECONDS);
scheduler.awaitTermination(10, TimeUnit.SECONDS);
}
private static void sleep(long ms) {
try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}
}
6) Which one should I use: fixed rate vs fixed delay?
Use scheduleAtFixedRate for regular ticks (metrics, polling on wall clock).
Use scheduleWithFixedDelay when work time varies and you must avoid overlap/catch-up.
7) Does ScheduledExecutorService run tasks in parallel?
Yes, if the pool has multiple threads. With a single thread, tasks run sequentially.
import java.util.concurrent.*;
public class ParallelScheduling {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
scheduler.schedule(() -> work("A"), 0, TimeUnit.SECONDS);
scheduler.schedule(() -> work("B"), 0, TimeUnit.SECONDS);
scheduler.shutdown();
scheduler.awaitTermination(5, TimeUnit.SECONDS);
}
private static void work(String name) {
System.out.println("Start " + name + " on " + Thread.currentThread().getName());
try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
System.out.println("End " + name);
}
}
8) What happens if a scheduled task throws an exception?
For periodic tasks, an uncaught exception typically stops future executions of that task. Catch exceptions inside the runnable.
import java.util.concurrent.*;
public class ExceptionSafeTask {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
try {
System.out.println("Running...");
if (Math.random() < 0.5) throw new RuntimeException("boom");
} catch (Exception e) {
System.out.println("Handled: " + e.getMessage());
}
}, 0, 1, TimeUnit.SECONDS);
Thread.sleep(4000);
scheduler.shutdownNow();
}
}
9) How do I cancel a scheduled task?
Use the returned ScheduledFuture and call cancel(...).
import java.util.concurrent.*;
public class CancelExample {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ScheduledFuture<?> handle = scheduler.scheduleAtFixedRate(
() -> System.out.println("tick"),
0, 500, TimeUnit.MILLISECONDS
);
Thread.sleep(1600);
handle.cancel(false); // false = don't interrupt if currently running
scheduler.shutdown();
scheduler.awaitTermination(2, TimeUnit.SECONDS);
}
}
10) What is the meaning of cancel(true) vs cancel(false)?
true: attempts to interrupt the running thread.
false: lets current run finish, prevents future runs.
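A minimal sketch (the sleeping task is illustrative) of the difference: cancel(true) interrupts the task while it is running, which cancel(false) would not do:
import java.util.concurrent.*;
public class CancelTrueDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        ScheduledFuture<?> handle = scheduler.schedule(() -> {
            try {
                Thread.sleep(5000); // simulated long-running work
                System.out.println("finished normally");
            } catch (InterruptedException e) {
                System.out.println("interrupted by cancel(true)");
            }
        }, 0, TimeUnit.SECONDS);
        Thread.sleep(500);   // let the task start running
        handle.cancel(true); // interrupts the sleeping task
        scheduler.shutdown();
    }
}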
11) How do I shut down properly?
Use shutdown() for graceful stop, then awaitTermination(...). Use shutdownNow() for forced interruption.
import java.util.concurrent.*;
public class ShutdownExample {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.scheduleAtFixedRate(() -> System.out.println("tick"), 0, 1, TimeUnit.SECONDS);
Thread.sleep(2500);
scheduler.shutdown(); // graceful
if (!scheduler.awaitTermination(2, TimeUnit.SECONDS)) {
scheduler.shutdownNow(); // forced
}
}
}
12) How to schedule based on a specific time of day?
Compute the initial delay, then run daily.
import java.time.*;
import java.util.concurrent.*;
public class DailyAtTime {
public static void main(String[] args) {
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
LocalTime target = LocalTime.of(2, 0); // 02:00
long initialDelayMs = computeInitialDelayMs(target);
long dayMs = TimeUnit.DAYS.toMillis(1);
scheduler.scheduleAtFixedRate(
() -> System.out.println("Daily job at " + LocalDateTime.now()),
initialDelayMs, dayMs, TimeUnit.MILLISECONDS
);
}
private static long computeInitialDelayMs(LocalTime target) {
ZonedDateTime now = ZonedDateTime.now();
ZonedDateTime next = now.with(target);
if (!next.isAfter(now)) next = next.plusDays(1);
return Duration.between(now, next).toMillis();
}
}
13) Can it return a value?
Yes. schedule(Callable, ...) returns ScheduledFuture.
import java.util.concurrent.*;
public class CallableExample {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ScheduledFuture<Integer> f = scheduler.schedule(() -> 40 + 2, 1, TimeUnit.SECONDS);
System.out.println("Value: " + f.get());
scheduler.shutdown();
}
}
14) How to handle timeouts when waiting for results?
Use get(timeout, unit).
import java.util.concurrent.*;
public class TimeoutGet {
public static void main(String[] args) throws Exception {
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
ScheduledFuture<String> f = scheduler.schedule(() -> {
Thread.sleep(2000);
return "ok";
}, 0, TimeUnit.SECONDS);
try {
System.out.println(f.get(1, TimeUnit.SECONDS));
} catch (TimeoutException e) {
System.out.println("Timed out");
f.cancel(true);
} finally {
scheduler.shutdownNow();
}
}
}
15) How to avoid overlapping executions for periodic tasks?
If overlap is a risk, prefer:
scheduleWithFixedDelay (naturally non-overlapping in one thread), or
a pool size of 1, or
a lock/guard inside the task.
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
public class NoOverlapGuard {
private static final AtomicBoolean running = new AtomicBoolean(false);
public static void main(String[] args) {
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
scheduler.scheduleAtFixedRate(() -> {
if (!running.compareAndSet(false, true)) return;
try {
Thread.sleep(1200);
System.out.println("Work done by " + Thread.currentThread().getName());
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
running.set(false);
}
}, 0, 500, TimeUnit.MILLISECONDS);
}
}
**newCachedThreadPool vs newFixedThreadPool (ExecutorService)**
Cached (Executors.newCachedThreadPool())
What it is
Thread pool with 0 core threads and unbounded max threads.
Uses a SynchronousQueue (tasks are handed directly to a thread; if none is free, a new thread is created).
Idle threads are reused and typically removed after ~60s.
- Behavior: Can grow very large under load => risk of CPU thrash / memory pressure. Low latency for bursts (no waiting queue).
- Use cases: Short-lived, bursty, non-blocking tasks where throughput matters and load is controlled. Example: handling many small asynchronous callbacks, lightweight background jobs that finish quickly.
Fixed (Executors.newFixedThreadPool(n))
What it is
Thread pool with exactly n threads (bounded concurrency).
Uses an (effectively) unbounded queue (LinkedBlockingQueue) by default.
- Behavior: Concurrency is limited to n => stable CPU/memory usage. If tasks arrive faster than they complete, they queue up => risk of high queue growth / latency.
- Use cases: Steady workloads where you must cap parallelism.
CPU-bound work: set n around number of cores.
I/O-bound work: n can be higher, but still bounded to protect downstream services.
Quick rule of thumb
Use fixed when you need predictable resource usage and backpressure via queuing.
Use cached only when tasks are very short, non-blocking, and request rate is controlled (or you can tolerate thread growth).
Minimal examples
- Cached pool for bursty short tasks
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class CachedPoolDemo {
public static void main(String[] args) {
ExecutorService es = Executors.newCachedThreadPool();
for (int i = 0; i < 1000; i++) {
int id = i;
es.submit(() -> {
// short task
return id;
});
}
es.shutdown();
}
}
- Fixed pool to cap concurrency (e.g., calling an external API)
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class FixedPoolDemo {
public static void main(String[] args) {
int maxParallelCalls = 10;
ExecutorService es = Executors.newFixedThreadPool(maxParallelCalls);
for (int i = 0; i < 1000; i++) {
es.submit(() -> {
// I/O call - limited to 10 concurrent executions
// callExternalService();
});
}
es.shutdown();
}
}
ExecutorService: execute() vs submit()
- execute(Runnable)
Return type: void (no handle to track completion).
Use when: fire-and-forget tasks where you do not need a result or cancellation handle.
Exception handling: if the task throws, it goes to the thread’s UncaughtExceptionHandler (often just logged). You cannot observe it via a Future.
- submit(...)
Overloads: submit(Runnable), submit(Callable), submit(Runnable, V result)
Return type: Future<?> / Future<V> (enables get(), cancel(), status checks).
Use when: you need a result, need to wait, need to cancel, or need to capture exceptions.
Exception handling: exceptions are captured and rethrown as ExecutionException from future.get().
import java.util.concurrent.*;
public class ExecuteVsSubmitDemo {
public static void main(String[] args) throws Exception {
ExecutorService es = Executors.newFixedThreadPool(1);
// 1) execute(): no Future, exception is not observable via caller
es.execute(() -> {
System.out.println("execute running");
throw new RuntimeException("boom from execute");
});
// 2) submit(Runnable): returns Future, exception surfaces on get()
Future<?> f1 = es.submit(() -> {
System.out.println("submit(Runnable) running");
throw new RuntimeException("boom from submit runnable");
});
try {
f1.get(); // throws ExecutionException wrapping the original exception
} catch (ExecutionException e) {
System.out.println("Caught from submit(Runnable): " + e.getCause());
}
// 3) submit(Callable): returns a value
Future<Integer> f2 = es.submit(() -> 40 + 2);
System.out.println("Callable result: " + f2.get());
es.shutdown();
}
}
Practical rule
Use execute() when you truly do not care about completion/result.
Use submit() when you need Future capabilities or want exceptions to be observable to the caller.
How do you gracefully shut down an ExecutorService?
You can gracefully shut down an ExecutorService by calling the shutdown() method.
This initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted.
You can also use shutdownNow() to attempt to stop all actively executing tasks and halt the processing of waiting tasks.
Difference between shutdown() and shutdownNow() methods in ExecutorService?
shutdown(): Initiates an orderly shutdown, where tasks that were already submitted are completed before the service is fully shut down. No new tasks will be accepted.
shutdownNow(): Tries to stop all actively executing tasks and attempts to stop any waiting tasks. It returns a list of the tasks that were waiting to be executed.
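A typical graceful-shutdown pattern combining both calls, as a minimal sketch (timeout values are illustrative):
import java.util.List;
import java.util.concurrent.*;
public class GracefulShutdown {
    public static void main(String[] args) {
        ExecutorService es = Executors.newFixedThreadPool(2);
        es.submit(() -> System.out.println("task running"));
        es.shutdown(); // no new tasks accepted; already-submitted tasks still run
        try {
            if (!es.awaitTermination(5, TimeUnit.SECONDS)) {
                List<Runnable> pending = es.shutdownNow(); // interrupt running tasks
                System.out.println("Tasks never started: " + pending.size());
            }
        } catch (InterruptedException e) {
            es.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}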
Internal Working of ThreadPoolExecutor
1) Key internal parts
Worker threads set: a collection of Worker objects (each wraps a Thread and runs a task loop).
Work queue: a BlockingQueue that holds submitted tasks waiting for a thread.
Pool sizing rules:
corePoolSize: baseline number of threads kept (even if idle, unless allowCoreThreadTimeOut(true)).
maximumPoolSize: hard cap on threads.
keepAliveTime: how long non-core (and optionally core) idle threads live.
Control state: a single atomic ctl combines:
run state (RUNNING, SHUTDOWN, STOP, TIDYING, TERMINATED)
worker count (how many threads currently exist)
Rejection policy: what happens when it cannot accept more work (AbortPolicy, CallerRunsPolicy, etc.).
2) What happens on execute(Runnable)
Internally the logic is roughly:
If current workers < corePoolSize:
- create a new worker thread immediately to run the task (bypasses the queue).
Else, try to enqueue the task (workQueue.offer(task)):
- if enqueue succeeds, the task waits until a worker takes it; the pool then rechecks state and, if it is shutting down and cannot run the task, removes and rejects it.
If enqueue fails (queue full, or a SynchronousQueue handoff failed):
- if workers < maximumPoolSize, create a new non-core worker thread to run it;
- else reject: apply the configured RejectedExecutionHandler.
This is why queue choice matters:
- LinkedBlockingQueue (often unbounded): usually queues instead of growing threads, so maximumPoolSize may never be reached.
- SynchronousQueue: never stores tasks, so it tends to create threads up to max under load.
3) Worker thread main loop (how tasks actually run)
Each worker does:
run an initial task (if any),
then repeatedly:
fetch next task via getTask():
blocks on queue.take() if core threads are kept alive,
or uses timed poll(keepAliveTime) for threads that may time out,
run task inside a try/finally,
update completed-task counts,
if the pool is stopping/shutdown and queue empty, exit.
If a task throws an unchecked exception and it isn’t caught inside the task, that worker thread typically dies, and the executor may create a replacement (depending on state and sizing).
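One way to observe this, as a rough sketch that relies on the default pool thread naming (pool-N-thread-M): a task submitted via execute() throws, and the next task typically runs on a replacement worker thread:
import java.util.concurrent.*;
public class WorkerReplacementDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService es = Executors.newFixedThreadPool(1);
        es.execute(() -> {
            System.out.println("first task on " + Thread.currentThread().getName());
            throw new RuntimeException("uncaught: this worker dies");
        });
        Thread.sleep(200); // give the pool time to replace the dead worker
        es.execute(() -> System.out.println("second task on " + Thread.currentThread().getName()));
        es.shutdown();
    }
}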
4) Shutdown states
shutdown():
stops accepting new tasks,
continues executing queued tasks,
interrupts only idle workers to help them exit when done.
shutdownNow():
attempts to stop immediately,
interrupts workers (even running),
drains queue and returns pending tasks.
5) Minimal example (shows sizing + rejection)
Briefly: this config creates up to 2 core threads, queues up to 2 tasks, then grows to max 4 threads; after that it rejects.
import java.util.concurrent.*;
public class ThreadPoolExecutorInternalsDemo {
public static void main(String[] args) throws InterruptedException {
ThreadPoolExecutor ex = new ThreadPoolExecutor(
2, // corePoolSize
4, // maximumPoolSize
10, TimeUnit.SECONDS, // keepAliveTime
new ArrayBlockingQueue<>(2), // bounded queue (capacity 2)
Executors.defaultThreadFactory(),
new ThreadPoolExecutor.AbortPolicy() // reject when saturated
);
for (int i = 1; i <= 8; i++) {
int id = i;
try {
ex.execute(() -> {
System.out.println("Task " + id + " on " + Thread.currentThread().getName());
sleep(1000);
});
} catch (RejectedExecutionException e) {
System.out.println("Rejected task " + id);
}
}
ex.shutdown();
ex.awaitTermination(30, TimeUnit.SECONDS);
System.out.println("Largest pool size: " + ex.getLargestPoolSize());
System.out.println("Completed tasks: " + ex.getCompletedTaskCount());
}
private static void sleep(long ms) {
try {
Thread.sleep(ms);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
In this run you’ll typically observe: first 2 tasks start (core threads), next 2 queue, next 2 start new threads (up to max 4), remaining tasks get rejected once both queue and max threads are saturated.
Exception handling in threading (Java)
1) Exceptions in a Thread
An exception thrown from run() does not propagate to the caller thread. Handle it inside run() or use an UncaughtExceptionHandler.
public class ThreadExceptionHandling {
public static void main(String[] args) {
Thread t = new Thread(() -> {
throw new RuntimeException("boom");
});
t.setUncaughtExceptionHandler((thread, ex) -> {
System.err.println("Uncaught in " + thread.getName() + ": " + ex.getMessage());
});
t.start();
}
}
2) Exceptions in ExecutorService: execute() vs submit()
execute() = exception goes to the worker thread’s UncaughtExceptionHandler (often just logged).
submit() = exception is stored in the Future; you see it only when calling get().
import java.util.concurrent.*;
public class ExecutorExceptionHandling {
public static void main(String[] args) throws Exception {
ExecutorService es = Executors.newFixedThreadPool(1);
es.execute(() -> { throw new RuntimeException("boom from execute"); });
Future<?> f = es.submit(() -> { throw new RuntimeException("boom from submit"); });
try {
f.get();
} catch (ExecutionException e) {
System.err.println("Caught via Future.get(): " + e.getCause());
} finally {
es.shutdown();
}
}
}
3) Exceptions in ScheduledExecutorService periodic tasks
For scheduleAtFixedRate / scheduleWithFixedDelay: an uncaught exception typically stops future executions of that periodic task. Catch inside the runnable to keep it running.
import java.util.concurrent.*;
public class ScheduledExceptionHandling {
public static void main(String[] args) throws InterruptedException {
ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
ses.scheduleAtFixedRate(() -> {
try {
System.out.println("tick");
if (System.nanoTime() % 2 == 0) throw new RuntimeException("intermittent");
} catch (Exception e) {
System.err.println("Handled inside task: " + e.getMessage());
}
}, 0, 500, TimeUnit.MILLISECONDS);
Thread.sleep(3000);
ses.shutdownNow();
}
}
4) Recommended pattern
Catch exceptions inside tasks for resilience.
For executors, prefer submit() when you must observe failures (Future + get()).
Set a default handler for unexpected failures:
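A minimal sketch of such a default handler, using Thread.setDefaultUncaughtExceptionHandler (class name is illustrative):
public class DefaultHandlerSetup {
    public static void main(String[] args) throws InterruptedException {
        // fallback for any thread that has no per-thread handler of its own
        Thread.setDefaultUncaughtExceptionHandler((t, e) ->
                System.err.println("Uncaught in " + t.getName() + ": " + e.getMessage()));
        Thread worker = new Thread(() -> { throw new RuntimeException("boom"); });
        worker.start();
        worker.join();
    }
}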
**BlockingQueue types commonly used with ThreadPoolExecutor**
In ThreadPoolExecutor, the BlockingQueue choice determines whether the pool queues, hands off, or forces thread growth, and how backpressure happens.
LinkedBlockingQueue
Type: linked, optionally bounded (constructor can take capacity); often used unbounded.
Behavior: tasks mostly queue once corePoolSize threads are busy; pool typically does not grow beyond core if queue is unbounded.
When to use: stable throughput, avoid thread explosion; prefer a bounded capacity in production.
ArrayBlockingQueue
Type: bounded array-backed FIFO.
Behavior: once full, executor will try to create threads up to maximumPoolSize; then reject.
When to use: strong backpressure + predictable memory; common “safe default” with a rejection policy.
SynchronousQueue
Type: zero-capacity handoff queue (no storage).
Behavior: each submit must be immediately handed to a worker; if none idle, executor tends to create new threads up to maximumPoolSize; then reject.
When to use: very short tasks, low queuing latency; used by newCachedThreadPool().
PriorityBlockingQueue
Type: unbounded priority queue (tasks must be Comparable or use a wrapper with a comparator).
Behavior: tasks ordered by priority, not FIFO; generally unbounded => risk of memory growth.
When to use: when task ordering by priority matters.
DelayQueue
Type: unbounded, time-based ordering (elements implement Delayed).
Behavior: tasks become available only after delay; not typically used directly with ThreadPoolExecutor (more common with ScheduledThreadPoolExecutor).
When to use: delayed execution patterns.
LinkedTransferQueue
Type: unbounded transfer queue (can behave like handoff when consumers waiting).
Behavior: can reduce latency under contention; still unbounded by default.
When to use: high-throughput handoff/queue hybrid cases (advanced tuning).
Notes
With unbounded queues (LinkedBlockingQueue default, PriorityBlockingQueue, etc.), maximumPoolSize is often effectively ignored because tasks queue instead of triggering new threads.
Prefer bounded queues (ArrayBlockingQueue or bounded LinkedBlockingQueue) + an explicit RejectedExecutionHandler for predictable backpressure.
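A minimal sketch of that recommendation (pool sizes, queue capacity, and the CallerRunsPolicy choice are illustrative, not prescriptive):
import java.util.concurrent.*;
public class BoundedQueuePool {
    public static void main(String[] args) {
        ThreadPoolExecutor ex = new ThreadPoolExecutor(
                4, 8,                                     // core / max threads
                30, TimeUnit.SECONDS,                     // keep-alive for non-core threads
                new LinkedBlockingQueue<>(200),           // bounded queue => real backpressure
                new ThreadPoolExecutor.CallerRunsPolicy() // when saturated, caller runs the task
        );
        ex.execute(() -> System.out.println("task on " + Thread.currentThread().getName()));
        ex.shutdown();
    }
}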
Java concurrency classes (common) + use cases + minimal examples
1) Executors & thread pools
Executor / ExecutorService - run tasks asynchronously, manage a pool, shutdown lifecycle
import java.util.concurrent.*;
class ExExecutorService {
public static void main(String[] args) throws Exception {
ExecutorService es = Executors.newFixedThreadPool(2);
Future<Integer> f = es.submit(() -> 40 + 2);
System.out.println(f.get());
es.shutdown();
}
}
ThreadPoolExecutor - fully tunable pool (core/max, queue, rejection)
import java.util.concurrent.*;
class ExThreadPoolExecutor {
public static void main(String[] args) {
ThreadPoolExecutor ex = new ThreadPoolExecutor(
2, 4, 10, TimeUnit.SECONDS,
new ArrayBlockingQueue<>(100),
new ThreadPoolExecutor.CallerRunsPolicy()
);
ex.execute(() -> System.out.println(Thread.currentThread().getName()));
ex.shutdown();
}
}
ScheduledExecutorService / ScheduledThreadPoolExecutor - delayed/periodic tasks
import java.util.concurrent.*;
class ExScheduled {
public static void main(String[] args) throws Exception {
ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
ses.schedule(() -> System.out.println("delayed"), 200, TimeUnit.MILLISECONDS);
Thread.sleep(400);
ses.shutdown();
}
}
ForkJoinPool - CPU bound divide&conquer, work stealing
import java.util.concurrent.*;
class ExForkJoin {
static class SumTask extends RecursiveTask<Long> {
private final long n;
SumTask(long n) { this.n = n; }
@Override protected Long compute() {
if (n <= 10_000) {
long s = 0;
for (long i = 1; i <= n; i++) s += i;
return s;
}
long mid = n / 2;
SumTask left = new SumTask(mid);
SumTask right = new SumTask(n - mid);
left.fork();
return right.compute() + left.join();
}
}
public static void main(String[] args) {
ForkJoinPool p = ForkJoinPool.commonPool();
System.out.println(p.invoke(new SumTask(1_000_000)));
}
}
2) Futures & async composition
Callable - task that returns a value and can throw checked exceptions
import java.util.concurrent.*;
class ExCallable {
public static void main(String[] args) throws Exception {
Callable<String> c = () -> "ok";
System.out.println(c.call());
}
}
Future - represents pending result (get/cancel/isDone)
import java.util.concurrent.*;
class ExFuture {
public static void main(String[] args) throws Exception {
ExecutorService es = Executors.newSingleThreadExecutor();
Future<String> f = es.submit(() -> "result");
System.out.println(f.get());
es.shutdown();
}
}
FutureTask - Runnable + Future combo (manual start, caching)
import java.util.concurrent.*;
class ExFutureTask {
public static void main(String[] args) throws Exception {
FutureTask<Integer> ft = new FutureTask<>(() -> 1 + 2);
new Thread(ft).start();
System.out.println(ft.get());
}
}
CompletableFuture / CompletionStage - async pipelines, combine stages
import java.util.concurrent.*;
class ExCompletableFuture {
public static void main(String[] args) throws Exception {
CompletableFuture<Integer> cf =
CompletableFuture.supplyAsync(() -> 21)
.thenApply(x -> x * 2);
System.out.println(cf.get());
}
}
ExecutorCompletionService - submit many tasks, consume results as they finish
import java.util.concurrent.*;
class ExCompletionService {
public static void main(String[] args) throws Exception {
ExecutorService es = Executors.newFixedThreadPool(3);
CompletionService<Integer> cs = new ExecutorCompletionService<>(es);
for (int i = 0; i < 5; i++) {
int id = i;
cs.submit(() -> id * id);
}
for (int i = 0; i < 5; i++) {
System.out.println(cs.take().get());
}
es.shutdown();
}
}
3) Coordination primitives (synchronizers)
CountDownLatch - wait for N events (one shot)
import java.util.concurrent.*;
class ExCountDownLatch {
public static void main(String[] args) throws Exception {
CountDownLatch latch = new CountDownLatch(2);
new Thread(() -> { work(); latch.countDown(); }).start();
new Thread(() -> { work(); latch.countDown(); }).start();
latch.await();
System.out.println("done");
}
static void work() { /* do work */ }
}
CyclicBarrier - all parties meet each phase (reusable)
import java.util.concurrent.*;
class ExCyclicBarrier {
public static void main(String[] args) {
CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("phase complete"));
Runnable r = () -> {
try {
barrier.await();
} catch (Exception ignored) {}
};
new Thread(r).start();
new Thread(r).start();
new Thread(r).start();
}
}
Phaser - flexible barrier with dynamic registration
import java.util.concurrent.*;
class ExPhaser {
public static void main(String[] args) {
Phaser phaser = new Phaser(1);
phaser.register();
new Thread(() -> { phaser.arriveAndAwaitAdvance(); }).start();
phaser.arriveAndAwaitAdvance();
System.out.println("advanced");
}
}
Semaphore - limit concurrent access (permits)
import java.util.concurrent.*;
class ExSemaphore {
public static void main(String[] args) throws Exception {
Semaphore sem = new Semaphore(2);
Runnable r = () -> {
try {
sem.acquire();
try { Thread.sleep(100); } catch (InterruptedException ignored) {}
} catch (InterruptedException ignored) {
} finally {
sem.release();
}
};
for (int i = 0; i < 5; i++) new Thread(r).start();
}
}
Exchanger - 2 threads swap data at rendezvous
import java.util.concurrent.*;
class ExExchanger {
public static void main(String[] args) {
Exchanger<String> ex = new Exchanger<>();
new Thread(() -> {
try { System.out.println("A got " + ex.exchange("fromA")); } catch (InterruptedException ignored) {}
}).start();
new Thread(() -> {
try { System.out.println("B got " + ex.exchange("fromB")); } catch (InterruptedException ignored) {}
}).start();
}
}
LockSupport - low level park/unpark building block
import java.util.concurrent.locks.LockSupport;
class ExLockSupport {
public static void main(String[] args) throws Exception {
Thread t = new Thread(() -> {
LockSupport.park();
System.out.println("unparked");
});
t.start();
Thread.sleep(100);
LockSupport.unpark(t);
}
}
4) Locks & conditions
Lock / ReentrantLock - explicit mutual exclusion, tryLock, fairness
import java.util.concurrent.locks.*;
class ExReentrantLock {
private static final Lock lock = new ReentrantLock();
public static void main(String[] args) {
lock.lock();
try {
System.out.println("critical section");
} finally {
lock.unlock();
}
}
}
Condition - multiple wait sets per lock
import java.util.concurrent.locks.*;
class ExCondition {
static final ReentrantLock lock = new ReentrantLock();
static final Condition ready = lock.newCondition();
static boolean done = false;
public static void main(String[] args) throws Exception {
Thread waiter = new Thread(() -> {
lock.lock();
try {
while (!done) ready.await();
System.out.println("released");
} catch (InterruptedException ignored) {
} finally {
lock.unlock();
}
});
waiter.start();
Thread.sleep(100);
lock.lock();
try {
done = true;
ready.signalAll();
} finally {
lock.unlock();
}
}
}
ReadWriteLock / ReentrantReadWriteLock - many readers, few writers
import java.util.concurrent.locks.*;
class ExReadWriteLock {
static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
static int value = 0;
static int read() {
rw.readLock().lock();
try { return value; }
finally { rw.readLock().unlock(); }
}
static void write(int v) {
rw.writeLock().lock();
try { value = v; }
finally { rw.writeLock().unlock(); }
}
}
StampedLock - optimistic reads for high read concurrency (advanced)
import java.util.concurrent.locks.StampedLock;
class ExStampedLock {
static final StampedLock sl = new StampedLock();
static int x = 0;
static int optimisticRead() {
long stamp = sl.tryOptimisticRead();
int v = x;
if (!sl.validate(stamp)) {
stamp = sl.readLock();
try { v = x; }
finally { sl.unlockRead(stamp); }
}
return v;
}
}
5) Atomics & contention reducers
AtomicInteger / AtomicLong / AtomicReference - CAS based updates
import java.util.concurrent.atomic.*;
class ExAtomics {
public static void main(String[] args) {
AtomicInteger ai = new AtomicInteger(0);
ai.incrementAndGet();
System.out.println(ai.get());
AtomicReference<String> ar = new AtomicReference<>("a");
ar.compareAndSet("a", "b");
System.out.println(ar.get());
}
}
AtomicIntegerArray / AtomicLongArray - atomic ops on array elements
import java.util.concurrent.atomic.*;
class ExAtomicArray {
public static void main(String[] args) {
AtomicIntegerArray a = new AtomicIntegerArray(3);
a.incrementAndGet(1);
System.out.println(a.get(1));
}
}
AtomicStampedReference / AtomicMarkableReference - mitigate ABA patterns
import java.util.concurrent.atomic.*;
class ExStampedRef {
public static void main(String[] args) {
AtomicStampedReference<String> ref = new AtomicStampedReference<>("v1", 0);
int[] stamp = new int[1];
String v = ref.get(stamp);
ref.compareAndSet(v, "v2", stamp[0], stamp[0] + 1);
System.out.println(ref.getReference());
}
}
LongAdder / DoubleAdder - high contention counters (metrics)
import java.util.concurrent.atomic.*;
class ExLongAdder {
public static void main(String[] args) {
LongAdder adder = new LongAdder();
adder.increment();
adder.add(10);
System.out.println(adder.sum());
}
}
LongAccumulator / DoubleAccumulator - high contention reductions with custom op
import java.util.concurrent.atomic.*;
class ExAccumulator {
public static void main(String[] args) {
LongAccumulator max = new LongAccumulator(Long::max, Long.MIN_VALUE);
max.accumulate(10);
max.accumulate(7);
System.out.println(max.get());
}
}
6) Concurrent collections
ConcurrentHashMap - scalable concurrent map, atomic compute operations
import java.util.concurrent.*;
class ExConcurrentHashMap {
public static void main(String[] args) {
ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
m.merge("k", 1, Integer::sum);
System.out.println(m.get("k"));
}
}
ConcurrentSkipListMap / ConcurrentSkipListSet - sorted concurrent structures
import java.util.concurrent.*;
class ExSkipList {
public static void main(String[] args) {
ConcurrentSkipListMap<Integer, String> m = new ConcurrentSkipListMap<>();
m.put(2, "b");
m.put(1, "a");
System.out.println(m.firstEntry());
}
}
CopyOnWriteArrayList / CopyOnWriteArraySet - many reads, few writes
import java.util.concurrent.*;
class ExCopyOnWrite {
public static void main(String[] args) {
CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
list.add(1);
for (Integer v : list) System.out.println(v);
}
}
ConcurrentLinkedQueue / ConcurrentLinkedDeque - lock free queues
import java.util.concurrent.*;
class ExConcurrentQueue {
public static void main(String[] args) {
ConcurrentLinkedQueue<String> q = new ConcurrentLinkedQueue<>();
q.add("a");
System.out.println(q.poll());
}
}
7) Blocking queues (often used with ThreadPoolExecutor)
ArrayBlockingQueue - bounded FIFO, backpressure
import java.util.concurrent.*;
class ExArrayBlockingQueue {
public static void main(String[] args) throws Exception {
BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
q.put(1);
System.out.println(q.take());
}
}
LinkedBlockingQueue - optionally bounded, common default
import java.util.concurrent.*;
class ExLinkedBlockingQueue {
public static void main(String[] args) throws Exception {
BlockingQueue<String> q = new LinkedBlockingQueue<>(10);
q.offer("x");
System.out.println(q.poll(1, TimeUnit.SECONDS));
}
}
SynchronousQueue - zero capacity handoff between producer and consumer
import java.util.concurrent.*;
class ExSynchronousQueue {
public static void main(String[] args) {
SynchronousQueue<String> q = new SynchronousQueue<>();
new Thread(() -> {
try { System.out.println("took " + q.take()); } catch (InterruptedException ignored) {}
}).start();
try { q.put("handoff"); } catch (InterruptedException ignored) {}
}
}
PriorityBlockingQueue - priority ordered tasks/items (usually unbounded)
import java.util.concurrent.*;
class ExPriorityBlockingQueue {
public static void main(String[] args) throws Exception {
PriorityBlockingQueue<Integer> q = new PriorityBlockingQueue<>();
q.put(5);
q.put(1);
System.out.println(q.take());
}
}
DelayQueue - delayed availability items (for scheduling patterns)
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;
class ExDelayQueue {
static final AtomicLong SEQ = new AtomicLong();
static final class Item implements Delayed {
final long runAtNanos;
final long seq = SEQ.incrementAndGet();
Item(long delayMs) {
this.runAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
}
@Override public long getDelay(TimeUnit unit) {
return unit.convert(runAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
}
@Override public int compareTo(Delayed o) {
Item other = (Item) o;
int c = Long.compare(this.runAtNanos, other.runAtNanos);
return (c != 0) ? c : Long.compare(this.seq, other.seq);
}
}
public static void main(String[] args) throws Exception {
DelayQueue<Item> q = new DelayQueue<>();
q.put(new Item(200));
q.take();
System.out.println("released");
}
}
LinkedTransferQueue / TransferQueue - high throughput queue with transfer capability
import java.util.concurrent.*;
class ExTransferQueue {
public static void main(String[] args) {
TransferQueue<String> q = new LinkedTransferQueue<>();
new Thread(() -> {
try { System.out.println("got " + q.take()); } catch (InterruptedException ignored) {}
}).start();
try { q.transfer("x"); } catch (InterruptedException ignored) {} // blocks until the consumer receives it
}
}
8) Thread local, random, time
ThreadLocal - per thread context (request id, formatters)
class ExThreadLocal {
static final ThreadLocal<Integer> TL = ThreadLocal.withInitial(() -> 0);
public static void main(String[] args) {
TL.set(123);
System.out.println(TL.get());
}
}
ThreadLocalRandom - random without contention in multithreaded code
import java.util.concurrent.*;
class ExThreadLocalRandom {
public static void main(String[] args) {
int x = ThreadLocalRandom.current().nextInt(1, 10);
System.out.println(x);
}
}
TimeUnit - consistent time conversions and timed waits
import java.util.concurrent.*;
class ExTimeUnit {
public static void main(String[] args) {
System.out.println(TimeUnit.SECONDS.toMillis(2));
}
}
9) Classic thread primitives (still used)
Thread + UncaughtExceptionHandler - manual threads, catch uncaught failures
class ExThreadHandler {
public static void main(String[] args) {
Thread t = new Thread(() -> { throw new RuntimeException("boom"); });
t.setUncaughtExceptionHandler((th, ex) -> System.out.println(th.getName() + " -> " + ex.getMessage()));
t.start();
}
}
synchronized + wait()/notifyAll() - intrinsic monitor coordination
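Since the other entries above include a minimal example, here is a comparable sketch for the intrinsic monitor (names are illustrative): a waiter blocks in wait() inside a while loop until another thread flips a flag and calls notifyAll().
class ExWaitNotify {
    static final Object lock = new Object();
    static boolean ready = false;
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) { // while-loop guards against spurious wakeups
                    try { lock.wait(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
                }
                System.out.println("released");
            }
        });
        waiter.start();
        Thread.sleep(100);
        synchronized (lock) {
            ready = true;
            lock.notifyAll(); // wake the waiter
        }
    }
}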