Complete Guide to Java Thread Pools
Table of Contents
- What is a Thread Pool?
- Why Use Thread Pools?
- Types of Thread Pools
- When to Use Each Type
- Performance Impact
- Common Pitfalls
- Best Practices
- Real-World Examples
What is a Thread Pool?
A thread pool is a collection of pre-initialized threads that are ready to execute tasks. Instead of creating a new thread for each task (expensive), you reuse existing threads from the pool.
Without Thread Pool (Bad)
// ❌ Creating new thread for each request - expensive!
for (int i = 0; i < 1000; i++) {
new Thread(() -> processRequest()).start();
}
// Result: 1000 threads created!
// - High memory usage (1MB per thread)
// - Context switching overhead
// - Thread creation/destruction cost
With Thread Pool (Good)
// ✅ Reusing threads from pool
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 1000; i++) {
executor.submit(() -> processRequest());
}
// Result: Only 10 threads created, reused for all 1000 tasks!
Why Use Thread Pools?
1. Resource Management
- Control thread count - Prevent system overload
- Memory efficiency - Threads are expensive (1MB stack per thread)
- CPU optimization - Match thread count to available cores
2. Performance
- No thread creation overhead - Threads are pre-created
- Reduced context switching - Fewer threads = less CPU time wasted
- Better throughput - Efficient task scheduling
3. Simplified Code
- Clean API - submit() instead of new Thread()
- Built-in features - Thread naming, exception handling, shutdown
- Future support - Get results from async tasks
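A minimal sketch of that Future support (class and variable names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            // submit() accepts a Callable and returns a Future for its result
            Future<Integer> future = executor.submit(() -> 21 + 21);
            System.out.println(future.get()); // get() blocks until done → 42
        } finally {
            executor.shutdown();
        }
    }
}
```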
Performance Numbers
Scenario: Execute 10,000 simple tasks
Without Thread Pool:
- Time: 15 seconds
- Threads created: 10,000
- Memory: 10GB (10,000 × 1MB)
- Context switches: ~50,000
With Thread Pool (10 threads):
- Time: 3 seconds
- Threads created: 10
- Memory: 10MB (10 × 1MB)
- Context switches: ~100
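The figures above are illustrative; real numbers depend on hardware and JVM. A rough (non-rigorous, no warmup) way to compare the two approaches yourself, with an arbitrary task count:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolVsThreads {
    public static void main(String[] args) throws Exception {
        int tasks = 2_000;

        // One thread per task
        long t0 = System.nanoTime();
        Thread[] threads = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            threads[i] = new Thread(() -> { });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        long perTaskNanos = System.nanoTime() - t0;

        // Ten reused pool threads
        ExecutorService executor = Executors.newFixedThreadPool(10);
        t0 = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            executor.submit(() -> { });
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        long pooledNanos = System.nanoTime() - t0;

        System.out.printf("per-task: %d ms, pooled: %d ms%n",
                perTaskNanos / 1_000_000, pooledNanos / 1_000_000);
    }
}
```

This is not a proper benchmark (use JMH for that), but it makes the thread-creation overhead visible.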
Types of Thread Pools
Java provides five main types of thread pools via the Executors factory class:
- FixedThreadPool
- CachedThreadPool
- SingleThreadExecutor
- ScheduledThreadPool
- WorkStealingPool (Java 8+)
Let's explore each in detail.
1. FixedThreadPool
What is it?
A thread pool with a fixed number of threads. If all threads are busy, tasks wait in a queue.
Creation
ExecutorService executor = Executors.newFixedThreadPool(10);
How it Works
Thread Pool: [T1] [T2] [T3] [T4] [T5]
Task Queue: [Task6] [Task7] [Task8] [Task9] [Task10]
When T1 finishes → T1 picks up Task6
When T2 finishes → T2 picks up Task7
...
Internal Configuration
// Equivalent to:
new ThreadPoolExecutor(
10, // corePoolSize
10, // maximumPoolSize (same as core)
0L, // keepAliveTime
TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<>() // Unbounded queue!
)
Use Cases ✅
1. CPU-Intensive Tasks
// Image processing service
ExecutorService executor = Executors.newFixedThreadPool(
Runtime.getRuntime().availableProcessors()
);
for (File image : images) {
executor.submit(() -> {
processImage(image); // CPU-heavy
});
}
Best Practice: Thread count = Number of CPU cores
2. Database Connection Pools
// Match thread count to DB connection pool size
ExecutorService executor = Executors.newFixedThreadPool(20);
HikariConfig config = new HikariConfig();
config.setMaximumPoolSize(20); // Same as thread pool
3. Rate-Limited APIs
// External API allows max 10 concurrent requests
ExecutorService executor = Executors.newFixedThreadPool(10);
4. Bounded Resource Access
// Only 5 printers available
ExecutorService executor = Executors.newFixedThreadPool(5);
for (Document doc : documents) {
executor.submit(() -> printDocument(doc));
}
Advantages ✅
- ✅ Predictable resource usage - Fixed memory footprint
- ✅ No thread creation overhead - Threads pre-created
- ✅ Prevents resource exhaustion - Bounded thread count
- ✅ Good for CPU-bound tasks - Matches CPU core count
Disadvantages ❌
- ❌ Unbounded queue - Can cause OutOfMemoryError if tasks pile up
- ❌ Not adaptive - Can't scale up for sudden load spikes
- ❌ Potential deadlock - If tasks depend on each other and queue fills
Impact Analysis
| Aspect | Impact | Notes |
|---|---|---|
| Memory | Low (Fixed) | 10 threads = ~10MB |
| CPU | Optimal (if sized correctly) | Match to core count |
| Throughput | High (for CPU tasks) | All cores utilized |
| Latency | Medium | Tasks may wait in queue |
| Scalability | Low | Fixed capacity |
When to Use
✅ CPU-intensive workloads (video encoding, data processing)
✅ Predictable load patterns
✅ Resource-constrained environments
✅ When you need to limit concurrent operations
When NOT to Use
❌ I/O-heavy tasks (network calls, file operations)
❌ Highly variable workload
❌ When tasks can pile up indefinitely
2. CachedThreadPool
What is it?
A thread pool that creates new threads as needed but reuses idle threads. Idle threads are kept alive for 60 seconds before being terminated.
Creation
ExecutorService executor = Executors.newCachedThreadPool();
How it Works
Time 0s: No threads exist
Time 1s: Task arrives → Create T1
Time 2s: Task arrives → Create T2
Time 3s: Task arrives → T1 finished, reuse T1
Time 63s: T2 idle for 60s → Terminate T2
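The grow-on-demand behavior above can be observed directly; this sketch (task timings are arbitrary) casts the executor to ThreadPoolExecutor to read its live pool size:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor executor =
                (ThreadPoolExecutor) Executors.newCachedThreadPool();
        try {
            // Five concurrent tasks: with no idle threads to reuse,
            // the pool creates one thread per task
            for (int i = 0; i < 5; i++) {
                executor.submit(() -> {
                    try { Thread.sleep(500); } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
            Thread.sleep(100); // let the tasks start
            System.out.println("Pool size: " + executor.getPoolSize()); // 5
        } finally {
            executor.shutdown();
        }
    }
}
```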
Internal Configuration
// Equivalent to:
new ThreadPoolExecutor(
0, // corePoolSize (no minimum)
Integer.MAX_VALUE, // maximumPoolSize (unlimited!)
60L, // keepAliveTime
TimeUnit.SECONDS,
new SynchronousQueue<>() // No task queue
)
Use Cases ✅
1. Short-Lived Async Tasks
// Email sending service
ExecutorService executor = Executors.newCachedThreadPool();
for (User user : users) {
executor.submit(() -> {
sendEmail(user); // Quick I/O task
});
}
2. I/O-Bound Operations
// Web scraping
ExecutorService executor = Executors.newCachedThreadPool();
for (String url : urls) {
executor.submit(() -> {
String content = fetchUrl(url); // I/O-heavy, not CPU-heavy
parseContent(content);
});
}
3. Burst Traffic Handling
// Handle sudden spike in requests
ExecutorService executor = Executors.newCachedThreadPool();
// Traffic spike: 1000 requests in 5 seconds
// Pool creates threads as needed
// After spike: threads are reclaimed
4. Event Processing
// Handle UI events
ExecutorService executor = Executors.newCachedThreadPool();
button.onClick(() -> {
    executor.submit(() -> {
        // Handle click asynchronously
        processClickEvent();
    });
});
Advantages ✅
- ✅ Adaptive - Scales up/down with load
- ✅ No task queuing - Tasks execute immediately
- ✅ Low latency - Tasks start quickly
- ✅ Resource efficient - Idle threads are terminated
Disadvantages ❌
- ❌ Unbounded threads - Can create thousands of threads (OOM)
- ❌ High overhead - Constant thread creation/destruction
- ❌ Unpredictable resources - Can exhaust system resources
- ❌ Context switching - Too many threads = poor performance
Impact Analysis
| Aspect | Impact | Notes |
|---|---|---|
| Memory | Variable (can be HIGH) | Unbounded threads |
| CPU | Can be poor | Too many threads = context switching |
| Throughput | High (for I/O) | Many tasks execute concurrently |
| Latency | Very Low | Immediate task execution |
| Scalability | High (but risky) | Grows with load |
Real Danger Example
ExecutorService executor = Executors.newCachedThreadPool();
// Sudden burst: 100,000 requests
for (int i = 0; i < 100000; i++) {
executor.submit(() -> fetchUrl("http://api.example.com"));
}
// Result: Attempts to create 100,000 threads!
// - OutOfMemoryError
// - System becomes unresponsive
// - Application crashes
When to Use
✅ I/O-bound tasks (network calls, file operations)
✅ Short-lived tasks (< 1 second)
✅ Unpredictable, bursty workloads
✅ Low to moderate traffic applications
When NOT to Use
❌ High-traffic production systems
❌ CPU-intensive tasks
❌ Long-running tasks
❌ When you can't control task submission rate
3. SingleThreadExecutor
What is it?
A thread pool with exactly one thread. Tasks execute sequentially in order.
Creation
ExecutorService executor = Executors.newSingleThreadExecutor();
How it Works
Thread Pool: [T1]
Task Queue: [Task1] [Task2] [Task3] [Task4]
T1 executes Task1 → Task2 → Task3 → Task4 (sequential)
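Because a single worker drains a FIFO queue, submission order is preserved; a quick sketch (names illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Only the single worker thread touches this list, so no locking needed
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            int n = i;
            executor.submit(() -> order.add(n));
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(order); // [0, 1, 2, 3, 4]
    }
}
```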
Internal Configuration
// Equivalent to:
new ThreadPoolExecutor(
1, // corePoolSize
1, // maximumPoolSize
0L,
TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<>() // Unbounded queue
)
Use Cases ✅
1. Sequential Processing
// Process orders in sequence
ExecutorService executor = Executors.newSingleThreadExecutor();
for (Order order : orders) {
executor.submit(() -> {
processOrder(order); // Must be sequential
});
}
2. Write Operations to Shared Resource
// Log file writer - avoid concurrent writes
ExecutorService logWriter = Executors.newSingleThreadExecutor();
public void log(String message) {
    logWriter.submit(() -> {
        fileWriter.write(message + "\n"); // Sequential writes
    });
}
3. Event Loop Pattern
// Message queue consumer
ExecutorService consumer = Executors.newSingleThreadExecutor();
consumer.submit(() -> {
    while (true) {
        // poll() busy-waits; a BlockingQueue.take() would block instead
        Message msg = messageQueue.poll();
        if (msg != null) {
            handleMessage(msg); // Process one at a time
        }
    }
});
4. State Machine
// Order state transitions must be sequential
ExecutorService stateMachine = Executors.newSingleThreadExecutor();
public void transitionOrder(Order order, State newState) {
    stateMachine.submit(() -> {
        validateTransition(order, newState);
        order.setState(newState);
        saveOrder(order);
    });
}
Advantages ✅
- ✅ Thread-safe by design - No concurrency issues
- ✅ Order guaranteed - Tasks execute in submission order
- ✅ Simple reasoning - No race conditions
- ✅ Minimal overhead - Only one thread
Disadvantages ❌
- ❌ No parallelism - Only one task at a time
- ❌ Throughput limited - Bottleneck for high load
- ❌ Unbounded queue - Tasks can pile up
- ❌ Single point of failure - One stuck task blocks all queued work (a crashed worker thread is replaced, but a hung one is not)
Impact Analysis
| Aspect | Impact | Notes |
|---|---|---|
| Memory | Very Low | Only 1 thread (~1MB) |
| CPU | Low utilization | Single core only |
| Throughput | Low | Sequential execution |
| Latency | Can be high | Tasks wait in queue |
| Scalability | None | Fixed at 1 thread |
When to Use
✅ Ordered processing required (FIFO queue)
✅ Shared resource with no concurrent access
✅ Event loops
✅ State machines
✅ Simple background tasks
When NOT to Use
❌ High-throughput requirements
❌ CPU-intensive workloads
❌ Parallel processing needed
❌ Performance-critical paths
4. ScheduledThreadPool
What is it?
A thread pool for scheduling tasks to run after a delay or periodically.
Creation
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
How it Works
Thread Pool: [T1] [T2] [T3] [T4] [T5]
Schedule Task1 at 10:00:00
Schedule Task2 at 10:00:05
Schedule Task3 every 30 seconds
Methods
1. schedule - Run once after delay
scheduler.schedule(() -> {
System.out.println("Executed after 5 seconds");
}, 5, TimeUnit.SECONDS);
2. scheduleAtFixedRate - Run periodically (fixed rate)
// Start after 0s, then every 10s
// 0s → 10s → 20s → 30s
scheduler.scheduleAtFixedRate(() -> {
System.out.println("Every 10 seconds");
}, 0, 10, TimeUnit.SECONDS);
Fixed Rate: Next execution = initialDelay + (n * period)
- If a task takes longer than the period, the next run starts immediately after it completes
- Executions never overlap: late runs are delayed, not run concurrently
3. scheduleWithFixedDelay - Run periodically (fixed delay)
// Start after 0s, then 10s after previous task completes
// 0s → Task(5s) → 15s → Task(5s) → 30s
scheduler.scheduleWithFixedDelay(() -> {
System.out.println("10 seconds after previous completion");
}, 0, 10, TimeUnit.SECONDS);
Fixed Delay: Next execution = previousCompletion + delay
- Waits for previous task to complete
- No overlapping executions
Use Cases ✅
1. Periodic Data Sync
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
// Sync with external API every 5 minutes
scheduler.scheduleAtFixedRate(() -> {
syncDataFromAPI();
}, 0, 5, TimeUnit.MINUTES);
2. Cache Cleanup
// Clean expired cache entries every hour
scheduler.scheduleWithFixedDelay(() -> {
cache.removeExpiredEntries();
}, 1, 1, TimeUnit.HOURS);
3. Health Checks
// Check service health every 30 seconds
scheduler.scheduleAtFixedRate(() -> {
boolean healthy = checkHealth();
if (!healthy) {
alertOps();
}
}, 0, 30, TimeUnit.SECONDS);
4. Report Generation
// Generate daily report at 2 AM
long initialDelay = calculateDelayUntil2AM();
scheduler.scheduleAtFixedRate(() -> {
generateDailyReport();
}, initialDelay, 24, TimeUnit.HOURS);
5. Session Timeout
// Check for expired sessions every minute
scheduler.scheduleWithFixedDelay(() -> {
sessionManager.cleanupExpiredSessions();
}, 1, 1, TimeUnit.MINUTES);
6. Heartbeat / Keep-Alive
// Send heartbeat every 10 seconds
scheduler.scheduleAtFixedRate(() -> {
sendHeartbeat();
}, 0, 10, TimeUnit.SECONDS);
Advantages ✅
- ✅ Built-in scheduling - No need for Quartz/Spring @Scheduled
- ✅ Reliable timing - Based on a monotonic clock (System.nanoTime()), unaffected by wall-clock changes
- ✅ Flexible - One-time or periodic tasks
- ✅ Concurrent execution - Multiple scheduled tasks
Disadvantages ❌
- ❌ No persistence - Schedules lost on restart
- ❌ No clustering - Single JVM only
- ❌ Limited features - No cron expressions
- ❌ Memory leak risk - Cancelled tasks retain references
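One mitigation for the reference-retention issue is ScheduledThreadPoolExecutor's remove-on-cancel flag; a sketch (the task and delay here are placeholders):

```java
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CancelPolicyDemo {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1);
        // Without this, a cancelled task stays queued (and referenced)
        // until its trigger time elapses
        scheduler.setRemoveOnCancelPolicy(true);
        ScheduledFuture<?> future = scheduler.schedule(
                () -> System.out.println("never runs"), 1, TimeUnit.HOURS);
        future.cancel(false);
        System.out.println("Queued: " + scheduler.getQueue().size()); // 0
        scheduler.shutdown();
    }
}
```

Note this method is on the concrete ScheduledThreadPoolExecutor class, not the ScheduledExecutorService interface.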
Impact Analysis
| Aspect | Impact | Notes |
|---|---|---|
| Memory | Low-Medium | Based on pool size |
| CPU | Low (scheduled) | Tasks run periodically |
| Throughput | N/A | Not for high throughput |
| Latency | Low | Precise scheduling |
| Scalability | Medium | Limited to single JVM |
scheduleAtFixedRate vs scheduleWithFixedDelay
// FIXED RATE - Executions never overlap; if a task outlasts the period,
// subsequent runs start late (immediately after the previous one ends)
scheduler.scheduleAtFixedRate(() -> {
    sleep(15_000); // Task takes 15 seconds
}, 0, 10, TimeUnit.SECONDS);
// Timeline:
// 0s:  Task1 starts
// 15s: Task1 ends; Task2 starts immediately (was due at 10s)
// 30s: Task2 ends; Task3 starts immediately (was due at 20s)
// FIXED DELAY - Waits for completion, then waits the full delay
scheduler.scheduleWithFixedDelay(() -> {
    sleep(15_000); // Task takes 15 seconds
}, 0, 10, TimeUnit.SECONDS);
// Timeline:
// 0s: Task1 starts
// 15s: Task1 ends
// 25s: Task2 starts (10s delay after Task1 completion)
// 40s: Task2 ends
// 50s: Task3 starts
When to Use
✅ Periodic background jobs
✅ Cache management
✅ Health checks / monitoring
✅ Scheduled cleanups
✅ Time-based triggers
When NOT to Use
❌ Distributed/clustered scheduling (use Quartz)
❌ Persistent schedules across restarts
❌ Complex cron expressions
❌ High-precision timing (< 1ms)
5. WorkStealingPool (Java 8+)
What is it?
A thread pool built on ForkJoinPool that uses a work-stealing algorithm: idle threads "steal" queued tasks from busy threads.
Creation
ExecutorService executor = Executors.newWorkStealingPool();
// Defaults to Runtime.getRuntime().availableProcessors()
// Or specify parallelism level
ExecutorService executor = Executors.newWorkStealingPool(8);
How it Works
Traditional Thread Pool
T1: [Task1] [Task2] [Task3] [Task4] ← T1 is busy
T2: [Task5] ← T2 is idle (can't help T1)
T3: [Task6] [Task7] ← T3 is busy
Work-Stealing Pool
T1: [Task1] [Task2] [Task3] [Task4]
T2: [Task5] → steals Task4 from T1 ← Work stealing!
T3: [Task6] [Task7]
Result: Better load balancing
Internal Architecture
// Based on ForkJoinPool
ForkJoinPool pool = new ForkJoinPool(
Runtime.getRuntime().availableProcessors()
);
Each thread has its own deque (double-ended queue):
- Thread pushes tasks to HEAD of its own deque
- Thread pops tasks from HEAD (LIFO - cache locality)
- Other threads steal from TAIL (FIFO - least recently used)
Use Cases ✅
1. Divide-and-Conquer Algorithms
// Blocking on Future.get() inside pool threads risks starvation;
// the idiomatic form is a RecursiveTask with fork/join:
class Fib extends RecursiveTask<Long> {
    private final int n;
    Fib(int n) { this.n = n; }
    @Override
    protected Long compute() {
        if (n <= 1) return (long) n;
        Fib f1 = new Fib(n - 1);
        f1.fork(); // run asynchronously; compute the other half inline
        return new Fib(n - 2).compute() + f1.join();
    }
}
long result = ForkJoinPool.commonPool().invoke(new Fib(30));
2. Parallel Stream Processing
// Uses ForkJoinPool.commonPool() internally
List<Integer> numbers = IntStream.range(0, 1_000_000)
.boxed()
.collect(Collectors.toList());
long sum = numbers.parallelStream()
.filter(n -> n % 2 == 0)
.mapToInt(n -> n * n)
.sum();
3. Recursive Task Processing
ExecutorService executor = Executors.newWorkStealingPool();
public void processDirectory(File dir) {
    File[] files = dir.listFiles();
    if (files == null) return; // not a directory, or I/O error
    for (File file : files) {
        executor.submit(() -> {
            if (file.isDirectory()) {
                processDirectory(file); // Recursive
            } else {
                processFile(file);
            }
        });
    }
}
4. Map-Reduce Operations
// Process large dataset in parallel
List<String> documents = loadDocuments();
Map<String, Integer> wordCount = documents.parallelStream()
.flatMap(doc -> Arrays.stream(doc.split(" ")))
.collect(Collectors.groupingByConcurrent(
Function.identity(),
Collectors.summingInt(e -> 1)
));
Advantages ✅
- ✅ Better load balancing - Work stealing prevents idle threads
- ✅ Cache-friendly - LIFO order improves cache locality
- ✅ Efficient for recursive tasks - Divides work automatically
- ✅ Async by default - Non-blocking task submission
Disadvantages ❌
- ❌ Async mode - Tasks may not execute in submission order
- ❌ Complexity - Harder to reason about execution order
- ❌ Not for I/O tasks - Best for CPU-bound recursive work
- ❌ Unpredictable order - Difficult to debug
Impact Analysis
| Aspect | Impact | Notes |
|---|---|---|
| Memory | Medium | Based on CPU cores |
| CPU | Optimal | Work stealing maximizes utilization |
| Throughput | Very High | For CPU-bound recursive tasks |
| Latency | Medium | Task order unpredictable |
| Scalability | High | Adapts to available cores |
When to Use
✅ Divide-and-conquer algorithms
✅ Recursive task processing
✅ Parallel streams
✅ CPU-intensive tree/graph traversal
✅ Map-reduce operations
When NOT to Use
❌ I/O-bound tasks
❌ Order-dependent processing
❌ Simple sequential tasks
❌ When execution order matters
Comparison Table
| Pool Type | Threads | Queue | Best For | Avoid For |
|---|---|---|---|---|
| FixedThreadPool | Fixed (N) | Unbounded | CPU-intensive, predictable load | I/O tasks, variable load |
| CachedThreadPool | 0 to ∞ | Direct handoff (no queue) | I/O-intensive, short bursts | High traffic, long tasks |
| SingleThreadExecutor | 1 | Unbounded | Sequential processing | Parallel processing |
| ScheduledThreadPool | Fixed (N) | Delayed Queue | Periodic tasks | High throughput |
| WorkStealingPool | N (CPU cores) | Per-thread deque | Recursive CPU tasks | I/O tasks, ordered execution |
Performance Impact Deep Dive
Memory Impact
// Each thread consumes ~1MB of stack memory
FixedThreadPool(100) → 100MB
CachedThreadPool(10000) → 10GB (dangerous!)
WorkStealingPool(8) → 8MB
SingleThreadExecutor → 1MB
CPU Impact
CPU Cores: 8
FixedThreadPool(8):
- CPU Usage: ~100% (optimal)
- Context Switches: Minimal
FixedThreadPool(100):
- CPU Usage: ~100%
- Context Switches: HIGH (poor performance)
CachedThreadPool (200 threads):
- CPU Usage: ~100%
- Context Switches: VERY HIGH (terrible performance)
WorkStealingPool(8):
- CPU Usage: ~100% (optimal)
- Context Switches: Minimal
- Load Balancing: Excellent
Throughput Impact
Scenario (illustrative numbers): process 10,000 CPU-intensive tasks (100ms each)
FixedThreadPool(8):
- Throughput: 80 tasks/second
- Total time: 125 seconds
FixedThreadPool(100):
- Throughput: 70 tasks/second (context switching overhead)
- Total time: 143 seconds
CachedThreadPool:
- Throughput: 50 tasks/second (too many threads)
- Total time: 200 seconds
WorkStealingPool(8):
- Throughput: 85 tasks/second (work stealing benefit)
- Total time: 118 seconds
Common Pitfalls
Pitfall 1: Unbounded Queue OOM
// ❌ DANGEROUS: Can cause OutOfMemoryError
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 1_000_000; i++) {
executor.submit(() -> slowTask());
}
// Queue grows to 1M tasks → OutOfMemoryError!
// ✅ SOLUTION: Use bounded queue
ThreadPoolExecutor executor = new ThreadPoolExecutor(
10, 10,
0L, TimeUnit.MILLISECONDS,
new ArrayBlockingQueue<>(1000) // Bounded to 1000
);
// Or use rejection policy
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
Pitfall 2: CachedThreadPool Explosion
// ❌ DANGEROUS: Can create thousands of threads
ExecutorService executor = Executors.newCachedThreadPool();
// Sudden traffic spike
for (int i = 0; i < 50_000; i++) {
executor.submit(() -> httpRequest());
}
// Creates 50,000 threads → System crash!
// ✅ SOLUTION: Use FixedThreadPool with appropriate size
ExecutorService executor = Executors.newFixedThreadPool(100);
Pitfall 3: Forgetting to Shutdown
// ❌ BAD: Executor never shuts down (JVM doesn't exit)
ExecutorService executor = Executors.newFixedThreadPool(10);
executor.submit(() -> doWork());
// Application hangs!
// ✅ GOOD: Always shutdown
ExecutorService executor = Executors.newFixedThreadPool(10);
try {
    executor.submit(() -> doWork());
} finally {
    executor.shutdown();
    executor.awaitTermination(60, TimeUnit.SECONDS);
}
Pitfall 4: Blocking in Thread Pool
// ❌ BAD: Blocking all threads can cause deadlock
ExecutorService executor = Executors.newFixedThreadPool(5);
for (int i = 0; i < 10; i++) {
    executor.submit(() -> {
        // This task submits another task and waits for it
        Future<?> future = executor.submit(() -> doWork());
        future.get(); // DEADLOCK if all 5 threads are blocked here!
    });
}
// ✅ SOLUTION: Use separate executor or increase pool size
ExecutorService mainExecutor = Executors.newFixedThreadPool(5);
ExecutorService workerExecutor = Executors.newFixedThreadPool(10);
Pitfall 5: Exception Swallowing
// ❌ BAD: Exception is silently swallowed
executor.submit(() -> {
    // With submit(), the exception is captured in the returned Future;
    // if nobody calls get(), it is never seen
    throw new RuntimeException("Error!");
});
// ✅ GOOD: Handle exceptions
executor.submit(() -> {
    try {
        riskyOperation();
    } catch (Exception e) {
        logger.error("Task failed", e);
    }
});
// Or retrieve the failure via Future.get()
Future<?> future = executor.submit(() -> riskyOperation());
try {
    future.get();
} catch (ExecutionException e) {
    logger.error("Task failed", e.getCause());
}
Best Practices
1. Always Shutdown Executors
ExecutorService executor = Executors.newFixedThreadPool(10);
try {
    // Submit tasks
    executor.submit(() -> doWork());
} finally {
    // Graceful shutdown
    executor.shutdown();
    try {
        // Wait for tasks to complete
        if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
            // Force shutdown if timeout
            executor.shutdownNow();
            // Wait again
            if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
                logger.error("Executor did not terminate");
            }
        }
    } catch (InterruptedException e) {
        executor.shutdownNow();
        Thread.currentThread().interrupt();
    }
}
2. Size Thread Pools Correctly
// CPU-intensive tasks
int cpuCores = Runtime.getRuntime().availableProcessors();
ExecutorService cpuExecutor = Executors.newFixedThreadPool(cpuCores);
// I/O-intensive tasks (rule of thumb: 2x cores)
ExecutorService ioExecutor = Executors.newFixedThreadPool(cpuCores * 2);
// Formula for I/O tasks:
// threads = cores * (1 + waitTime / cpuTime)
// Example: 8 cores, 90% waiting, 10% CPU
// threads = 8 * (1 + 0.9 / 0.1) = 8 * 10 = 80
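The sizing formula above, as a small helper (the wait/compute ratios are whatever your profiling shows; names are illustrative):

```java
public class PoolSizing {
    // threads = cores * (1 + waitTime / cpuTime)
    // Math.round guards against floating-point error
    // (0.9 / 0.1 is not exactly 9.0 in double arithmetic)
    static int ioPoolSize(int cores, double waitTime, double cpuTime) {
        return (int) Math.round(cores * (1 + waitTime / cpuTime));
    }

    public static void main(String[] args) {
        System.out.println(ioPoolSize(8, 0.9, 0.1)); // 80
    }
}
```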
3. Use ThreadPoolExecutor Directly for Production
// ✅ PRODUCTION-READY: Full control
ThreadPoolExecutor executor = new ThreadPoolExecutor(
10, // corePoolSize
20, // maximumPoolSize
60L, // keepAliveTime
TimeUnit.SECONDS,
new ArrayBlockingQueue<>(1000), // Bounded queue
new ThreadPoolExecutor.CallerRunsPolicy() // Rejection policy
);
// Set thread factory for better debugging
executor.setThreadFactory(new ThreadFactory() {
    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public Thread newThread(Runnable r) {
        Thread thread = new Thread(r);
        thread.setName("MyApp-Worker-" + counter.incrementAndGet());
        thread.setDaemon(false);
        return thread;
    }
});
4. Monitor Thread Pool Health
@Component
public class ThreadPoolMonitor {
    @Scheduled(fixedRate = 60000) // Every minute
    public void monitorThreadPool() {
        // Assumes an injected ThreadPoolExecutor field named "executor"
        ThreadPoolExecutor executor = (ThreadPoolExecutor) this.executor;
        int activeThreads = executor.getActiveCount();
        int queueSize = executor.getQueue().size();
        long completedTasks = executor.getCompletedTaskCount();
        logger.info("Thread Pool Stats - Active: {}, Queue: {}, Completed: {}",
                activeThreads, queueSize, completedTasks);
        // Alert if the (1000-capacity) queue is filling up
        if (queueSize > 800) {
            alertOps("Thread pool queue is 80% full!");
        }
    }
}
5. Use Rejection Policies
// CallerRunsPolicy: Caller's thread runs the task (backpressure)
new ThreadPoolExecutor.CallerRunsPolicy()
// AbortPolicy: Throw RejectedExecutionException (default)
new ThreadPoolExecutor.AbortPolicy()
// DiscardPolicy: Silently discard the task
new ThreadPoolExecutor.DiscardPolicy()
// DiscardOldestPolicy: Discard oldest task in queue
new ThreadPoolExecutor.DiscardOldestPolicy()
// Custom policy
new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        logger.warn("Task rejected, queuing to overflow queue");
        overflowQueue.offer(r);
    }
}
6. Name Your Threads
// Guava's ThreadFactoryBuilder
ThreadFactory namedThreadFactory = new ThreadFactoryBuilder()
    .setNameFormat("API-Worker-%d")
    .setDaemon(false)
    .build();
ExecutorService executor = new ThreadPoolExecutor(
    10, 10,
    0L, TimeUnit.MILLISECONDS,
    new LinkedBlockingQueue<>(),
    namedThreadFactory
);
// Makes debugging much easier:
// "API-Worker-5" instead of "pool-1-thread-5"
Real-World Examples
Example 1: Web Server Request Handler
@Configuration
public class ThreadPoolConfig {
    @Bean
    public ExecutorService requestExecutor() {
        int cores = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
            cores * 2, // corePoolSize (I/O-bound)
            cores * 4, // maxPoolSize
            60L,       // keepAliveTime
            TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1000),
            namedThreadFactory("Request-Handler"),
            new ThreadPoolExecutor.CallerRunsPolicy()
        );
    }

    private ThreadFactory namedThreadFactory(String prefix) {
        return new ThreadFactoryBuilder()
            .setNameFormat(prefix + "-%d")
            .build();
    }
}
@RestController
public class ApiController {
    @Autowired
    private ExecutorService requestExecutor;

    @GetMapping("/process")
    public CompletableFuture<Response> processRequest() {
        return CompletableFuture.supplyAsync(() -> {
            // Process request asynchronously
            return heavyComputation();
        }, requestExecutor);
    }
}
Example 2: Batch Processing System
@Service
public class BatchProcessor {
    private final ExecutorService executor;

    public BatchProcessor() {
        int cores = Runtime.getRuntime().availableProcessors();
        this.executor = Executors.newFixedThreadPool(cores);
    }

    public void processBatch(List<Record> records) {
        // Split into chunks
        int chunkSize = 1000;
        List<List<Record>> chunks = partition(records, chunkSize);

        // Submit tasks
        List<Future<Result>> futures = chunks.stream()
            .map(chunk -> executor.submit(() -> processChunk(chunk)))
            .collect(Collectors.toList());

        // Wait for all tasks to complete
        List<Result> results = futures.stream()
            .map(future -> {
                try {
                    return future.get();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            })
            .collect(Collectors.toList());

        aggregateResults(results);
    }

    @PreDestroy
    public void shutdown() {
        executor.shutdown();
        try {
            executor.awaitTermination(60, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            executor.shutdownNow();
        }
    }
}
Example 3: Notification Service
@Service
public class NotificationService {
    // I/O-bound: sending emails/SMS
    private final ExecutorService notificationExecutor;

    public NotificationService() {
        int cores = Runtime.getRuntime().availableProcessors();
        this.notificationExecutor = Executors.newFixedThreadPool(cores * 2);
    }

    public void sendNotifications(List<User> users, String message) {
        List<CompletableFuture<Void>> futures = users.stream()
            .map(user -> CompletableFuture.runAsync(() -> {
                try {
                    sendEmail(user, message);
                    sendSMS(user, message);
                } catch (Exception e) {
                    logger.error("Failed to notify user: " + user.getId(), e);
                }
            }, notificationExecutor))
            .collect(Collectors.toList());

        // Wait for all notifications to be sent
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .join();
    }
}
Example 4: Multi-Level Task Processing
@Service
public class TaskProcessingService {
    // Different executors for different task types
    private final ExecutorService fastExecutor;
    private final ExecutorService slowExecutor;
    private final ScheduledExecutorService scheduledExecutor;

    public TaskProcessingService() {
        int cores = Runtime.getRuntime().availableProcessors();
        // Fast tasks: CPU-bound
        this.fastExecutor = Executors.newFixedThreadPool(cores);
        // Slow tasks: I/O-bound
        this.slowExecutor = Executors.newFixedThreadPool(cores * 2);
        // Scheduled tasks
        this.scheduledExecutor = Executors.newScheduledThreadPool(2);
        // Schedule cleanup
        scheduledExecutor.scheduleAtFixedRate(
            this::cleanup,
            1, 1, TimeUnit.HOURS
        );
    }

    public void processTask(Task task) {
        if (task.isFast()) {
            fastExecutor.submit(() -> processFastTask(task));
        } else {
            slowExecutor.submit(() -> processSlowTask(task));
        }
    }

    @PreDestroy
    public void shutdown() {
        shutdownExecutor(fastExecutor);
        shutdownExecutor(slowExecutor);
        shutdownExecutor(scheduledExecutor);
    }

    private void shutdownExecutor(ExecutorService executor) {
        executor.shutdown();
        try {
            if (!executor.awaitTermination(60, TimeUnit.SECONDS)) {
                executor.shutdownNow();
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
Decision Tree
Need periodic execution?
├─ YES → ScheduledThreadPool
└─ NO
   │
   Need sequential execution?
   ├─ YES → SingleThreadExecutor
   └─ NO
      │
      CPU-intensive or I/O-intensive?
      ├─ CPU-intensive
      │  │
      │  Recursive/divide-and-conquer?
      │  ├─ YES → WorkStealingPool
      │  └─ NO → FixedThreadPool(cores)
      │
      └─ I/O-intensive
         │
         Predictable load?
         ├─ YES → FixedThreadPool(cores * 2)
         └─ NO
            │
            Low traffic (<1000 req/min)?
            ├─ YES → CachedThreadPool
            └─ NO → FixedThreadPool with custom size
Summary
| Question | Answer |
|---|---|
| Default choice? | FixedThreadPool with appropriate sizing |
| CPU-intensive? | FixedThreadPool(cores) or WorkStealingPool |
| I/O-intensive? | FixedThreadPool(cores * 2) |
| Scheduled tasks? | ScheduledThreadPool |
| Sequential processing? | SingleThreadExecutor |
| Bursty, low traffic? | CachedThreadPool |
| Recursive tasks? | WorkStealingPool |
Golden Rules
- ✅ Always shutdown executors
- ✅ Size appropriately - CPU tasks = cores, I/O tasks = cores * 2
- ✅ Use bounded queues in production
- ✅ Monitor thread pool metrics
- ✅ Name your threads for debugging
- ✅ Handle exceptions properly
- ❌ Never use Executors.newCachedThreadPool() in production high-traffic systems
- ❌ Avoid unbounded queues (OOM risk)
- ❌ Don't block in thread pools (deadlock risk)
Remember: Choose the right tool for the job. When in doubt, start with FixedThreadPool and measure performance!