Let me tell you about a quiet revolution happening in Java. For most of my career, writing concurrent code felt like managing a scarce, expensive resource. Threads were heavy. Creating thousands of them would make the system groan under their weight. We built elaborate thread pools, careful executors, and reactive patterns not because they were simple, but because the foundation—the operating system thread—was so costly.
That foundation has changed.
Virtual threads in Java, part of Project Loom, are a different kind of thread. They are lightweight, almost free to create in terms of memory and CPU. You can have thousands, even millions, of them live simultaneously in your application. The best part? You write code the straightforward, blocking way you always have. The Java runtime handles the complex mapping of these millions of lightweight tasks onto a much smaller pool of real operating system threads.
Think of it like this. Old concurrency was like managing a team of 10 expert chefs (platform threads). If you needed to make 10,000 sandwiches, you’d have to carefully schedule each chef, never letting them stand idle. New concurrency gives you 10,000 helpers (virtual threads). Each helper can work on one sandwich, and if they have to wait for the toaster, they simply step aside so another helper can use the chef. The 10 chefs are always busy, directing the helpers.
Here is the simplest example. Creating 10,000 tasks that sleep is trivial now.
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 10_000).forEach(i -> {
        executor.submit(() -> {
            System.out.println("Task " + i + " starting on: " + Thread.currentThread());
            Thread.sleep(Duration.ofSeconds(1)); // The virtual thread yields here
            System.out.println("Task " + i + " finished.");
            return i;
        });
    });
} // executor.close() waits for all tasks
Executors.newVirtualThreadPerTaskExecutor() is the key. It creates a new virtual thread for every single task you submit. There is no pool with a maximum size to worry about. Thread.sleep is no longer a wasteful operation; it's a signal. It tells the JVM, "This virtual thread has to wait, so you can park it and let another virtual thread run on this underlying OS thread." This is called yielding, or unmounting.
This changes everything. The first technique is simply to adopt this executor and start thinking in terms of tasks, not thread management. Blocking calls—sleeping, waiting for a database, reading a file—are no longer enemies of throughput. They are expected parts of the flow.
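You don't even need an executor to get a virtual thread; the Thread API itself supports them directly. A minimal sketch of direct creation (the class and thread names here are just illustrative):

```java
public class DirectCreation {
    public static void main(String[] args) throws InterruptedException {
        // One-liner: creates and starts a virtual thread immediately.
        Thread vt = Thread.startVirtualThread(() ->
                System.out.println("Hello from " + Thread.currentThread()));
        vt.join();

        // Builder form: lets you configure, e.g. name, before starting.
        Thread named = Thread.ofVirtual()
                .name("worker-", 0) // threads named worker-0, worker-1, ...
                .start(() -> System.out.println("Named: " + Thread.currentThread().getName()));
        named.join();

        System.out.println("vt was virtual: " + vt.isVirtual()); // true
    }
}
```

Both forms produce daemon threads with no thread-local pooling, so creating one per task is the intended usage, not an anti-pattern.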
But launching thousands of independent tasks is only the start. What about tasks that are related? In the old model, if I fired off two subtasks and the first one failed, managing the lifecycle of the second was my problem. I had to write careful cleanup code. This brings me to the second technique: structured concurrency.
Structured concurrency says that the lifespan of concurrent tasks should be tied to a specific code block. A task shouldn’t outlive its parent. Java provides StructuredTaskScope to make this pattern simple and reliable. It ensures that if the main task fails or is interrupted, all its forked child tasks are automatically cancelled. It feels like using a try-with-resources block for concurrency.
Let me show you a common scenario: fetching a user profile from two independent services. (Note that StructuredTaskScope is a preview API as of JDK 21, so you need --enable-preview; in JDK 21, fork returns a Subtask, though earlier previews returned a Future.)
public UserProfile fetchUserProfile(String userId) throws ExecutionException, InterruptedException {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        // Fork two subtasks. They start running immediately.
        StructuredTaskScope.Subtask<User> userTask = scope.fork(() -> callUserService(userId));
        StructuredTaskScope.Subtask<List<Order>> ordersTask = scope.fork(() -> callOrderService(userId));
        // Join the scope: wait for ALL forked tasks to finish (or fail).
        scope.join();
        // If any subtask failed, throw its exception immediately.
        scope.throwIfFailed();
        // At this point, we know both succeeded. Get the results.
        User user = userTask.get();
        List<Order> orders = ordersTask.get();
        return new UserProfile(user, orders);
    } // The scope closes here, ensuring all threads are done.
}
// Contrast with the old, more fragile way:
public UserProfile oldWay(String userId) throws InterruptedException {
    ExecutorService executor = Executors.newCachedThreadPool();
    Future<User> userFuture = executor.submit(() -> callUserService(userId));
    Future<List<Order>> ordersFuture = executor.submit(() -> callOrderService(userId));
    // What if the first get() throws an exception? We must cancel the second.
    try {
        User user = userFuture.get();
        List<Order> orders = ordersFuture.get();
        return new UserProfile(user, orders);
    } catch (ExecutionException e) {
        ordersFuture.cancel(true); // Manual cleanup required
        throw new RuntimeException(e);
    } finally {
        executor.shutdown();
    }
}
The structured approach is cleaner. The scope acts as a firewall for errors and a guaranteed cleanup mechanism. It makes concurrent code easier to read and reason about. The subtasks are scoped to the try block, just like file streams.
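ShutdownOnFailure waits for everything to succeed; its sibling, StructuredTaskScope.ShutdownOnSuccess, is useful when you race redundant sources and only need the first successful result. A sketch under the JDK 21 preview API, with queryMirror as a hypothetical stand-in for a real network call:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;

public class FastestMirror {
    // Hypothetical lookup; real code would make an HTTP call here.
    static String queryMirror(String baseUrl) throws InterruptedException {
        Thread.sleep(50); // simulate network latency
        return "payload from " + baseUrl;
    }

    static String fetchFromFastestMirror() throws InterruptedException, ExecutionException {
        try (var scope = new StructuredTaskScope.ShutdownOnSuccess<String>()) {
            scope.fork(() -> queryMirror("https://mirror-a.example.com"));
            scope.fork(() -> queryMirror("https://mirror-b.example.com"));
            scope.join();          // returns as soon as one subtask succeeds
            return scope.result(); // first successful result; the loser is cancelled
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchFromFastestMirror());
    }
}
```

The cancellation of the losing subtask is automatic, the same guarantee ShutdownOnFailure gives you for failures.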
Now, you might be migrating existing code. You change your executor to a virtual thread per task executor, but you don’t see the massive performance improvement you expected. This leads to the third technique: identifying and fixing pinning.
A virtual thread runs on a real OS thread, called a carrier thread. Normally, when a virtual thread blocks (on sleep, I/O, a lock), the JVM can unmount it from the carrier and park it. The carrier is then free to run a different virtual thread. This is the magic. However, if the virtual thread is in a synchronized block or method, it cannot be unmounted. It is "pinned" to its carrier. That carrier thread is stuck, waiting, just like in the old model.
The synchronized keyword is the most common culprit on JDK 21. (JEP 491, delivered in JDK 24, removes this limitation, but it still matters on the current LTS.) The fix is usually straightforward: replace it with a java.util.concurrent.locks.ReentrantLock.
// Before: Potentially problematic with virtual threads
public class OrderProcessor {
    private final Object lock = new Object();

    public void process(Order order) {
        synchronized (lock) { // This pins the virtual thread for the entire block
            validate(order);
            saveToDatabase(order); // A blocking call! The carrier thread waits.
            sendConfirmation(order);
        }
    }
}

// After: Virtual-thread friendly
public class OrderProcessor {
    private final ReentrantLock lock = new ReentrantLock();

    public void process(Order order) {
        lock.lock(); // Acquire the lock
        try {
            validate(order);
            // The virtual thread can yield here during saveToDatabase if it blocks on I/O!
            saveToDatabase(order); // The carrier thread is released to do other work.
            sendConfirmation(order);
        } finally {
            lock.unlock(); // Always unlock in a finally block
        }
    }
}
The ReentrantLock allows the JVM to unmount the virtual thread while it's blocked inside the saveToDatabase call, even though it still holds the lock. This preserves concurrency. A good rule of thumb is to audit your code for synchronized on methods or blocks that may perform I/O, network calls, or other blocking operations, and replace them. (Calls into native code also pin, but those are rarer. On JDK 21 you can run with -Djdk.tracePinnedThreads=full to print a stack trace whenever a virtual thread pins its carrier, and JFR records a jdk.VirtualThreadPinned event.)
The fourth technique is about observation and debugging. When you have 50,000 threads, a traditional thread dump becomes an unusable wall of text. The tools and your approach need to evolve.
The JVM tooling has been updated, with a caveat: the traditional thread dump you get from jstack or a SIGQUIT signal does not list virtual threads at all. Instead, use the newer jcmd <pid> Thread.dump_to_file command, which can emit a JSON dump that includes virtual threads grouped by their containers. You'll also want to instrument your code. The Thread class has new methods for this.
// In your application code or diagnostics
public void handleRequest(Request request) {
    System.out.println("Handling request on thread: " + Thread.currentThread());
    System.out.println("  Is virtual? " + Thread.currentThread().isVirtual());
    System.out.println("  Thread ID: " + Thread.currentThread().threadId()); // New, stable ID
    if (Thread.currentThread().isVirtual()) {
        // You can add virtual-thread-specific logging or metrics,
        // e.g. a Micrometer counter defined elsewhere in this class.
        virtualThreadCounter.increment();
    }
    // Your business logic...
}
// To monitor states from an admin endpoint or tool. Note a pitfall:
// Thread.getAllStackTraces() returns only platform threads, so it cannot
// see virtual threads. Track them yourself as their tasks start.
private final Set<Thread> liveThreads = ConcurrentHashMap.newKeySet();

public void register() { // call at the start of each task
    liveThreads.add(Thread.currentThread());
}

public Map<String, Long> getThreadStats() {
    liveThreads.removeIf(t -> !t.isAlive()); // drop finished threads
    return liveThreads.stream()
        .filter(Thread::isVirtual)
        .collect(Collectors.groupingBy(
            thread -> thread.getState().toString(),
            Collectors.counting()
        ));
    // Returns something like: {"RUNNABLE": 1245, "WAITING": 567, "BLOCKED": 12}
}
Monitoring tools like Micrometer are also adding support for virtual thread observability. The key is to shift from thinking about thread count as a limited resource to thinking about virtual thread states (RUNNABLE, WAITING, BLOCKED) as indicators of workload and contention.
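One concrete way to watch for contention in real time is to stream the JFR pinning event from inside the process. A minimal sketch using the jdk.jfr.consumer API (the jdk.VirtualThreadPinned event exists since JDK 21; the threshold and class name here are arbitrary choices):

```java
import java.time.Duration;
import jdk.jfr.consumer.RecordingStream;

public class PinWatcher {
    public static void main(String[] args) throws InterruptedException {
        try (var rs = new RecordingStream()) {
            // Only report pins that stall the carrier for 20 ms or more.
            rs.enable("jdk.VirtualThreadPinned")
              .withThreshold(Duration.ofMillis(20))
              .withStackTrace();
            rs.onEvent("jdk.VirtualThreadPinned", event ->
                    System.out.println("Pinned for " + event.getDuration().toMillis() + " ms"));
            rs.startAsync(); // stream in the background without blocking

            // ... application work happens here; pins get logged as they occur.
            Thread.sleep(1000); // keep the demo alive briefly
        }
    }
}
```

Wiring those events into a Micrometer counter or log line gives you a live pinning dashboard with no external profiler attached.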
Finally, the fifth technique is integration. You don’t need to rewrite your Spring Boot or Micronaut application. Most modern frameworks can use virtual threads with configuration changes. The goal is to allow your web server and database connection pool to handle a much higher number of simultaneous requests.
Here is how you might configure a Spring Boot application. (On Spring Boot 3.2+, setting spring.threads.virtual.enabled=true does this for you; the explicit beans below show what happens underneath.)
import org.springframework.boot.web.embedded.tomcat.TomcatProtocolHandlerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.AsyncTaskExecutor;
import org.springframework.core.task.support.TaskExecutorAdapter;
import java.util.concurrent.Executors;

@Configuration
public class VirtualThreadConfiguration {

    // This tells Tomcat to use a virtual thread per request.
    @Bean
    public TomcatProtocolHandlerCustomizer<?> virtualThreadExecutorCustomizer() {
        return protocolHandler ->
            protocolHandler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
    }

    // This configures Spring's @Async support to use virtual threads.
    @Bean
    public AsyncTaskExecutor applicationTaskExecutor() {
        return new TaskExecutorAdapter(Executors.newVirtualThreadPerTaskExecutor());
    }
}
Your database connection pool settings change too. Previously, you might have set a maximum pool size of 20 or 50 to avoid overloading the database or exhausting OS threads. With virtual threads, connections become the primary scarce resource, not threads. You can set a higher pool size because thousands of virtual threads can multiplex their waiting onto a small number of carrier threads.
// HikariCP configuration in Java config
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost/db");
config.setUsername("user");
config.setPassword("pass");
// The key change: you can support more concurrent connections.
// The limit is now your database, not your OS thread count.
config.setMaximumPoolSize(200);
// Connection timeout might need adjustment for higher concurrency.
config.setConnectionTimeout(10000);
return new HikariDataSource(config);
The mental model shifts. Before, you had a limited number of platform threads (like 200) and you designed your entire app around not blocking them. Now, you have a nearly unlimited number of virtual threads. The bottleneck moves elsewhere: to your database connections, your downstream HTTP client pools, or your CPU. Your code can be simpler, blocking when it needs to, while your application handles more work.
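When threads stop being the limiting factor, you bound access to the genuinely scarce resource directly, typically with a Semaphore rather than a sized thread pool. A sketch of the pattern (the limit of 10 and the sleep are arbitrary stand-ins for a real downstream call):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedResource {
    // Allow at most 10 tasks to touch the scarce resource at once,
    // while any number of virtual threads can exist.
    static final Semaphore PERMITS = new Semaphore(10);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxSeen = new AtomicInteger();

    static void useScarceResource() throws InterruptedException {
        PERMITS.acquire(); // waiting here unmounts the virtual thread
        try {
            int now = inFlight.incrementAndGet();
            maxSeen.accumulateAndGet(now, Math::max); // track peak concurrency
            Thread.sleep(10); // stand-in for the real blocking call
        } finally {
            inFlight.decrementAndGet();
            PERMITS.release();
        }
    }

    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                executor.submit(() -> { useScarceResource(); return null; });
            }
        } // close() waits for all 1,000 tasks
        System.out.println("Peak concurrent users: " + maxSeen.get()); // never above 10
    }
}
```

A thousand virtual threads queue up cheaply on the semaphore; only the downstream resource sees at most ten callers at a time.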
In my own testing, the results felt almost misleading. I wrote a simple HTTP server that performed a simulated blocking operation. With a platform thread pool of 200, it topped out at 200 concurrent requests. With virtual threads, it easily handled 10,000, limited only by my test client. The code was identical except for the executor service. That’s the power. It’s not about making individual requests faster; it’s about allowing your system to be busy in a different, more efficient way.
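That experiment is easy to reproduce with the JDK's built-in HTTP server; the only virtual-thread-specific line is setExecutor. A sketch (the port and the simulated 200 ms of blocking work are arbitrary):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class BlockingServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            try {
                Thread.sleep(200); // simulated blocking work (DB call, etc.)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            byte[] body = "done".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        // The one-line difference: a virtual thread per request.
        server.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
        server.start();
        System.out.println("Listening on :8080");
    }
}
```

Swap that executor for a fixed pool of 200 and the same code tops out at 200 concurrent requests; with virtual threads, concurrency is bounded by memory and the client, not the pool.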
Start by experimenting. Use Executors.newVirtualThreadPerTaskExecutor() in a small service. Use structured concurrency for new code. Audit for pinning. Update your frameworks. This isn’t just a new feature; it’s a fundamental change that lets us write clearer, more maintainable concurrent code that scales in a way we previously had to jump through hoops to achieve. It brings the simplicity back.