In 2006, I worked on a large logistics system for transporting wood for pulp production. One of the most critical modules managed the entry and exit of trucks from a storage yard. We implemented it using Threads in Delphi 7. The debugging process was an absolute nightmare.
Hundreds of trucks arriving and leaving, each one triggering database checks, sensor readings, queue management, and synchronization logic. Every thread was a heavyweight OS thread. Memory usage skyrocketed, context switching killed performance, and when something went wrong (which happened constantly), the debugger would freeze or show you a stack trace that made zero sense because the threads were all fighting for the same resources.
Sound familiar?
That exact pain — the same pain Java developers have felt for 30 years with regular Thread objects and ExecutorService backed by platform threads — is what Project Loom was created to eliminate.
Today, with Java 21+ (and fully mature in 2026), Virtual Threads are production-ready and change the game completely.
Let’s demystify Project Loom and see, with real Java code, why Virtual Threads are not just “better threads” — they are a completely different beast.
The Problem with Traditional (Platform) Threads
Since Java 1.0, every new Thread() or thread from a ThreadPoolExecutor is a platform thread:
- It maps one-to-one to an OS thread.
- It consumes ~1 MB of stack space (configurable, but rarely less).
- Creating thousands of them is expensive and risky.
- Blocking operations (I/O, sleep(), database calls, HTTP requests) block the entire OS thread.
- Context switching is handled by the operating system (expensive).
In the 2006 Delphi system, we could barely handle a few hundred concurrent trucks before the server started thrashing. The same limitation existed in Java until Project Loom.
Classic Platform Thread Example (The Old Way)
Here’s how we used to handle concurrent truck processing in Java (pre-Loom):
*All examples target Java 25. Virtual Threads have been final since Java 21, but the Structured Concurrency examples use a preview API, so you must compile and run those with `--enable-preview`.*
```java
// OLD STYLE - Platform Threads
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TruckYardPlatformThreads {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(200); // Limited by OS

        for (int i = 0; i < 10_000; i++) { // Imagine 10,000 trucks arriving
            final int truckId = i;
            executor.submit(() -> processTruck(truckId));
        }

        executor.shutdown();
    }

    private static void processTruck(int truckId) {
        try {
            System.out.println("Truck #" + truckId + " entering yard [Thread: "
                    + Thread.currentThread() + "]");
            // Simulate blocking I/O (database, sensor, loading/unloading)
            Thread.sleep(1000); // This BLOCKS an entire OS thread!
            System.out.println("Truck #" + truckId + " leaving yard");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```
Problems with this code:
- Maximum practical concurrency: ~200–500 threads on most servers.
- Memory usage: 200 threads × 1 MB stack = 200 MB just for stacks.
- Debugging deadlocks or race conditions? Good luck.
Enter Project Loom and Virtual Threads
Project Loom (started in 2017) introduced Virtual Threads — lightweight threads managed entirely by the JVM, not the OS.
Key differences:
| Feature | Platform Threads (Old) | Virtual Threads (Loom) |
|---|---|---|
| Mapping | 1:1 with OS thread | M:N — many virtual threads multiplexed over a few carrier (platform) threads |
| Stack size | ~1 MB (fixed) | A few KB (dynamic, grows as needed) |
| Creation cost | Expensive | Extremely cheap |
| Maximum practical count | Thousands | Millions |
| Blocking behavior | Blocks OS thread | Does not block carrier thread |
| Scheduling | OS scheduler | JVM scheduler (work-stealing) |
| Debugging experience | Same as before | Identical to platform threads |
Virtual threads are not a new abstraction like coroutines or reactive streams. They are real Thread objects — you can use Thread.currentThread(), synchronized, locks, ThreadLocal (though ScopedValue is preferred now), and they behave exactly like the threads you already know.
The magic is under the hood: when a virtual thread blocks (e.g., sleep(), I/O, waiting for a lock), the JVM unmounts it from its carrier thread and mounts another virtual thread. The carrier thread (a real OS thread) never blocks.
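You can verify the "just a Thread" claim yourself. Here is a minimal sketch (Java 21+, no preview flags needed; the thread name is an invented example):

```java
// Minimal sketch (Java 21+): a virtual thread is a real java.lang.Thread
public class VirtualThreadBasics {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual()
                .name("truck-checkin")
                .start(() -> System.out.println(
                        "isVirtual = " + Thread.currentThread().isVirtual()));

        vt.join(); // join(), interrupt(), getName() all work exactly as usual
        System.out.println("name = " + vt.getName());
    }
}
```

The same `Thread.Builder` API also creates platform threads via `Thread.ofPlatform()`, which makes side-by-side comparisons easy.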
Creating Virtual Threads – The New Way
```java
// NEW STYLE - Virtual Threads (Java 21+)
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 1_000_000; i++) { // ONE MILLION trucks? No problem!
        final int truckId = i;
        executor.submit(() -> processTruck(truckId));
    }
} // close() waits for all submitted tasks to finish
```
Blocking calls like Thread.sleep(), database queries, or HTTP requests no longer waste carrier threads.
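You can see the unmounting behavior with a quick, self-contained experiment (smaller numbers than the million-truck example so it runs in seconds): 10,000 concurrent 100 ms sleeps would take about 1,000 seconds sequentially, yet finish almost immediately on a handful of carrier threads.

```java
// Sketch: 10,000 concurrent 100 ms "truck stops" overlap instead of queuing
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;

public class OverlappingSleeps {
    public static void main(String[] args) {
        Instant start = Instant.now();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(100); // unmounts this virtual thread, freeing its carrier
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() blocks until every task has completed
        long elapsed = Duration.between(start, Instant.now()).toMillis();
        // Sequentially this would be ~1,000,000 ms; concurrently it is a tiny fraction of that
        System.out.println("elapsed < 10s: " + (elapsed < 10_000));
    }
}
```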
Structured Concurrency (The Missing Piece)
Project Loom also introduced Structured Concurrency (via StructuredTaskScope), which makes concurrent code safer and more readable.
Old way (fire-and-forget):
```java
// OLD STYLE - Fire-and-forget with platform threads
// Hard to manage lifecycle, cancellation, and errors
for (int i = 0; i < 5; i++) {
    final int truckId = i; // lambdas can only capture effectively final variables
    new Thread(() -> {
        try {
            processTruck(truckId);
        } catch (Exception e) {
            // Error handling is scattered and unreliable
            e.printStackTrace();
        }
    }).start();
}
// No guarantee that threads will finish, no easy way to wait or cancel
```
This approach often leads to thread leaks, orphaned tasks, and difficult debugging — the same issues I faced 20 years ago in Delphi.
New way with Structured Concurrency:
```java
// NEW STYLE - Structured Concurrency with Virtual Threads
// Requires: import java.util.ArrayList; import java.util.List;
//           import java.util.concurrent.StructuredTaskScope; (preview API in Java 25)

private static String processTruckWithResult(int truckId) throws InterruptedException {
    System.out.println("Truck #" + truckId + " entering yard [Virtual Thread: "
            + Thread.currentThread() + "]");
    // Simulate blocking work (DB, sensor, etc.)
    Thread.sleep(1000);
    return "Truck #" + truckId + " processed successfully";
}

public static void main(String[] args) throws InterruptedException {
    try (var scope = StructuredTaskScope.open(
            StructuredTaskScope.Joiner.allSuccessfulOrThrow())) {

        List<StructuredTaskScope.Subtask<String>> subtasks = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int truckId = i;
            subtasks.add(scope.fork(() -> processTruckWithResult(truckId)));
        }

        scope.join(); // Wait for all subtasks

        for (var subtask : subtasks) {
            // Results are available and exceptions are propagated cleanly
            System.out.println(subtask.get());
        }
    }
    // Scope automatically shuts down and cleans up all subtasks
}
```
Benefits:
- Tasks are confined to a lexical scope (like try-with-resources for threads).
- If one task fails, remaining tasks are cancelled automatically.
- No thread leaks.
- Clear ownership and predictable lifecycle.
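The automatic-cancellation behavior is worth seeing in isolation. The sketch below uses the same Java 25 preview API as above (`--enable-preview` required); the failing-sensor scenario is invented for illustration. When one subtask throws, `join()` fails and the scope interrupts the still-running sibling before `close()` returns:

```java
import java.util.concurrent.StructuredTaskScope;

// Hypothetical scenario: a broken sensor fails fast and cancels the slow sibling task
public class FailFastDemo {
    public static void main(String[] args) throws InterruptedException {
        try (var scope = StructuredTaskScope.open(
                StructuredTaskScope.Joiner.<String>allSuccessfulOrThrow())) {

            scope.fork(() -> {
                Thread.sleep(5_000); // slow task: interrupted when its sibling fails
                return "slow truck processed";
            });
            scope.fork(() -> {
                Thread.sleep(100);
                throw new IllegalStateException("weighbridge sensor failure");
            });

            scope.join(); // throws because one subtask failed
        } catch (StructuredTaskScope.FailedException e) {
            // By this point the scope has already cancelled and waited for the slow task
            System.out.println("Failed fast: " + e.getCause().getMessage());
        }
    }
}
```

Note that the program finishes in roughly 100 ms, not 5 seconds: failure of one subtask short-circuits the whole scope.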
Additional Loom Features You Should Know
- ScopedValue (modern, safer replacement for ThreadLocal in many cases).
- Excellent backward compatibility — most existing thread-based code works with virtual threads with zero or minimal changes.
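A quick taste of ScopedValue (final in Java 25 via JEP 506, preview in earlier releases; the truck-ID context is an invented example). Unlike ThreadLocal, the binding exists only for a well-defined scope, so there is no set()/remove() bookkeeping to leak:

```java
// ScopedValue sketch: an immutable per-scope binding instead of a mutable ThreadLocal
public class ScopedValueDemo {
    private static final ScopedValue<String> TRUCK_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        // TRUCK_ID is bound only while run() executes
        ScopedValue.where(TRUCK_ID, "TRUCK-42").run(ScopedValueDemo::checkIn);
    }

    private static void checkIn() {
        // Readable anywhere down the call chain while the binding is in scope
        System.out.println("Checking in " + TRUCK_ID.get());
    }
}
```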
When Should You Use Virtual Threads?
Use Virtual Threads when:
- Your application is I/O bound (web services, APIs, databases, logistics systems).
- You need high concurrency with simple code.
Use Platform Threads (or combine) when:
- Pure CPU-bound heavy computation.
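For the CPU-bound case, a fixed pool sized to the core count remains the right tool, since extra threads beyond the cores add scheduling overhead without adding throughput. A minimal sketch (the summation is just a stand-in for real number-crunching):

```java
// Sketch: CPU-bound work still belongs on a pool sized to the available cores
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CpuBoundPool {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        // ExecutorService is AutoCloseable since Java 19
        try (ExecutorService pool = Executors.newFixedThreadPool(cores)) {
            var future = pool.submit(() -> {
                long sum = 0;
                for (long i = 1; i <= 1_000_000; i++) sum += i; // pure computation, no I/O
                return sum;
            });
            System.out.println("sum = " + future.get());
        }
    }
}
```

A common hybrid is virtual threads for the I/O-facing request handling, which then submit heavy computations to a small fixed pool like this one.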
Final Thoughts
The 2006 logistics system I worked on would have been radically simpler and more reliable with Virtual Threads and Structured Concurrency. No more thread pool tuning nightmares. No more “out of threads” errors at 3 AM.
Project Loom delivers scalable concurrency without forcing you to abandon the familiar threaded programming model.
If you’re still managing fixed thread pools in 2026… it’s time to upgrade.
Have you migrated any production systems to Virtual Threads yet? What was your experience?
Drop your thoughts in the comments below. I’ll see you in the next post where we’ll explore Structured Concurrency and Scoped Values in much more depth.
Until then — keep those memory leaks under control. 🚚
Originally posted on my blog, Memory Leak