Jagdish Salgotra

Resource-Aware Structured Concurrency in Java 21

Shortened crosspost version of Part 5 from the "Structured Concurrency in Java (Java 21 Preview Edition)" series.

Note
This article uses Java 21 preview StructuredTaskScope APIs (JEP 453). API changes in later previews are covered in Part 9. Compile and run with --enable-preview.

Originally published on engnotes.dev: Resource-Aware Structured Concurrency in Java 21

This is a shortened version with the same core code and takeaways.

Virtual threads and structured scopes make concurrency easier to write. They do not make resource limits disappear.

That is the part people forget. A service can look elegant in code and still overload a DB pool, exhaust HTTP connections, or push CPU saturation harder than it should. If there is no admission control, request fan-out can still hurt you.

Grouping Work by Real Limits

The article’s core example splits work by resource type and runs each group inside one scope:

public List<String> executeResourceAware(List<ResourceTask> tasks) throws Exception {
    var cpuTasks = tasks.stream().filter(t -> t.getType() == ResourceType.CPU).toList();
    var memoryTasks = tasks.stream().filter(t -> t.getType() == ResourceType.MEMORY).toList();
    var ioTasks = tasks.stream().filter(t -> t.getType() == ResourceType.IO).toList();

    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {

        var cpuResult = scope.fork(() -> executeResourceGroup(cpuTasks));
        var memoryResult = scope.fork(() -> executeResourceGroup(memoryTasks));
        var ioResult = scope.fork(() -> executeResourceGroup(ioTasks));

        scope.join();
        scope.throwIfFailed();

        List<String> allResults = new ArrayList<>();
        allResults.addAll(cpuResult.get());
        allResults.addAll(memoryResult.get());
        allResults.addAll(ioResult.get());

        return allResults;
    }
}

I like this because it forces the code to acknowledge that not all parallel work is the same. CPU pressure, memory pressure, and I/O pressure are different problems, so they should not all share one implicit concurrency policy.
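One way to make that per-group policy explicit is a semaphore cap sized for each resource type. The sketch below is not from the article; it assumes a hypothetical cap of one permit per core for CPU work (the `GroupLimits` class, `runCpuTask` helper, and the fixed I/O permit count are illustrative choices), and uses the non-preview virtual-thread executor so it runs without `--enable-preview`:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class GroupLimits {
    // Hypothetical per-group caps: CPU bounded by core count,
    // I/O bounded by whatever the downstream pool can actually serve.
    static final Semaphore CPU_PERMITS =
        new Semaphore(Runtime.getRuntime().availableProcessors());
    static final Semaphore IO_PERMITS = new Semaphore(64); // e.g. match the DB pool size

    static String runCpuTask(Callable<String> task) throws Exception {
        CPU_PERMITS.acquire();      // block until a core-sized slot frees up
        try {
            return task.call();
        } finally {
            CPU_PERMITS.release();  // always return the permit, even on failure
        }
    }

    public static void main(String[] args) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var futures = List.of(
                executor.submit(() -> runCpuTask(() -> "cpu-1 completed")),
                executor.submit(() -> runCpuTask(() -> "cpu-2 completed")));
            for (var f : futures) {
                System.out.println(f.get());
            }
        }
    }
}
```

Inside `executeResourceGroup`, each forked subtask would acquire from its group's semaphore, so the scope still owns the lifecycle while the semaphore owns the concurrency ceiling.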

Structured Scope Still Helps

Inside the resource group, the orchestration stays simple:

private List<String> executeResourceGroup(List<ResourceTask> tasks) throws Exception {
    List<String> results = new ArrayList<>();

    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        List<StructuredTaskScope.Subtask<String>> subtasks = new ArrayList<>();

        for (ResourceTask task : tasks) {
            subtasks.add(scope.fork(() -> {
                Thread.sleep(task.getDuration() * 100);
                return task.getName() + " completed";
            }));
        }

        scope.join();
        scope.throwIfFailed();

        for (var subtask : subtasks) {
            results.add(subtask.get());
        }
    }

    return results;
}

That is the useful combination here. Resource policy stays explicit, but the request lifecycle is still easy to follow.

Bulkheads Matter More Than People Want To Admit

The article also makes the right point about separating critical and non-critical work:

public String bulkheadPattern() throws Exception {
    try (var criticalScope = new StructuredTaskScope.ShutdownOnFailure();
         var nonCriticalScope = new StructuredTaskScope.ShutdownOnFailure()) {

        var criticalService1 = criticalScope.fork(() -> simulateServiceCall("critical-auth", 100));
        var criticalService2 = criticalScope.fork(() -> simulateServiceCall("critical-payment", 150));

        var nonCriticalService1 = nonCriticalScope.fork(() -> simulateServiceCall("analytics", 200));
        var nonCriticalService2 = nonCriticalScope.fork(() -> simulateServiceCall("logging", 50));
        criticalScope.join();
        criticalScope.throwIfFailed();
        try {
            nonCriticalScope.join();
            nonCriticalScope.throwIfFailed();
        } catch (Exception e) {
            logger.warn("Non-critical services failed: {}", e.getMessage());
        }

        return String.format("Bulkhead Pattern: Critical[%s, %s] Non-Critical[%s, %s]",
            criticalService1.get(), criticalService2.get(),
            nonCriticalService1.state() == StructuredTaskScope.Subtask.State.SUCCESS
                ? nonCriticalService1.get() : "analytics-failed",
            nonCriticalService2.state() == StructuredTaskScope.Subtask.State.SUCCESS
                ? nonCriticalService2.get() : "logging-failed");
    }
}

This is a good reminder that not every task deserves the same priority. If critical work and optional enrichments share the same fate under load, the system usually becomes less predictable than it needs to be.

The Practical Caveat

Virtual threads can increase incoming concurrency faster than downstream systems can absorb it. That is why admission guards, bulkheads, semaphores, and bounded CPU phases still matter.
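An admission guard can be as small as a `tryAcquire` at the entry point: instead of queuing every request on a cheap virtual thread, shed load once downstream capacity is spent. This is a minimal sketch, not code from the article; the `AdmissionGuard` class name, the fallback-value design, and the permit count are all assumptions for illustration:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class AdmissionGuard {
    private final Semaphore permits;

    public AdmissionGuard(int maxConcurrent) {
        // maxConcurrent should track real downstream capacity (pool size, rate limit),
        // not how many virtual threads the JVM can create.
        this.permits = new Semaphore(maxConcurrent);
    }

    // Fast-fail: return a fallback instead of queuing when capacity is exhausted.
    public <T> T call(Callable<T> work, T rejectedValue) throws Exception {
        if (!permits.tryAcquire()) {
            return rejectedValue;
        }
        try {
            return work.call();
        } finally {
            permits.release();
        }
    }
}
```

Wrapping something like `executeResourceAware` in a guard sized to the DB pool keeps request fan-out from outrunning the systems the scopes fan out to.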

I would review this kind of code with a few questions in mind:

  • Are concurrency limits aligned with real downstream capacity?
  • Is critical work isolated from optional work?
  • Are CPU-heavy phases still bounded?
  • Is throwIfFailed() used consistently in ShutdownOnFailure scopes?

Those decisions usually matter more than the concurrency primitive itself.

The Practical Takeaway

What structured concurrency gives you here is not automatic capacity management. It gives you a cleaner place to express capacity management.

That is useful, because the resource policy becomes visible in code instead of being left to chance, defaults, or accident.

Full article with more examples, operational metrics, testing notes, runnable repo, and live NoteSensei chat:

Resource-Aware Structured Concurrency in Java 21
