Note
Examples in this article use the Java 21 preview StructuredTaskScope APIs (JEP 453). See Part 9 for the Java 25 migration mapping. Compile and run with --enable-preview.
Originally published on engnotes.dev: Progressive Results and Hierarchical Task Management
This is a shortened version with the same core code and takeaways.
Some workflows are more awkward than “fork a few tasks and wait for all of them.”
Sometimes you want progress updates while work is still running. Sometimes the workflow has natural parent-child boundaries. Sometimes one part can degrade while another part still has to succeed. That is where structured concurrency gets more interesting.
Progressive Results Without Losing Scope
Java 21 preview gives you joinUntil(...), which is enough to build polling-style progress tracking:
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    List<StructuredTaskScope.Subtask<T>> subtasks = new ArrayList<>();
    for (int i = 0; i < request.getTasks().size(); i++) {
        final int taskIndex = i;
        Callable<T> task = request.getTasks().get(i);
        subtasks.add(scope.fork(() -> {
            try {
                T result = task.call();
                progressTracker.updateProgress(executionId, taskIndex, "completed");
                request.getProgressCallback().accept(new ProgressUpdate<>(taskIndex, result, null));
                return result;
            } catch (Exception e) {
                progressTracker.updateProgress(executionId, taskIndex, "failed: " + e.getMessage());
                request.getProgressCallback().accept(new ProgressUpdate<>(taskIndex, null, e));
                throw e;
            }
        }));
    }

    Instant deadline = Instant.now().plus(request.getTimeout());
    boolean allDone = false;
    while (!allDone && Instant.now().isBefore(deadline)) {
        try {
            scope.joinUntil(Instant.now().plusMillis(50));
            allDone = true; // joinUntil returned: every subtask has completed
        } catch (TimeoutException e) {
            // not finished yet; inspect subtask state here
        }
    }
    if (!allDone) {
        scope.shutdown(); // deadline passed: cancel remaining subtasks
        scope.join();     // the scope must be joined before it closes
    }
}
What I like about this pattern is that you can expose progress without falling back to a completely ad hoc concurrency model. The scope still owns the lifecycle. You are just checking state as work completes.
The tradeoff is pretty obvious too: if the polling interval gets too tight, you start paying for it in CPU noise. So this is something to tune, not cargo-cult.
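Between joinUntil polls you can compute a progress count straight from subtask state, with no extra bookkeeping. A minimal sketch, assuming the Java 21 preview Subtask.State API; the class and helper name (countSettled) are mine, not from the article's repo:

```java
import java.util.List;
import java.util.concurrent.StructuredTaskScope;
import java.util.concurrent.StructuredTaskScope.Subtask;

public class ProgressCount {
    // A subtask is settled once its state is no longer UNAVAILABLE,
    // i.e. it reached SUCCESS or FAILED.
    static <T> long countSettled(List<? extends Subtask<T>> subtasks) {
        return subtasks.stream()
                .filter(s -> s.state() != Subtask.State.UNAVAILABLE)
                .count();
    }

    public static void main(String[] args) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            List<Subtask<Integer>> subtasks = List.of(
                    scope.fork(() -> 1),
                    scope.fork(() -> 2));
            scope.join();
            System.out.println(countSettled(subtasks)); // prints 2 after join
        }
    }
}
```

Calling state() is cheap and safe from the owner thread, so this fits inside the polling loop without extra synchronization.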
Nested Scopes Can Actually Help
The hierarchical example in the article is simple, but it gets the point across:
public String executeHierarchical() throws Exception {
    try (var parentScope = new StructuredTaskScope.ShutdownOnFailure()) {
        var childTask1 = parentScope.fork(() -> executeChildTasks("Group-1"));
        var childTask2 = parentScope.fork(() -> executeChildTasks("Group-2"));
        var childTask3 = parentScope.fork(() -> executeChildTasks("Group-3"));
        parentScope.join();
        parentScope.throwIfFailed();
        return String.format("Parent completed: [%s, %s, %s]",
                childTask1.get(), childTask2.get(), childTask3.get());
    }
}
With a child scope like:
private String executeChildTasks(String group) throws Exception {
    try (var childScope = new StructuredTaskScope.ShutdownOnFailure()) {
        var task1 = childScope.fork(() -> {
            Thread.sleep(50);
            return group + "-Task-1";
        });
        var task2 = childScope.fork(() -> {
            Thread.sleep(100);
            return group + "-Task-2";
        });
        childScope.join();
        childScope.throwIfFailed();
        return String.format("%s: [%s, %s]", group, task1.get(), task2.get());
    }
}
This is useful when the workflow already has natural business layers. Parent scope owns the larger operation. Child scope owns the local details. That is a lot easier to reason about than one huge flat orchestration block.
Degraded Child Work Should Be Explicit
The article also shows a useful boundary: some scopes are critical, some are not.
If enrichment work can fail without breaking the main response, say that clearly in code. Catch it at that boundary. Do not let “optional” behavior hide in random places.
That is where hierarchical scopes help. They give you a clean spot to say which failures are terminal and which ones are acceptable degradation.
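A minimal sketch of that boundary, with illustrative names that are mine (fetchOrder as the critical branch, fetchRecommendations as optional enrichment): the optional failure is caught inside its subtask, so it never trips ShutdownOnFailure, while the critical branch stays strict.

```java
import java.util.concurrent.StructuredTaskScope;

public class DegradedEnrichment {
    static String fetchOrder() { return "order-42"; }                      // critical
    static String fetchRecommendations() {                                  // optional
        throw new RuntimeException("enrichment down");
    }

    public static String handle() throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var order = scope.fork(DegradedEnrichment::fetchOrder);
            // Catch the optional failure at this boundary, inside the subtask,
            // so the scope never sees it as a failed subtask.
            var recs = scope.fork(() -> {
                try {
                    return fetchRecommendations();
                } catch (Exception e) {
                    return "no-recommendations"; // explicit degraded value
                }
            });
            scope.join();
            scope.throwIfFailed(); // only critical failures reach here
            return order.get() + " / " + recs.get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle()); // prints: order-42 / no-recommendations
    }
}
```

The catch block is the whole point: "optional" is written down exactly once, at the boundary, instead of being implied by whichever caller happens to swallow the exception.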
The Java 21 Review Rule
For this style of code, I would check a few things first:
- are progress callbacks lightweight
- is the polling interval sane
- does timeout handling call scope.shutdown() when returning early
- does every ShutdownOnFailure path still call throwIfFailed()
- is the scope hierarchy shallow enough to stay debuggable
Those details matter more than the pattern names.
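The timeout item is the one that trips people up in practice: in the Java 21 preview API, a scope must be joined after its last fork before it closes, so an early return on deadline needs shutdown() followed by join(). A minimal, self-contained sketch of that rule (the names are mine):

```java
import java.time.Instant;
import java.util.concurrent.StructuredTaskScope;
import java.util.concurrent.TimeoutException;

public class DeadlineShutdown {
    public static String runWithDeadline() throws InterruptedException {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            scope.fork(() -> {
                Thread.sleep(5_000); // deliberately slower than the deadline
                return "slow";
            });
            try {
                scope.joinUntil(Instant.now().plusMillis(100));
                return "completed";
            } catch (TimeoutException e) {
                scope.shutdown(); // cancel the remaining subtasks
                scope.join();     // required: the scope must be joined before it closes
                return "timed-out";
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWithDeadline()); // prints: timed-out
    }
}
```

Skipping that join() means close() throws IllegalStateException, which turns a clean timeout into a confusing secondary failure.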
The Practical Takeaway
What I like about these patterns is that they let you model more realistic workflows without throwing away the lifecycle clarity that made structured concurrency useful in the first place.
You can report progress. You can nest work. You can degrade one branch and keep another strict. But the boundaries still stay visible.
That is the part worth keeping.
The full article, with more examples, timeout guidance, testing notes, a runnable repo, and live NoteSensei chat, is on engnotes.dev.