Here is a production bug I have seen three times now, in three different codebases, written by three developers who all considered themselves experienced with async JavaScript.
A route handler fires three parallel database queries with Promise.all. One of them hits a slow external service and times out after 30 seconds. Promise.all rejects immediately. The handler sends a 500. The caller moves on. The other two queries are still running. They are holding database connection pool slots. At a few hundred concurrent requests, the pool exhausts. Every subsequent request queues waiting for a slot. The app looks hung, but the logs show mostly successes.
The fix everyone reaches for is adding a shorter timeout to the slow query. That helps but does not solve the underlying issue. When Promise.all rejects, it rejects. It does not cancel the tasks it was waiting on. Those tasks have no owner anymore. They run to completion or to error, nobody is listening, and the resources they hold are not released until they are done.
This is the async leak problem in JavaScript, and it is more common than most people realize because it is often invisible. The code "works" in the sense that it produces correct outputs. The resource leak shows up as a slow degradation under load, a pool exhaustion event, or a flaky test that passes locally and fails in CI on a slow machine.
ES2026 shipped the primitives to actually fix this. You do not need a library. You do need to understand what you are composing and why.
The Three Failure Modes Worth Knowing
Before the solution, the problem is worth making concrete. These are the three production patterns I have seen cause real incidents.
The Abandoned Fetch
```typescript
async function loadDashboard(userId: string) {
  const [user, settings, notifications] = await Promise.all([
    fetchUser(userId),
    fetchSettings(userId),
    fetchNotifications(userId), // slow, sometimes takes 10 seconds
  ]);
  renderDashboard(user, settings, notifications);
}
```
The user navigates away before the notifications fetch completes. The component unmounts. Your framework might fire a cleanup callback, but that cleanup has no way to reach inside Promise.all and abort the in-flight fetches. All three requests continue running. In a single-page app with heavy route churn, these orphaned fetches accumulate. They fill browser connection slots, they log errors to surfaces nobody checks, and they burn mobile data the user did not ask to spend.
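The fix requires giving the cleanup callback something it can actually reach: one component-scoped AbortController wired into every request. A minimal sketch, with a stub that honors the signal the way fetch does (the real fetchUser and friends would forward the signal to fetch):

```typescript
// Sketch: tie all dashboard fetches to one component-scoped controller.
// stubFetch is a stand-in that honors abortion via the signal, the way
// fetch(url, { signal }) does.
function stubFetch(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const t = setTimeout(() => resolve('ok'), ms);
    signal.addEventListener('abort', () => {
      clearTimeout(t); // the in-flight work actually stops
      reject(signal.reason);
    }, { once: true });
  });
}

async function loadDashboard(signal: AbortSignal) {
  return Promise.all([
    stubFetch(10, signal),     // user
    stubFetch(10, signal),     // settings
    stubFetch(10_000, signal), // notifications, slow
  ]);
}

const controller = new AbortController();
const pending = loadDashboard(controller.signal);

// Framework unmount callback: one abort reaches every in-flight request.
controller.abort(new DOMException('unmounted', 'AbortError'));
```

The point is that the cleanup callback no longer needs to reach inside Promise.all: it fires the controller, and every request that received the signal stops.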
The Zombie Database Query
```typescript
const [userData, auditLog, recommendations] = await Promise.all([
  db.users.findOne(id),          // completes in 5ms
  db.audit.findByUser(id),       // completes in 12ms
  externalService.recommend(id), // times out after 30s
]);
```
When recommend throws, Promise.all rejects. Your code catches the error and returns a 500. findOne and findByUser are still holding connection pool slots from the database. In a busy API, this pattern under load means your connection pool fills with queries attached to requests that have already failed, and new requests queue waiting for slots that are technically occupied by work nobody is waiting for.
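Before reaching for any abstraction, the minimal manual fix is one controller shared by all branches, aborted on the first rejection so the siblings can release their slots. A sketch with stand-in queries (the clearTimeout below stands in for releasing a pool slot):

```typescript
// Minimal manual version of "fail one, cancel all": every branch shares
// one controller, and the first rejection aborts the siblings.
function query(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(signal.reason);
    const t = setTimeout(() => resolve('rows'), ms);
    signal.addEventListener('abort', () => {
      clearTimeout(t); // stand-in for releasing the connection slot
      reject(signal.reason);
    }, { once: true });
  });
}

async function loadAll() {
  const controller = new AbortController();
  const cancelSiblingsOnError = <T>(p: Promise<T>) =>
    p.catch((err: unknown) => {
      controller.abort(err); // first failure cancels the other branches
      throw err;
    });

  const tasks = [
    cancelSiblingsOnError(query(5, controller.signal)),
    cancelSiblingsOnError(query(12, controller.signal)),
    // stand-in for externalService.recommend timing out
    cancelSiblingsOnError(Promise.reject(new Error('recommend timed out'))),
  ];

  try {
    return await Promise.all(tasks);
  } finally {
    // do not return until every branch has actually settled
    await Promise.allSettled(tasks);
  }
}
```

The finally block is the part Promise.all omits: the function does not return until every branch has released what it held.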
The Port Still Bound
```typescript
async function run() {
  const server = await startServer(3000);
  await performSetup(); // slow, sometimes takes a few seconds
  await server.waitForShutdown();
}

process.on('SIGINT', () => process.exit(0));
```
You hit Ctrl-C during performSetup. The process.exit(0) fires synchronously, tearing down the event loop before performSetup has a chance to resume and reach any cleanup code. The port stays bound. You try to restart and get EADDRINUSE. You have seen this. The fix is usually "kill the process manually" rather than "understand why the port is not being released."
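A sketch of the safer wiring: translate SIGINT into an abort and let run() unwind through its cleanup, instead of calling process.exit synchronously. The server object and setup delay are stand-ins; the demo aborts programmatically rather than waiting for a real Ctrl-C.

```typescript
// Sketch: a SIGINT handler that signals instead of exiting, so in-flight
// setup can unwind and the server can release the port.
import { setTimeout as sleep } from 'node:timers/promises';

const shutdown = new AbortController();
let portReleased = false;

process.once('SIGINT', () => {
  // Do NOT call process.exit() here; signal and let run() unwind.
  shutdown.abort(new Error('SIGINT'));
});

async function run(signal: AbortSignal) {
  const server = { close: async () => { portReleased = true; } }; // stand-in
  try {
    await sleep(5_000, undefined, { signal }); // stand-in for performSetup()
  } catch {
    // aborted mid-setup: fall through to cleanup below
  } finally {
    await server.close(); // runs on normal exit AND on shutdown
  }
}

const done = run(shutdown.signal);
shutdown.abort(new Error('simulated SIGINT')); // simulate Ctrl-C for the demo
```

The process then exits on its own once the event loop drains, with the port released, rather than being torn down mid-cleanup.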
All three of these have the same root cause: the tasks you started have no owner. When the parent gives up, the children keep running. The language gave you a way to start concurrent work, but not a way to define what happens to that work when the context that started it goes away.
What ES2026 Actually Gives You
The honest framing first: JavaScript in 2026 does not have a "structured concurrency" primitive in the way Go, Kotlin, or Swift do. There is no native task scope that automatically propagates cancellation to children when the parent exits. That language feature does not exist yet.
What does exist is a set of composable primitives that were not in the language two years ago. Together they make it possible to build the pattern yourself without depending on an external library.
await using and Symbol.asyncDispose
The Explicit Resource Management proposal reached Stage 4 in May 2025. await using is now available natively in Node.js 24+ and Chrome 134+. TypeScript has supported it since version 5.2 with transpilation.
The core idea: any object that defines [Symbol.asyncDispose]() returning a Promise can be declared with await using. When the enclosing block exits, regardless of how it exits (normal return, thrown error, early return), the runtime calls and awaits that method before continuing.
```typescript
class DatabaseConnection {
  constructor(private conn: Connection) {}

  async query<T>(sql: string, params: unknown[]): Promise<T> {
    return this.conn.execute(sql, params);
  }

  async [Symbol.asyncDispose]() {
    await this.conn.close();
  }
}

async function getUser(id: string) {
  await using db = new DatabaseConnection(await pool.acquire());
  // the connection releases when this block exits, always.
  // `return await` matters here: without the await, the block would exit
  // and close the connection while the query was still in flight.
  return await db.query('SELECT * FROM users WHERE id = ?', [id]);
}
```
The important part is "always." Not "if we reach the cleanup code." Not "if the Promise chain resolved normally." The disposal runs if the function returns, if it throws, and if an abort signal from higher up makes an await inside the block throw. An abort by itself does not trigger disposal; it triggers the rejection that makes the block exit, and the exit triggers disposal. The LIFO ordering also matters: multiple await using declarations in the same block dispose in reverse order, which is what you want when resources depend on each other.
AsyncDisposableStack extends this for ad-hoc aggregation:
```typescript
async function withCleanup() {
  await using stack = new AsyncDisposableStack();
  const conn = stack.use(await openConnection());
  stack.defer(async () => await logCompletion());
  // both clean up when the block exits, in reverse registration order
  return await conn.query('...');
}
```
The limitation worth knowing: Safari does not support await using natively as of early 2026. TypeScript's transpilation covers it for browser targets, but if you rely on native support in a Safari-heavy environment, test carefully.
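For environments without native support, the semantics can be approximated with plain try/finally, which is roughly what TypeScript's transpilation emits. A rough sketch of the desugaring, with stub resources, showing both the "always runs" guarantee and the LIFO ordering:

```typescript
// Roughly what `await using` desugars to: acquire, then guarantee
// disposal in a finally block, in reverse acquisition order.
const disposed: string[] = [];

function makeResource(name: string) {
  return {
    name,
    async [Symbol.asyncDispose]() { disposed.push(name); },
  };
}

async function withResources() {
  const a = makeResource('connection');
  try {
    const b = makeResource('transaction');
    try {
      return 'result';
    } finally {
      await b[Symbol.asyncDispose](); // inner resource disposes first...
    }
  } finally {
    await a[Symbol.asyncDispose](); // ...outer resource last (LIFO)
  }
}

const resultP = withResources();
```

The real transpiler output also tracks whether acquisition succeeded and aggregates disposal errors into SuppressedError, but the shape is the same: the finally blocks are what make "always" true.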
AbortSignal.any() for Composed Cancellation
AbortSignal.any() shipped in all major browsers in March 2024 (Chrome 116+, Firefox 124+, Safari 17.4+) and is available in Node.js 20+. It takes an array of AbortSignal instances and returns a new signal that fires the moment any of the input signals fires.
```typescript
const controller = new AbortController();
const timeoutSignal = AbortSignal.timeout(5000);
const combined = AbortSignal.any([controller.signal, timeoutSignal]);

const response = await fetch(url, { signal: combined });
```
The fetch aborts if the user cancels (via controller.abort()) or if the 5-second timeout fires, whichever comes first. The combined signal's reason property tells you which input triggered it.
The real value is in composition. You can have a request-scoped abort signal, a user-interaction abort signal, and a global shutdown signal, and combine them into one that you pass into all the work spawned for a given operation. Any of them firing aborts everything.
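A minimal sketch of that composition, with three controllers standing in for the three sources:

```typescript
// One combined signal from three independent cancellation sources.
// Whichever fires first aborts everything downstream, and `reason`
// records which one it was.
const requestController = new AbortController();  // per-request lifetime
const userController = new AbortController();     // e.g. a "Cancel" button
const shutdownController = new AbortController(); // process-wide shutdown

const combined = AbortSignal.any([
  requestController.signal,
  userController.signal,
  shutdownController.signal,
]);

userController.abort(new Error('user clicked cancel'));
// combined.aborted is now true, and combined.reason identifies the source
```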
Building a Task Scope
These two primitives together make a small but useful abstraction possible. I have been using a version of this in a handful of projects.
```typescript
class TaskScope {
  private controller = new AbortController();
  readonly signal: AbortSignal;
  private tasks: Promise<unknown>[] = [];

  constructor(parent?: AbortSignal) {
    // work in this scope aborts when either the parent signal
    // or the scope's own controller fires
    this.signal = parent
      ? AbortSignal.any([parent, this.controller.signal])
      : this.controller.signal;
  }

  spawn<T>(fn: (signal: AbortSignal) => Promise<T>): Promise<T> {
    const task = fn(this.signal).catch((err: unknown) => {
      // one task failing cancels its siblings; an abort is not a new failure
      if ((err as Error)?.name !== 'AbortError') this.controller.abort(err);
      throw err;
    });
    this.tasks.push(task);
    return task;
  }

  async [Symbol.asyncDispose]() {
    this.controller.abort();
    await Promise.allSettled(this.tasks);
  }
}
```

Using it:

```typescript
async function loadDashboard(userId: string, parentSignal: AbortSignal) {
  await using scope = new TaskScope(
    AbortSignal.any([parentSignal, AbortSignal.timeout(8000)]),
  );

  const [user, settings, notifications] = await Promise.all([
    scope.spawn((sig) => fetchUser(userId, sig)),
    scope.spawn((sig) => fetchSettings(userId, sig)),
    scope.spawn((sig) => fetchNotifications(userId, sig)),
  ]);

  return { user, settings, notifications };
}
```
When any of the spawned tasks fails, the catch handler in spawn calls this.controller.abort(). All other spawned tasks receive the abort signal and should stop work. When the await using block exits, the asyncDispose method fires the abort and waits for all tasks to settle before releasing.
This does not magically make your fetch calls abort cleanly. Each function you pass to spawn needs to actually respect the signal. That means threading the signal through to every fetch call, every database query, every async operation that has a cancellation mechanism. The scope provides the structure; you still do the wiring.
The fetch case is easy because the fetch API accepts a signal. The database case depends on your driver. Many modern Node.js database drivers support AbortSignal on query calls. If yours does not, you wrap the query in a Promise.race against the abort signal and release the connection in the losing branch. It is more boilerplate, but the intent is explicit.
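A sketch of that wrapper, assuming a driver with a release() method and no signal support (both names are hypothetical stand-ins for whatever your driver exposes):

```typescript
// Cancellation for work that does not take an AbortSignal: race the
// work against the signal, and release the resource in the losing branch.
function abortable<T>(
  work: Promise<T>,
  signal: AbortSignal,
  onAbort: () => Promise<void>, // e.g. () => conn.release()
): Promise<T> {
  if (signal.aborted) return Promise.reject(signal.reason);
  return new Promise<T>((resolve, reject) => {
    const onAbortEvent = () => {
      // losing branch: release the resource, then reject
      void onAbort().finally(() => reject(signal.reason));
    };
    signal.addEventListener('abort', onAbortEvent, { once: true });
    work.then(resolve, reject).finally(() =>
      signal.removeEventListener('abort', onAbortEvent),
    );
  });
}
```

Usage would look like `await abortable(conn.runQuery(sql), signal, () => conn.release())`, where runQuery and release are whatever your driver actually calls them.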
AsyncLocalStorage as Context Carrier
One more tool that ties this together, particularly in server environments: AsyncLocalStorage from Node.js.
The use case is ambient context, values that need to be available to anything spawned within a request without being passed as arguments everywhere. Request IDs, user sessions, cancellation tokens, tracing metadata.
Node.js 24 changed the internal implementation of AsyncLocalStorage from the legacy async_hooks machinery to a new AsyncContextFrame backend. The public API did not change but the correctness did. Earlier versions had edge cases where context could be silently lost across certain microtask boundary patterns. The Node 24 implementation is more reliable, which matters specifically for patterns where context carries cancellation tokens through nested async call chains.
```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

const requestContext = new AsyncLocalStorage<{
  signal: AbortSignal;
  requestId: string;
}>();

app.use((req, res, next) => {
  const controller = new AbortController();
  res.on('close', () => controller.abort(new Error('client disconnected')));
  requestContext.run({ signal: controller.signal, requestId: req.id }, next);
});

async function anywhereInTheStack() {
  const ctx = requestContext.getStore();
  if (!ctx) throw new Error('called outside a request context');
  // ctx.signal is the request-scoped abort signal,
  // no need to thread it through every function signature
}
```
This pattern composes cleanly with TaskScope. The scope reads the ambient signal from the store, combines it with its own signal, and any work spawned inside inherits both.
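A minimal sketch of that composition, with a stripped-down store and stand-in scope wiring (no framework, just the signal inheritance):

```typescript
// The scope's signal is the ambient request signal combined with the
// scope's own controller. Minimal stand-ins, no framework.
import { AsyncLocalStorage } from 'node:async_hooks';

const requestContext = new AsyncLocalStorage<{ signal: AbortSignal }>();

function scopeSignal(own: AbortController): AbortSignal {
  const ctx = requestContext.getStore();
  // inherit the ambient request signal when present, else stand alone
  return ctx ? AbortSignal.any([ctx.signal, own.signal]) : own.signal;
}

const requestController = new AbortController();
const seen: boolean[] = [];

requestContext.run({ signal: requestController.signal }, () => {
  const own = new AbortController();
  const sig = scopeSignal(own);
  seen.push(sig.aborted);    // not yet aborted
  requestController.abort(); // client disconnects...
  seen.push(sig.aborted);    // ...and the scope's signal fires too
});
```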
When to Reach for Effection
The primitives above get you a long way. For most server routes and browser interactions, await using plus AbortSignal.any() plus a thin scope abstraction covers the problem.
Effection is worth knowing about for cases where the generator-based model is a better fit. It is a maintained library (~5KB gzipped) that enforces the lifetime guarantees at the library level: no task outlives its parent, cancellation propagates down the entire task tree, and cleanup always runs.
```typescript
import { call, main, race, sleep } from 'effection';

await main(function* () {
  const result = yield* race([
    call(function* () { return yield* fetchUser(id); }),
    call(function* () { yield* sleep(5000); throw new Error('timeout'); }),
  ]);
  // the losing task is actively cancelled, not just abandoned
});
```
The difference from Promise.race is that Effection's race actively cancels the loser and awaits its cleanup before resolving. Promise.race abandons the loser. That distinction is exactly the failure mode described at the start.
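The abandonment is easy to demonstrate with plain promises, no Effection required: the losing task's side effect still fires long after the race has settled.

```typescript
// Promise.race abandons the loser: the slow task keeps running and its
// side effect fires after the race is already decided.
import { setTimeout as sleep } from 'node:timers/promises';

let loserFinished = false;

const fast = sleep(5).then(() => 'fast');
const slow = sleep(50).then(() => {
  loserFinished = true; // nobody is waiting, but it runs anyway
  return 'slow';
});

const demo = (async () => {
  const winner = await Promise.race([fast, slow]);
  const atRaceEnd = loserFinished; // still false: the loser is mid-flight
  await slow;                      // only for the demo's bookkeeping
  return { winner, atRaceEnd, afterSlow: loserFinished };
})();
```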
The tradeoff is the generator syntax. It is not familiar to most JavaScript developers, it requires buy-in from the whole team, and it does not incrementally compose with existing async/await code. I would reach for Effection on greenfield CLIs and servers where correctness is the priority and the team is willing to adopt the model. For existing codebases, the await using approach is easier to add incrementally.
The Honest Limitation
I said this at the start and it is worth repeating: JavaScript in 2026 does not enforce task lifetime guarantees. The language lets you build the pattern. It does not require it.
Compare this with Go's goroutines, where passing a context.Context is idiomatic and cancellation propagation is expected by every library you use. Or Kotlin coroutines with structured concurrency enforced by the CoroutineScope. Or Swift's async let, which lexically bounds the lifetime of the spawned task. In those languages, "structured" is a property the runtime or compiler enforces.
In JavaScript, "structured" is a property you add to your codebase through discipline and a thin abstraction. The discipline part is the limiting factor. A new engineer joins, writes Promise.all without threading signals through, and the leak is back.
The TC39 Concurrency Control proposal (Stage 1) is about concurrency limiting, not lifetime management. It adds a governor model for capping concurrent operations, which is useful but a different problem. There is no proposal on the standards track for native task lifetime management as of mid-2026.
What we have is enough to write correct code. What we do not have is a language that makes incorrect code hard to write. That gap is worth being honest about, particularly if you are introducing this pattern to a team that is used to Promise.all and considers the topic closed.
Making It Stick in Practice
The structural change that actually made this work in a production codebase I maintain: treat task scope as a first-class part of the request lifecycle, not an optional add-on.
Every route handler receives an abort signal from the framework (or creates one tied to the response close event). That signal flows into a TaskScope that wraps the handler. Every async operation inside the handler uses scope.spawn rather than raw Promise.all. New code added later follows the same pattern because the pattern is already in the scaffolding.
The cost of adoption is the upfront wiring: making sure fetch calls and database queries actually accept and respect an abort signal. Most modern Node.js libraries do. For the ones that do not, a wrapper that races against the signal is worth writing once and reusing.
The benefit is not academic. Database connection pool exhaustion under load is a genuinely painful incident. Orphaned fetches in a React app are a common source of "this bug only happens after you navigate quickly" reports. Ports that stay bound after Ctrl-C are a small irritation that adds up over a development day.
These primitives exist now, they are stable in Node.js 24 and modern browsers, and they compose cleanly without pulling in a new runtime model. The question is whether you add the pattern to your scaffolding now or explain the connection pool leak to your on-call engineer six months from now.
Given how central async JavaScript is to AI agent tooling and multi-step pipelines where task cancellation actually matters, this is one of those patterns that goes from "good practice" to "necessary" as the complexity of what you are building goes up. The primitives are there. Worth using them.