DEV Community

AdmilsonCossa

AI agents do not fail in one place

They fail across concurrency, retries, timeouts, queues, tools, streams, and provider calls.

That is why the JavaScript ecosystem has huge demand for separate async primitives:

| Package | Weekly Downloads |
| --- | --- |
| p-limit | ~204M |
| p-map | ~53M |
| p-timeout | ~36M |
| p-retry | ~38M |
| async-retry | ~24M |
| p-queue | ~23M |
| bottleneck | ~10M |

These libraries are good.

But the production pain is deeper:

You want concurrency → add p-limit

You want retries → add p-retry

You want timeouts → add p-timeout

You want queues → add p-queue

You want rate limits → add bottleneck

Now each primitive owns a different part of the lifecycle.

None coordinate cancellation together.

When an AI agent fails mid‑flight:

Who stops the retry?

Who clears the timeout?

Who drains the queue?

Who cleans up the tool call?

Who prevents the losing provider from continuing to bill?
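To make the failure concrete, here is a sketch using minimal stand-ins for the retry and timeout wrappers (these are not the real p-retry / p-timeout packages, just the same shape): the timeout rejects the caller's promise, but nothing tells the retry loop underneath to stop.

```javascript
// Minimal stand-ins to show the lifecycle split. Each wrapper owns
// only its own piece: the timeout owns the outer promise, the retry
// loop owns the attempts, and neither knows about the other.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

function withRetry(fn, retries) {
  return async () => {
    for (let attempt = 0; ; attempt++) {
      try {
        return await fn();
      } catch (err) {
        if (attempt >= retries) throw err;
        await sleep(10); // fixed backoff, for simplicity
      }
    }
  };
}

function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("timed out")), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A flaky "provider call" that never succeeds.
let attempts = 0;
const flaky = async () => {
  attempts++;
  await sleep(20);
  throw new Error("provider error");
};

async function main() {
  try {
    // The timeout rejects the outer promise after 35ms...
    await withTimeout(withRetry(flaky, 10)(), 35);
  } catch {
    // ...but nothing cancels the retry loop: it keeps calling the
    // provider (and the provider keeps billing) in the background.
    const attemptsAtTimeout = attempts;
    await sleep(100);
    console.log(attempts > attemptsAtTimeout); // logs true: retries outlived the timeout
  }
}
main();
```

The caller believes the operation is over; the retry loop does not. That gap is exactly where orphaned tool calls and surprise provider bills come from.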


WorkIt explores a different model:

```javascript
work(items)
  .inParallel(8)
  .withRetry(3)
  .withTimeout("5s")
  .do(fn)
```

One scope. One owner. One cancellation path. One cleanup model.

Concurrency, retry, timeout – under the same ownership tree.
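A hypothetical sketch of what single ownership can look like using nothing but an `AbortController` — this is not WorkIt's actual implementation, and `runScoped` is an invented name, but it shows one signal owning the timeout, the retry loop, and the in-flight call:

```javascript
// Cancellable sleep: rejects immediately if the owning scope aborts.
const sleep = (ms, signal) =>
  new Promise((resolve, reject) => {
    const t = setTimeout(resolve, ms);
    signal?.addEventListener(
      "abort",
      () => {
        clearTimeout(t);
        reject(new Error("aborted"));
      },
      { once: true }
    );
  });

// One scope: a single AbortController owns timeout, retries, and cleanup.
async function runScoped(fn, { retries, timeoutMs }) {
  const owner = new AbortController();
  const timer = setTimeout(() => owner.abort(), timeoutMs);
  try {
    for (let attempt = 0; ; attempt++) {
      if (owner.signal.aborted) throw new Error("scope cancelled");
      try {
        return await fn(owner.signal); // the tool/provider call sees the same signal
      } catch (err) {
        if (owner.signal.aborted || attempt >= retries) throw err;
        await sleep(10, owner.signal); // backoff is cancellable too
      }
    }
  } finally {
    clearTimeout(timer); // one cleanup path for the whole scope
    owner.abort();       // anything still in flight is told to stop
  }
}
```

Any provider SDK that accepts an `AbortSignal` (most fetch-based clients do) then stops working, and stops billing, the moment the scope ends.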

👉 Read the full article:

Concurrency, Retry, and Timeout Under One Owner

```shell
npm install @workit/core
```
