
Pat


I Rebuilt My Agent Framework From Scratch. Here's Why TypeScript Won.

Six days ago I shipped a hacky Node.js CLI that could coordinate Claude, GPT, and Gemini agents. It was 700 lines of JavaScript, it spawned child processes, and it stored state in JSON files. It worked, barely.

Then OpenAI acquired OpenClaw.

If you missed it: OpenClaw hit 145K GitHub stars in three weeks. It's a personal AI agent that runs locally and connects through your messaging apps. The creator got acqui-hired by OpenAI on Feb 15. The project is moving to an open-source foundation.

And I thought: the market just got validated. Multi-agent orchestration isn't a nice-to-have anymore. It's the next platform layer. But here's the thing nobody's talking about -- the entire multi-agent ecosystem is Python-first.

LangGraph? Python. CrewAI? Python. AutoGen? Python. smolagents? Python.

Meanwhile, TypeScript surpassed Python in GitHub contributors in 2025. There are more TS developers writing production code than ever. And when they want to build multi-agent systems, their options are... limited.

So I rewrote MyUru from scratch. TypeScript-first. Provider-agnostic. Production-grade.

What MyUru v2 Actually Does

At its core, MyUru gives you five primitives:

1. Agent -- wraps any LLM with instructions, tools, and built-in tracing.

import { Agent } from 'myuru';
import { anthropic } from '@ai-sdk/anthropic';

const agent = new Agent({
  name: 'researcher',
  model: anthropic('claude-sonnet-4-5'),
  instructions: 'You are a research assistant.',
  maxSteps: 10,
  budgetPerRun: 0.50, // hard USD limit
});

const result = await agent.run('Find info about TypeScript frameworks');
console.log(result.text);
console.log(`Cost: $${result.usage.estimatedCostUsd.toFixed(4)}`);

Every single run tells you exactly how many tokens it used and what it cost. No surprises.

2. Pipeline -- chain agents together. Sequential, parallel, or mixed.

import { Pipeline } from 'myuru';

const pipeline = new Pipeline({
  name: 'research-and-write',
  agents: {
    researcher: { name: 'researcher', model, instructions: 'Find key facts.' },
    writer: { name: 'writer', model, instructions: 'Write clear content.' },
    editor: { name: 'editor', model, instructions: 'Polish the final draft.' },
  },
  steps: [
    { agent: 'researcher', input: (ctx) => `Research: ${ctx.task}` },
    { agent: 'writer', input: (ctx) => `Write using: ${ctx.results.researcher}` },
    { agent: 'editor', input: (ctx) => `Edit: ${ctx.results.writer}` },
  ],
});

const result = await pipeline.run('TypeScript in 2026');

Steps can run in parallel, have conditional execution, and require human approval before proceeding.
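The execution semantics behind those features are worth making concrete. Here's a plain-TypeScript sketch of how a runner can handle sequential steps, a parallel group, and a conditional skip -- the names (`Step`, `runSteps`, `callAgent`) are illustrative, not myuru's actual internals, and the stub `callAgent` stands in for a real `agent.run()` call:

```typescript
// Sketch of pipeline step semantics: sequential by default,
// Promise.all for a parallel group, `when` for conditional execution.
// Step, runSteps, and callAgent are illustrative names, not myuru API.
type Ctx = { task: string; results: Record<string, string> };

type Step = {
  agent: string;
  input: (ctx: Ctx) => string;
  when?: (ctx: Ctx) => boolean;            // skip the step if this returns false
};

type StepOrGroup = Step | { parallel: Step[] }; // a group fans out concurrently

// Stub agent call; a real runner would invoke agent.run(input) here.
const callAgent = async (name: string, input: string) => `${name}:${input}`;

async function runSteps(steps: StepOrGroup[], ctx: Ctx): Promise<Ctx> {
  for (const step of steps) {
    if ('parallel' in step) {
      // Fan out every branch of the group and wait for all of them.
      await Promise.all(
        step.parallel.map(async (s) => {
          ctx.results[s.agent] = await callAgent(s.agent, s.input(ctx));
        }),
      );
    } else if (!step.when || step.when(ctx)) {
      ctx.results[step.agent] = await callAgent(step.agent, step.input(ctx));
    }
  }
  return ctx;
}
```

Human approval fits the same loop: before executing a gated step, the runner awaits a confirmation promise (a CLI prompt, a webhook callback) and only then proceeds.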

3. ModelRouter -- picks the right model for each task.

import { ModelRouter } from 'myuru';

const router = new ModelRouter({
  strategy: 'cost-optimized',
  models: {
    complex: anthropic('claude-opus-4-6'),
    standard: anthropic('claude-sonnet-4-5'),
    simple: anthropic('claude-haiku-4-5'),
  },
  budget: { maxPerDay: 10.00 },
});

const model = router.select(userInput);
// Short inputs -> Haiku. Complex analysis -> Opus. Smart defaults.
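A cost-optimized strategy doesn't need to be fancy. The core can be a tier heuristic: default to the cheap model, escalate on input length or keywords that suggest heavy reasoning. A minimal sketch (the tier names mirror the config above; `pickTier` and the thresholds are illustrative, not myuru's internals):

```typescript
// Minimal cost-optimized routing heuristic: cheap tier by default,
// escalate on length or reasoning-heavy keywords.
// pickTier and its thresholds are illustrative, not myuru's implementation.
type Tier = 'simple' | 'standard' | 'complex';

const HEAVY_HINTS = /\b(analyze|prove|architect|refactor|compare)\b/i;

function pickTier(input: string): Tier {
  if (HEAVY_HINTS.test(input) || input.length > 2000) return 'complex';
  if (input.length > 300) return 'standard';
  return 'simple';
}
```

The daily budget adds one more check on top: once spend approaches `maxPerDay`, the router can clamp every request down a tier.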

4. Trace -- observability built into every run. Token counts, cost estimates, step-by-step execution logs.
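The cost side of tracing is simple arithmetic: token counts times per-token prices. A sketch of the calculation (the prices here are illustrative placeholders, not current rates, and the names are not myuru's internals):

```typescript
// Cost estimate from usage: tokens times per-million-token price.
// PRICE_PER_MTOK values are illustrative placeholders, not current rates.
type Usage = { inputTokens: number; outputTokens: number };

const PRICE_PER_MTOK = { input: 3.0, output: 15.0 }; // USD per 1M tokens

function estimatedCostUsd(u: Usage): number {
  return (
    (u.inputTokens / 1e6) * PRICE_PER_MTOK.input +
    (u.outputTokens / 1e6) * PRICE_PER_MTOK.output
  );
}
```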

5. Checkpoint -- persist pipeline state to disk. If your pipeline crashes at step 4 of 7, resume from where it stopped.
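The core of checkpointing is small: after each completed step, serialize the accumulated results plus the index of the next step; on restart, deserialize and fast-forward past what's done. A plain-TypeScript sketch of that idea -- in MyUru the state goes to disk, here it's a JSON string for brevity, and all names are illustrative:

```typescript
// Checkpointing in miniature: persist (nextStep, results) after every
// step; resume by skipping completed steps. State is a JSON string here
// for brevity; a real implementation writes it to disk. Names illustrative.
type Checkpoint = { nextStep: number; results: string[] };

const save = (cp: Checkpoint): string => JSON.stringify(cp);
const load = (raw: string): Checkpoint => JSON.parse(raw);

async function runWithCheckpoints(
  steps: Array<() => Promise<string>>,
  persisted?: string, // checkpoint left behind by a previous crashed run
): Promise<{ results: string[]; lastCheckpoint: string }> {
  const cp: Checkpoint = persisted
    ? load(persisted)
    : { nextStep: 0, results: [] };
  let last = save(cp);
  for (let i = cp.nextStep; i < steps.length; i++) {
    cp.results.push(await steps[i]());
    cp.nextStep = i + 1;
    last = save(cp); // persist after every completed step
  }
  return { results: cp.results, lastCheckpoint: last };
}
```

A crash between two `save` calls costs you at most one step of work, which is the whole point.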

The Stack

MyUru is built on the Vercel AI SDK, which gives you provider-agnostic access to 40+ LLM providers out of the box: Anthropic, OpenAI, Google, Mistral, Groq, Fireworks, you name it. One interface, any model.

The framework itself is:

  • TypeScript strict mode -- full type safety everywhere
  • ESM-only -- modern module system
  • Zero Python -- runs anywhere Node.js runs
  • 51 unit tests -- tested before it shipped

What I Learned From the Competition

I spent time studying what works and what doesn't across the ecosystem:

From OpenClaw: Their semantic snapshots (parsing accessibility trees instead of screenshots) are brilliant. 100x token reduction for web browsing. But their security is a disaster -- 13.4% of skills on their marketplace are malicious.

From LangGraph: Graph-based orchestration is powerful but the DX is painful. 33M downloads/month proves the demand; the complexity proves the opportunity.

From CrewAI: Role-based agent metaphors ("researcher", "writer") are incredibly intuitive. But CrewAI breaks in production at scale.

From OpenAI Agents SDK: Radical simplicity wins. Five concepts, that's it. But it locks you into OpenAI models.

MyUru tries to hit the sweet spot: simple enough for a single agent, powerful enough for production pipelines, and never locked to one provider.

Getting Started

npm install myuru @ai-sdk/anthropic

Then a minimal agent:

import { Agent } from 'myuru';
import { anthropic } from '@ai-sdk/anthropic';

const agent = new Agent({
  name: 'assistant',
  model: anthropic('claude-sonnet-4-5'),
  instructions: 'Be helpful and concise.',
});

const { text, usage } = await agent.run('Explain async/await in 3 sentences');
console.log(text);
console.log(`${usage.totalTokens} tokens, $${usage.estimatedCostUsd.toFixed(4)}`);

What's Next

This is v2.0.0-alpha.1. The foundation is solid -- agents, pipelines, routing, tracing, checkpoints -- but there's more coming:

  • Agent-to-agent handoff (structured delegation between agents)
  • Long-term memory (knowledge graphs that persist across runs)
  • Visual pipeline builder (design orchestrations without code)
  • MCP integration (connect to any MCP server as a tool source)

The repo is at github.com/Wittlesus/myuru. Stars, issues, and PRs all welcome.

If you're building multi-agent systems in TypeScript and tired of wrapping Python frameworks, give it a shot. npm install myuru and let me know what breaks.
