DEV Community

Jangwook Kim

Posted on • Originally published at effloow.com

Mastra AI 1.0: The TypeScript Agent Framework Developers Are Actually Shipping

TypeScript developers building AI agents in 2025 had a problem: Python had LangChain, LlamaIndex, and a growing ecosystem. JavaScript had roughly nothing that felt production-ready.

Mastra changed that. Built by the team behind Gatsby and backed by Y Combinator (W25), Mastra shipped its 1.0 release in January 2026 and crossed 22,000 GitHub stars within 15 months of launch. By 1.0, weekly npm downloads had hit 300,000. As of May 2026, the package is at v1.8.1 — actively maintained, not a prototype.

This guide walks through what Mastra actually includes, how its core primitives work, and where it fits in the agent framework landscape.

What Mastra Is

Mastra is a TypeScript framework for building AI-powered applications and agents. It ships six primitives in a single package:

  • Agents — LLM-driven actors that can call tools and maintain memory
  • Memory — persistent working context that survives across agent runs
  • Workflows — graph-based orchestration for multi-step processes
  • RAG — document chunking, embedding, and vector store retrieval
  • Evals — automated quality scoring for LLM outputs
  • Observability — built-in tracing and monitoring

The design philosophy is TypeScript-first, not TypeScript-as-an-afterthought. The team explicitly chose to support only JavaScript/TypeScript — no Python bindings — so the API design fits how TypeScript developers think.

Getting Started

Mastra requires Node.js 22.13.0 or later. The fastest path is the scaffold CLI:

npx create-mastra@latest

This runs an interactive wizard that sets up a project with the right package structure, a Mastra config file, and working example code. You can also add it to an existing project:

npm install mastra@1.8.1

The core package is mastra. Additional capabilities like evals and memory come from scoped packages:

npm install @mastra/evals @mastra/memory

Agents and Tools

The central primitive is the agent. You define an agent with a model, instructions, and a list of tools it can call:

import { Mastra, createTool } from "mastra";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const mastra = new Mastra();

const searchTool = createTool({
  id: "search",
  description: "Search the web for current information",
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    // your search implementation
    return { results: [] };
  },
});

const researchAgent = mastra.agent({
  name: "researcher",
  model: anthropic("claude-sonnet-4-5"),
  instructions: "You are a research assistant. Use the search tool when you need current information.",
  tools: { search: searchTool },
});

Tool schemas use Zod — Mastra generates the JSON schema for the model automatically. There's no manual schema writing.
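To make the translation concrete, here is roughly what a Zod schema like `z.object({ query: z.string() })` becomes on the wire. This is an illustration of the standard Zod-to-JSON-Schema mapping, not Mastra's exact output:

```typescript
// Illustration only: approximately the JSON Schema derived from
// z.object({ query: z.string() }) and sent to the model as the
// tool's input definition.
const searchInputJsonSchema = {
  type: "object",
  properties: {
    query: { type: "string" },
  },
  required: ["query"],
};
```

The model sees this schema alongside the tool's `description` and uses both to decide when and how to call the tool.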

The agent exposes a generate() method for single-turn calls and a stream() method for streaming responses:

const result = await researchAgent.generate("What changed in AI this week?");
console.log(result.text);
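With stream(), you iterate over text chunks as they arrive instead of waiting for the full response. The exact stream shape is version-dependent, so the consumption pattern is sketched here against a stand-in async generator rather than a live agent call:

```typescript
// Stand-in for a streaming result: an async iterable of text chunks.
// A real stream() call would produce something with this shape.
async function* fakeTextStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world.";
}

// The consumption pattern: append chunks as they arrive.
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk; // in a real app: write to stdout or an SSE response
  }
  return text;
}
```

In a UI, you would forward each chunk to the client as it arrives rather than collecting them, which is the whole point of streaming.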

Memory

One of the gaps in earlier TypeScript agent toolkits was memory — agents that forget everything between runs. Mastra's memory system gives agents persistent working context.

Memory connects to a storage backend (SQLite by default, with options for Redis and PostgreSQL). You attach it to an agent at definition time:

import { Memory } from "@mastra/memory";

const agent = mastra.agent({
  name: "assistant",
  model: anthropic("claude-sonnet-4-5"),
  memory: new Memory(),
  // ...
});

Once memory is attached, the agent automatically reads and writes to it across separate generate() calls, keyed by thread ID. The thread ID is what ties a conversation together:

const response = await agent.generate("Remember this for later: the deadline is Friday", {
  threadId: "user-123-session-1",
});

// Later call — agent has context from the previous turn
const followUp = await agent.generate("When is the deadline?", {
  threadId: "user-123-session-1",
});

Workflows

When you need to sequence multiple agent calls or mix LLM work with deterministic logic, Mastra's workflow engine handles orchestration.

Workflows use a method-chaining API that should feel familiar to anyone who has written Promise chains:

const pipeline = mastra
  .workflow("content-pipeline")
  .step("fetch", async ({ input }) => {
    return { content: await fetch(input.url).then(r => r.text()) };
  })
  .step("summarize", async ({ fetch }) => {
    return await summaryAgent.generate(`Summarize: ${fetch.content}`);
  })
  .branch([
    {
      condition: ({ summarize }) => summarize.text.length > 500,
      steps: longFormPipeline,
    },
    {
      condition: () => true,
      steps: shortFormPipeline,
    },
  ]);

The .branch() method handles conditional routing. .parallel() runs multiple steps concurrently and waits for all of them. Step outputs are typed — TypeScript infers the shape of each step's return value and makes it available to downstream steps.
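The "run concurrently, wait for all" semantics of .parallel() map onto Promise.all. If the combination behavior is unclear, this plain-TypeScript sketch (no Mastra APIs) shows the underlying pattern:

```typescript
// Plain-TS sketch of "run steps concurrently, wait for all of them" —
// the same semantics .parallel() provides inside a workflow.
type Step<T> = () => Promise<T>;

async function runParallel<T>(
  steps: Record<string, Step<T>>,
): Promise<Record<string, T>> {
  const names = Object.keys(steps);
  // All steps start immediately; Promise.all waits for every one.
  const results = await Promise.all(names.map((name) => steps[name]()));
  return Object.fromEntries(names.map((name, i) => [name, results[i]]));
}
```

The workflow engine adds typing of each step's output and wiring of results to downstream steps on top of this basic concurrency model.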

RAG

Mastra's RAG support covers the full pipeline: document ingestion, chunking, embedding, and retrieval.

import { MastraVectorStore } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

const vectorStore = new MastraVectorStore({ provider: "pinecone" });

// Ingest documents
await mastra.rag.ingest({
  documents: [{ content: articleText, metadata: { slug: "my-article" } }],
  vectorStore,
  embedder: openai.embedding("text-embedding-3-small"),
});

// Retrieve in an agent
const ragAgent = mastra.agent({
  name: "rag-assistant",
  model: anthropic("claude-sonnet-4-5"),
  tools: {
    retrieve: mastra.rag.createRetrieveTool({ vectorStore }),
  },
});

The vector store abstraction supports Pinecone, Qdrant, Chroma, and SQLite (via @mastra/memory). Switching providers means changing one line.

Evals

Evals are where Mastra stands apart from most TypeScript agent frameworks. Rather than leaving output quality to manual spot-checks, Mastra ships a built-in evaluation system.

import { ToxicityMetric, RelevanceMetric } from "@mastra/evals";

const evalResult = await mastra.evals.run({
  agent: researchAgent,
  input: "What are the risks of using AI in medical diagnosis?",
  metrics: [new ToxicityMetric(), new RelevanceMetric()],
});

console.log(evalResult.scores);
// { toxicity: 0.02, relevance: 0.91 }

Built-in metrics cover toxicity, bias, relevance, and factual accuracy. Each metric uses a combination of model-graded scoring, rule-based checks, and statistical methods. You can also define custom metrics.
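Mastra defines its own Metric interface for custom metrics, but conceptually a metric maps an input/output pair to a score between 0 and 1. A standalone sketch of the idea (a hypothetical keyword-coverage heuristic, not Mastra's actual interface):

```typescript
// Hypothetical rule-based metric: what fraction of the input's
// key terms actually appear in the output. Not Mastra's Metric
// interface — just the shape of the idea.
function keywordCoverageScore(input: string, output: string): number {
  const keywords = input
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 4); // skip short filler words
  if (keywords.length === 0) return 1;
  const hits = keywords.filter((w) => output.toLowerCase().includes(w));
  return hits.length / keywords.length;
}
```

A real custom metric would wrap logic like this (or a model-graded judgment) in whatever interface the framework expects, so it can run alongside the built-in ones.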

The practical use is in CI — run evals against a fixed test set after every model or prompt change to catch regressions before they reach users.
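Wiring evals into CI can be as simple as asserting thresholds on the scores after a run. A minimal gate function in plain TypeScript (metric names and thresholds are illustrative):

```typescript
// Returns a list of failures when any eval score crosses its
// threshold; an empty list means the gate passes. Metric names
// and thresholds here are illustrative.
type Scores = Record<string, number>;

function checkEvalGate(
  scores: Scores,
  rules: { metric: string; min?: number; max?: number }[],
): string[] {
  const failures: string[] = [];
  for (const { metric, min, max } of rules) {
    const score = scores[metric];
    if (score === undefined) failures.push(`${metric}: missing score`);
    else if (min !== undefined && score < min)
      failures.push(`${metric}: ${score} < ${min}`);
    else if (max !== undefined && score > max)
      failures.push(`${metric}: ${score} > ${max}`);
  }
  return failures;
}
```

In a CI script, a non-empty result would call `process.exit(1)` so a regression blocks the merge.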

Where Mastra Fits

Mastra competes with LangChain.js, the Vercel AI SDK, and the OpenAI Agents SDK for TypeScript.

The key differences:

vs. LangChain.js: Mastra is more opinionated and ships more complete primitives. LangChain has a larger ecosystem of integrations. If you want to wire up custom chains, LangChain gives more flexibility. If you want a production-ready agent with memory and evals out of the box, Mastra does that faster.

vs. Vercel AI SDK: The AI SDK is excellent for adding streaming LLM calls to a web app. It does not have workflow orchestration, RAG, or evals. They are complementary — Mastra actually supports the AI SDK as a model provider interface.

vs. OpenAI Agents SDK (TypeScript): OpenAI's SDK is tightly coupled to OpenAI models. Mastra is model-agnostic via the Vercel AI SDK under the hood, so it works with Anthropic, Google, Mistral, and open models.

Practical Considerations

A few things worth knowing before adopting Mastra:

Node.js 22+ is required. If your deployment environment is pinned to an older Node.js version, this is a hard blocker.

The framework is TypeScript-only by design. There is no Python port planned. If your team uses Python, this is not the right tool.

The Apache 2.0 license covers the core. Commercial use is allowed without payment. Mastra's cloud platform (observability, hosted agents) is a separate paid product, but the framework itself is free.

The ecosystem is young. At v1.8.1, Mastra is post-1.0 and stabilizing, but it does not have the breadth of integrations that LangChain has built over two years.

What to Build

Mastra's primitives are well-suited for:

  • Customer support agents — memory for conversation history, tools for CRM lookups, evals for quality monitoring
  • Research pipelines — RAG for document retrieval, workflows for multi-step research → draft → review
  • Code review agents — tools for git integration, workflows for sequential analysis steps
  • Internal knowledge assistants — RAG over internal docs, memory for user context

It is less suited for low-latency applications where the agent orchestration overhead matters, or for use cases that need the full breadth of LangChain's integration library.

Getting the Docs

The official documentation is at mastra.ai/docs. The GitHub repo at mastra-ai/mastra has working examples in the examples/ directory — these are a better starting point than the README for understanding how the primitives compose.

The npm package is mastra (not @mastra/mastra). Ecosystem packages are scoped under @mastra/.


For TypeScript teams building production AI agents in 2026, Mastra 1.0 is the most complete single-package solution available. It does not require piecing together separate libraries for memory, orchestration, and evaluation — those primitives ship together and are designed to compose.

The framework is actively maintained and moving fast. At 22,000 stars and 1.8 million monthly npm downloads, it has enough adoption to validate that the API design is working. For new TypeScript agent projects, it is the framework to start with.
