Renato Marinho

Vurb.ts: The MCP Framework with Zero Learning Curve


Why Frameworks Matter for AI Agents

Every framework ever built assumed one thing: a human is reading the output.

Rails, Laravel, Express, tRPC — beautiful frameworks. But they all have a blind spot: they were designed for browsers and humans, not for AI agents.

In 2025, MCP (Model Context Protocol) changed everything. Now AI agents are the consumers of our APIs. But here's the problem: every MCP framework still builds for humans.

When you build a raw MCP server, you typically do something like this:

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === 'get_invoice') {
    const invoice = await db.invoices.findUnique({ where: { id: args.id } });
    return {
      content: [{ type: 'text', text: JSON.stringify(invoice) }]
    };
    // AI receives: { password_hash, internal_margin, customer_ssn, ... }
  }
});

Three catastrophic problems emerge:

  1. Data exfiltration — JSON.stringify(invoice) sends password_hash, internal_margin, customer_ssn straight to the LLM provider
  2. Token explosion — Every tool schema sent on every turn, even when irrelevant
  3. Context DDoS — Unbounded findMany() dumps thousands of rows into context window

The MVA Solution: Presenters as Perception Layer

Vurb.ts replaces JSON.stringify() with a Presenter — a deterministic perception layer that controls exactly what the agent sees, knows, and can do next.

import { createPresenter, suggest, ui, t, f } from '@vurb/core';

const InvoicePresenter = createPresenter('Invoice')
  .schema({
    id: t.string,
    amount_cents: t.number.describe('Amount in cents — divide by 100'),
    status: t.enum('paid', 'pending', 'overdue'),
  })
  .rules([
    'CRITICAL: amount_cents is in CENTS. Divide by 100 for display.'
  ])
  .redactPII(['*.customer_ssn', '*.credit_card'])
  .ui((inv) => [
    ui.echarts({
      series: [{ type: 'gauge', data: [{ value: inv.amount_cents / 100 }] }]
    }),
  ])
  .suggest((inv) =>
    inv.status === 'pending'
      ? [suggest('billing.pay', 'Invoice pending — process payment')]
      : [suggest('billing.archive', 'Invoice settled — archive it')]
  )
  .limit(50);

export default f.query('billing.get_invoice')
  .describe('Get an invoice by ID')
  .withString('id', 'Invoice ID')
  .returns(InvoicePresenter)
  .handle(async (input, ctx) =>
    ctx.db.invoices.findUnique({ where: { id: input.id } })
  );

The handler returns raw data. The Presenter shapes everything the agent perceives:

  • Egress firewall — Only declared fields pass through
  • PII redaction — Late Guillotine Pattern masks sensitive fields
  • Domain rules — Travel with data, not in system prompt
  • UI blocks — Server-rendered charts (ECharts, Mermaid)
  • Next actions — HATEOAS hints computed from state
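
Stripped of the framework, the first two layers reduce to two small functions: a whitelist over declared fields, then a masking pass over PII fields. A minimal sketch in plain TypeScript — the helper names and flat field paths are mine, not Vurb.ts internals (the real `.redactPII()` takes glob-style paths like `*.customer_ssn`):

```typescript
// Egress firewall sketch: only fields declared in the schema survive.
type Schema = Record<string, true>;

function egressFilter(row: Record<string, unknown>, schema: Schema) {
  const out: Record<string, unknown> = {};
  for (const key of Object.keys(schema)) {
    if (key in row) out[key] = row[key];
  }
  return out;
}

// Redaction pass sketch: mask declared PII fields in the filtered output.
function redactPII(row: Record<string, unknown>, piiFields: string[]) {
  const out = { ...row };
  for (const field of piiFields) {
    if (field in out) out[field] = '[REDACTED]';
  }
  return out;
}

const raw = {
  id: 'inv_1',
  amount_cents: 4200,
  status: 'pending',
  password_hash: 'x',          // undeclared — dropped by the firewall
  customer_ssn: '123-45-6789', // declared but PII — masked
};

const schema: Schema = { id: true, amount_cents: true, status: true, customer_ssn: true };
const safe = redactPII(egressFilter(raw, schema), ['customer_ssn']);
// safe = { id: 'inv_1', amount_cents: 4200, status: 'pending', customer_ssn: '[REDACTED]' }
```

The ordering matters: filtering first means an undeclared sensitive column can never reach the redaction step, let alone the model.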

Ship a SKILL.md, Not a Tutorial

Vurb.ts ships a SKILL.md — a machine-readable architectural contract that your AI agent ingests before generating a single line.

One prompt. Working server. Zero iterations:

→ "Build an MCP server for patient records with Prisma. 
   Redact SSN and diagnosis from LLM output. 
   Add an FSM that gates discharge tools until attending physician signs off."

The agent reads SKILL.md and produces:

const PatientPresenter = createPresenter('Patient')
  .schema({ id: t.string, name: t.string, ssn: t.string, diagnosis: t.string })
  .redactPII(['ssn', 'diagnosis'])
  .rules(['HIPAA: diagnosis visible in UI blocks but REDACTED in LLM output']);

const gate = f.fsm({
  id: 'discharge',
  initial: 'admitted',
  states: {
    admitted: { on: { SIGN_OFF: 'cleared' } },
    cleared: { on: { DISCHARGE: 'discharged' } },
    discharged: { type: 'final' },
  },
});

export default f.mutation('patients.discharge')
  .describe('Discharge a patient')
  .bindState('cleared', 'DISCHARGE')
  .returns(PatientPresenter)
  .handle(async (input, ctx) =>
    ctx.db.patients.update({
      where: { id: input.id },
      data: { status: 'discharged' },
    })
  );

Correct Presenter with .redactPII(). FSM gating that makes patients.discharge invisible until sign-off. File-based routing. Typed handler. First pass — no corrections.

Get Started in 5 Seconds

vurb create my-server
cd my-server && vurb dev

That's it. A production-ready MCP server with:

  • File-based routing
  • Presenters with egress firewall
  • Middleware (auth, permissions)
  • Pre-configured for Cursor, Claude Desktop, VS Code + Copilot, Windsurf, Cline

Key Features

1. Zero Trust Sandbox — Computation Delegation

The LLM sends JavaScript logic to your data instead of shipping data to the LLM. Code runs inside a sealed V8 isolate:

export default f.query('analytics.compute')
  .describe('Run a computation on server-side data')
  .sandboxed({ timeout: 3000, memoryLimit: 64 })
  .handle(async (input, ctx) => {
    const data = await ctx.db.records.findMany();
    const engine = f.sandbox({ timeout: 3000, memoryLimit: 64 });
    try {
      const result = await engine.execute(input.expression, data);
      if (!result.ok) return f.error('VALIDATION_ERROR', result.error)
        .suggest('Fix the JavaScript expression and retry.');
      return result.value;
    } finally {
      engine.dispose();
    }
  });

Zero access to process, require, fs, net, fetch, Buffer. Timeout kill, memory cap, automatic isolate recovery.
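
The delegation pattern itself can be sketched with Node's built-in `vm` module — with the caveat that `vm` is only a scoping convenience, not a security boundary; the real isolation described above requires a separate V8 isolate (e.g. the `isolated-vm` package). The function below is an illustration of the shape, not the Vurb.ts sandbox:

```typescript
import { createContext, runInContext } from 'node:vm';

// Sketch: evaluate an LLM-supplied expression against server-side data
// inside a bare context with a timeout. Only `data` is in scope — a fresh
// vm context has no process, require, or fetch bindings.
function runDelegated(expression: string, data: unknown[], timeoutMs = 3000) {
  const context = createContext({ data });
  try {
    const value = runInContext(expression, context, { timeout: timeoutMs });
    return { ok: true as const, value };
  } catch (err) {
    return { ok: false as const, error: String(err) };
  }
}

const records = [{ amount: 10 }, { amount: 32 }];
const result = runDelegated('data.reduce((s, r) => s + r.amount, 0)', records);
// result = { ok: true, value: 42 }
```

A reference to `process` inside the expression throws a ReferenceError and comes back as `{ ok: false, ... }`, which is the error-with-suggestion path the handler above takes.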

2. FSM State Gate — Temporal Anti-Hallucination

FSM gating makes it structurally impossible for an AI to execute tools out of order — a gated tool simply isn't listed until the workflow reaches its state.

If the workflow state is empty, the cart.pay tool doesn't exist in tools/list:

const gate = f.fsm({
  id: 'checkout',
  initial: 'empty',
  states: {
    empty: { on: { ADD_ITEM: 'has_items' } },
    has_items: { on: { CHECKOUT: 'payment', CLEAR: 'empty' } },
    payment: { on: { PAY: 'confirmed', CANCEL: 'has_items' } },
    confirmed: { type: 'final' },
  },
});

const pay = f.mutation('cart.pay')
  .describe('Process payment')
  .bindState('payment', 'PAY') // Visible ONLY in 'payment' state
  .handle(async (input, ctx) => ctx.db.payments.process(input.method));
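
The gating mechanism is simple to reason about: a tool bound to a state appears in tools/list only while the machine is in that state, and illegal events are rejected. A standalone sketch of that idea — the types and the `toolBindings` table are illustrative, not the Vurb.ts internals:

```typescript
// Illustrative FSM mirroring the checkout machine above.
type States = Record<string, { on?: Record<string, string>; type?: 'final' }>;

const states: States = {
  empty: { on: { ADD_ITEM: 'has_items' } },
  has_items: { on: { CHECKOUT: 'payment', CLEAR: 'empty' } },
  payment: { on: { PAY: 'confirmed', CANCEL: 'has_items' } },
  confirmed: { type: 'final' },
};

// bindState analogue: each tool declares the state it requires.
const toolBindings: Record<string, string> = {
  'cart.add': 'empty',
  'cart.checkout': 'has_items',
  'cart.pay': 'payment',
};

// tools/list analogue: only tools bound to the current state are visible.
function visibleTools(current: string): string[] {
  return Object.keys(toolBindings).filter((t) => toolBindings[t] === current);
}

function transition(current: string, event: string): string {
  const next = states[current]?.on?.[event];
  if (!next) throw new Error(`event ${event} not allowed in state ${current}`);
  return next;
}

let state = 'empty';
console.log(visibleTools(state)); // ['cart.add'] — cart.pay does not exist yet
state = transition(state, 'ADD_ITEM');
state = transition(state, 'CHECKOUT');
console.log(visibleTools(state)); // ['cart.pay']
```

Because visibility is computed from state, there is no "please don't call this yet" prompt engineering — the out-of-order tool is simply absent.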

3. Code Generators

OpenAPI → MCP in One Command:

npx openapi-gen generate -i ./petstore.yaml -o ./generated
API_BASE_URL=https://api.example.com npx tsx ./generated/server.ts

Prisma → MCP with Field-Level Security:

model User {
  id           String @id @default(uuid())
  email        String @unique
  passwordHash String /// @vurb.hide
  stripeToken  String /// @vurb.hide
  tenantId     String /// @vurb.tenantKey
}
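
A generator can recover these annotations from the schema source — Prisma surfaces `///` doc comments in its DMMF, but even a plain text scan conveys the idea. This parser is a sketch of the concept, not the Vurb.ts generator:

```typescript
// Sketch: scan a Prisma schema for /// @vurb.hide annotations and build
// the list of fields the generated Presenter must never emit to the LLM.
function hiddenFields(schema: string): string[] {
  const hidden: string[] = [];
  for (const line of schema.split('\n')) {
    const m = line.match(/^\s*(\w+)\s+\w+.*\/\/\/\s*@vurb\.hide/);
    if (m) hidden.push(m[1]);
  }
  return hidden;
}

const schema = `
model User {
  id           String @id @default(uuid())
  email        String @unique
  passwordHash String /// @vurb.hide
  stripeToken  String /// @vurb.hide
}`;

console.log(hiddenFields(schema)); // ['passwordHash', 'stripeToken']
```

The same scan could map `@vurb.tenantKey` to an automatic tenant-scoping clause on every generated query.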

n8n Workflows → MCP Tools:

const n8n = await createN8nConnector({
  url: process.env.N8N_URL!,
  apiKey: process.env.N8N_API_KEY!,
  includeTags: ['ai-enabled'],
  pollInterval: 60_000,
});

Deploy Anywhere

Same code on Stdio, SSE, Vercel Functions, Cloudflare Workers:

// Vercel Functions
import { vercelAdapter } from '@vurb/vercel';
export const POST = vercelAdapter({ registry, contextFactory });
export const runtime = 'edge';

// Cloudflare Workers
import { cloudflareWorkersAdapter } from '@vurb/cloudflare';
export default cloudflareWorkersAdapter({ registry, contextFactory });

Why This Matters

Most MCP frameworks are just wrappers around JSON.stringify(). Vurb.ts is different:

  • Security-first — Egress firewall, PII redaction, sandboxed execution
  • Zero learning curve — Ship a SKILL.md, agents write the code
  • Production-ready — State sync, FSM gating, middleware, testing
  • Type-safe — Full TypeScript, tRPC-style client, compile-time validation
  • Deploy anywhere — Stdio, SSE, Vercel, Cloudflare Workers

Stop building MCP servers for humans. Start building for AI agents.
