You changed a prompt. The output still looks fine. But your agent stopped reading the config before deploying and switched from running tests to running builds.
Nobody noticed until production broke.
The problem
Most agent failures aren't bad text; they're bad behavior. The agent calls the wrong tools, in the wrong order, with the wrong arguments. Output evals don't catch this because the final response still looks plausible.
Teams try to catch it manually:
- reviewing traces in agent UIs
- parsing raw session logs
- comparing old vs new runs by hand
- debugging regressions only after users report them
What TracePact does
TracePact is a behavioral testing framework for AI agents. It works at the tool-call level, not the text level.
1. Write behavior contracts:
import { expect } from 'vitest';
import { TraceBuilder } from '@tracepact/vitest';

const trace = new TraceBuilder()
  .addCall('read_file', { path: 'src/service.ts' }, '...')
  .addCall('write_file', { path: 'src/service.ts', content: '...' })
  .addCall('run_tests', {}, 'PASS')
  .build();

// Did it read before writing?
expect(trace).toHaveCalledToolsInOrder([
  'read_file', 'write_file', 'run_tests'
]);

// Did it avoid shell?
expect(trace).toNotHaveCalledTool('bash');
No API calls. No tokens. Runs in milliseconds.
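Under the hood, the ordering assertion amounts to a subsequence match over the recorded tool calls: the expected names must appear in order, with other calls allowed to interleave. A minimal sketch in plain TypeScript (illustrative only, not TracePact's actual implementation):

```typescript
// Sketch of what an ordering check does: walk the trace and advance a
// cursor through the expected tool names. Illustrative only.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

function calledToolsInOrder(trace: ToolCall[], expected: string[]): boolean {
  let i = 0;
  for (const call of trace) {
    if (i < expected.length && call.tool === expected[i]) i++;
  }
  return i === expected.length; // every expected tool was seen, in order
}

const trace: ToolCall[] = [
  { tool: 'read_file', args: { path: 'src/service.ts' } },
  { tool: 'write_file', args: { path: 'src/service.ts' } },
  { tool: 'run_tests', args: {} },
];

console.log(calledToolsInOrder(trace, ['read_file', 'write_file'])); // true
console.log(calledToolsInOrder(trace, ['write_file', 'read_file'])); // false
```

Because the check is pure data inspection, it needs no model in the loop, which is what makes these tests fast and deterministic.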
2. Record & replay:
# Record a baseline (one-time, live)
npx tracepact run --live --record
# Replay without API calls (instant, deterministic)
npx tracepact run --replay ./cassettes
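The replay idea is simple: recorded results are looked up by tool name and arguments instead of calling a live model. A hedged sketch, with a hypothetical cassette shape (TracePact's on-disk format may differ):

```typescript
// Illustrative replay lookup: same call in, same recorded result out.
// The RecordedCall shape here is an assumption for the example.
interface RecordedCall {
  tool: string;
  args: Record<string, unknown>;
  result: string;
}

function makeReplayer(cassette: RecordedCall[]) {
  const key = (tool: string, args: object) => `${tool}:${JSON.stringify(args)}`;
  const byKey = new Map<string, string>();
  for (const c of cassette) byKey.set(key(c.tool, c.args), c.result);

  return (tool: string, args: Record<string, unknown>): string => {
    const result = byKey.get(key(tool, args));
    if (result === undefined) throw new Error(`no recording for ${tool}`);
    return result; // deterministic: no tokens, no network
  };
}

const replay = makeReplayer([
  { tool: 'run_tests', args: {}, result: 'PASS' },
]);
console.log(replay('run_tests', {})); // 'PASS'
```

This is why replayed runs are instant and deterministic: the model's nondeterminism is frozen at record time.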
3. Diff runs to catch drift:
npx tracepact diff baseline.json latest.json --fail-on warn
3 changes detected:
- read_file (seq 1) (removed)
+ write_file (seq 3) (added)
~ bash.cmd: "npm test" -> "npm run build"
Summary: 1 removed, 1 added, 1 arg changed [BLOCK]
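A diff like the one above can be sketched as a position-by-position comparison of two call sequences, classifying each difference as removed, added, or an argument change. This is an illustrative sketch, not TracePact's diff algorithm:

```typescript
// Compare two runs call-by-call and classify differences. Illustrative only.
interface Call {
  tool: string;
  args: Record<string, unknown>;
}

type Change =
  | { kind: 'removed'; seq: number; tool: string }
  | { kind: 'added'; seq: number; tool: string }
  | { kind: 'args'; seq: number; tool: string };

function diffRuns(baseline: Call[], latest: Call[]): Change[] {
  const changes: Change[] = [];
  const len = Math.max(baseline.length, latest.length);
  for (let seq = 0; seq < len; seq++) {
    const a = baseline[seq];
    const b = latest[seq];
    if (a && !b) changes.push({ kind: 'removed', seq, tool: a.tool });
    else if (!a && b) changes.push({ kind: 'added', seq, tool: b.tool });
    else if (a && b && a.tool !== b.tool) {
      // Different tool at the same position: count as a remove plus an add.
      changes.push({ kind: 'removed', seq, tool: a.tool });
      changes.push({ kind: 'added', seq, tool: b.tool });
    } else if (a && b && JSON.stringify(a.args) !== JSON.stringify(b.args)) {
      changes.push({ kind: 'args', seq, tool: a.tool });
    }
  }
  return changes;
}

const baseline: Call[] = [
  { tool: 'read_file', args: { path: 'src/service.ts' } },
  { tool: 'bash', args: { cmd: 'npm test' } },
];
const latest: Call[] = [
  { tool: 'read_file', args: { path: 'src/service.ts' } },
  { tool: 'bash', args: { cmd: 'npm run build' } },
];
console.log(diffRuns(baseline, latest)); // one 'args' change on 'bash' at seq 1
```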
Filter noisy args and irrelevant tools:
npx tracepact diff baseline.json latest.json \
--ignore-keys timestamp,requestId \
--ignore-tools read_file
Severity levels: none (identical), warn (args changed), block (tools added/removed). Use --fail-on in CI to gate deployments.
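The severity rule above, and how a --fail-on threshold gates CI, can be sketched like this (an illustration of the described policy, not TracePact's source):

```typescript
// Severity policy from the docs above: added/removed tools block,
// arg-only changes warn, identical runs pass. Illustrative sketch.
type Severity = 'none' | 'warn' | 'block';

interface ChangeCounts {
  added: number;
  removed: number;
  argsChanged: number;
}

function severityOf(c: ChangeCounts): Severity {
  if (c.added > 0 || c.removed > 0) return 'block';
  if (c.argsChanged > 0) return 'warn';
  return 'none';
}

// Fail the CI job when the run's severity reaches the threshold.
function shouldFail(severity: Severity, failOn: Severity): boolean {
  const rank: Record<Severity, number> = { none: 0, warn: 1, block: 2 };
  return failOn !== 'none' && rank[severity] >= rank[failOn];
}

console.log(severityOf({ added: 0, removed: 0, argsChanged: 1 })); // 'warn'
console.log(shouldFail('warn', 'warn')); // true
console.log(shouldFail('warn', 'block')); // false
```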
Good fit
- Coding agents — read before write, run tests before finishing, never edit restricted files
- Ops agents — inspect before restarting, check evidence before acting
- Workflow agents — validate before mutation, avoid duplicate side effects
- Internal assistants — use correct system for correct task
Less useful for
Pure chatbots, style evaluation, creative tasks, or systems where only text output matters. TracePact is for behavioral guarantees, not response quality.
MCP server for IDEs
TracePact ships an MCP server that works with Claude Code, Cursor, and Windsurf:
{
  "mcpServers": {
    "tracepact": {
      "command": "npx",
      "args": ["@tracepact/mcp-server"]
    }
  }
}
Tools: tracepact_audit, tracepact_run, tracepact_capture, tracepact_replay, tracepact_diff, tracepact_list_tests.
Get started
npm install @tracepact/core @tracepact/vitest @tracepact/cli
npx tracepact init
npx tracepact
GitHub: https://github.com/dcdeve/tracepact
We built this because we kept running into the same problem: prompt or model changes that silently break agent behavior while the output still looks fine. If you're testing AI agents, I'd love to hear
how you're handling tool-call regressions today.