Most AI agent libraries are just folders of markdown files.
You get prompts. Maybe nice formatting. But no way to know programmatically:
- What tools the agent actually requires to run
- What context it needs as input
- What format its output will be in
- Which agent should run next in a pipeline
I spent the last few weeks building operator-agents to fix this. Here's what I learned.
## The Core Problem: Agents as Unstructured Prompts
Take any popular agent library. Clone it. Look at a file:
```markdown
# Senior Developer Agent

You are a senior developer. Review code carefully. Consider security, performance, and maintainability...
```
This works for a human reading it. But for a pipeline orchestrator or an automation tool? It's opaque. You can't programmatically answer "what does this agent need to run?" or "what should happen after it finishes?"
## The Solution: AGENT_SPEC.md
I designed a YAML frontmatter schema that every agent in the library must implement:
```yaml
---
name: senior-developer
display_name: Senior Developer
version: 1.0.0
category: engineering
vertical: null
runtimes:
  - claude-code
  - codex
  - gemini-cli
  - cursor
  - aider
  - raw
tools:
  required:
    - name: file_system
      type: file
      description: "Read and write code files"
    - name: terminal
      type: cli
      description: "Run linters, tests, build commands"
  optional:
    - name: database_client
      type: db
      description: "Inspect schemas and query data"
context:
  required:
    - key: task_description
      description: "What needs to be reviewed or built"
      example: "Review the authentication module for security issues"
  optional:
    - key: existing_code
      description: "Codebase context"
output:
  format: markdown
  schema: null
  example: |
    ## Code Review: auth.ts
    **Critical:** SQL injection risk on line 47...
pipeline:
  handoff_to:
    - agent: reality-checker
      condition: When implementation is complete
    - agent: project-shepherd
      condition: When blockers are found
  parallel_with: []
tags: [architecture, review, adr, senior]
author: operator-agents
license: MIT
---
```
Now any tool can parse this. Want all agents that require a terminal? One grep. Want to build a pipeline that routes to the right agent based on output? Read `pipeline.handoff_to`.
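As a minimal sketch of that queryability, assuming the frontmatter has already been parsed into dicts (e.g. with PyYAML), the "which agents require a terminal?" question becomes a one-liner. The inline specs below are illustrative stand-ins, not the real library files:

```python
# Illustrative parsed AGENT_SPEC frontmatter (in practice, load each agent's
# YAML block with yaml.safe_load). These two specs are made up for the sketch.
specs = [
    {"name": "senior-developer",
     "tools": {"required": [{"name": "file_system"}, {"name": "terminal"}]}},
    {"name": "analytics-reporter",
     "tools": {"required": [{"name": "database_client"}]}},
]

def requires_tool(spec: dict, tool: str) -> bool:
    """True if `tool` appears under tools.required in the agent's spec."""
    return any(t["name"] == tool for t in spec.get("tools", {}).get("required", []))

terminal_agents = [s["name"] for s in specs if requires_tool(s, "terminal")]
# terminal_agents -> ["senior-developer"]
```

Because the schema is uniform across all agents, the same three lines answer any structural question (runtimes, output format, handoffs) without reading a single prompt body.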
## 66 Agents, 10 Divisions, 5 Verticals
The library ships with 66 agents organized into:

- **Core divisions:** Engineering (7), Design (6), Marketing (8), Product (3), Project Management (5), Testing (7), Support (6), Operations (3), Specialized (3)
- **Verticals:** E-Commerce (5), SaaS (4), Agency (3), Finance (3), Automation (3)
The vertical agents are domain-specific. For example, `invoice-processor` in the Finance vertical understands DATEV format for German accounting. `n8n-workflow-builder` generates actual importable n8n workflow JSON, not pseudocode.
## Composable Pipelines
The library includes 5 pipeline definitions. Here's the ecommerce ops pipeline:
```mermaid
graph TD
    A[ops-manager] -->|order_processed| B[customer-lifecycle]
    B -->|revenue_data_ready| C[finance-tracker]
    C -->|performance_metrics| D[analytics-reporter]
    A -->|payment_failed| E[error-handler]
```
Each pipeline stage defines typed data contracts:
```json
{
  "from_agent": "ops-manager",
  "to_agent": "customer-lifecycle",
  "handoff_context": {
    "order_id": "ORD-12345",
    "status": "fulfilled",
    "customer_email": "user@example.com",
    "trigger": "post_purchase_flow"
  },
  "next_agent": "customer-lifecycle"
}
```
This makes orchestration deterministic: the next agent knows exactly what it's receiving.
## Support for Six Runtimes

The library isn't locked to Claude Code. The install script auto-detects your runtime:
```shell
./scripts/install.sh
# Detects: claude-code, codex, gemini-cli, cursor, aider
# Installs to the correct path for your runtime
```
For raw API usage, strip the YAML frontmatter:
```shell
./scripts/strip-frontmatter.sh agents/engineering/senior-developer.md
# Outputs clean system prompt, no frontmatter
```
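For raw-API consumers who would rather not shell out, the same stripping takes a few lines of Python. This is a sketch assuming the frontmatter is delimited by the first pair of `---` lines, as in the spec format above:

```python
import re

def strip_frontmatter(text: str) -> str:
    """Remove a leading YAML frontmatter block delimited by '---' lines,
    leaving only the system prompt body."""
    match = re.match(r"^---\n.*?\n---\n", text, flags=re.DOTALL)
    return text[match.end():] if match else text

# Example: a file with frontmatter reduces to its prompt body.
body = strip_frontmatter("---\nname: senior-developer\n---\n# Prompt\n")
# body -> "# Prompt\n"
```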
## Validation
Every agent must pass the spec validator before merge:
```shell
./scripts/validate-agents.sh
# Checks: frontmatter fields, required sections, runtime declarations
```
CI runs this automatically via `.github/workflows/validate.yml`.
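For a sense of what such a validator checks, here is a minimal sketch. The required-key set is inferred from the example spec above, not from the actual schema, so treat it as an assumption:

```python
# Assumed required top-level frontmatter keys, based on the example spec.
REQUIRED_KEYS = {"name", "version", "category", "runtimes", "tools",
                 "context", "output", "pipeline"}

def validate(spec: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the spec passes."""
    errors = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - spec.keys())]
    if not spec.get("runtimes"):
        errors.append("runtimes must list at least one runtime")
    return errors
```

Returning a list of errors rather than raising on the first one lets CI report every problem in a spec in a single run.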
## What's Next
The next step is building a runtime-agnostic orchestrator that reads `agent-index.json` and routes tasks automatically based on agent schemas. The data is all there: `parse-agents.sh --output json` generates a complete index.
If you're building multi-agent systems, I'd love feedback on the pipeline protocol and schema design.
**Repo:** https://github.com/fatihkutlar/operator-agents
**v1.0.0 Release:** https://github.com/fatihkutlar/operator-agents/releases/tag/v1.0.0
66 agents, MIT licensed, contributions welcome.