DEV Community

Aleksandr Liukov


I needed Claude Code as a network service for my pipelines. So I built one.

I use Anthropic for a range of code-related tasks — vulnerability analysis, documentation generation, defect triage. I'm not going to argue it's the best LLM for everything — it's simply what I prefer for this kind of work, and it's been delivering well for me.

Since I already rely on Anthropic, the next question was: which agent should drive it? There are plenty of open-source agents that work with Anthropic's API. But Anthropic has their own — Claude Code. Built by the same team, for the same models. For anything file-heavy, it felt like the natural choice. Why adopt a third-party agent when the vendor provides one that's purpose-built?

Claude Code can be automated — but it's still designed around a single actor. One developer, one set of credentials, one workspace. That works fine for personal scripts and local pipelines. But I needed more. Specifically, I needed to:

  • run Claude Code as part of automated workflows (CI/CD, n8n, custom scripts)
  • trigger it over the network via a REST API
  • use it from multiple automations, possibly managed by different teams
  • keep each automation's work isolated — separate credentials, separate workspaces, no cross-visibility

There's nothing out of the box for that.

So I built one.

What it does

Claude Code API Server is a Python (FastAPI) service that wraps the Claude Agent SDK and exposes it as a REST API. The workflow is simple:

# 1. Upload your codebase as a ZIP
curl -X POST /v1/uploads -F "file=@project.zip"

# 2. Submit a job
curl -X POST /v1/jobs -H "Content-Type: application/json" -d '{"upload_id": "...", "prompt": "Find security vulnerabilities"}'

# 3. Poll for results
curl /v1/jobs/{job_id}
# → status, Claude's analysis, any files it created (base64-encoded)

That's it. Fire and forget — give it a clear job, get back a complete result.
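From a pipeline, the upload/submit/poll loop usually gets wrapped in a small client helper. Here's a minimal sketch of the polling half in Python; the endpoint paths come from the article, but the status values and response fields are my assumptions about the API's shape, and `fetch` stands in for whatever HTTP call your automation makes to `GET /v1/jobs/{job_id}`:

```python
import time

# Assumed terminal statuses -- not confirmed by the article.
TERMINAL = {"completed", "failed"}

def wait_for_job(fetch, job_id, interval=2.0, timeout=600.0, sleep=time.sleep):
    """Poll `fetch(job_id)` until the job reaches a terminal status.

    `fetch` is any callable returning a dict like {"status": ..., ...},
    e.g. a thin wrapper around `GET /v1/jobs/{job_id}`. `sleep` is
    injectable so tests don't actually wait.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch(job_id)
        if job.get("status") in TERMINAL:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still {job.get('status')!r}")
        sleep(interval)
```

Taking the fetcher as a callable keeps the helper independent of any particular HTTP library, which also makes the retry logic trivially testable.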

When it's not just you using it

Most AI wrappers are built for personal use — one user, one agent. The moment you serve multiple clients, the rules change. Tenant A's prompts, files, and network activity must be invisible to Tenant B. And any prompt from any client could be hostile — a compromised CI pipeline, a prompt injection buried in a repo. The service has to assume this will happen and limit the blast radius.

Process-level sandboxing. Every job runs in a bwrap namespace with seccomp filtering. Isolated filesystem, isolated process tree — a hostile prompt can't read other jobs' data or poke around the host.
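To make the bwrap idea concrete, here's a sketch of how a per-job invocation could be assembled. The flags are real bubblewrap options; the directory layout, the `allow_network` switch, and the inner command are illustrative assumptions, not the project's actual code:

```python
def bwrap_argv(job_dir, allow_network=False):
    """Build a bubblewrap command line confining a job to `job_dir`.

    Illustrative sketch: the mount layout and inner command are assumptions.
    """
    argv = [
        "bwrap",
        "--unshare-all",              # fresh pid, mount, ipc, net, ... namespaces
        "--die-with-parent",          # tear down the sandbox if the server dies
        "--ro-bind", "/usr", "/usr",  # read-only toolchain
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--bind", job_dir, "/work",   # the only writable mount: this job's workspace
        "--chdir", "/work",
    ]
    if allow_network:
        argv.append("--share-net")    # opt back in to the host network namespace
    argv += ["--", "run-job"]         # hypothetical inner entrypoint
    return argv
```

Because `--unshare-all` drops the network too, network access becomes an explicit opt-in per job rather than something you have to remember to take away.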

Network isolation. Per-client security profiles control what the sandbox can reach — allowlisted domains, blocked IP ranges, or full network cutoff. A prompt can't phone home if there's nowhere to call.
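One way such a profile could be represented and enforced, as a sketch; the field names and semantics here are my assumptions about the article's "security profiles", not the project's actual schema:

```python
import ipaddress

# Hypothetical per-client profile shape (assumed, not the real schema).
PROFILE = {
    "allowed_domains": {"api.anthropic.com", "pypi.org"},
    "blocked_networks": [
        ipaddress.ip_network("169.254.0.0/16"),  # link-local / cloud metadata
        ipaddress.ip_network("10.0.0.0/8"),      # internal ranges
    ],
}

def connection_allowed(profile, domain, resolved_ip):
    """Permit a connection only if the domain is allowlisted AND the
    resolved address is outside every blocked range -- checking the IP
    too guards against DNS rebinding onto internal hosts."""
    if domain not in profile["allowed_domains"]:
        return False
    ip = ipaddress.ip_address(resolved_ip)
    return not any(ip in net for net in profile["blocked_networks"])
```

Checking both the name and the resolved address matters: an allowlisted domain that suddenly resolves to 10.x should still be refused.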

Separate auth layer. Server-side API keys (Argon2-hashed), completely independent from Anthropic tokens. Revoke a client without touching anyone's credentials. Cost tracking is per-client, so you always know who spent what.
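The key-handling flow looks roughly like this. The article says keys are Argon2-hashed; Python's standard library has no Argon2, so this self-contained sketch substitutes scrypt (also memory-hard) purely for illustration -- the shape of issue/verify is the point, not the specific KDF:

```python
import hashlib
import hmac
import os
import secrets

def issue_key():
    """Generate a client API key. The server stores only salt + hash;
    the plaintext key is shown to the client once and never persisted."""
    key = secrets.token_urlsafe(32)
    salt = os.urandom(16)
    digest = hashlib.scrypt(key.encode(), salt=salt, n=2**14, r=8, p=1)
    return key, {"salt": salt, "hash": digest}

def verify_key(presented, record):
    """Re-derive the hash and compare in constant time."""
    digest = hashlib.scrypt(presented.encode(), salt=record["salt"],
                            n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, record["hash"])
```

Storing only the hash means revoking a client is deleting one record, and a leaked state file doesn't leak usable keys -- and none of this touches the Anthropic token at all.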

Minimal footprint. No Redis, no Postgres, no message queue. One container, file-based state. Fewer components, smaller attack surface.
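File-based state can stay crash-safe without a database if writes are atomic. A minimal sketch of what that could look like -- one JSON file per job, written via rename; the directory layout and field names are assumptions, not the project's actual storage format:

```python
import json
import os
import tempfile
from pathlib import Path

def write_job(root, job_id, state):
    """Persist job state with an atomic rename, so a concurrent reader
    (or a crash mid-write) never observes a half-written file."""
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=root)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, root / f"{job_id}.json")  # atomic on POSIX

def read_job(root, job_id):
    return json.loads((Path(root) / f"{job_id}.json").read_text())
```

At tens of jobs per day, this is plenty; the rename trick is what keeps polling readers from ever seeing torn state.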

It also supports MCP servers, custom subagents, and plugin-based skills if you need to extend Claude's capabilities — all managed through an admin API.

What it's not

This is a tool for teams, not a platform for planet-scale SaaS. No streaming, no horizontal scaling, no clustering. It comfortably handles tens of jobs per day — if you need more, scale vertically or run multiple instances.

Why I'm sharing this

I built it for my own workflows — mostly automated security reviews of merge requests. It's early (v0.1), but it works and I use it daily. I figured if it's useful to me, maybe someone else will find it useful too.

I'd genuinely love to hear what you think. Feedback, ideas on where to take it, use cases I haven't thought of, or even harsh criticism — all welcome. That's how good tools get better.

GitHub: claude-code-api-server
