

Infrastructure-as-Code Was Built for Humans. AI Agents Need Infrastructure-as-Tools.

Last week I watched a Claude agent write a complete Next.js app with a Postgres-backed API, authentication, and tests. Took about 8 minutes. Then it said "here's your code" and I spent the next hour manually provisioning the database, configuring the deployment, setting environment variables, and debugging TLS.

The agent did the creative, hard work. I did the mechanical, boring work. Something is backwards here.

This isn't about agents getting smarter. It's about infrastructure not speaking their language yet.

The gap nobody's solving

Look at the last mile of every AI coding session:

  1. Agent writes the code (automated)
  2. You provision a database (manual)
  3. You set up hosting (manual)
  4. You configure a domain + TLS (manual)
  5. You wire environment variables (manual)
  6. You deploy and debug (manual)

Steps 2-6 aren't creative work. They're mechanical, repetitive, and well-defined — exactly the kind of tasks agents are good at. But agents can't do them because infrastructure tools weren't designed for tool-calling.

Terraform assumes a human writes HCL files, runs terraform plan, reviews the diff, runs terraform apply, and manages state files. Pulumi assumes you're writing a program. Even cloud CLIs assume you're typing commands in a terminal.

An agent doesn't want to write a Terraform file. It wants to call create-database and get a connection string back.

What agent-native infrastructure looks like

At Open Source Cloud, we've built what we think this layer should be: infrastructure exposed as MCP tools.

MCP (Model Context Protocol) is the open standard for giving AI agents access to external tools. It's already baked into Claude, ChatGPT, Cursor, Copilot, Windsurf, and most AI coding assistants. When you connect our MCP server, the agent gets 40+ tools for provisioning real infrastructure:

  • Databases: Postgres, MariaDB, CouchDB, ClickHouse, Valkey (Redis-compatible)
  • Storage: S3-compatible object storage buckets
  • Applications: Deploy any Git repo (GitHub, Gitea, GitLab) as a running service
  • Configuration: Parameter stores, secrets management, environment variables
  • Domains: Custom domain mapping with automatic TLS via Let's Encrypt
  • Intelligence: An OSC Architect AI that helps design multi-service architectures

Each tool is a single function call with structured JSON input/output. No config files. No state management. No plan/apply cycle.

```
Agent: "I need a Postgres database for the user service"
→ calls create-database(type: "postgres", name: "userdb")
→ gets back: { url: "postgres://...", port: 5432, status: "running" }
→ stores the connection string in a parameter store
→ deploys the app with the parameter store attached
```

Four tool calls. The agent doesn't need to know what Kubernetes is.
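Those four calls can be sketched in a few lines of Python. Everything here is illustrative — the client class, tool names, and response shapes are stand-ins for whatever the actual OSC MCP server exposes, stubbed locally so the flow runs end to end:

```python
import json

class FakeToolClient:
    """Stand-in for an MCP tool-calling client. A real agent would
    dispatch these calls over the Model Context Protocol; here we
    return canned responses so the flow is runnable."""

    def call(self, tool: str, **params) -> dict:
        # Hypothetical tool names and response shapes, for illustration only.
        if tool == "create-database":
            return {"url": f"postgres://user:pass@db.internal:5432/{params['name']}",
                    "port": 5432, "status": "running"}
        if tool == "create-parameter-store":
            return {"name": params["name"], "status": "created"}
        if tool == "set-parameter":
            return {"status": "ok"}
        if tool == "deploy-app":
            return {"url": "https://userservice.example.com", "status": "running"}
        raise ValueError(f"unknown tool: {tool}")

client = FakeToolClient()

# 1. Provision the database
db = client.call("create-database", type="postgres", name="userdb")
# 2. Create a parameter store for the service's config
store = client.call("create-parameter-store", name="userservice-config")
# 3. Wire the connection string into it
client.call("set-parameter", store="userservice-config",
            key="DATABASE_URL", value=db["url"])
# 4. Deploy the app with the parameter store attached
app = client.call("deploy-app", repo="github.com/example/userservice",
                  params="userservice-config")

print(json.dumps({"db": db["status"], "app": app["url"]}))
```

The point isn't the stub — it's that each step is one call with a structured result the agent can feed directly into the next call.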

"But Vercel and Railway have MCP servers too"

They do. And you should look at what those MCP servers actually expose.

Vercel's MCP server lets your agent read logs, inspect project metadata, and browse documentation. Railway's lets you query metrics and manage environments. These are useful, but they're read-only — dashboard operations wrapped in a protocol.

OSC's MCP server exposes write operations. Create a database. Deploy an application. Provision storage. Wire a domain. Set a secret. Your agent doesn't just monitor your infrastructure — it builds it.

That's the difference between giving an agent a window and giving it a workbench.

From 36 hours of agents to production

This isn't a thought experiment. Streaming Tech TV+ is a production streaming platform built in 36 hours by one developer (Jonas Birme) directing 6 AI agents. The architecture:

  • 2 Claude Opus 4 agents: team lead + architect
  • 1 Claude Opus 4 agent: UX designer
  • 3 Claude Sonnet 4 agents: backend, frontend, QA

13 OSC services in the final stack: PostgreSQL, two Valkey caches, MinIO object storage, video transcoding (SVT Encore), HLS packaging, auto-generated subtitles (Whisper), ClickHouse analytics, plus the app frontends and backends.

15,000+ lines of code. 76 files. 99 commits. ~150 agent-hours of work.

The agents didn't just write code. They provisioned databases, created storage buckets, deployed services, and wired event pipelines. The human directed the architecture. The agents handled everything else — including infrastructure.

Other community examples show the same pattern across different domains: a CRM (SpecterCRM), a social reading platform (PageTurner), an audio social network (VoiceCircle), an RSS curation pipeline, a gaming activity tracker. All built with AI assistance, all deployed on OSC infrastructure.

Why open source is non-negotiable for agent-provisioned infra

Here's a scenario that should worry you: your agent spins up three proprietary managed services during a conversation. You didn't explicitly choose those vendors — the agent picked what was available. Six months later, you realize you're paying 3x what you expected and migration means rewriting your data layer.

On OSC, every service is open source. Postgres is actual Postgres. Storage is MinIO-compatible. Caching is Valkey (the Redis fork). Your Next.js app runs on an open source web runner container. You can take any component — or the whole stack — and move it to AWS, GCP, bare metal, or your laptop.

When agents make infrastructure decisions at speed, open source is your safety net. Not a philosophical preference — a practical one.

The infrastructure-as-code to infrastructure-as-tools shift

Infrastructure-as-code was the right abstraction for human developers. Declarative files. Version control. Code review. Plan/apply. It fits how humans think about systems.

Agents think differently. They need:

  • Immediate feedback: Call a tool, get a result in seconds. Not "write a file, run a command, wait for convergence, parse the output."
  • Atomic operations: One tool does one thing. Not "here's a 200-line manifest that creates 15 resources and might partially fail."
  • Structured I/O: JSON in, JSON out. URLs, connection strings, status codes. Not log streams to regex-parse.
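The structured-I/O point is easiest to see in code. A minimal sketch, assuming a hypothetical response schema (`status`, `url`, `message` fields are illustrative, not the actual OSC format): the agent branches on a parsed field instead of regex-matching a log stream.

```python
import json

def handle_result(raw: str) -> str:
    """Branch on a structured tool result instead of parsing logs.
    The response shape here is an assumption for illustration."""
    result = json.loads(raw)
    if result["status"] == "running":
        return result["url"]  # ready to feed into the next tool call
    if result["status"] == "error":
        raise RuntimeError(result.get("message", "tool call failed"))
    return ""  # e.g. still provisioning; the agent can simply poll again

print(handle_result('{"status": "running", "url": "postgres://db:5432/app"}'))
```

Three lines of branching replaces a brittle pile of log-parsing heuristics — which is exactly what makes the operation reliable for a model to drive.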

This doesn't mean IaC goes away — it's still the right choice for version-controlled, team-reviewed infrastructure changes. But there's a new layer emerging on top: one that speaks the agent's language while orchestrating the same underlying infrastructure. The Terraform MCP server is one approach (wrapping IaC in tool calls). OSC takes a different approach: the tools are the infrastructure API directly, no IaC intermediary.

What this changes

When agents can provision infrastructure natively, development workflows shift:

Prototyping becomes a conversation. "Build me a REST API with a Postgres backend and deploy it" goes from a day of setup to a 5-minute chat.

The feedback loop collapses. Agent writes code → deploys → hits an error → reads logs → fixes → redeploys. No human in the loop for the mechanical steps.
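That collapsed loop is just a retry cycle over tool calls. A sketch under stated assumptions — the `client`, tool names, and `fix` callback are all hypothetical stubs standing in for the agent's real deploy/log/edit cycle:

```python
def deploy_until_healthy(client, repo: str, fix, max_attempts: int = 3) -> dict:
    """Hypothetical agent loop: deploy, check status, read logs,
    let the agent patch the code, redeploy."""
    for _ in range(max_attempts):
        result = client.call("deploy-app", repo=repo)
        if result["status"] == "running":
            return result
        logs = client.call("get-logs", app=result["id"])
        fix(logs["lines"])  # the agent edits code based on the logs
    raise RuntimeError("deploy still failing after retries")

# Minimal stub so the sketch runs: first deploy fails, the "fix" makes
# the second one succeed.
class StubClient:
    def __init__(self):
        self.fixed = False
    def call(self, tool, **kw):
        if tool == "deploy-app":
            if self.fixed:
                return {"status": "running", "id": "app-1",
                        "url": "https://app.example.com"}
            return {"status": "error", "id": "app-1"}
        if tool == "get-logs":
            return {"lines": ["Error: missing DATABASE_URL"]}

client = StubClient()
result = deploy_until_healthy(client, "github.com/example/app",
                              fix=lambda logs: setattr(client, "fixed", True))
print(result["url"])
```

No human touches the mechanical steps; the person only re-enters the loop when the agent's fixes stop converging.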

Full-stack is the default. When a database is one tool call away, there's no reason for an agent to stop at "here's your code, now go set up a database yourself."

The barrier drops to intent. If you can describe what you want, an agent can build and deploy it. The infrastructure knowledge lives in the tools, not in your head.

Try it

Connect the OSC MCP server to Claude Desktop, Cursor, Copilot, or any MCP-compatible tool. Then don't ask it to "write code for" something. Ask it to build something. Database, backend, frontend, deployed, live.
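For Claude Desktop, connecting an MCP server means adding an entry to `claude_desktop_config.json`. The shape below is the standard `mcpServers` format; the server name and endpoint URL are placeholders — check the OSC docs for the real values:

```json
{
  "mcpServers": {
    "osc": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<osc-mcp-endpoint>"]
    }
  }
}
```

Cursor and other MCP-compatible tools use similar per-tool config files.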

The free tier is enough to test the full workflow.


Open Source Cloud provides 200+ open source projects as managed services with native MCP integration. Every component is open source — zero vendor lock-in. Revenue is shared with the creators of the open source projects that power the platform.
