Introducing Aerostack: Workflows, MCPs, and Intelligent Bots on the Edge

The Problem

It starts with configuration sprawl.

I was building a project and kept switching between three code editors — VS Code, Cursor, Windsurf. Each one handles MCP servers differently. Each one has its own config file format, its own way of specifying credentials, its own quirks around which MCP features it supports. If I wanted to use the same MCP — say, a PostgreSQL query tool — I configured it in Cursor's mcp.json, then again in VS Code's settings, then again when I opened a different project. Every new project, every machine switch, same configuration from scratch.
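To make the duplication concrete, here is roughly what a single PostgreSQL MCP looks like in Cursor's mcp.json. The server package and connection string are illustrative, and VS Code and Windsurf each expect a different file location and slightly different keys, so a block like this gets rewritten per editor:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}
```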

But configuration sprawl is just the surface problem. Underneath it, there are five deeper ones.

MCPs are isolated islands. You install five MCPs — a database, Jira, Slack, GitHub, a monitoring tool. Each one works on its own. But there's no orchestration layer. You can't say "query the database for recent errors, then if severity is critical, create a Jira ticket, then post a summary to Slack." Each MCP is a standalone tool with no awareness of the others. The composition has to happen in your head, manually, one tool call at a time.

MCPs are trapped inside code editors. The MCP ecosystem is powerful — but it only works when a developer is sitting in an IDE. Your Discord bot can't call an MCP. Your webhook can't trigger one. Your scheduled job can't use one. Your API can't expose one. The tools exist, but they're locked behind an editor window that has to be open on someone's laptop.

There's no observability. When an MCP call fails, you get a cryptic error in your editor. There's no log trail, no execution trace, no way to know which team member hit rate limits or which tool is slow. If your AI agent made a bad decision three tool calls ago, good luck debugging it. There is zero visibility into what's happening across your MCP infrastructure.

Credentials are a security problem, not just an inconvenience. Every MCP needs secrets — API keys, database passwords, tokens. I had them scattered across .env files, ~/.config/ directories, hardcoded in local configs I kept telling myself I'd clean up. When I wanted to share an MCP with a teammate, the only option was sending raw credentials over Slack. There was no way to give someone tool access without giving them the actual keys.

Agents can't build on this. An AI agent can use tools it's been given. But it can't browse a marketplace, discover a useful MCP, install it into a workspace, wire it into a workflow, and deploy that workflow — all autonomously. The entire MCP ecosystem is human-configured. Every integration requires a developer to manually set it up.

That's six symptoms but five distinct problems, since credentials and configuration sprawl share a fix. Aerostack solves all of them.

Five problems with MCP infrastructure — and how Aerostack solves each one


What Aerostack Is

Aerostack is a developer infrastructure platform that solves each of these problems with a specific primitive:

  • Config sprawl → Workspaces. One gateway URL. Credentials configured once. Share a token, not secrets.
  • Isolated MCPs → Workflow engine. 19 node types on Durable Objects that compose MCPs with LLM reasoning, branching, loops, and human approval.
  • Editor-locked tools → Bots, Agent Endpoints, Smart Webhooks. Your MCPs work from Discord, Telegram, Slack, WhatsApp, REST APIs, and scheduled jobs — not just code editors.
  • No observability → Gateway logging. Every tool call is traced — which MCP, which tool, which token, latency, success/failure, per-member usage.
  • Human-only configuration → Aerostack MCP. Agents can plan(), scaffold(), create(), deploy(), and publish() infrastructure programmatically.

Everything runs on Cloudflare Workers. No servers to manage. No regions to pick.

Aerostack workspace composing MCPs, Skills, and Functions into one endpoint


Workspaces: The Core Primitive

This is where everything starts. A workspace is a managed gateway that sits in front of your MCP servers.

Create a workspace, give it a name, and you get a gateway URL:

https://mcp.aerostack.dev/ws/engineering-team

That URL is your team's single endpoint. Behind it: every MCP you've added, credentials encrypted, access controlled by token.
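Since the gateway is itself an MCP endpoint, a client reaches it over plain HTTP. As a sketch, assuming the gateway accepts MCP's JSON-RPC `tools/call` method with a workspace token as a Bearer header (the tool name and argument shape here are illustrative, not a documented Aerostack API):

```typescript
// Hypothetical sketch: one JSON-RPC envelope, one authenticated POST.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

async function callGateway(token: string, req: ToolCallRequest): Promise<unknown> {
  const res = await fetch("https://mcp.aerostack.dev/ws/engineering-team", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify(req),
  });
  return res.json();
}
```

The point of the single URL is that this snippet never changes when the team adds, swaps, or reconfigures MCPs behind the gateway.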

Adding MCPs: You browse the registry or add your own MCP server, then link it to the workspace. You configure which secrets each MCP needs — the workspace handles injection at runtime.

Secrets: Stored separately from MCPs, encrypted with AES-256-GCM. At runtime, when a tool call hits the gateway, the relevant secrets are decrypted and injected into the MCP request. The LLM never sees credentials. They don't appear in logs.
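The AES-256-GCM pattern described above looks roughly like this in Node's crypto module. This is a minimal sketch of the primitive, not Aerostack's actual implementation; key management and storage are omitted:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a secret with AES-256-GCM. GCM is authenticated encryption:
// the tag lets the decryptor detect any tampering with the ciphertext.
function encryptSecret(key: Buffer, plaintext: string) {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decryptSecret(
  key: Buffer,
  box: { iv: Buffer; tag: Buffer; data: Buffer },
): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // a forged tag makes final() throw
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```

Decryption happens only at the gateway, at request time, which is what keeps plaintext credentials out of the LLM context and the logs.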

Access control: Workspace tokens are issued per member with Admin, Developer, or Read-only roles. When someone leaves, you revoke their token from the dashboard. Access stops everywhere immediately — no per-machine credential cleanup.

Observability: Every tool call through the gateway is logged — which MCP, which tool, which token, latency, success/failure. You can see per-developer usage without exposing what data flowed through.


MCPs and Skills

The MCP ecosystem is growing fast, but every MCP still lives in isolation. You install it in your editor, configure credentials locally, and it helps nobody else on your team.

Aerostack's registry changes this. You deploy an MCP server — either to our hosted infrastructure (we run it as a Cloudflare Worker in a dispatch namespace) or point to your own external URL — and publish it to the registry. It gets a slug: @your-username/postgres-inspector.

From there, anyone can add it to their workspace. The MCP server runs once. Credentials are per-workspace. Ten developers share one MCP deployment with ten different access tokens.

Skills are the same concept for prompt-based tools. A skill wraps a system prompt, input schema, and optional function logic into a callable tool. Publish it once, use it from any workflow or bot in your workspace.
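A Skill's shape can be sketched like this. The field names and the validation stand-in are assumptions for illustration, not Aerostack's actual schema:

```typescript
// Hypothetical Skill shape: system prompt + input schema + optional logic.
interface Skill<I> {
  slug: string;
  systemPrompt: string;
  validate: (input: unknown) => I;      // stand-in for a JSON-schema check
  run?: (input: I) => Promise<string>;  // optional function logic
}

const summarizeTicket: Skill<{ text: string }> = {
  slug: "@your-username/summarize-ticket",
  systemPrompt: "Summarize the ticket in two sentences for an engineer.",
  validate: (input) => {
    if (
      typeof input !== "object" ||
      input === null ||
      typeof (input as { text?: unknown }).text !== "string"
    ) {
      throw new Error("expected { text: string }");
    }
    return input as { text: string };
  },
};
```

Because the schema travels with the prompt, any workflow or bot in the workspace can call the skill as a typed tool rather than pasting the prompt around.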

The registry doubles as a community marketplace. Anyone can publish MCPs, Skills, and Functions for others to discover and install into their workspaces — one click, no code to clone. The marketplace is share-only (no buying or selling), which keeps the incentive structure clean: build useful tools, get adoption.


Intelligent Bots

We think of bot evolution in three generations — this is our framing, not an industry standard, but it's useful for understanding what we built.

Gen 1 — keyword matching and decision trees. If the message contains "refund", go to branch 47. Brittle. One unexpected question and the bot falls through to a generic fallback.

Gen 2 — RAG. Vector search plus an LLM. The bot retrieves relevant documents from an index and synthesizes an answer. This was meaningful progress — bots could suddenly answer questions they weren't explicitly programmed for. But they were read-only. A user could ask "what's my order status?" and get an answer. They couldn't say "cancel it" and have the bot actually cancel it.

Gen 3 — what we built. The LLM has access to MCP tools and decides which to call. It reads, writes, and acts. A user asks "what errors happened in the last hour?" — the bot calls the database MCP, queries the error logs, reads the results, and offers to create a Jira ticket. If the user says yes, it calls the Jira MCP and creates the ticket. Nobody programmed that sequence. The LLM composed it from the available tools and the conversation context.

The difference between Gen 2 and Gen 3 isn't incremental. Gen 2 bots retrieve information. Gen 3 bots orchestrate tools.

Under the hood, every bot is backed by a workspace. Discord, Slack, Telegram, WhatsApp — they're platform adapters. Your bot logic lives in one place: a system prompt and a set of workspace MCPs. The platform is just transport.
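The adapter idea can be sketched as follows. The payload fields are simplified stand-ins for the real Discord and Telegram webhook schemas; the point is that the bot core sees one normalized shape:

```typescript
// One message shape for the bot core, regardless of transport.
interface BotMessage { userId: string; text: string; platform: string }

type Adapter = (raw: any) => BotMessage;

// Illustrative payload shapes, not the full platform schemas.
const discordAdapter: Adapter = (raw) => ({
  userId: raw.author.id, text: raw.content, platform: "discord",
});
const telegramAdapter: Adapter = (raw) => ({
  userId: String(raw.message.from.id), text: raw.message.text, platform: "telegram",
});

// The bot core never branches on platform.
function handle(msg: BotMessage): string {
  return `(${msg.platform}) ${msg.userId}: ${msg.text}`;
}
```

Adding a new platform means writing one adapter; the system prompt and workspace MCPs stay untouched.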


Workflows: AI-Native Orchestration

This evolved into the most technically interesting part of the platform.

Traditional workflow engines are deterministic. You draw a DAG: trigger → transform → condition → action. Every path is predetermined. That works for ETL pipelines and approval chains. It doesn't work when the next step depends on what an LLM decides.

We built a workflow engine for AI workloads. 19 node types, all running on Cloudflare Durable Objects. Here are the ones that don't exist on other platforms:

agent_loop — an autonomous ReAct cycle. You give it a goal and a list of available tools. The LLM decides what to call, reads the result, decides if it needs another tool or if it's done.

mcp_tool — any MCP in your workspace becomes a workflow node. The gateway handles secret injection, so the workflow just says "call this tool with these parameters."

confidence_router — classifies message complexity and routes to different models. Simple queries hit a cheap, fast model. Complex reasoning goes to your most capable LLM.
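As a toy sketch of the routing idea, assuming a cheap heuristic stands in for the real complexity classifier (the model tier names are placeholders):

```typescript
// Route simple queries to a cheap model, complex ones to a capable model.
// The heuristic here is illustrative; a real router would use a classifier.
function routeModel(message: string): "fast-model" | "capable-model" {
  const complex = message.length > 200 || /why|explain|compare|debug/i.test(message);
  return complex ? "capable-model" : "fast-model";
}
```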

parallel — runs multiple branches simultaneously and merges results. Call three MCPs at once, wait for all, combine the output for the next node.

auth_gate — multi-turn identity verification inside a workflow. The node challenges the user (OTP via email, magic link, or custom provider), waits for proof, then maps the verified identity into the workflow context.

guardrail — validates LLM output against a policy before it leaves the workflow. Safety as a structural property of the execution layer, not a comment in the prompt.

Every node runs on a Durable Object. Workflows survive worker restarts and network interruptions. You can pause a workflow at an auth_gate or a human review step, come back hours later, and resume exactly where execution stopped.
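The agent_loop node's ReAct cycle can be sketched in a few lines. Here the "LLM" is a stub callback standing in for a real model; in the actual node, the gateway supplies the tools and the model makes the decisions:

```typescript
type Tool = (args: string) => Promise<string>;
type Decision = { tool: string; args: string } | { done: true; answer: string };

// Minimal ReAct cycle: decide -> act -> observe, until the model says done.
async function agentLoop(
  goal: string,
  tools: Record<string, Tool>,
  decide: (goal: string, history: string[]) => Decision, // the model's job
  maxSteps = 5,
): Promise<string> {
  const history: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const d = decide(goal, history);
    if ("done" in d) return d.answer;           // model says it's finished
    const result = await tools[d.tool](d.args); // call the chosen MCP tool
    history.push(`${d.tool}(${d.args}) -> ${result}`);
  }
  return "max steps reached";
}
```

The maxSteps bound matters in production: an autonomous loop with tools needs a hard ceiling, not just trust in the model to stop.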

Beyond bots: Workflows aren't limited to chat. Agent Endpoints expose any workflow as a streaming API. Smart Webhooks trigger workflows from external events (Stripe, GitHub, your own systems). Bots, APIs, and webhooks all share one workflow engine.


Why Cloudflare Workers

Everything runs on Cloudflare Workers — globally distributed, no servers to manage, no regions to pick.

We chose Workers because of Durable Objects. Workflow state needs to survive across async boundaries. An agent_loop might call an MCP tool, wait for the response, call another tool, then pause for human approval that comes hours later. Traditional serverless functions are stateless — you'd need an external state store and polling. Durable Objects give us co-located state and compute in the same process. No external coordination layer.
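The co-located state idea can be sketched like this. DurableObjectState is stubbed with an in-memory map so the sketch runs outside the Workers runtime; in a real Durable Object the storage survives restarts, which is what makes pause-and-resume work:

```typescript
// Stub for Durable Object storage: same get/put shape, in-memory backing.
class Storage {
  private map = new Map<string, unknown>();
  async get<T>(k: string) { return this.map.get(k) as T | undefined; }
  async put(k: string, v: unknown) { this.map.set(k, v); }
}

class WorkflowObject {
  constructor(private storage = new Storage()) {}

  // Each step reads state, does work, and writes state back.
  // Because state lives next to the compute, no external store or
  // polling loop is needed between steps.
  async runStep(step: string): Promise<void> {
    const done = (await this.storage.get<string[]>("done")) ?? [];
    done.push(step);
    await this.storage.put("done", done);
  }

  async progress(): Promise<string[]> {
    return (await this.storage.get<string[]>("done")) ?? [];
  }
}
```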

One deploy, and the entire platform — API, bots, workflows, gateway — is running globally. No VPCs. No load balancers. No auto-scaling configs. No infrastructure on-call.


Agent-to-Agent: Agents Building for Agents

Aerostack ships its own MCP server. That means any AI agent — Claude, GPT, Gemini, Cursor, your own custom agent — can operate the entire platform programmatically. Through the same MCP protocol your agent already speaks.

The lifecycle:

  1. plan() — describe what you want to build in natural language. The agent returns a complete infrastructure blueprint.
  2. scaffold() — generate the full config from the plan.
  3. create() — create the resource. Bot, workflow, function, endpoint.
  4. deploy() — push it to the edge. Live in seconds.
  5. publish() — publish it to the marketplace for other agents and developers to discover.

No dashboard. No CLI. No YAML. An agent describes intent, and infrastructure appears.
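An agent driving that lifecycle is just a sequence of MCP tool calls. The tool names come from the list above; the argument shapes here are assumptions for illustration:

```typescript
// Hypothetical sketch: the five lifecycle steps as JSON-RPC tool calls.
const lifecycle = ["plan", "scaffold", "create", "deploy", "publish"] as const;

function buildLifecycleCalls(intent: string) {
  return lifecycle.map((tool, i) => ({
    jsonrpc: "2.0" as const,
    id: i + 1,
    method: "tools/call" as const,
    // Only plan() takes the natural-language intent; later steps
    // would consume the previous step's output in a real run.
    params: { name: tool, arguments: i === 0 ? { intent } : {} },
  }));
}
```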

This creates a network effect: every agent that builds and publishes makes the platform more valuable for every other agent. The marketplace isn't just a catalog humans browse — it's a composable supply chain that agents read, write, and extend programmatically.

Agent-to-Agent network effect — agents building infrastructure for other agents


Try It

Start with one workspace. Add one MCP. Share the gateway URL with your team.

The configuration sprawl problem — the per-editor, per-machine, per-project credential dance — goes away. Your MCPs live in one place. Your bots use them. Your workflows orchestrate them. Your team accesses everything through one token.

Create your workspace →
