I have been writing about API sprawl for fifteen years. The shape of the problem has not really changed — large enterprises now run an average of 1,295 SaaS applications and over 14,000 internal APIs, with AI-related APIs up 807% year over year — but the stakes have. The agents are here. They want to consume everything. And nobody has a clear story for how to give them safe, governed access to what already exists without rebuilding the world.
Today we are shipping the first alpha of the Naftiko Framework. It is Apache 2.0, declarative, Java-based, and built around one simple idea: your existing data and APIs are not technical debt. They are strategic inventory. They just need to be made discoverable, governed, and reusable as capabilities instead of scattered projects nobody can find.
This post is a walkthrough of what shipped, why the spec-driven integration approach matters, and what it looks like to actually build with it.
The problem: MCP sprawl on top of API sprawl
If you are reading dev.to, you have probably already lived this. The first MCP integrations go great. The second wave starts to feel a little ad-hoc. By the time you are wiring your fifth or sixth tool, you realize you are just re-creating API sprawl one layer up — except now the ungoverned thing is talking directly to a model that will happily call it a thousand times in a runaway loop.
The trigger symptoms are familiar:
- MCP sprawl on top of API sprawl
- Teams shipping prototypes, but nothing makes it to production
- Security and compliance becoming the bottleneck
- Costs that are impossible to predict (tokens, models, and upstream API usage)
- Specs that "age like bananas"
- You can't govern, secure, or reuse what you can't see
The honest answer is that the existing toolchain — API gateways, iPaaS, ad-hoc MCP servers — was not built for this. We need a different unit of work.
The unit: a capability
Naftiko's answer is the capability. A capability is a YAML file that declares two things: what it consumes (the upstream APIs it calls) and what it exposes (the surfaces it offers — MCP, Skill, or REST).
Here is a real example from the Shipyard tutorial:
```yaml
naftiko: "1.0.0-alpha1"
capability:
  consumes:
    - namespace: registry
      type: http
      baseUri: "https://registry.shipyard.dev/api/v1"
      resources:
        - name: ships
          path: "/ships"
          operations:
            - name: list-ships
              method: GET
  exposes:
    - type: mcp
      port: 3001
      namespace: shipyard-tools
      description: "Shipyard MCP tools for fleet management"
      tools:
        - name: list-ships
          description: "List ships in the shipyard"
          call: registry.list-ships
          outputParameters:
            - type: array
              mapping: "$."
              items:
                type: object
```
`consumes` declares where the data lives. `exposes` declares what the agent sees. `call: registry.list-ships` is the wire between the two. The `outputParameters` block reshapes the upstream payload — renaming `imo_number` to `imoNumber`, dropping fields the agent does not need — so the model never sees the raw API surface.
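To make that reshaping concrete, here is a sketch of what a field-level `outputParameters` block could look like. Only the array-level `mapping: "$."` appears in the tutorial example above; the per-property `name`/`mapping` rename syntax below is my assumption, not confirmed against the alpha spec.

```yaml
# Hypothetical sketch: reshaping the upstream ship payload for the agent.
# The property-level mapping syntax is an assumption; only the array-level
# "$." mapping is shown in the official tutorial example.
outputParameters:
  - type: array
    mapping: "$."            # take the whole upstream array
    items:
      type: object
      properties:
        - name: imoNumber    # renamed from the upstream imo_number field
          type: string
          mapping: "$.imo_number"
        - name: name
          type: string
          mapping: "$.name"
        # upstream fields the agent does not need are simply omitted
```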
That is the whole idea. The spec is the integration. Not a description of it, not documentation written after the fact — the actual runnable artifact. The Naftiko Engine reads this YAML at runtime and serves it as a live MCP server. There is no code generation step, no compilation, no drift between what is documented and what is running.
Spec-driven integration, in plain terms
The methodology behind this is what we are calling Spec-Driven Integration (SDI). It is the integration-domain cousin of spec-driven development, and it rests on a few principles that I keep coming back to:
- Specifications as the lingua franca. The spec is the single source of truth. Maintaining an integration means evolving its spec. Everything else is derived.
- Executable specifications. A spec is only valuable if it is precise enough to produce a working integration directly. If it cannot be executed as-is, the gap is a signal of incompleteness — not an invitation for interpretation.
- Continuous refinement. Specs are linted, validated, and analyzed for ambiguity throughout their lifecycle, not at a single approval gate.
- Bidirectional feedback. Production behavior feeds back into the spec. The spec is a living artifact, not a snapshot.
Why this matters for AI integration specifically: context engineering is fundamentally an integration problem. Agents need the right data, in the right shape, at the right time. When the spec is the artifact, agents (and the humans operating them) can reason about integrations, propose refinements, and validate consistency against a structured contract — instead of poking at opaque runtime behavior.
Four levels of progressive abstraction
Naftiko meets your APIs where they are. The framework introduces a four-level progressive abstraction model for consumed APIs, so you do not have to rip and rebuild anything:
- Forwarding HTTP client — forward any HTTP call with shared endpoint behavior like authentication.
- Templatized HTTP client — reuse predefined request collections imported from HAR, Postman, or similar formats.
- Structured API client — abstract HTTP calls into clean web API paths and operations, enabling reuse across teams.
- Functional MCP client — abstract structured APIs into MCP tools, resources, and prompts for context engineering and agent orchestration.
You can climb the ladder one capability at a time. Start by forwarding. Move to structured. Reach the top when it makes sense to. Nothing forces a wholesale rearchitecture.
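As a rough illustration of what one rung of that climb could look like in a capability spec, here are hypothetical level-1 and level-3 `consumes` blocks for the same upstream system. The field names follow the tutorial example above (`type`, `baseUri`, `resources`, `operations`); the `legacy-erp` system and everything else here are invented for illustration.

```yaml
# Level 1 — forwarding: pass HTTP calls through with shared endpoint
# behavior (auth, base URI). Hypothetical example.
consumes:
  - namespace: legacy-erp
    type: http
    baseUri: "https://erp.internal.example.com"
---
# Level 3 — structured: the same upstream, abstracted into named
# resources and operations that other teams can reuse.
consumes:
  - namespace: legacy-erp
    type: http
    baseUri: "https://erp.internal.example.com"
    resources:
      - name: orders
        path: "/orders"
        operations:
          - name: list-orders
            method: GET
```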
What ships in Alpha 1
The alpha release covers three areas:
- Right-sized AI context — declarative applied capabilities that expose Agent Skills, plus MCP support for Resources and Prompts alongside existing Tools support. The `outputParameters` story above is the key here: capabilities shape response payloads into smaller, typed outputs aligned to tasks instead of dumping raw provider complexity into a context window.
- API reusability — lookups within API call steps, consumer authentication and permissions for API and MCP servers, reusable source HTTP adapter declarations across capabilities, and applied capabilities that compose multiple sources.
- Core developer experience — published artifacts on Maven Central and Docker Hub, Javadocs on javadoc.io, a Naftiko Skill based on the CLI, a Spectral-based ruleset for spec governance, JSON structure validation, a GitHub Action template based on Super Linter, and a comprehensive wiki with FAQ, getting started guide, and roadmap.
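For the CI piece, a minimal workflow in the spirit of the alpha's Super Linter-based template could look like the sketch below. The actual template shipped in the repo may differ, and any Naftiko-specific configuration is not shown here; this is just the standard Super Linter wiring for YAML validation.

```yaml
# Minimal sketch of a lint job based on Super Linter; the actual
# GitHub Action template in the Naftiko repo may differ.
name: lint-capabilities
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # Super Linter diffs against the base branch
      - uses: super-linter/super-linter@v7
        env:
          DEFAULT_BRANCH: main
          VALIDATE_YAML: true
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```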
Alongside the framework, the Naftiko Fleet Community Edition is also free. It ships with a VS Code extension for live structure and rules validation of capability YAML files, and Backstage templates that scaffold new capabilities and bootstrap their Git repositories. The Community Edition will always be free. Standard and Enterprise editions will add what large organizations need — SLAs, federation, continuous compliance, domain-level governance — but the framework itself stays open source.
Governance as guidance, not gatekeeping
One of the things I keep saying out loud in conversations with platform teams: governance has to be guidance, not gatekeeping. Naftiko's Spectral-based ruleset enforces 15 consistency rules at lint time, the VS Code extension surfaces them as you type, and the GitHub Action template runs them in CI. The point is to catch the things that matter — naming, schema, identity propagation, exposed contract shape — before a capability ships, not to slow anyone down with a review board.
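To give a feel for what lint-time enforcement looks like, here is an illustrative rule in standard Spectral ruleset syntax. The rule name and JSONPath target are my invention for the example; the 15 rules actually shipped in the Naftiko ruleset are documented in the repo.

```yaml
# Illustrative Spectral rule in the style of the Naftiko ruleset; the
# shipped rule names and JSONPath targets may differ.
rules:
  capability-tool-description:
    description: Every exposed MCP tool must carry a description.
    message: "Tool at {{path}} is missing a description."
    severity: error
    given: "$.capability.exposes[?(@.type == 'mcp')].tools[*]"
    then:
      field: description
      function: truthy
```

Rules like this run identically in the VS Code extension as you type and in CI, which is what makes the guidance-not-gatekeeping posture work: the feedback arrives before review, not during it.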
When procurement eventually starts asking the question Sarah Guo predicts in her "Dark Code" essay — "what did your agents do with our data on a Tuesday in March?" — the answer is in the spec, the governance ruleset, and the engine telemetry. Not reconstructed from scattered logs after the fact.
Where this goes next
The roadmap is on the wiki. The short version:
- Alpha 2 (May 2026) — A2A server adapter, MCP authentication and gateway integration, webhook server adapter, conditional / for-each / parallel orchestration steps, OpenAPI-to-Naftiko import tooling, a Control API accessible via CLI, and starter capability templates.
- Beta (June 2026) — stable spec, resilience patterns (retry, circuit breaker, rate limiter, time limiter, bulkhead), MCP server-side code mode, expanded governance with tags and labels.
- GA (September 2026) — production-ready v1.0 with full test coverage, stable spec, and JSON Schema published to JSON Schema Store.
Try it
This is an alpha. The spec will move. The edges are rough in places. But the core idea — that capabilities are the right unit, that governance is guidance not gatekeeping, that the producer-consumer relationship in API land has to be rethought for the agent era — that part I am clear on.
If you have spent any time thinking about MCP, AI integration, or how to give your security and compliance teams something better than a SOC 2 PDF, this is for you. Roll up your sleeves, pull the Docker image, walk through the Shipyard tutorial, and tell me where it breaks.
- GitHub: github.com/naftiko/fleet
- Wiki & docs: github.com/naftiko/fleet/wiki