Most enterprise integration work follows a familiar pattern. A team needs to connect system A to system B. A developer writes a client, adds authentication, shapes the response, exposes it somewhere. It works. It ships. Within a sprint it starts drifting from whatever documentation existed. Within a quarter, three other teams have written their own version of the same integration with different assumptions.
This is not a tooling failure. It is a methodology failure. The integration intent — what to consume, how to shape it, what to expose — was never captured in a durable, inspectable artifact. It went straight into code and started drifting immediately.
## Spec-Driven Integration
Let me talk about Spec-Driven Integration (SDI), and why it offers a different way to think about how we integrate.
Spec-Driven Integration (SDI) is a methodology that treats the specification as the primary integration artifact. Not documentation. Not generated code. The executable definition itself.
A capability specification declares:
- What upstream APIs are consumed (base URIs, authentication, resources, operations)
- How data is shaped (input parameters, output extraction via JSONPath)
- What is exposed downstream (REST endpoints, MCP tools, Agent Skills)
The specification is complete enough to execute directly. An engine reads it at runtime, handles HTTP consumption, data transformation, format conversion, and protocol exposure. No code generation. No compilation.
```yaml
naftiko: "1.0.0-alpha1"
capability:
  consumes:
    - type: http
      namespace: payments
      baseUri: https://api.internal.company/v2
      authentication:
        type: bearer
        token: "{{PAYMENTS_TOKEN}}"
      resources:
        - name: transactions
          path: /transactions
          operations:
            - name: list-transactions
              method: GET
              outputParameters:
                - name: transactions
                  type: array
                  value: $.data
  exposes:
    - type: mcp
      port: 3001
      namespace: payments-tools
      tools:
        - name: list-recent-transactions
          description: "List recent payment transactions"
          call: payments.list-transactions
          outputParameters:
            - type: array
              mapping: "$.data"
```
That YAML is the integration. Run it in a container. The MCP tool is live on port 3001.
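To make the "engine reads the spec as data" idea concrete, here is a minimal Python sketch of what such a runtime does: index the declared upstream operations, bind each exposed tool to one, and shape responses with the tool's JSONPath mapping. This is purely illustrative — it is not the Naftiko engine, the spec is reduced to a dict, only a tiny JSONPath subset is supported, and the upstream HTTP call is stubbed out.

```python
def extract(path, document):
    """Evaluate a tiny JSONPath subset: '$' followed by dotted keys, e.g. '$.data'."""
    value = document
    for key in path.lstrip("$").strip(".").split("."):
        if key:
            value = value[key]
    return value

# Reduced version of the capability spec above, loaded as plain data.
SPEC = {
    "capability": {
        "consumes": [{
            "namespace": "payments",
            "baseUri": "https://api.internal.company/v2",
            "resources": [{
                "name": "transactions",
                "path": "/transactions",
                "operations": [{"name": "list-transactions", "method": "GET"}],
            }],
        }],
        "exposes": [{
            "type": "mcp",
            "tools": [{
                "name": "list-recent-transactions",
                "call": "payments.list-transactions",
                "outputParameters": [{"type": "array", "mapping": "$.data"}],
            }],
        }],
    },
}

def build_tools(spec):
    """Index upstream operations by 'namespace.name', then bind each exposed tool."""
    ops = {}
    for upstream in spec["capability"]["consumes"]:
        for resource in upstream["resources"]:
            for op in resource["operations"]:
                ops[f'{upstream["namespace"]}.{op["name"]}'] = {
                    "url": upstream["baseUri"] + resource["path"],
                    "method": op["method"],
                }
    return {
        tool["name"]: {"op": ops[tool["call"]],
                       "mapping": tool["outputParameters"][0]["mapping"]}
        for exposure in spec["capability"]["exposes"]
        for tool in exposure["tools"]
    }

def call_tool(tools, name, upstream_response):
    """Shape a (stubbed) upstream response per the tool's output mapping."""
    return extract(tools[name]["mapping"], upstream_response)

tools = build_tools(SPEC)
result = call_tool(tools, "list-recent-transactions", {"data": [{"id": "tx_1"}]})
```

The point is not the fifty lines of Python; it is that nothing here was generated from the spec. The spec stays the single artifact, and the engine interprets it.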
## Why Code Generation Creates Drift
The common objection is: "We already generate code from our OpenAPI specs." Code generation moves the specification one step away from execution. The generated code becomes the artifact that runs, not the spec. The moment someone modifies the generated code — to add a retry, to shape a response, to handle an edge case — the spec and the implementation diverge. Drift is reintroduced.
SDI is designed to avoid this entirely. The spec IS what executes. There is no intermediate code artifact that can diverge.
## The SDI Workflow
1. **Specify**: Write a YAML capability spec capturing the integration intent
2. **Validate**: Lint and analyze for completeness before execution
3. **Execute**: Run the engine. No build step. No code generation.
4. **Refine**: Evolve the spec based on production feedback
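The Validate step is where specs earn their keep: because the spec is structured data, completeness checks are cheap to write. Below is a hypothetical linter sketch — the post does not describe Naftiko's actual validation rules, so the two checks here (every consumer has a `baseUri`, every exposed tool calls a declared operation) are just plausible examples.

```python
def lint(spec):
    """Minimal pre-execution checks on a capability spec (illustrative only)."""
    errors = []
    declared = set()
    capability = spec.get("capability", {})
    # Pass 1: collect declared upstream operations, flag unaddressable consumers.
    for upstream in capability.get("consumes", []):
        namespace = upstream.get("namespace", "?")
        if not upstream.get("baseUri"):
            errors.append(f"consumer '{namespace}' is missing a baseUri")
        for resource in upstream.get("resources", []):
            for op in resource.get("operations", []):
                declared.add(f"{namespace}.{op['name']}")
    # Pass 2: every exposed tool must reference a declared operation.
    for exposure in capability.get("exposes", []):
        for tool in exposure.get("tools", []):
            if tool.get("call") not in declared:
                errors.append(
                    f"tool '{tool['name']}' calls undeclared operation '{tool.get('call')}'"
                )
    return errors
```

Run this in CI on every spec change and a broken tool binding fails the pipeline before anything is deployed — the same place a compiler would catch it, without the compiler.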
This cycle keeps the spec primary. What is deployed always matches documented intent because they are the same artifact.
## Why This Matters for AI
AI agents need structured contracts to reason about. An agent can parse a capability spec, understand what tools are available, what inputs they require, and what outputs they produce. It cannot do the same with custom integration code spread across multiple repos.
When the spec is the integration, AI agents can:
- Discover available capabilities by reading specs
- Call tools reliably because the contract is deterministic
- Propose spec refinements based on usage patterns
- Validate consistency without reverse-engineering code
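Discovery, the first item above, is just a traversal of the spec. Here is a sketch of an agent-side step that turns a capability spec into a tool manifest; the field names follow the example spec in this post, but the manifest shape itself is an assumption, not an official Naftiko or MCP format.

```python
def discover_tools(spec):
    """Collect the MCP tools a capability spec exposes into a flat manifest
    an agent can reason over (illustrative shape, not a standard format)."""
    manifest = []
    for exposure in spec.get("capability", {}).get("exposes", []):
        if exposure.get("type") != "mcp":
            continue
        for tool in exposure.get("tools", []):
            manifest.append({
                "name": tool["name"],
                "description": tool.get("description", ""),
                "output_types": [p.get("type") for p in tool.get("outputParameters", [])],
            })
    return manifest

# The 'exposes' half of the example spec, as plain data.
SPEC = {
    "capability": {
        "exposes": [{
            "type": "mcp",
            "namespace": "payments-tools",
            "tools": [{
                "name": "list-recent-transactions",
                "description": "List recent payment transactions",
                "call": "payments.list-transactions",
                "outputParameters": [{"type": "array", "mapping": "$.data"}],
            }],
        }],
    },
}

manifest = discover_tools(SPEC)
```

An agent pointed at a directory of specs can build this manifest for every capability in the organization without executing anything — try doing that against bespoke integration code scattered across repos.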
SDI is not just a cleaner way to build integrations. It is the prerequisite for AI agents to use your integrations reliably. What I like about SDI is that it takes what we have been investing in for the last decade and helps me integrate AI into the operational workflows I've worked hard on for years.