
divyank jain


I Stopped Running API Workflows. I Taught My Agent to Own Them

Every developer hits this moment: you have a powerful coding agent running next to you, handling real tasks — and you're still over here manually calling an endpoint, copying a value, pasting it into the next request, clicking through a UI to verify something ran.

The agent isn't the problem. You are still doing the work.

I was there. I had Claude Code open, doing real things: browsing the codebase, writing tests, running commands. But my operational flows? Still me. Not because the agent couldn't read the API documentation and figure out the calls, but because doing that every single session is a waste of tokens and context. It re-reads the same docs, re-discovers the same endpoints, re-reasons about the same auth pattern. Every time.

Skills solve exactly that. You teach the agent once. It knows forever.

But here's the part most people miss: a skill isn't just an instruction file. A skill combined with a script is an executable workflow. The agent doesn't just know what the API does — it can run operations against it, deterministically, without loading the entire docs into context again. That's the difference between a cheat sheet and a power tool.


What is an agent skill?

A skill is a folder containing a SKILL.md file. That file packages instructions, context, and optional executable scripts that a coding agent loads on demand. Think of it like an onboarding guide you write once — the agent pulls it up whenever it's relevant.

The format follows the Agent Skills open standard, which means the same SKILL.md works across multiple agents, not just Claude Code.

A well-built skill does three things:

  • Tells the agent what endpoints exist and how to call them
  • Handles auth setup so the agent never has to re-figure it out
  • Bundles a runnable Python CLI so the agent can actually execute operations, not just describe them

That last point is what upgrades a skill from documentation to automation. When the agent invokes the bundled script, the script's code never even loads into context; only the output does. Efficient, repeatable, and fast.
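To make that concrete, here is roughly what the top of a generated skill file looks like. The `name` and `description` frontmatter fields come from the Agent Skills standard; the endpoint section and CLI flags below are my own illustration, not the tool's exact output:

```markdown
---
name: open-meteo
description: Query the Open-Meteo weather API for forecasts. Use when the user asks about weather data.
---

# Open-Meteo

## Forecast

Prefer the bundled CLI over hand-written requests:

    python open_meteo_cli.py forecast --latitude 12.97 --longitude 77.59
```

The `description` field is what the agent matches against when deciding whether the skill is relevant, so it earns its keep at dispatch time.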

The tool: api-skill-creator

api-skill-creator generates a complete agent skill — SKILL.md plus a Python CLI — from any API specification. One command.

git clone https://github.com/jn-divyank/api-skill-creator
cd api-skill-creator
pip install pyyaml  # only dependency

It accepts OpenAPI 3.x, Swagger 2.0, Postman collections, and plain HTML docs pages. No external API calls. Works offline.

Before vs after: the manual six-step process vs one command

Before: 4–6 hours per API. After: 10 seconds.


Seeing it work: the Open-Meteo example

Let's run it against a real API. Open-Meteo is a free weather API with no authentication required — no account, no API key. They publish an official openapi.yml directly in their GitHub repo.

python create_skill.py \
  --spec https://raw.githubusercontent.com/open-meteo/open-meteo/main/openapi.yml

That's it. No tokens. No setup. Run it and watch.

In the ./output/open-meteo/ directory, you get:

open-meteo/
├── SKILL.md        # Claude's playbook for this API
├── open_meteo_cli.py   # Runnable Python CLI
└── README.md       # Usage guide

The generated SKILL.md groups endpoints by resource, includes auth detection (none needed here), and writes request examples directly from the spec's schema. The check command runs a connectivity test before you build anything.

Now drop the skill into your agent:

cp -r output/open-meteo ~/.claude/skills/

And ask Claude Code: "Get me the 7-day weather forecast for Bangalore in metric units."

The agent loads the skill, understands the endpoint, formats the request, and runs it. No back-and-forth. No explaining what a query parameter is.
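Under the hood, "formats the request" is mundane: the CLI maps the question onto the spec's query parameters. A rough sketch of the forecast URL the agent ends up building (the parameter names are Open-Meteo's real ones; the helper function is my own illustration, not the generated CLI's code):

```python
from urllib.parse import urlencode

BASE = "https://api.open-meteo.com/v1/forecast"

def build_forecast_url(latitude, longitude, days=7):
    """Build a daily-forecast request URL. Bangalore is roughly (12.97, 77.59)."""
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "daily": "temperature_2m_max,temperature_2m_min",
        "forecast_days": days,
        "timezone": "auto",  # resolve the timezone from the coordinates
    }
    return f"{BASE}?{urlencode(params)}"

print(build_forecast_url(12.97, 77.59))
```

Trivial to write once, tedious to re-derive from the docs every session. That is exactly the work the skill takes off your plate.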

That's the shift. You stopped being the API translator. The agent took over.


It also works when there's no OpenAPI spec

Not every API publishes a machine-readable spec. For those cases, pass a docs URL instead:

python create_skill.py --url https://restcountries.com

The tool scrapes the page, identifies endpoints, infers HTTP methods and parameters, and generates a skill from the HTML. Same output format. Same result.
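The inference step is heuristic by nature. A toy regex-based sketch of the idea (the actual parser in the repo is much richer than this; the sample paths below mimic restcountries-style endpoints):

```python
import re

# Pull URL paths that look like versioned REST endpoints out of raw HTML.
# Optional leading HTTP method, then a /vN.N/... path.
ENDPOINT_RE = re.compile(r"(?:GET|POST|PUT|DELETE)?\s*(/v[\d.]+/[\w/{}.-]+)")

def infer_endpoints(html: str) -> list[str]:
    """Return the unique endpoint-like paths found in a docs page."""
    return sorted(set(ENDPOINT_RE.findall(html)))

sample = """
<code>GET /v3.1/name/{name}</code>
<code>GET /v3.1/alpha/{code}</code>
"""
print(infer_endpoints(sample))
```

Heuristics like this are also why throwing odd real-world docs pages at the tool is useful: each one sharpens the extraction.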

Your internal Swagger UI works the same way — point --url at it.

[Diagram: every input type (OpenAPI, Postman, HTML docs, internal Swagger) flows into the tool and outputs one skill]


One skill, every agent

Here's what most people miss about the Agent Skills format: it's not a Claude-only thing.

The SKILL.md format is an open standard. The same file you generate with this tool works across:

  • Claude Code — Anthropic's terminal-native coding agent
  • Cursor — loads skills from .claude/skills/ automatically
  • Codex CLI — OpenAI's agent reads the same format
  • OpenCode — open-source agent, native SKILL.md support
  • Antigravity — Google's agent tooling
  • GitHub Copilot — via compatible skill loaders

You generate the skill once. Your whole team uses it. In whatever agent they prefer.

[Diagram: one SKILL.md in the center, six agents around it]

This matters because teams aren't mono-agent. The senior engineer uses Claude Code. The new hire uses Cursor. The platform team runs Codex in CI. One skill covers everyone.


What you can actually automate now

Once generating a skill is a 10-second task, the question shifts from "can I automate this?" to "what's worth automating?"

Operational API workflows. Any repetitive sequence — trigger a job, poll for status, fetch a result, update a record — that you've been running manually. The agent now owns that flow end to end.
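That trigger, poll, fetch sequence is exactly what a bundled CLI makes deterministic. A generic sketch of the loop, with the three API calls as stand-in functions (your real skill would wire these to actual endpoints):

```python
import time

def run_job_to_completion(trigger, get_status, fetch_result,
                          poll_interval=2.0, timeout=300.0):
    """Generic trigger -> poll -> fetch loop.
    `trigger`, `get_status`, and `fetch_result` stand in for real API calls."""
    job_id = trigger()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status == "succeeded":
            return fetch_result(job_id)
        if status == "failed":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```

Because the loop lives in a script rather than in the agent's reasoning, it runs the same way every time and costs zero context tokens while it waits.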

SDLC and CI/CD pipelines. Teach the agent your Jenkins API. It can trigger builds, check pipeline status, retry failed stages, and report results — all through the skill's bundled CLI. Same for any CI system that exposes an API.

Deployment and infrastructure operations. Point the tool at your cloud provider's API (AWS, GCP, or their internal tooling). The agent can describe running services, trigger deployments, check health endpoints, and coordinate rollbacks — as a skill-driven workflow, not a one-off conversation.

Internal APIs your team already owns. Generate a skill from your internal Swagger UI. The agent now understands your own platform without you explaining endpoints every session. Commit the skill to .claude/skills/ in your repo — everyone on the team gets it.

API exploration. Curious about a new service? Generate a skill in 10 seconds, ask the agent to walk you through endpoints. Faster than reading the docs yourself.

Team standardization. Every external service your team touches gets the same skill format, the same auth handling, the same structure. Consistency without the effort.


API changed? Regenerate, not rewrite

When an API updates, you run the command again:

python create_skill.py --spec ./stripe.json
# → SKILL.md already exists. Showing diff...
# → 3 endpoints added, 1 parameter changed. Overwrite? [y/n]

The tool diffs against your existing skill and shows what changed before overwriting. You review, confirm, done. The work that used to mean an afternoon is now a 30-second prompt.
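The diff itself is conceptually simple: compare the endpoint maps extracted from the old skill and the new spec. A sketch of the idea (not the tool's actual implementation; the Stripe-flavored paths are illustrative):

```python
def diff_endpoints(old: dict, new: dict):
    """Compare two {(method, path): params} maps from different spec versions."""
    added = sorted(k for k in new if k not in old)
    removed = sorted(k for k in old if k not in new)
    changed = sorted(k for k in new if k in old and new[k] != old[k])
    return added, removed, changed

old = {("GET", "/charges"): ["limit"], ("POST", "/charges"): ["amount"]}
new = {("GET", "/charges"): ["limit", "expand"], ("POST", "/refunds"): ["charge"]}
print(diff_endpoints(old, new))
```

Surfacing added, removed, and changed endpoints before overwriting is what keeps regeneration safe: you see the blast radius before you confirm.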


Get started

The project is open source under the MIT license.

GitHub: github.com/jn-divyank/api-skill-creator

The fastest way to try it: clone the repo, run the Open-Meteo command above. No token required. The output drops into your agent in under a minute.

If you hit something — an edge case, a spec format that breaks, an output that's off — open an issue. The more real-world specs get thrown at it, the better the parser gets.


I'm a Senior Software Engineer working on AI integration across the SDLC — multi-agent orchestrators, Claude Code adoption, and the kind of tooling that lets agents do the work I used to do by hand. If this was useful, follow for more.
