Federico Neri

Originally published at wiggum.app

What Is Wiggum CLI? The Autonomous Coding Agent That Ships Features While You Sleep

The short version

Wiggum CLI is an open-source command-line tool that turns feature requests into shipped code — autonomously. You describe what you want in plain English, Wiggum interviews you to nail down the details, generates an implementation-ready spec, then hands it to a coding agent (Claude Code, Codex, or any CLI-based agent) to execute in an autonomous loop.

Install it in one line:

npm install -g wiggum-cli

Why another AI coding tool?

Most AI coding tools help you write code faster while you're sitting at the keyboard. Copilot autocompletes your lines. Cursor helps you refactor. They're great at accelerating your workflow.

Wiggum operates at a different level. Instead of helping you write code, it helps you specify what needs to be built — then builds it without you.

The difference is like giving someone driving directions turn-by-turn versus handing them a detailed itinerary and letting them navigate. Wiggum generates the itinerary, and your coding agent drives.

How it works

Wiggum has three commands that form a complete pipeline:

1. Scan with wiggum init

Point Wiggum at any project and it auto-detects your tech stack — frameworks, languages, file structure, conventions. Zero config. This context feeds into every spec and loop to ensure generated code actually fits your project.

wiggum init

It produces a structured context file that captures how your project works, not just what files exist.
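For illustration, such a context file might capture details along these lines (the fields and values here are hypothetical, not Wiggum's actual output format; check the repository docs for the real shape):

```
# Project context (illustrative sketch)
Stack: Next.js 14 + TypeScript
Package manager: pnpm
Tests: vitest, colocated as *.test.ts
Conventions: feature folders under src/features/; API routes validate input
```

The point is that specs and loops get grounded in how your project is actually organized, not generic boilerplate.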

2. Interview with wiggum new

Describe what you want to build. Wiggum doesn't just take your description at face value — it interviews you using AI that understands your codebase. It asks clarifying questions about edge cases, design decisions, and tradeoffs you might not have considered.

The output is a detailed, implementation-ready spec in markdown — structured so any coding agent can execute it consistently.
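As a hypothetical example (not an actual Wiggum template), a spec for a small feature might be structured along these lines:

```
# Spec: Add CSV export to the reports page

## Context
Reports live under src/features/reports/ and fetch rows via getReportRows().

## Requirements
1. Add an "Export CSV" button to the report toolbar.
2. Quote fields containing commas or quotes (RFC 4180 style).

## Edge cases
- Empty result set: still download a file containing only the header row.

## Acceptance criteria
- Unit tests cover the quoting rules and the empty-result case.
```

Every file path, function name, and requirement above is invented for illustration; the value of the interview step is that these details get pinned down before any code is written.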

3. Execute with wiggum run

This is where the Ralph loop kicks in. Wiggum hands your spec to a coding agent and runs autonomous plan-implement-test-verify-PR cycles. Each phase has checkpoints. The agent plans the implementation, writes the code, runs tests, verifies everything works, and opens a pull request.

You can monitor progress in the TUI, background the process, and come back later to review the results.
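Conceptually, the loop is a gated sequence of phases, where a failing checkpoint halts the run instead of producing a pull request. Here is an illustrative sketch in shell, not Wiggum's actual implementation; the phase names come from the description above, and `run_phase` is a hypothetical stand-in for handing the spec to the coding agent:

```shell
#!/bin/sh
set -e  # any failing phase aborts the whole run

run_phase() {
  # Hypothetical stand-in for invoking the coding agent for one phase.
  # In a real run, a nonzero exit here would stop the loop before the PR.
  echo "checkpoint passed: $1"
}

for phase in plan implement test verify pr; do
  run_phase "$phase"
done
echo "done: pull request opened"
```

The checkpoints are the important part: each phase must succeed before the next one starts, which is what separates this from "run the agent and hope."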

What makes Wiggum different

Spec-first architecture. Most autonomous coding tools skip the specification step. They take a vague description and start writing code immediately. Wiggum forces a structured specification process — the AI interview — before any code gets written. This dramatically improves output quality because the agent has clear, unambiguous instructions.

Codebase-aware context. The init scan means Wiggum understands your project's patterns, conventions, and dependencies. Generated specs reference your actual file structure and coding style, not generic best practices.

Agent-agnostic execution. Wiggum generates specs that work with any CLI-based coding agent. It's been tested with Claude Code and Codex, but the specs are just markdown — any agent that can read a file and execute code can use them.

The Ralph loop. Named after the Ralph loop technique by Geoffrey Huntley, this is the autonomous execution engine. It's not just "run the agent until it's done" — it's a structured multi-phase loop with plan, implement, test, verify, and PR review stages.

Who it's for

Wiggum is for developers who want to ship features faster by delegating implementation to AI agents — without sacrificing code quality. If you're comfortable reviewing pull requests but want to spend less time writing boilerplate, Wiggum is for you.

It works best when you know what you want to build but want to automate the how.

Getting started

npm install -g wiggum-cli

Then, in your project directory:

wiggum init
wiggum new
wiggum run

That's it. Three commands from zero to pull request.

The CLI is free and open source. You bring your own API keys. Pro plans add managed keys, a web dashboard, and push notifications — but the core tool is yours to use without limits.
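Bringing your own keys typically means exporting whatever credentials your chosen agent already uses; for example, Claude Code reads ANTHROPIC_API_KEY and Codex uses OPENAI_API_KEY. A sketch (placeholder values, and which variables you actually need depends on the agent you pair with Wiggum):

```shell
# Placeholder values; substitute your real keys.
export ANTHROPIC_API_KEY="sk-ant-..."   # if pairing with Claude Code
export OPENAI_API_KEY="sk-..."          # if pairing with Codex
```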

Check out the GitHub repository for full documentation, or visit wiggum.app to see the roadmap and pricing.

Top comments (2)

Ali-Funk

I get nervous when I hear "ships while you sleep". For me, that usually means "pages me while I sleep" because of edge cases.

However, the spec-first approach is the real killer feature here.
It differentiates your tool from the others. Those tools are plentiful, but this is a feature I haven't seen or heard of before.

Most AI tools skip the requirements engineering and jump straight to coding, which creates what I call "technical debt at light speed."
Forcing an interview phase to generate a Markdown spec BEFORE execution is exactly how senior engineers work.

Good move on enforcing structure over speed.

Federico Neri

Thanks @alifunk, "technical debt at light speed" is perfect; I might steal that.
And your concern about "pages me while I sleep" is totally valid; that's exactly why the interview phase exists. Catch the ambiguity before it becomes a 3 am incident.

The spec-first approach came out of necessity. I come from product, not engineering, so when I started building with AI agents I kept running into exactly what you described: fast output, wrong result. I tried to translate how senior engineers actually work into a repeatable, plug-and-play process.

The next step is making the agent also an orchestrator. Breaking features into dependency-aware sub-tasks with human-in-the-loop checkpoints. Moving from "execute this spec" to "manage this project."

If you ever get a chance to try it, I'd genuinely love your feedback, especially from someone who thinks about this the right way. Fair warning: it's still early and needs polish, but the core flow works, and I'd value an engineer's perspective on where it falls short.