If you use Claude Code, you can already create an agent file, write some instructions in markdown, and run a multi-step workflow from your terminal. It works. It's simple. And for a lot of developers, that's enough.
So why did I spend months building a desktop app that does the same thing with a drag-and-drop canvas?
Because I watched my teammates stare at a wall of terminal output with no idea what step just failed, which ones already finished, or how much the whole thing cost. And I realized that the people who benefit most from AI automation are often the ones least comfortable setting it up.
The Markdown Way Works. Until It Doesn't.
Here's what running an agent from a markdown file looks like:
```shell
claude --agent agents/my-workflow.md
```
Simple. Clean. And then:
- Step 4 of 8 fails. You start over from the beginning.
- You have no idea how much that run just cost until your invoice arrives.
- Your teammate asks "what does this workflow do?" and you tell them to read a markdown file.
- You want to pause at step 5 for a human review before it continues. Good luck.
- You want steps 3, 4, and 5 to run at the same time. Now you're writing bash scripts.
None of these are dealbreakers if you're comfortable in the terminal. But for a lot of people, they are.
What AgentFlow Adds on Top
AgentFlow is a free desktop app that sits on top of Claude Code. It doesn't replace agent files or the CLI. It gives you a visual layer with things you can't get from a markdown file:
You can see what's happening
Every step is a block on a canvas. When a workflow runs, blocks light up in real time. Blue means running. Green means done. Red means failed. Gray means waiting.
You don't have to read terminal output to figure out where things are. You just look at the canvas.
You can recover without starting over
If step 5 of 8 fails, click Resume from failure. AgentFlow skips everything that already succeeded and picks up from the failure point. With a markdown agent, you'd rerun the whole thing.
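Mechanically, resume-from-failure boils down to checkpointing. Here's a minimal sketch of the idea in TypeScript — not AgentFlow's actual implementation, just the shape of it: record each step that succeeds, and skip those on the next run.

```typescript
// Checkpoint-based resume: skip steps that already succeeded.
// This is an illustrative sketch, not AgentFlow's real code.
type Step = { name: string; run: () => void };

function runWithResume(steps: Step[], completed: Set<string>): void {
  for (const step of steps) {
    if (completed.has(step.name)) continue; // already succeeded: skip
    step.run();                             // may throw on failure
    completed.add(step.name);               // checkpoint the success
  }
}

// Example: the "fix" step fails once, then a resume picks up from it.
const completed = new Set<string>();
const log: string[] = [];
let failOnce = true;
const steps: Step[] = [
  { name: "analyze", run: () => { log.push("analyze"); } },
  {
    name: "fix",
    run: () => {
      if (failOnce) { failOnce = false; throw new Error("boom"); }
      log.push("fix");
    },
  },
  { name: "test", run: () => { log.push("test"); } },
];

try { runWithResume(steps, completed); } catch { /* first run fails at "fix" */ }
runWithResume(steps, completed); // resume: "analyze" is skipped, not rerun
```

After the second call, every step has run exactly once — the "analyze" step doesn't repeat, which is the whole point.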
You know what you're spending
Every run tracks token usage and cost in dollars, broken down by step. There's a dashboard that shows your total spend, your most expensive workflows, and your most expensive steps. You can set a budget limit so it stops automatically before you blow through your API credits.
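To make the budget-limit behavior concrete, here's a sketch of per-step cost accounting with a cutoff. The field names and per-token prices are assumptions for illustration — they're not AgentFlow's real schema or Anthropic's current pricing.

```typescript
// Illustrative per-step cost tracking with a budget cutoff.
// Prices and field names are assumed, not AgentFlow's actual schema.
type StepUsage = { step: string; inputTokens: number; outputTokens: number };

// Hypothetical per-token prices in dollars.
const PRICE = { input: 3 / 1_000_000, output: 15 / 1_000_000 };

function stepCost(u: StepUsage): number {
  return u.inputTokens * PRICE.input + u.outputTokens * PRICE.output;
}

// Stop before a step would push total spend past the budget.
function runWithinBudget(usages: StepUsage[], budget: number): string[] {
  const executed: string[] = [];
  let spent = 0;
  for (const u of usages) {
    const cost = stepCost(u);
    if (spent + cost > budget) break; // budget limit: stop automatically
    spent += cost;
    executed.push(u.step);
  }
  return executed;
}

// With a $1 budget, only the first $0.60 step fits; the second would
// take the total to $1.20, so the run stops before it.
const executed = runWithinBudget(
  [
    { step: "analyze", inputTokens: 100_000, outputTokens: 20_000 }, // $0.60
    { step: "fix", inputTokens: 100_000, outputTokens: 20_000 },     // $0.60
  ],
  1.0,
);
```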
You can pause for human review
Drop an Approval Gate between any two steps. The workflow pauses and waits for you to review what happened so far before continuing. Try doing that with a markdown file.
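Conceptually, an approval gate is just a step whose outcome comes from a human instead of a model. A minimal sketch, with a callback standing in for the person clicking Approve (names are illustrative, not AgentFlow's API):

```typescript
// Sketch of an approval gate between two steps. The approver callback
// stands in for a human reviewing the run so far.
type Approver = () => boolean;

function runWithApprovalGate(
  before: () => string,
  approve: Approver,
  after: () => string,
): { results: string[]; halted: boolean } {
  const results = [before()];          // run everything up to the gate
  if (!approve()) {                    // workflow pauses for human review
    return { results, halted: true };  // rejected: stop here
  }
  results.push(after());               // approved: continue
  return { results, halted: false };
}

// Example: the reviewer approves, so both steps run.
const outcome = runWithApprovalGate(
  () => "fix written",
  () => true, // simulate clicking Approve
  () => "committed",
);
```

In a real app the approver would block on UI input rather than return immediately, but the control flow is the same.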
You can run things in parallel
Drag a Parallel block onto the canvas and put steps inside it. They run at the same time. No bash scripting, no background processes, no wait commands.
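What the Parallel block saves you from scripting is, in effect, fan-out-and-wait. A simplified sketch of that semantics (not AgentFlow's internals):

```typescript
// Fan out steps concurrently and wait for all of them — roughly what a
// Parallel block does for you. Simplified sketch, not AgentFlow's code.
async function runParallel(
  steps: Array<() => Promise<string>>,
): Promise<string[]> {
  // Promise.all starts every step at once and waits for all to finish;
  // if any step rejects, the whole parallel block fails.
  return Promise.all(steps.map((step) => step()));
}
```

Calling `runParallel([lint, test, build])` kicks off all three at once and resolves with their results in order, no `&`, no `wait`, no background-process bookkeeping.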
Your team can understand it without reading your code
Share your workflow with a teammate. They open AgentFlow, see the canvas, and immediately understand what the workflow does just by looking at it. No reading markdown. No figuring out what that bash script is supposed to do.
Who Is This Actually For?
Not everyone needs this. If you're already comfortable writing agent files and running them from the CLI, and you don't care about cost tracking or failure recovery, you're fine without it.
AgentFlow is for:
- People who are new to AI automation. If you've never heard of agent files or pipelines, the visual canvas is a way easier starting point than reading docs about markdown syntax and CLI flags.
- People who want visibility. Seeing your workflow execute step by step on a canvas beats watching text scroll in a terminal.
- Teams where not everyone is technical. A product manager can look at an AgentFlow canvas and understand what it does. They can't do that with a markdown file.
- Anyone who's been burned by a failed run. Restarting a 10-step workflow from scratch because step 7 failed is painful. Resume from failure fixes that.
A Real Example
Here's a workflow I use: Bug Report to Fix
1. AI Task: "Read this bug report and identify the root cause"
2. AI Task: "Write a fix for the bug"
3. Shell: npm test
4. Approval Gate: I review the fix before committing
5. Git: Commit and push
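For a sense of how little structure this workflow actually needs, here's the same five steps as a hypothetical data model. AgentFlow's real schema isn't shown in this post, so the block kinds and fields below are assumptions, not its actual format.

```typescript
// Hypothetical workflow data model for the steps above.
// Block kinds and fields are illustrative, not AgentFlow's real schema.
type Block =
  | { kind: "ai-task"; prompt: string }
  | { kind: "shell"; command: string }
  | { kind: "approval-gate" }
  | { kind: "git"; action: "commit-and-push" };

const bugReportToFix: Block[] = [
  { kind: "ai-task", prompt: "Read this bug report and identify the root cause" },
  { kind: "ai-task", prompt: "Write a fix for the bug" },
  { kind: "shell", command: "npm test" },
  { kind: "approval-gate" },
  { kind: "git", action: "commit-and-push" },
];
```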
I drew this on the canvas in about 30 seconds. I've run it dozens of times. When tests fail at step 3, I click resume and it retries from there instead of re-reading the bug report and rewriting the fix.
Could I write this as a markdown file? Sure. But I wouldn't get the approval gate, the cost tracking, or the ability to resume from step 3 when tests fail.
Getting Started
What you need: Claude Code installed and authenticated. AgentFlow uses Claude Code as its engine.
Grab a pre-built installer for Linux, macOS, or Windows from the releases page: https://github.com/jadessoriano/agent-flow/releases/latest
Or build from source:
```shell
git clone https://github.com/jadessoriano/agent-flow.git
cd agent-flow
npm install
npm run tauri dev
```
Your first workflow in 60 seconds:
1. Open AgentFlow and pick your project folder
2. Click + New
3. Drag an AI Task block onto the canvas, type "Write unit tests for the login function"
4. Drag a Shell block, type npm test
5. Connect them with an arrow
6. Hit Run
That's it. Two steps, automated, with real-time status and cost tracking.
The Tech Stack (If You Care)
- Desktop: Tauri v2 (Rust)
- Frontend: React 19, TypeScript, Tailwind CSS
- Canvas: React Flow
- State: Zustand
- Database: SQLite
- Async: Tokio
Cross-platform. Lightweight. Not Electron.
It's Open Source. Tell Me What's Missing.
AgentFlow is MIT licensed and actively maintained. If you try it and something is confusing, broken, or missing, open an issue. I read all of them.