Jarvis Specter
Most "AI agents" are just expensive workflows in disguise

The word "agent" is being applied to everything from a scheduled cron job to a genuinely autonomous reasoning system. This matters because the two require fundamentally different architecture — and most teams are paying reasoning-model prices for problems that don't need reasoning.

Here's how to tell them apart.

The Actual Difference

A workflow is a deterministic sequence of steps. Every time you run it, given the same input, it produces the same output. There's no decision-making happening — just execution. An email that triggers a sequence of API calls is a workflow. A Zapier zap is a workflow. A Python script that scrapes data and puts it in a spreadsheet is a workflow.

An agent is a reasoning loop. It observes the environment, decides what to do next, takes action, observes the result, and adapts. The key word is adapts. If the next step is always predetermined, it's not an agent — it's a workflow with extra steps and a higher API bill.
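The loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the `llm_decide` function, the tool registry, and the step budget are all hypothetical placeholders standing in for whatever model call and tools your system actually uses.

```python
def run_agent(goal, tools, llm_decide, max_steps=10):
    """Observe -> decide -> act loop. Stops when the model says 'done'
    or the step budget runs out."""
    observations = []
    for _ in range(max_steps):
        # The model picks the next action based on everything observed so far.
        # This is exactly the step a workflow replaces with a fixed sequence.
        action = llm_decide(goal, observations)
        if action["name"] == "done":
            return action.get("result")
        # Execute the chosen tool and feed the result back in -- the "adapts" part.
        result = tools[action["name"]](**action.get("args", {}))
        observations.append({"action": action["name"], "result": result})
    return None  # budget exhausted without reaching the goal
```

If `llm_decide` ignores `observations` and always returns the same sequence of actions, you have written a workflow with a very expensive scheduler.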

The test: could you replace the "AI" in your system with if/else logic and get the same result? If yes, you have a workflow.

When You Actually Need an Agent

Agents earn their cost when the task involves:

  1. Ambiguity — the right next step depends on what was observed, not what was planned
  2. Error recovery — when something goes wrong, the system needs to diagnose and adapt, not just fail
  3. Novel situations — inputs that couldn't have been anticipated when the workflow was designed
  4. Multi-step planning — achieving a goal requires sequencing actions where each step depends on the result of the last

Concrete examples that actually need agents:

  • "Go through my inbox and decide what needs my attention" (ambiguity, novel inputs)
  • "Book me a flight but handle whatever edge cases the airline site throws at you" (error recovery, adaptation)
  • "Monitor this service and resolve the common issues autonomously" (multi-step, novel situations)

Concrete examples that don't:

  • "Send a Slack message when a new row is added to this spreadsheet" — workflow
  • "Summarise this document" — single LLM call, not an agent
  • "Check if this email is from a known domain and route it accordingly" — classification, not reasoning
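The last example makes the if/else test concrete: the "AI" can be replaced entirely by a lookup. A hypothetical sketch, with a placeholder allowlist and queue names:

```python
# Deterministic routing -- no model call, no reasoning loop.
KNOWN_DOMAINS = {"example.com", "partner.io"}  # illustrative allowlist

def route_email(sender: str) -> str:
    """Route mail from known domains to the internal queue, everything else to triage."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return "internal-queue" if domain in KNOWN_DOMAINS else "triage-queue"
```

Same result as the "agent" version, at zero inference cost, and it is trivially testable.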

The Cost Mistake

Most teams building "agents" are running reasoning models on tasks that don't require reasoning. This is expensive and slow.

The fix: two-tier routing.

Does this task require reasoning? (novel, ambiguous, multi-step)

  • YES → expensive reasoning model (Claude Sonnet, GPT-4o)
  • NO → cheap or local model (Haiku, GPT-4o-mini, Ollama)
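A minimal sketch of that router. The model names and the `needs_reasoning` heuristic are illustrative assumptions; in practice the flags would come from whatever metadata your task queue already carries.

```python
REASONING_MODEL = "claude-sonnet"  # expensive tier (placeholder name)
CHEAP_MODEL = "claude-haiku"       # cheap/local tier (placeholder name)

def needs_reasoning(task: dict) -> bool:
    """Route up only when the task is novel, ambiguous, or multi-step."""
    return bool(task.get("novel") or task.get("ambiguous") or task.get("steps", 1) > 1)

def pick_model(task: dict) -> str:
    return REASONING_MODEL if needs_reasoning(task) else CHEAP_MODEL
```

The point is that the routing decision itself is cheap, deterministic code. The expensive model only ever sees tasks that survive the filter.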

In our fleet of 23 agents, roughly 70% of calls go to cheaper/local models. Status checks, format transformations, log parsing, heartbeat acknowledgements — none of these need a PhD-level model. The expensive model sees hard problems. Costs dropped significantly without any quality degradation on the reasoning tasks.

When Workflows Beat Agents

This is the underrated point: for predictable, deterministic tasks, a workflow is better than an agent.

Workflows are:

  • Faster — no reasoning loop overhead
  • Cheaper — no expensive model calls
  • More reliable — deterministic means testable, auditable, debuggable
  • Easier to explain — you can show the exact sequence to a non-technical stakeholder

An agent that could handle ambiguity but is being used on a completely predictable task is just a slow, expensive workflow.

The right architecture uses both: agents handle the parts that require judgment, workflows handle the parts that don't. They're not alternatives — they're different tools for different jobs.
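That hybrid shape can be sketched as a deterministic pipeline with a single delegated judgment call. Everything here is hypothetical (the ticket shape, the queue names, the `classify_with_agent` callable), but it shows the division of labour: plain code for the predictable steps, a model only where a decision cannot be predetermined.

```python
def process_ticket(ticket: dict, classify_with_agent) -> dict:
    """Deterministic workflow with exactly one judgment step delegated out."""
    # Deterministic: parse and validate. No model needed.
    fields = {"id": ticket["id"], "body": ticket.get("body", "").strip()}
    if not fields["body"]:
        return {"id": fields["id"], "queue": "invalid"}
    # The one genuine judgment call: which team handles an arbitrary request?
    queue = classify_with_agent(fields["body"])
    # Deterministic again: the routing itself is a plain return, not a decision.
    return {"id": fields["id"], "queue": queue}
```

Swapping `classify_with_agent` for a stub makes the whole pipeline unit-testable, which is exactly the reliability benefit listed above.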

A Decision Framework

Match the task to the cheapest tool that handles it:

  • Single transformation (summarise, classify, extract) → one model call
  • Predictable, repeatable sequence of steps → workflow
  • Ambiguous, novel, or adaptive multi-step goal → agent

Most "AI agent" projects belong in the first or second category. That's not a failure; it's the right tool for the job. The expensive reasoning loop is only justified when you genuinely need it.

The Right Default

Start deterministic. Add reasoning loops only where you can identify a specific type of ambiguity or adaptation the deterministic version can't handle.

"We should use an agent" is not a reason to use an agent. "This step requires the system to decide X based on Y, which can't be predetermined" is a reason.

The word "agent" has gotten ahead of the reality. Most teams would be better served by fewer agents and better workflows — and using the saved budget on the genuine reasoning problems that actually justify the cost.


If you're building multi-agent systems, check out Mission Control OS — we've been running it in production for a year: https://jarveyspecter.gumroad.com/l/pmpfz
