Jefree Sujit
From Workflows to Autonomous Agents: How AI Agents Actually Work

When people talk about AI agents, the conversation usually jumps straight to tools, frameworks, or demos. But in real systems, the most important question isn’t what model you’re using — it’s how decisions are made and in what order actions happen.

Some systems follow a strict sequence.
Others are given a goal and decide the steps on their own.

Knowing this difference is the key to understanding how AI agents actually work.

This article is Part II of a three-part series on AI Agents.
In Part I, we explored what an AI agent is and how agents differ from simple prompts.
Here, we focus on how agents are structured and controlled: through workflows or through autonomy. If you haven't read Part I yet, it's worth starting there.

You can start with this article on its own, but it builds naturally on the ideas introduced earlier.

A Simple Analogy Before the Tech

Imagine two ways of getting dinner made.

In the first, you follow a recipe.
Every step is written down. You chop first, heat later, season at a specific moment, and cook for an exact duration. If you skip a step or change the order, the dish fails.

In the second, you hire a head chef.
You tell them what kind of dinner you want. They decide what dishes to make, in what order, what can be prepared in parallel, and when to adjust if something isn’t working.

Both approaches can produce great food.
But the control model is completely different.

That difference maps almost perfectly to how workflow agents and autonomous agents operate.

Before we get there, though, we need one foundational concept.

State Machines.

What Is a State Machine?

A state machine is a system that progresses through a series of well-defined states, one at a time, based on rules and transitions.

Each state represents a point in the process.
Each transition represents a condition that allows the system to move forward.

For example, let's take an authentication flow:

Authentication flow state machine

You cannot jump ahead.
You cannot reorder steps.
Every state depends on the previous one.

This model shows up everywhere in software: authentication flows, payment pipelines, order fulfillment, approvals, and booking systems. The reason is simple: many real-world processes depend on strict order.
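The authentication flow above can be sketched as a minimal state machine in TypeScript. The state and event names here are illustrative, not from any framework:

```typescript
// Minimal state machine for an authentication flow.
// State and event names are illustrative.
type AuthState = "LoggedOut" | "CredentialsSubmitted" | "Verified" | "LoggedIn";
type AuthEvent = "submit" | "verify" | "grantSession";

// Each state lists the only event that can move it forward.
const transitions: Record<AuthState, Partial<Record<AuthEvent, AuthState>>> = {
  LoggedOut: { submit: "CredentialsSubmitted" },
  CredentialsSubmitted: { verify: "Verified" },
  Verified: { grantSession: "LoggedIn" },
  LoggedIn: {},
};

function next(state: AuthState, event: AuthEvent): AuthState {
  const target = transitions[state][event];
  if (!target) {
    // Illegal transition: you cannot jump ahead or reorder steps.
    throw new Error(`Invalid transition: ${event} from ${state}`);
  }
  return target;
}
```

Every valid path through this machine visits the states in the same order, which is exactly the guarantee workflow agents inherit.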

Workflow agents are essentially state machines with intelligence embedded into each step.

A Quick Recap: What Is an AI Agent?

As we learnt from the previous article, an AI agent is something that can:

  • Perceive its environment (inputs, memory, context)
  • Decide what to do next (reasoning or logic)
  • Act using tools, APIs, or side effects
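Those three capabilities form a perceive-decide-act loop, sketched here with toy logic. All interfaces and names are illustrative, not from any particular agent framework:

```typescript
// A minimal perceive–decide–act loop. All names are illustrative.
interface Observation { input: string; memory: string[] }
interface Action { tool: string; args: string }

function decide(obs: Observation): Action | null {
  // Toy decision logic: stop once memory records the task as done.
  if (obs.memory.includes("done")) return null;
  return { tool: "search", args: obs.input };
}

function act(action: Action): string {
  // In a real agent this would call a tool or API; here we echo.
  return `result of ${action.tool}(${action.args})`;
}

function runAgent(input: string, maxSteps = 3): string[] {
  const memory: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = decide({ input, memory }); // decide
    if (!action) break;                        // goal met, stop early
    memory.push(act(action));                  // act, then perceive the result
  }
  return memory;
}
```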

What changes between different kinds of agents is who controls the decision flow.

Is the sequence of actions decided upfront by the developer/orchestrator?
Or does the agent decide dynamically based on a goal?

That single difference gives us two major design patterns.

Orchestration-Centric Agents (Workflow Agents)

Workflow agents, also called orchestration-centric agents, operate inside a predefined structure. The flow of execution is known ahead of time.

You decide:

  • Which agents exist
  • The order in which they run
  • Whether steps run sequentially, in parallel, or in loops

The agent itself does not decide what comes next.
The workflow does.

This makes workflow agents highly predictable, easy to debug, and safe for critical systems.

When Workflow Agents Make Sense

Workflow agents work best when the process itself has strong dependencies.

Consider a travel booking assistant. You cannot plan hotels before knowing arrival times. You cannot arrange transport before the destination is finalized. Each step relies on the previous one being completed successfully.

This is not a creativity problem.
It’s a state transition problem.

Workflow agents are ideal when:

  • Order matters
  • Steps depend on earlier outputs
  • Reliability is more important than flexibility
  • You want full control over execution

Workflow Agents in Practice (Google ADK, TypeScript)

Here’s how a workflow agent looks using Google’s Agent Development Kit (ADK) in TypeScript.

import { LlmAgent, SequentialAgent, Runner } from "@google/adk";
import * as dotenv from "dotenv";

dotenv.config();

// Title agent — generates a blog title
const titleAgent = new LlmAgent({
  name: "TitleAgent",
  model: "gemini-2.5-flash",
  instruction: "Generate a catchy title for a blog post about AI agents.",
  outputKey: "title"
});

// Outline agent — creates an outline
const outlineAgent = new LlmAgent({
  name: "OutlineAgent",
  model: "gemini-2.5-flash",
  instruction: "Create a detailed outline for the blog post based on title.",
  outputKey: "outline"
});

// Content agent — writes content from outline
const contentAgent = new LlmAgent({
  name: "ContentAgent",
  model: "gemini-2.5-flash",
  instruction: "Write blog content following the given outline.",
  outputKey: "content"
});

// The workflow agent — runs sub-agents in sequence
const blogWorkflowAgent = new SequentialAgent({
  name: "BlogWorkflow",
  subAgents: [titleAgent, outlineAgent, contentAgent],
});

async function main() {
  const runner = new Runner({ agent: blogWorkflowAgent });

  const response = await runner.run({
    input: { topic: "Understanding Workflow vs Autonomous Agents" },
  });

  console.log("Workflow Result:", response.output);
}

main();

The important thing here is the controlled flow: the sequence and the next steps are defined ahead of time. In some cases a conditional edge may route to more than one agent, but control still lives in the orchestrator.

That’s orchestration-centric design.
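A conditional edge can be sketched generically, outside any framework, as a routing function the orchestrator owns. The branch logic lives in the workflow, not in the agents (names here are illustrative):

```typescript
// Generic sketch of an orchestrator-owned conditional edge.
// Agent names are illustrative.
type Route = "contentAgent" | "clarifyAgent";

// The orchestrator, not the agents, decides which branch runs next:
// a non-empty outline goes to content writing, otherwise we ask for
// clarification first.
function routeAfterOutline(outline: string): Route {
  return outline.trim().length > 0 ? "contentAgent" : "clarifyAgent";
}
```

Even with branching, every possible path is known before execution starts, which keeps the system debuggable.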

Autonomy-Centric Agents (Autonomous Agents)

Autonomous agents flip the model.

Instead of being told how to proceed, they are told what outcome to achieve. The agent then decides which actions to take, in what order, and whether to retry or adjust.

This is autonomy-centric design.

The agent becomes a coordinator. It may invoke sub-agents, route tasks dynamically, loop on failures, or stop early if the goal is met.

When Autonomous Agents Make Sense

Autonomous agents shine when the order of actions is flexible and the goal matters more than the path.

Take content creation as an example. Writing a blog does not require a strict sequence. You can generate a title, outline, images, and content in any order. Some tasks can happen in parallel. Some may need refinement.

This is not a state machine problem.
It’s a goal resolution problem.

Autonomous agents are well suited when:

  • Tasks can be done in any order
  • Parallel execution is beneficial
  • Exploration or creativity is involved
  • The agent needs to adapt mid-execution

Autonomous Agent Example (Google ADK, TypeScript)

Below is a more realistic autonomous setup using multiple agents coordinated by a parent controller.

Specialized Sub-Agents

import { LlmAgent } from "@google/adk";

export const titleAgent = new LlmAgent({
  name: "TitleAgent",
  model: "gemini-2.5-flash",
  instruction: "Generate a clear, catchy blog title.",
  outputKey: "title"
});

export const outlineAgent = new LlmAgent({
  name: "OutlineAgent",
  model: "gemini-2.5-flash",
  instruction: "Create a structured blog outline.",
  outputKey: "outline"
});

export const imageAgent = new LlmAgent({
  name: "CoverImageAgent",
  model: "gemini-2.5-flash",
  instruction: "Suggest cover image ideas.",
  outputKey: "images"
});

export const contentAgent = new LlmAgent({
  name: "ContentAgent",
  model: "gemini-2.5-flash",
  instruction: "Write the full blog content.",
  outputKey: "content"
});

export const reviewAgent = new LlmAgent({
  name: "ReviewAgent",
  model: "gemini-2.5-flash",
  instruction: "Review and suggest improvements.",
  outputKey: "review"
});

Parent (Controller) Agent

import {
  titleAgent,
  outlineAgent,
  imageAgent,
  contentAgent,
  reviewAgent
} from "./agents";

import { LlmAgent } from "@google/adk";

const autonomousBlogController = new LlmAgent({
  name: "AutonomousBlogController",
  model: "gemini-2.5-flash",
  instruction: `
You are a parent autonomous agent.

Goal:
Produce a complete blog package with a title, outline, images, content, and review.

Rules:
- Decide which agent to call next.
- Retry or refine outputs if needed.
- Order is not fixed.
- Stop only when the blog is complete and satisfactory.
`,
  tools: [
    titleAgent,
    outlineAgent,
    imageAgent,
    contentAgent,
    reviewAgent
  ],
  outputKey: "finalBlog"
});

await autonomousBlogController.execute({
  input: {
    topic: "Workflow Agents vs Autonomous Agents"
  }
});

Here, there is no predefined sequence. The sub-agents are defined, but what happens when is decided not by an orchestrator but by the parent agent itself. The agent reasons about what to do next and adapts as it goes.

That’s autonomy.

Workflow vs Autonomous: A Clear Comparison

Here's a quick comparison between workflow and autonomous agents:

| Aspect | Workflow Agents | Autonomous Agents |
| --- | --- | --- |
| Control flow | Predefined | Dynamic |
| Structure | State machine | Goal-driven |
| Order | Fixed | Decided at runtime |
| Parallelism | Explicit | Emergent |
| Best suited for | Pipelines, business flows | Creative, adaptive tasks |
| Debuggability | High | Moderate |
| Flexibility | Lower | Higher |

Final Thoughts

Workflow agents and autonomous agents aren’t competing ideas. They’re different answers to different problems.

When order, safety, and predictability matter, workflows are the right tool.
When goals, adaptability, and delegation matter, autonomy shines.

Most real systems use both — autonomous agents at the top to decide what needs to be done, and workflow agents underneath to execute critical steps safely.
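That hybrid shape can be sketched generically: an autonomous layer chooses what to do, and a workflow layer controls how each choice is executed. All names here are illustrative:

```typescript
// Illustrative hybrid: an autonomous controller picks the goal,
// and each goal runs through a fixed, ordered workflow.
type Goal = "bookTrip" | "writeBlog";

// Workflow layer: predefined, ordered steps per goal.
const workflows: Record<Goal, string[]> = {
  bookTrip: ["confirmDates", "bookFlight", "bookHotel", "arrangeTransport"],
  writeBlog: ["title", "outline", "content", "review"],
};

function runWorkflow(goal: Goal): string[] {
  // Steps always run in the predefined order.
  return workflows[goal].map((step) => `${goal}:${step}`);
}

// Autonomous layer: decides which goal to pursue (trivially, here).
function controller(request: string): string[] {
  const goal: Goal = request.includes("trip") ? "bookTrip" : "writeBlog";
  return runWorkflow(goal);
}
```

The critical, order-sensitive steps stay inside the workflow, while the flexible decision of which workflow to run stays with the autonomous layer.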

In the next article, we’ll zoom out even further and address a common source of confusion:

AI Agents vs Agentic AI — why they’re not the same thing, and why that distinction matters.
