Goodness Ojonuba

Agentic DevOps: Letting AI Subagents Audit Terraform Infrastructure


What if your DevOps workflow included AI workers that could review your infrastructure the same way a teammate would?

As part of my learning journey with Agentic AI, I’ve been exploring how modern AI systems can move beyond simple prompt-response interactions and begin operating more like structured engineering workflows.

Most agent systems operate using a simple loop:

Gather → Act → Verify

But there is another important idea that makes these systems far more powerful:

Delegation.

Instead of one AI trying to do everything, work can be delegated to specialized AI workers that focus on one responsibility.

In Claude Code, these workers are called subagents.


Skills vs Subagents

Earlier in my project I worked with Skills — reusable slash commands such as:

```
/tf-plan
/tf-apply
/deploy
```

Skills help standardize repeatable workflows and run inside the same conversation context.
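For reference, a command like `/tf-plan` can be defined as a Markdown file in the project. The sketch below assumes Claude Code's `.claude/commands/` convention; the prompt body and tool restrictions shown are illustrative, not copied from my project:

```markdown
---
description: Run terraform plan and summarize the proposed changes
allowed-tools: Bash(terraform plan:*), Read
---
Run `terraform plan` in the current project and summarize the proposed
changes, highlighting any resources that will be destroyed or replaced.
```

Because the command runs in the shared conversation context, its summary can refer back to anything discussed earlier in the session.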

But subagents work differently.

A subagent operates in its own isolated environment, with its own tools and sometimes its own model.

Think of it like assigning work to a specialist engineer instead of asking a general assistant to handle everything.

| Feature | Skills | Subagents |
| --- | --- | --- |
| How they start | Triggered manually with slash commands | Automatically delegated by the main agent |
| Context | Shared conversation context | Isolated context |
| Chat history | Full conversation visible | No chat history |
| Tools | Uses the main agent's tools | Own restricted toolset |
| Model | Uses the session model | Can use a different model |

Rule of thumb

If the task needs conversation context → use a Skill

If the task is self-contained → use a Subagent


The Three Subagents I Added

To experiment with this setup, I added three subagents to my DevOps project.

  • security-auditor: reviews Terraform files and detects potential security risks.
  • tf-writer: generates Terraform infrastructure following best practices.
  • cost-optimizer: analyzes infrastructure configuration for potential cost inefficiencies.

Each subagent focuses on a single responsibility, which keeps the analysis precise and avoids context overload.
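For anyone curious what a subagent looks like on disk, here is a minimal sketch of how security-auditor could be defined, assuming Claude Code's `.claude/agents/` convention (the description, tool list, and prompt are illustrative, not copied from my actual project files):

```markdown
---
name: security-auditor
description: Reviews Terraform files for security risks. Use proactively
  when the user asks for a security audit of infrastructure code.
tools: Read, Grep, Glob
model: sonnet
---
You are a security reviewer for Terraform infrastructure.

When invoked:
1. Scan all *.tf files in the repository.
2. Flag risky configuration (public access, missing encryption,
   disabled logging, missing versioning).
3. Report each finding with the responsible resource, a severity
   level, and a suggested fix.

Never modify files; you are read-only.
```

Note the restricted toolset: giving the agent only Read, Grep, and Glob is what makes it read-only by construction.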


Running the Audit

To test the setup, I gave Claude Code a simple instruction:

Audit my Terraform files for security issues

The interesting part is what happened next.

The main AI agent did not attempt to perform the audit itself.

Instead, it recognized that the request matched the security-auditor subagent and delegated the task automatically.

From that point forward, the subagent handled the entire audit independently.

Issues I Didn’t Notice

One of the most valuable parts of the audit was how it surfaced issues I had overlooked during deployment.

The security-auditor flagged several issues, including:

  • CloudFront access logging disabled
  • No Web Application Firewall (WAF) protection
  • Missing security headers
  • S3 versioning not enabled

Each issue was linked to the specific Terraform resource responsible, along with explanations and suggested fixes.

This level of detail makes infrastructure reviews far easier to understand and act on.
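The fixes for findings like these are typically small Terraform additions. The snippets below are a hedged sketch of how two of the flagged issues (S3 versioning and CloudFront access logging) could be remediated; the resource names `site` and `cdn` and the logging bucket are placeholders, not the actual resources from my project:

```hcl
# Enable versioning on the site bucket (assumes an existing
# aws_s3_bucket.site resource).
resource "aws_s3_bucket_versioning" "site" {
  bucket = aws_s3_bucket.site.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Enable CloudFront access logging by adding a logging_config block
# to the existing distribution.
resource "aws_cloudfront_distribution" "cdn" {
  # ... existing origin, default_cache_behavior, etc. ...

  logging_config {
    bucket          = "my-cf-logs.s3.amazonaws.com"
    include_cookies = false
    prefix          = "cdn/"
  }
}
```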


Why Isolation Matters

Another interesting detail is how the subagent executed the task.

The security-auditor ran in read-only mode with a clean context, focused purely on auditing.

This isolation prevents unintended infrastructure changes and keeps the analysis focused on one responsibility.

In practice, it behaved like a dedicated security reviewer examining Terraform configuration.


Architecture of the Agentic DevOps Workflow


What This Means for DevOps

This small experiment showed me how AI can play a larger role in DevOps workflows.

Instead of using AI only to generate infrastructure code, it can also assist with:

  • Infrastructure auditing
  • Security validation
  • Cost optimization
  • Configuration reviews

When combined with specialized workers like subagents, AI begins to look less like a chatbot and more like a team of automated engineering assistants.


Key Takeaways

Working through this exercise gave me a clearer picture of how agentic workflows can fit into DevOps practices.

A few things stood out:

  • Delegation matters.

    The main agent didn’t attempt to do the security review itself. It delegated the task to a specialized subagent designed for that purpose.

  • Isolation improves safety.

    The security-auditor ran in read-only mode with a clean context, preventing unintended infrastructure changes.

  • Structured output makes reviews easier.

    Instead of vague suggestions, the audit returned categorized findings with severity levels and clear remediation steps.

  • Specialized agents reduce complexity.

    By splitting responsibilities across subagents (security, cost, code generation), the system stays focused and avoids context overload.
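As an illustration of that structure, a single finding can be reported in a shape like the following (the severity label and resource address here are illustrative, not quoted from the actual report):

```markdown
### [HIGH] S3 versioning not enabled
- Resource: aws_s3_bucket.site
- Risk: deleted or overwritten objects cannot be recovered.
- Fix: add an aws_s3_bucket_versioning resource with status "Enabled".
```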

This exercise showed me that AI in DevOps doesn’t have to stop at generating Terraform code.

It can also help review, audit, and improve infrastructure configurations in a structured way.

What I found most valuable wasn’t just the speed of the audit, but the structured way the work was delegated.

The agent handled orchestration, while the subagent focused entirely on the security review — a workflow that fits naturally into how DevOps teams already operate.
