Yogesh VK

Posted on • Originally published at Medium

Using AI to Explain Terraform Plans to Humans

Turning raw infrastructure diffs into decisions engineers can actually understand.

INTRODUCTION

Terraform plans are incredibly precise. They show every resource change, attribute modification, and dependency update that will occur during an apply.
But precision is not the same as clarity.
For many engineers reviewing infrastructure changes, Terraform plans feel more like a wall of text than a meaningful explanation of what is about to happen. The information is there, but extracting the real implications often requires experience and careful reading.

This is exactly where AI can become useful. Not by executing infrastructure changes, but by translating Terraform plans into something humans can reason about.

THE PROBLEM WITH RAW TERRAFORM PLANS

Terraform's plan output is designed for correctness, not readability.
It faithfully lists changes such as resource replacements, attribute updates, and dependency adjustments. While this is ideal for machines and precise workflows, it can make reviews difficult for humans, especially in larger environments.
A simple plan might include:

  • hundreds of attribute updates
  • nested resource changes
  • implicit dependencies across modules

What reviewers actually want to know is much simpler:

  • What changed?
  • Why does it matter?
  • Is the risk acceptable?

Terraform itself does not answer those questions.

WHERE HUMAN REVIEW BREAKS DOWN

Experienced engineers eventually develop an instinct for reading Terraform plans. They scan for dangerous signals:

  • resource replacements
  • subnet or network changes
  • IAM policy expansions
  • scaling changes in compute clusters

But this intuition takes time to build, and even experienced reviewers can miss subtle interactions when reviewing large changes late in the day or under delivery pressure.
The real problem isn't lack of information. It's cognitive load.

Terraform tells us everything. Humans only need to understand the important parts.

WHY AI IS GOOD AT THIS PROBLEM

AI models are particularly good at summarizing structured text and identifying patterns.
A Terraform plan contains many signals that AI can interpret effectively:

  • which resources will be created, updated, or destroyed
  • whether replacements will occur
  • potential cost changes
  • security-sensitive modifications
  • large blast-radius changes

Instead of forcing humans to parse hundreds of lines of output, AI can produce a concise summary describing the operational impact.
This transforms the Terraform plan from a raw diff into an explanation.
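These signals are already machine-readable: `terraform show -json` emits the plan as structured JSON, so the first step of any AI summary is simply extracting who gets created, updated, replaced, or destroyed. Here is a minimal sketch of that extraction; the sample plan is a trimmed, hypothetical fragment (real plan JSON carries many more fields):

```python
import json

# Hypothetical, trimmed sample of `terraform show -json` output.
SAMPLE_PLAN = """
{
  "resource_changes": [
    {"address": "aws_eks_node_group.workers", "change": {"actions": ["delete", "create"]}},
    {"address": "aws_iam_role_policy.app", "change": {"actions": ["update"]}},
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}}
  ]
}
"""

def classify_changes(plan_json: str) -> dict:
    """Group resource addresses by effective action (create/update/replace/delete)."""
    plan = json.loads(plan_json)
    groups = {"create": [], "update": [], "replace": [], "delete": []}
    for rc in plan.get("resource_changes", []):
        actions = rc["change"]["actions"]
        if actions == ["no-op"]:
            continue
        # A change listing both "delete" and "create" is a replacement.
        if "create" in actions and "delete" in actions:
            groups["replace"].append(rc["address"])
        elif "delete" in actions:
            groups["delete"].append(rc["address"])
        elif "create" in actions:
            groups["create"].append(rc["address"])
        elif "update" in actions:
            groups["update"].append(rc["address"])
    return groups

if __name__ == "__main__":
    for action, addresses in classify_changes(SAMPLE_PLAN).items():
        if addresses:
            print(f"{action}: {', '.join(addresses)}")
```

Feeding the model this pre-grouped view, rather than raw diff text, keeps the summary grounded in what the plan actually says.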

AI AS A REVIEW ASSISTANT IN CI/CD

A practical place to integrate this capability is within CI/CD pipelines.
After generating a Terraform plan, a pipeline step can feed the plan output into an AI model. The model then produces a human-readable summary that is attached to the pull request.
Instead of reviewing raw plan text alone, engineers see a structured explanation such as:

Risk Summary: This change replaces the EKS node group, which will trigger a rolling replacement of worker nodes.
Security Impact: No IAM policies were expanded.
Cost Impact: Estimated monthly increase of approximately $120 due to the larger instance size.
Operational Notes: Node replacement may temporarily reduce cluster capacity during rollout.

This type of explanation does not replace the Terraform plan. It simply helps humans understand it faster.

USING GITHUB ACTIONS FOR AI-ASSISTED PLAN REVIEWS

GitHub Actions provides a natural place to implement this pattern.
A typical pipeline already includes steps like formatting, validation, and plan generation. Adding an AI analysis step is straightforward and can operate entirely in read-only mode.
The workflow might look like:

  • Run terraform plan
  • Export plan output as JSON
  • Send plan summary to an AI model
  • Post a structured explanation as a pull request comment

The key point is that the AI does not change infrastructure or execute Terraform commands. It only interprets the plan and produces a human-readable summary.
This keeps the decision-making process firmly in human hands.
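The read-only step amounts to building a text prompt from the exported plan JSON. A minimal sketch of that prompt construction is below; the template wording and the 20,000-character truncation limit are illustrative choices, and the actual model call (via your provider's SDK) is deliberately omitted, since only text ever leaves this step:

```python
import json

# Illustrative prompt template — adjust the sections to your team's review checklist.
PROMPT_TEMPLATE = """You are reviewing a Terraform plan for a pull request.
Summarize the operational impact for a human reviewer. Cover:
- Risk summary (replacements, deletions)
- Security impact (IAM / permission changes)
- Cost impact (instance sizes, counts)
- Operational notes (capacity, rollout effects)

Plan (JSON):
{plan}
"""

def build_prompt(plan_json: str, max_chars: int = 20000) -> str:
    """Compact the plan JSON and truncate it to fit a model context window."""
    compact = json.dumps(json.loads(plan_json), separators=(",", ":"))
    return PROMPT_TEMPLATE.format(plan=compact[:max_chars])
```

The pipeline then posts the model's response as a pull request comment (for example with `gh pr comment`), and nothing in the step ever invokes Terraform itself.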

WHY THIS IMPROVES INFRASTRUCTURE SAFETY

When infrastructure reviews fail, it is rarely because Terraform produced incorrect output.
Failures occur because reviewers misinterpret the impact or miss important signals hidden within large plans.
AI-assisted explanations reduce that risk by highlighting the kinds of changes humans care about most:

  • replacements
  • deletions
  • network changes
  • permission expansions
  • scaling adjustments

The AI becomes a second set of eyes, helping reviewers focus their attention where it matters.
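Much of this highlighting can even be done deterministically before the model sees anything, so the riskiest changes are never lost in the summary. A hedged sketch: flag changes by simple keyword rules on the resource address. The keyword lists here are illustrative, not exhaustive:

```python
# Illustrative risk rules keyed on substrings of the resource address.
RISK_RULES = {
    "permission expansion": ("iam", "policy", "role"),
    "network change": ("subnet", "security_group", "route", "vpc"),
    "scaling adjustment": ("autoscaling", "node_group", "instance_count"),
}

def flag_risks(resource_changes):
    """Return (address, risk label) pairs for changes matching a risk rule."""
    flags = []
    for rc in resource_changes:
        address = rc["address"].lower()
        actions = rc["change"]["actions"]
        # Any delete — including the delete half of a replacement — is flagged.
        if "delete" in actions:
            flags.append((rc["address"], "deletion or replacement"))
        for label, keywords in RISK_RULES.items():
            if any(k in address for k in keywords):
                flags.append((rc["address"], label))
    return flags
```

Rule-based flags like these give the AI summary a floor: even if the model underplays a change, the flagged items still surface in the comment.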

THE IMPORTANT BOUNDARY

Even though AI can interpret plans effectively, it should never be allowed to execute them.
Running terraform apply still requires human ownership and operational judgment. AI can explain consequences, but it cannot decide whether those consequences are acceptable.
That boundary is what keeps AI useful rather than dangerous.

CLOSING THOUGHT

Terraform already tells us what will change.
AI helps answer the more useful question: What does this change actually mean?

By turning raw infrastructure diffs into clear explanations, AI allows DevOps teams to review changes faster, understand risk better, and make more confident decisions.
And that is exactly where AI belongs in infrastructure workflows - helping humans think more clearly, not replacing their judgment.

How does your team review Terraform plans today - raw output, custom tooling, or something smarter?
