# Terraform used to feel like homework
I actually read the docs. Like, the whole thing. If there's a "See also" link, I click it. If there's a provider page with 14 tabs, I'll go through all of them.
And still... Terraform ground me down.
Not because HCL is "hard." It's because it's specific. It's a thousand tiny "do you remember the exact name of this field?" moments. You can know what you want and still lose 45 minutes to the shape of the syntax.
That's why pairing Terraform with an LLM feels like a small relief. Not "the future is here" relief. More like "cool, I don't have to memorize every argument name in AWS this week" relief.
## Terraform is declarative. LLMs like declarative.
Terraform works best when you think in desired state.
You don't tell it: "create a VPC, then create three subnets, then attach a route table..." You say: "here's a VPC, here are subnets, here are routes." Terraform figures out the plan.
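In HCL, that desired state is just declarations. A minimal sketch (CIDR ranges and resource names are placeholders, not from any real config):

```hcl
# Declare what should exist. Terraform works out the ordering and the plan.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  count      = 3
  vpc_id     = aws_vpc.main.id
  cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
}
```

Notice there's no "then" anywhere. The references between resources give Terraform its dependency graph; you never write the sequencing yourself.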
LLMs map nicely to that because most human requests are also declarative:
- "I need an S3 bucket with versioning and encryption."
- "Put this service in private subnets behind an ALB."
- "Turn on deletion protection."
You can ask an LLM to translate that intent into HCL, and it will usually get you most of the way there quickly. The remaining part is where you check details, line up the resource with your standards, and make sure it won't do something dumb on apply.
Here's the kind of prompt that's genuinely useful:
```text
Generate Terraform for an AWS S3 bucket:
- versioning enabled
- SSE-S3 encryption
- block all public access
- tags: app=crowdwave, env=prod
Use aws provider and output the bucket name.
```
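Roughly what you should get back looks like this. Worth knowing when you review it: since v4 of the AWS provider, versioning and encryption are separate resources, not inline blocks on the bucket. The bucket name here is a made-up example.

```hcl
resource "aws_s3_bucket" "app" {
  bucket = "crowdwave-prod-data" # hypothetical name

  tags = {
    app = "crowdwave"
    env = "prod"
  }
}

resource "aws_s3_bucket_versioning" "app" {
  bucket = aws_s3_bucket.app.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "app" {
  bucket = aws_s3_bucket.app.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256" # SSE-S3
    }
  }
}

resource "aws_s3_bucket_public_access_block" "app" {
  bucket                  = aws_s3_bucket.app.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

output "bucket_name" {
  value = aws_s3_bucket.app.bucket
}
```

If the LLM hands you the old inline `versioning {}` block instead, that's exactly the kind of detail the review pass exists to catch.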
The output won't be magic. But it'll be a solid starting point that saves you from re-typing the same resource blocks for the 200th time.
## The real win: turning "lookup work" into "review work"
Most infra work isn't hard. It's tedious.
You're constantly doing little searches like:
- "What's the argument name for deletion protection again?"
- "Is it `enable_deletion_protection` or `deletion_protection`?"
- "Does this resource want a list or a set?"
- "Which block is nested under which block?"
That's context fatigue. It's not fun. It's paying the tax of exactness.
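The deletion-protection one is a real trap, because both spellings exist on different resources:

```hcl
# RDS spells it deletion_protection...
resource "aws_db_instance" "main" {
  # ... engine, instance_class, etc. omitted
  deletion_protection = true
}

# ...while a load balancer spells it enable_deletion_protection.
resource "aws_lb" "main" {
  # ... subnets, security groups, etc. omitted
  enable_deletion_protection = true
}
```

You can know exactly what you want and still guess wrong. That's the tax.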
LLMs reduce that tax. You can ask the question in plain English, then verify the answer against the docs and your standards. That loop beats clicking through provider docs until your brain starts drafting a resignation letter.
## Confidence comes from process, not vibes
None of this works if you treat the LLM like a wizard.
Confidence doesn't come from trusting the AI. It comes from systems that catch mistakes:
- `terraform fmt`
- `terraform validate`
- `terraform plan`
- policy checks (OPA, Sentinel, whatever you use)
- test cases
- logging and error handling
- backups and rollback paths
Those are the guardrails that let you move fast without pretending you're psychic.
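Wired into CI, that loop is only a few lines. A sketch, not a drop-in pipeline; the plan-file name and the use of `conftest` for OPA checks are my assumptions:

```sh
#!/usr/bin/env sh
set -e

terraform fmt -check -recursive   # fail on unformatted files
terraform validate                # catch schema and syntax errors
terraform plan -out=tfplan        # never apply what you haven't planned
terraform show -json tfplan > tfplan.json
conftest test tfplan.json         # OPA policy checks against the plan JSON
```

The point isn't the specific tools. It's that every generated line passes through the same gates as a hand-written one.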
You will ship something based on bad info at some point. That's fine. The goal is making the blast radius small and the recovery boring.
## The pairing I actually want
Terraform is the part that enforces reality. State. Drift detection. Plans. Applies. The whole "this is what exists" discipline.
The LLM is the part that helps you move through the syntax jungle without getting scratched to death.
It's a good pairing. Not because it replaces you. It doesn't. It replaces the dumb stuff: copy/paste, remembering field names, and the "why is this nested here" garbage.
Run plan. Read the diff. Make the call.
-Sethers