AI coding assistants promise to accelerate infrastructure delivery, but organizations are discovering a hidden cost: code that passes syntax validation often fails security audits. Recent research shows that while AI-generated infrastructure code may look correct, only 9% meets security compliance standards. When MyCoCo's platform team generated dozens of Terraform modules with AI assistance, a security scan revealed a sobering truth—speed without guardrails creates technical debt that compounds with every deployment.
TL;DR
- The Problem: AI-generated Terraform passes terraform validate but fails organizational compliance (missing tags, overly permissive IAM, exposed resources).
- The Solution: Implement OPA-based policy guardrails at the PR level that catch AI blind spots before code reaches production.
- The Impact: MyCoCo reduced security findings from 47 to 3 per AI-generated module while retaining 70% of velocity gains.
- Key Implementation: Custom OPA policies targeting common AI omissions: required tags, encryption enforcement, least-privilege IAM.
- Bottom Line: AI accelerates IaC development, but only with organizational context injected through automated policy enforcement.
The Challenge: MyCoCo's AI Experiment
Jordan, MyCoCo's Platform Engineer, was convinced AI would transform their infrastructure delivery. With a major product launch approaching, the platform team faced an impossible timeline: 30 new Terraform modules in six weeks. Using GitHub Copilot and Claude, Jordan's team produced the modules in just two weeks.
"We were shipping infrastructure faster than ever. The AI understood Terraform syntax perfectly. Every module passed validation on the first try."
Then Maya, the Security Engineer, ran her pre-production Checkov scan.
The results stopped the launch cold: 47 security findings per module on average. S3 buckets without encryption. Lambda functions with wildcard IAM permissions. And the most painful discovery—not a single resource had MyCoCo's required Environment, Owner, or CostCenter tags.
"The AI wrote syntactically perfect Terraform. But it had no idea about our tagging policies, our naming conventions, or our security baseline. It generated code like we were a greenfield startup, not a company preparing for SOC 2."
Sam, the Senior DevOps Engineer, had warned the team from the start. The confidence gap was real—the team trusted AI-generated code more than manually written code, despite having less visibility into its logic.
Alex, VP of Engineering, faced a choice: delay the launch to manually fix every module, or find a way to make AI-generated code meet MyCoCo's standards automatically.
The Solution: OPA Guardrails for AI-Generated Code
MyCoCo's solution wasn't to abandon AI—it was to teach their pipeline what the AI didn't know. The team implemented a three-layer policy enforcement approach using Open Policy Agent (OPA) integrated with Conftest.
Layer 1: Required Tags Policy
The most common AI omission was resource tagging. MyCoCo created an OPA policy that fails any plan that creates resources without the required tags:
# policy/tags.rego
package terraform.tags

required_tags := ["Environment", "Owner", "CostCenter"]

deny[msg] {
    resource := input.resource_changes[_]
    resource.change.actions[_] == "create"
    tags := object.get(resource.change.after, "tags", {})
    missing := [tag | tag := required_tags[_]; not tags[tag]]
    count(missing) > 0
    msg := sprintf(
        "%s '%s' missing required tags: %v",
        [resource.type, resource.name, missing]
    )
}
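Rules like this are easy to get subtly wrong, so it pays to pair each one with OPA's built-in unit tests, which conftest verify runs against the same policy directory. A minimal sketch, with an illustrative test file name and hand-built plan fixtures:

# policy/tags_test.rego (illustrative; run with `conftest verify --policy policy/`)
package terraform.tags

test_flags_resource_missing_tags {
    # A created bucket carrying only one of the three required tags should be denied
    deny[_] with input as {"resource_changes": [{
        "type": "aws_s3_bucket",
        "name": "assets",
        "change": {
            "actions": ["create"],
            "after": {"tags": {"Environment": "prod"}}
        }
    }]}
}

test_passes_fully_tagged_resource {
    # A fully tagged resource should produce no denials
    count(deny) == 0 with input as {"resource_changes": [{
        "type": "aws_s3_bucket",
        "name": "assets",
        "change": {
            "actions": ["create"],
            "after": {"tags": {
                "Environment": "prod",
                "Owner": "platform",
                "CostCenter": "eng-42"
            }}
        }
    }]}
}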
Layer 2: Encryption Enforcement
AI-generated S3 buckets and RDS instances frequently lacked encryption configuration—a SOC 2 requirement:
# policy/encryption.rego
package terraform.encryption

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    resource.change.actions[_] == "create"
    # Check for server-side encryption configuration
    not has_encryption_config(resource.address)
    msg := sprintf(
        "S3 bucket '%s' must have encryption enabled",
        [resource.name]
    )
}

# Heuristic helper (an assumption, not the only way to match): treat a bucket
# as encrypted when the same plan creates an
# aws_s3_bucket_server_side_encryption_configuration that shares the bucket's
# Terraform name, e.g. aws_s3_bucket.logs pairs with
# aws_s3_bucket_server_side_encryption_configuration.logs.
has_encryption_config(bucket_address) {
    bucket := input.resource_changes[_]
    bucket.address == bucket_address
    enc := input.resource_changes[_]
    enc.type == "aws_s3_bucket_server_side_encryption_configuration"
    enc.name == bucket.name
}
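The scan flagged RDS as well as S3, and the same package can cover it. A minimal sketch, assuming the plan exposes aws_db_instance's storage_encrypted attribute in change.after:

# policy/encryption.rego (continued): RDS storage encryption
deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_db_instance"
    resource.change.actions[_] == "create"
    # storage_encrypted defaults to false, so require it explicitly
    not resource.change.after.storage_encrypted
    msg := sprintf(
        "RDS instance '%s' must set storage_encrypted = true",
        [resource.name]
    )
}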
Layer 3: IAM Least Privilege
The most dangerous AI pattern was wildcard IAM permissions. This rule catches overly permissive IAM policies before they reach production:
# policy/iam.rego
package terraform.iam

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_policy"
    policy_doc := json.unmarshal(resource.change.after.policy)
    statement := policy_doc.Statement[_]
    statement.Effect == "Allow"
    wildcard_action(statement)
    msg := sprintf(
        "IAM policy '%s' contains wildcard Action - use least privilege",
        [resource.name]
    )
}

# IAM allows Action to be a single string or a list; catch "*" in both forms
wildcard_action(statement) {
    statement.Action == "*"
}

wildcard_action(statement) {
    statement.Action[_] == "*"
}
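Generated policies are just as prone to wildcard Resource entries as to wildcard actions. A companion rule in the same package, under the same assumptions, closes that gap:

# policy/iam.rego (continued): wildcard Resource
deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_policy"
    policy_doc := json.unmarshal(resource.change.after.policy)
    statement := policy_doc.Statement[_]
    statement.Effect == "Allow"
    wildcard_resource(statement)
    msg := sprintf(
        "IAM policy '%s' applies to all resources - scope Resource down",
        [resource.name]
    )
}

# Resource, like Action, may be a string or a list
wildcard_resource(statement) {
    statement.Resource == "*"
}

wildcard_resource(statement) {
    statement.Resource[_] == "*"
}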
Pipeline Integration
The team integrated these policies into their GitHub Actions workflow, running conftest against every Terraform plan:
- name: Policy Check
  run: |
    terraform plan -out=tfplan
    terraform show -json tfplan > tfplan.json
    # conftest only evaluates the main namespace by default;
    # --all-namespaces picks up terraform.tags, terraform.encryption, and terraform.iam
    conftest test tfplan.json --policy policy/ --all-namespaces
Any policy violation blocks the PR merge, with clear error messages explaining exactly what needs to be fixed. Jordan found that AI assistants could often fix the violations when given the specific error message—turning the guardrail into a feedback loop.
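Not every convention justified blocking a merge. Conftest also evaluates warn rules, which print warnings without failing the run (unless --fail-on-warn is set). A minimal sketch of an advisory policy; the Team tag recommendation here is a hypothetical example, not one of MyCoCo's documented rules:

# policy/advisory.rego
package terraform.advisory

# Advisory only: conftest reports warn results without blocking the PR.
# Hypothetical convention: recommend (but don't require) a Team tag.
warn[msg] {
    resource := input.resource_changes[_]
    resource.change.actions[_] == "create"
    tags := object.get(resource.change.after, "tags", {})
    not tags["Team"]
    msg := sprintf(
        "%s '%s' has no Team tag (recommended, not required)",
        [resource.type, resource.name]
    )
}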
Results: MyCoCo's Transformation
Within three weeks of implementing OPA guardrails, MyCoCo's metrics shifted dramatically:
- Security findings per AI-generated module: 47 → 3 (94% reduction)
- Development velocity: Retained approximately 70% of the original speed gains
- Unexpected benefit: The guardrails improved manually written code too; engineers discovered their own modules had tagging gaps
"We stopped thinking of AI as a code generator and started thinking of it as a fast first draft. The guardrails aren't a speed bump—they're the quality gate that makes the speed sustainable."
Maya added the policies to MyCoCo's security documentation, creating an "AI-Generated Code Checklist" that new team members review before using coding assistants. The launch proceeded on schedule, with infrastructure that passed SOC 2 audit on the first attempt.
Key Takeaways
Syntax validity does not equal security compliance. AI-generated code that passes terraform validate may still fail 90%+ of security requirements.
AI lacks organizational context by design. Your tagging policies, naming conventions, and security baselines don't exist in training data. Guardrails inject that context automatically.
The confidence gap is dangerous. Teams often review AI-generated code less carefully than human-written code, despite it being more likely to have compliance gaps. Invert this assumption.
Guardrails create feedback loops. When AI assistants receive specific policy violation messages, they can often self-correct—making the guardrail an accelerator, not just a gate.
Start with the obvious omissions. Required tags, encryption, and least-privilege IAM catch the majority of AI blind spots with minimal policy complexity.
Conclusion
AI-generated infrastructure code isn't going away—it's too fast and too useful. But speed without guardrails creates security debt that compounds with every deployment. The solution isn't to abandon AI; it's to inject your organizational context through automated policy enforcement.
Start with three policies: required tags, encryption enforcement, and IAM least privilege. These catch the majority of AI blind spots and give you a foundation to build on.
Your infrastructure can be both fast and compliant. You just need the right guardrails.
What's your experience with AI-generated infrastructure code? Have you implemented guardrails, or are you still reviewing everything manually? Share your lessons learned in the comments below!
