This is Part 1 of a series on building agentic AI workflows for platform engineering teams. The series covers workspace design, encoding standards, agent architecture, tool integrations, and the refinement loop that makes it all compound over time.
If you're running a platform engineering team in 2026 and your AI tooling still consists of "paste Terraform into ChatGPT and hope for the best," you're leaving serious velocity on the table.
But here's the thing most people get wrong: the answer isn't better prompts. It's better structure.
In my current engagement, we've been building agentic AI workflows into platform engineering for a while now. The stack starts where most platform teams start: AWS, Terraform for IaC, GitLab for source control and CI/CD. Multiple accounts, multiple environments, and a growing collection of modules that encode your team's opinions about how infrastructure should look.
No single person holds all of those opinions in their head. And neither does an LLM, not without help.
The Problem With Ad-Hoc AI
Every platform engineer has done this: you're writing a Terraform module, you ask your AI assistant to generate an IAM policy, and it hands you a jsonencode() block with inline JSON. It works. It's also wrong: your team uses data.aws_iam_policy_document exclusively, for good reasons (readability, composability, Checkov compatibility). But the AI doesn't know that.
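The difference is worth seeing side by side. Here's a minimal sketch of the two styles (resource and bucket names are illustrative):

```hcl
# What the AI tends to generate: inline JSON via jsonencode()
resource "aws_iam_policy" "bucket_read_inline" {
  name = "bucket-read"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = ["arn:aws:s3:::example-bucket/*"]
    }]
  })
}

# The team convention: a policy document data source,
# which reads cleanly, composes, and plays well with Checkov
data "aws_iam_policy_document" "bucket_read" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}

resource "aws_iam_policy" "bucket_read" {
  name   = "bucket-read"
  policy = data.aws_iam_policy_document.bucket_read.json
}
```

Both produce the same policy; only the second matches the convention, and only the second can be composed with source_policy_documents later.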
You correct it. It apologises. Next session, it does the same thing again.
Or this: you ask it to create an EKS add-on configuration, and it generates a kubectl apply command. Your team is GitOps-first: everything goes through ArgoCD. But the AI doesn't know that either.
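For the GitOps case, the convention gap looks like this: instead of an imperative kubectl apply, the change lands in git as a declarative ArgoCD Application. A sketch (repo URL, project, and paths are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://gitlab.example.com/platform/cluster-addons.git
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The AI that generates kubectl apply isn't wrong about Kubernetes; it's wrong about how this team ships Kubernetes changes.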
The pattern is always the same. The AI is competent at the language level but ignorant at the team level. It knows Terraform syntax but not your Terraform conventions. It knows Kubernetes but not your Kubernetes workflow.
Most teams try to fix this with longer prompts, or by pasting their standards into the chat window. That works for about ten minutes, until the context window fills up or you start a new session.
What If Your Standards Were Built Into the Tools?
Imagine this instead: every time an AI agent writes Terraform in your workspace, it has already read your module structure conventions, your naming rules, your IAM policy patterns, your provider configuration, and your security baseline. Not because someone pasted them in; because they're part of the workspace itself.
Every time it creates a merge request, it knows your commit message format, your branch naming convention, your CI template patterns, and your cross-linking strategy between tickets and code.
Every time it designs a new feature, it can check your existing codebase for similar patterns, identify which repos are affected, and plan the work in the right order.
That's what an AI-powered workspace gives you. Not smarter AI but better-informed AI.
The Big Picture
Over this series, I'll walk through how to build this from scratch. Here's what we'll cover:
The foundation: steering files that encode your non-negotiable rules. These are loaded into every AI conversation automatically. Your Terraform patterns, your git conventions, your CI/CD standards. Write them once, enforce them forever.
Deep reference material: skills that agents opt into when they need domain-specific knowledge. Your landing zone structure, your account vending patterns, your CI template library. Too detailed for every conversation, essential for the right ones.
Specialised agents: purpose-built agents for different roles. One writes infrastructure code, one reviews merge requests from security and compliance perspectives, one blueprints features into implementation tasks, one ships code end-to-end. Each has its own tools, context, and boundaries.
Tool integrations: connecting your agents to the systems they need. Your ticket tracker for work management, AWS documentation for reference, your CI/CD pipelines for deployment status. Agents that can only read and write files are useful. Agents that participate in your actual workflow are transformative.
The refinement loop: the part that makes it all compound. Every time the AI gets something wrong, you encode the correction in the workspace. Next session, it gets it right. Over weeks and months, your workspace accumulates the team's collective judgement.
And here's the part that doesn't get talked about enough: onboarding becomes trivial. A new engineer clones the workspace and immediately has access to every convention, every pattern, every hard-won lesson the team has learned; not as a Confluence page they'll never read, but as active rules built into the tools they use from minute one. No more three-month ramp-up. No more "ask Sarah, she knows how we do IAM policies." The workspace is the institutional knowledge.
The Tech Stack
To keep this concrete, the series assumes a specific (but common) platform engineering stack:
- Cloud: AWS, multi-account (Control Tower for landing zone)
- IaC: Terraform, multi-environment
- Source Control & CI/CD: GitLab with shared CI templates
- Secret Management: AWS Secrets Manager, never in code
Later in the series, we'll layer on Kubernetes (EKS), a developer portal (Backstage), and GitOps (ArgoCD). But the foundation starts here: with Terraform and the rules your team already has but hasn't encoded yet.
If your stack differs, the principles still apply. The workspace structure is stack-agnostic; only the content of the steering files and skills changes.
The Tooling Choice
The workspace structure in this series is built around Kiro, an AI-powered IDE from AWS. It's an opinionated choice, and deliberately so.
Kiro provides a layered context model through its .kiro/ directory:
- Steering files: always injected into every conversation, non-negotiable
- Skills: deeper reference material that specific agents opt into
- Agent definitions: role-specific behaviour, tools, and context
This enforced separation of concerns is what makes the system scale. Your Terraform rules don't bloat every conversation with Kubernetes context. Your CI patterns are available when needed but not loaded when irrelevant.
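Concretely, a workspace following this layout might look something like the tree below. I'm not reproducing Kiro's exact file naming conventions here; treat this as a sketch of the separation of concerns, not the tool's documentation:

```
.kiro/
├── steering/
│   ├── terraform.md      # always loaded: IaC rules
│   ├── git.md            # always loaded: branch/commit conventions
│   └── gitlab-ci.md      # always loaded: pipeline standards
├── skills/
│   ├── landing-zone.md   # opt-in: account structure reference
│   └── ci-templates.md   # opt-in: shared template library
└── agents/
    ├── infra-coder.md
    ├── mr-reviewer.md
    └── feature-blueprint.md
AGENTS.md                  # portable fallback for other tools
```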
If your team uses a different AI tool, the AGENTS.md file at the workspace root serves as a portable fallback: it's a plain markdown file that tools like Claude Code, Cursor, and others pick up automatically. You won't get the layered context model, but you'll get the basics.
Getting Started Today, Before Part 2
You don't need to wait for the rest of this series to start. Here's what you can do right now:
1. Create one steering file.
Pick the area where your AI assistant causes the most damage. For most platform teams, that's Terraform. Write down the rules you find yourself repeating:
- What's your module file structure?
- How do you write IAM policies? (data.aws_iam_policy_document? jsonencode()? Something else?)
- What's your naming convention?
- What provider version do you pin?
- What security rules are non-negotiable?
Put it in .kiro/steering/terraform.md (or whatever your AI tool's equivalent is). It doesn't need to be perfect. It needs to exist.
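As a starting point, a minimal terraform.md might be nothing more than your repeated corrections written down. Every rule below is an example, not a prescription:

```markdown
# Terraform Steering Rules

## IAM
- Always use `data.aws_iam_policy_document`. Never inline JSON via `jsonencode()`.
- No wildcard `*` actions in policies.

## Structure
- Every module: `main.tf`, `variables.tf`, `outputs.tf`, `versions.tf`.

## Naming
- Resources: `snake_case`, prefixed by purpose (`s3_artifacts`, not `bucket1`).

## Providers
- Pin the AWS provider with a pessimistic constraint, e.g. `~> 5.0`.

## Security (non-negotiable)
- All S3 buckets: encryption and public access block enabled.
- Secrets come from AWS Secrets Manager, never from code or tfvars.
```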
2. Create an AGENTS.md file.
At your workspace root, write a plain markdown file that describes your project: what it is, how it's structured, how to build it, and the three or four rules that matter most. This works with any AI tool, no configuration required.
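A sketch of what that file might contain; the repo layout, commands, and rules are illustrative placeholders for your own:

```markdown
# AGENTS.md

This repo holds Terraform modules for a multi-account AWS landing
zone, deployed via GitLab CI.

## Layout
- `modules/` - reusable modules, one directory per module
- `environments/` - per-environment root configurations

## Build & validate
- `terraform fmt -check`, `terraform validate`, `checkov -d .`

## Rules that matter most
1. IAM policies use `data.aws_iam_policy_document`, never `jsonencode()`.
2. Branches: `feature/<ticket-id>-<slug>`; commits reference the ticket.
3. Secrets come from AWS Secrets Manager, never from code.
4. Kubernetes changes go through ArgoCD, never `kubectl apply`.
```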
3. Test it.
Ask your AI assistant to generate something it usually gets wrong: an IAM policy, a CI pipeline, a Kubernetes manifest. See if the steering file corrects the behaviour. If it doesn't, tighten the rule. If it does, you've just experienced the refinement loop.
That's the foundation. In Part 2, we'll go deep on steering files, the specific rules that prevent the most common AI-generated mistakes in Terraform, GitLab CI, and git workflows.
Next in the series: **Steering Files: Teaching AI Your Non-Negotiable Rules**
Follow along for the rest of the series, or connect if you're building something similar. I'd love to compare notes.
Top comments (1)
The refinement loop—encode the correction in the workspace so the next session gets it right—is the mechanism that turns AI from a tool you argue with into a tool that accumulates your team's judgment. Every platform team has the same experience: you correct the AI, it apologizes, next session it makes the same mistake. The correction was verbal, ephemeral, lost when the context window scrolled. Encoding it as a steering file rule makes the correction persistent. Over weeks, the workspace stops being a generic AI interface and starts being your team's AI interface.
What I find myself thinking about is the onboarding implication. A new engineer joins, clones the workspace, and immediately has access to every convention the team has encoded—not as a wiki page they might read, but as active rules that shape every AI interaction from minute one. The institutional knowledge isn't something they have to absorb. It's something the environment enforces on their behalf. They don't need to remember to use data.aws_iam_policy_document instead of jsonencode(). The AI just does it, because the steering file says so, and the new engineer learns the convention by seeing it applied, not by memorizing a style guide. That's a different onboarding model entirely—learning by observing correct behavior rather than by being corrected after making mistakes.

The AGENTS.md fallback is a pragmatic touch. Not every team will adopt Kiro. Not every tool supports layered context. But almost every AI coding tool reads a markdown file at the workspace root. The layered model is better—your Terraform rules shouldn't bloat every conversation with Kubernetes context—but the flat file is the portable baseline. It's the "works everywhere" version of the same idea. How much of your current steering file content is Terraform-specific versus general platform engineering conventions that apply regardless of the IaC tool? I'm curious where teams draw the line between "this goes in terraform.md" and "this belongs in something more foundational."