<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luigi Di Fraia</title>
    <description>The latest articles on DEV Community by Luigi Di Fraia (@luigidifraia).</description>
    <link>https://dev.to/luigidifraia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3897065%2F763ac504-5167-4ee9-8a94-292a62134eeb.jpeg</url>
      <title>DEV Community: Luigi Di Fraia</title>
      <link>https://dev.to/luigidifraia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luigidifraia"/>
    <language>en</language>
    <item>
      <title>Transformative AI-Powered Platform Engineering</title>
      <dc:creator>Luigi Di Fraia</dc:creator>
      <pubDate>Sat, 25 Apr 2026 06:36:38 +0000</pubDate>
      <link>https://dev.to/luigidifraia/transformative-ai-powered-platform-engineering-2902</link>
      <guid>https://dev.to/luigidifraia/transformative-ai-powered-platform-engineering-2902</guid>
      <description>&lt;p&gt;&lt;em&gt;This is Part 1 of a series on building agentic AI workflows for platform engineering teams. The series covers workspace design, encoding standards, agent architecture, tool integrations, and the refinement loop that makes it all compound over time.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;If you're running a platform engineering team in 2026 and your AI tooling still consists of "paste Terraform into ChatGPT and hope for the best," you're leaving serious velocity on the table.&lt;/p&gt;

&lt;p&gt;But here's the thing most people get wrong: the answer isn't better prompts. It's better structure.&lt;/p&gt;

&lt;p&gt;In my current engagement, we've been building agentic AI workflows into platform engineering for a while now. The stack starts where most platform teams start: AWS, Terraform for IaC, GitLab for source control and CI/CD. Multiple accounts, multiple environments, and a growing collection of modules that encode your team's opinions about how infrastructure should look.&lt;/p&gt;

&lt;p&gt;No single person holds all of those opinions in their head. And neither does an LLM, at least not without help.&lt;/p&gt;




&lt;h2&gt;The Problem With Ad-Hoc AI&lt;/h2&gt;

&lt;p&gt;Every platform engineer has done this: you're writing a Terraform module, you ask your AI assistant to generate an IAM policy, and it hands you a &lt;code&gt;jsonencode()&lt;/code&gt; block with inline JSON. It works. It's also wrong: your team uses &lt;code&gt;data.aws_iam_policy_document&lt;/code&gt; exclusively, for good reasons (readability, composability, Checkov compatibility). But the AI doesn't know that.&lt;/p&gt;
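&lt;p&gt;As a sketch of that convention (the resource and bucket names here are illustrative, not from any real module), the policy-document form the team expects looks like this:&lt;/p&gt;

```hcl
# Team convention: IAM policies as policy documents, not inline jsonencode().
data "aws_iam_policy_document" "read_artifacts" {
  statement {
    sid       = "AllowReadArtifacts"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.artifacts.arn}/*"]
  }
}

resource "aws_iam_policy" "read_artifacts" {
  name   = "read-artifacts"
  policy = data.aws_iam_policy_document.read_artifacts.json
}
```

&lt;p&gt;Checkov and human reviewers both see a structured statement rather than a JSON string, which is exactly the readability and composability argument above.&lt;/p&gt;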

&lt;p&gt;You correct it. It apologises. Next session, it does the same thing again.&lt;/p&gt;

&lt;p&gt;Or this: you ask it to create an EKS add-on configuration, and it generates a &lt;code&gt;kubectl apply&lt;/code&gt; command. Your team is GitOps-first: everything goes through ArgoCD. But the AI doesn't know that either.&lt;/p&gt;

&lt;p&gt;The pattern is always the same. The AI is competent at the language level but ignorant at the team level. It knows Terraform syntax but not your Terraform conventions. It knows Kubernetes but not your Kubernetes workflow.&lt;/p&gt;

&lt;p&gt;Most teams try to fix this with longer prompts, or by pasting their standards into the chat window. That works for about ten minutes, until the context window fills up or you start a new session.&lt;/p&gt;




&lt;h2&gt;What If Your Standards Were Built Into the Tools?&lt;/h2&gt;

&lt;p&gt;Imagine this instead: every time an AI agent writes Terraform in your workspace, it has already read your module structure conventions, your naming rules, your IAM policy patterns, your provider configuration, and your security baseline. Not because someone pasted them in; because they're part of the workspace itself.&lt;/p&gt;

&lt;p&gt;Every time it creates a merge request, it knows your commit message format, your branch naming convention, your CI template patterns, and your cross-linking strategy between tickets and code.&lt;/p&gt;

&lt;p&gt;Every time it designs a new feature, it can check your existing codebase for similar patterns, identify which repos are affected, and plan the work in the right order.&lt;/p&gt;

&lt;p&gt;That's what an AI-powered workspace gives you. Not smarter AI but better-informed AI.&lt;/p&gt;




&lt;h2&gt;The Big Picture&lt;/h2&gt;

&lt;p&gt;Over this series, I'll walk through how to build this from scratch. Here's what we'll cover:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The foundation&lt;/strong&gt;: steering files that encode your non-negotiable rules. These are loaded into every AI conversation automatically. Your Terraform patterns, your git conventions, your CI/CD standards. Write them once, enforce them forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep reference material&lt;/strong&gt;: skills that agents opt into when they need domain-specific knowledge. Your landing zone structure, your account vending patterns, your CI template library. Too detailed for every conversation, essential for the right ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specialised agents&lt;/strong&gt;: purpose-built agents for different roles: one that writes infrastructure code, one that reviews merge requests from security and compliance perspectives, one that blueprints features into implementation tasks, one that ships code end-to-end. Each with its own tools, context, and boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool integrations&lt;/strong&gt;: connecting your agents to the systems they need: your ticket tracker for work management, AWS documentation for reference, your CI/CD pipelines for deployment status. Agents that can only read and write files are useful. Agents that participate in your actual workflow are transformative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The refinement loop&lt;/strong&gt;: the part that makes it all compound. Every time the AI gets something wrong, you encode the correction in the workspace. Next session, it gets it right. Over weeks and months, your workspace accumulates the team's collective judgement.&lt;/p&gt;

&lt;p&gt;And here's the part that doesn't get talked about enough: &lt;strong&gt;onboarding becomes trivial&lt;/strong&gt;. A new engineer clones the workspace and immediately has access to every convention, every pattern, every hard-won lesson the team has learned; not as a Confluence page they'll never read, but as active rules built into the tools they use from minute one. No more three-month ramp-up. No more "ask Sarah, she knows how we do IAM policies." The workspace &lt;em&gt;is&lt;/em&gt; the institutional knowledge.&lt;/p&gt;




&lt;h2&gt;The Tech Stack&lt;/h2&gt;

&lt;p&gt;To keep this concrete, the series assumes a specific (but common) platform engineering stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud&lt;/strong&gt;: AWS, multi-account (Control Tower for landing zone)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IaC&lt;/strong&gt;: Terraform, multi-environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source Control &amp;amp; CI/CD&lt;/strong&gt;: GitLab with shared CI templates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret Management&lt;/strong&gt;: AWS Secrets Manager, never in code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Later in the series, we'll layer on Kubernetes (EKS), a developer portal (Backstage), and GitOps (ArgoCD). But the foundation starts here: with Terraform and the rules your team already has but hasn't encoded yet.&lt;/p&gt;

&lt;p&gt;If your stack differs, the principles still apply. The workspace structure is stack-agnostic; only the content of the steering files and skills changes.&lt;/p&gt;




&lt;h2&gt;The Tooling Choice&lt;/h2&gt;

&lt;p&gt;The workspace structure in this series is built around &lt;a href="https://kiro.dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt;, an AI-powered IDE from AWS. It's an opinionated choice, and deliberately so.&lt;/p&gt;

&lt;p&gt;Kiro provides a layered context model through its &lt;code&gt;.kiro/&lt;/code&gt; directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Steering files&lt;/strong&gt;: always injected into every conversation, non-negotiable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skills&lt;/strong&gt;: deeper reference material that specific agents opt into&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent definitions&lt;/strong&gt;: role-specific behaviour, tools, and context&lt;/li&gt;
&lt;/ul&gt;
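&lt;p&gt;Concretely, the layered model maps onto a directory tree like the sketch below (the &lt;code&gt;steering/&lt;/code&gt; path is used later in this post; the other directory and file names are illustrative):&lt;/p&gt;

```text
.kiro/
├── steering/   # always injected: terraform.md, git.md, ci.md
├── skills/     # opt-in reference: landing-zone.md, ci-templates.md
└── agents/     # role definitions: infra-author.md, mr-reviewer.md
```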

&lt;p&gt;This enforced separation of concerns is what makes the system scale. Your Terraform rules don't bloat every conversation with Kubernetes context. Your CI patterns are available when needed but not loaded when irrelevant.&lt;/p&gt;

&lt;p&gt;If your team uses a different AI tool, the &lt;code&gt;AGENTS.md&lt;/code&gt; file at the workspace root serves as a portable fallback: it's a plain markdown file that tools like Claude Code, Cursor, and others pick up automatically. You won't get the layered context model, but you'll get the basics.&lt;/p&gt;




&lt;h2&gt;Getting Started Today, Before Part 2&lt;/h2&gt;

&lt;p&gt;You don't need to wait for the rest of this series to start. Here's what you can do right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create one steering file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pick the area where your AI assistant causes the most damage. For most platform teams, that's Terraform. Write down the rules you find yourself repeating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's your module file structure?&lt;/li&gt;
&lt;li&gt;How do you write IAM policies? (&lt;code&gt;data.aws_iam_policy_document&lt;/code&gt;? &lt;code&gt;jsonencode()&lt;/code&gt;? Something else?)&lt;/li&gt;
&lt;li&gt;What's your naming convention?&lt;/li&gt;
&lt;li&gt;What provider version do you pin?&lt;/li&gt;
&lt;li&gt;What security rules are non-negotiable?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Put it in &lt;code&gt;.kiro/steering/terraform.md&lt;/code&gt; (or whatever your AI tool's equivalent is). It doesn't need to be perfect. It needs to exist.&lt;/p&gt;
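&lt;p&gt;A minimal first pass, with illustrative answers to the questions above (substitute your own), might look like this:&lt;/p&gt;

```markdown
# Terraform Conventions

- IAM policies: always data.aws_iam_policy_document, never jsonencode().
- Module layout: main.tf, variables.tf, outputs.tf, versions.tf.
- Naming: lowercase, hyphen-separated, environment-prefixed.
- Providers: pin the aws provider to a tested minor version.
- Non-negotiable: no wildcard principals; secrets only via Secrets Manager.
```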

&lt;p&gt;&lt;strong&gt;2. Create an AGENTS.md file.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At your workspace root, write a plain markdown file that describes your project: what it is, how it's structured, how to build it, and the three or four rules that matter most. This works with any AI tool, no configuration required.&lt;/p&gt;
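&lt;p&gt;Again a minimal sketch, where every project detail is a placeholder for your own:&lt;/p&gt;

```markdown
# AGENTS.md

Terraform modules for a multi-account AWS estate, deployed via GitLab CI.

## Working in this repo
- Run terraform fmt and terraform validate before proposing changes.
- All changes go through merge requests; never commit to main.

## Rules that matter most
1. IAM policies use data.aws_iam_policy_document, never inline JSON.
2. Secrets come from AWS Secrets Manager; nothing sensitive in code.
3. CI uses the shared GitLab templates; do not inline pipeline logic.
```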

&lt;p&gt;&lt;strong&gt;3. Test it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ask your AI assistant to generate something it usually gets wrong: an IAM policy, a CI pipeline, a Kubernetes manifest. See if the steering file corrects the behaviour. If it doesn't, tighten the rule. If it does, you've just experienced the refinement loop.&lt;/p&gt;

&lt;p&gt;That's the foundation. In Part 2, we'll go deep on steering files: the specific rules that prevent the most common AI-generated mistakes in Terraform, GitLab CI, and git workflows.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Next in the series: &lt;strong&gt;Steering Files: Teaching AI Your Non-Negotiable Rules&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow along for the rest of the series, or connect if you're building something similar. I'd love to compare notes.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
