
Yogesh VK

Posted on • Originally published at Medium

AI as a Junior Platform Engineer: How I "Onboard" Coding Agents

Introduction

The first time I started seriously using AI in my DevOps workflows, I made the same mistake I've seen many others make.
I treated it like a tool.
Something you prompt, get an answer from, and move on. It worked, to a point. But the results were inconsistent. Sometimes surprisingly good, sometimes completely off. It felt less like working with a system and more like rolling dice.
That changed when I started thinking about AI differently. Not as a tool - but as a junior platform engineer joining the team. That shift alone made everything more predictable.

The First Day Problem

When a new engineer joins a team, we don't expect them to be productive immediately. We don't just hand them access to production systems and expect results.
Instead, we onboard them. We give them:

  • context about the system
  • documentation
  • boundaries
  • a safe environment to contribute
  • time to understand how things work

Without that, even a talented engineer will struggle. AI is no different.

Context Is the Difference Between Useful and Dangerous

One of the biggest differences between good and bad AI output is context. Without context, an AI agent will give you generic answers. They might be technically correct, but not aligned with your system, your architecture, or your constraints. This is where something like a context.md file becomes incredibly powerful.
Think of it as the onboarding document you would give a new engineer. It might include:

  • how your infrastructure is structured
  • naming conventions
  • environments and workflows
  • constraints (cost, security, compliance)
  • how Terraform modules are organized
  • what "good" looks like in your system

Once the AI has this context, its suggestions start to feel less generic and more like they belong to your system. Just like a junior engineer who finally understands how things are wired.
Sample context.md:

```markdown
# Platform Context

## Overview
This repository manages AWS infrastructure using Terraform.
Primary workloads run on EKS clusters across dev, staging, and production environments.

## Key Principles
- Prefer managed services where possible
- Minimize blast radius of changes
- Avoid cross-environment coupling
- All changes must go through PR review

## Terraform Structure
- modules/ → reusable infrastructure components
- envs/dev → development environment
- envs/staging → staging environment
- envs/prod → production environment

## Naming Conventions
- Resources follow: <env>-<service>-<type>
- Example: prod-payments-eks

## Guardrails
- Never modify production directly
- No `terraform apply` without PR approval
- Avoid changes that trigger resource replacement unless explicitly required

## Cost Constraints
- Prefer smaller instance types unless justified
- Autoscaling should always have upper limits defined

## Security
- IAM roles must follow least privilege
- No wildcard permissions unless explicitly approved

## Review Expectations
When reviewing a Terraform plan, focus on:
- Resource replacements
- Changes in networking or IAM
- Scaling or cost implications
- Cross-module impact

## What "Good" Looks Like
- Small, isolated changes
- Clear PR descriptions
- Minimal blast radius
```

Once I started using something like this, the difference was noticeable.
The AI responses became less generic and more aligned with how the system was actually designed. It started picking up on patterns like naming conventions, environment separation, and even risk signals like resource replacements.
It felt much closer to working with someone who had been onboarded into the system, rather than someone guessing from scratch.
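Getting that context in front of the model doesn't take much plumbing. Here's a minimal sketch in Python - the file name and message layout are just the conventions from this post, and the final model call is left to whatever client or agent you actually use:

```python
from pathlib import Path

def build_messages(task: str) -> list[dict]:
    """Prepend the onboarding doc to every request, the same way a new
    engineer reads the docs before touching anything."""
    context = Path("context.md").read_text()
    return [
        {"role": "system", "content": context},  # the onboarding doc
        {"role": "user", "content": task},       # the actual task
    ]

# Example task; swap in whatever your agent is being asked to do.
messages = build_messages(
    "Review this Terraform plan and flag any resource replacements."
)
# Hand `messages` to whichever client you use (OpenAI SDK, Bedrock,
# a local model, or your coding agent's API) - that part is yours.
```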

Guardrails Matter More Than Intelligence

When onboarding a new engineer, we don't just give context. We also define boundaries. What they should and should not do. Where they can make changes. What requires review.
AI needs the same guardrails. For example, I'm comfortable letting AI:

  • suggest Terraform changes
  • explain plan outputs
  • summarize pull requests
  • generate draft configurations

But there are clear boundaries. AI should not:

  • directly apply infrastructure changes
  • bypass review processes
  • make decisions that require operational judgment

These are not limitations of capability. They are intentional design choices. Because just like with a new engineer, the goal is not maximum autonomy - it is safe contribution.
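These boundaries don't have to stay informal, either. Here's a minimal sketch of how the allow-list could look in code - the action names are hypothetical, but the deny-by-default shape is the point:

```python
# Advisory actions the agent may perform on its own (hypothetical names).
ALLOWED_ACTIONS = {"suggest_change", "explain_plan", "summarize_pr", "draft_config"}

# Actions that mutate infrastructure or bypass review: always escalated.
BLOCKED_ACTIONS = {"terraform_apply", "merge_pr", "modify_prod"}

def authorize(action: str) -> bool:
    """Gate every agent action through an explicit allow-list.
    Unknown actions are denied, not allowed - deny by default."""
    if action in BLOCKED_ACTIONS:
        raise PermissionError(f"'{action}' requires a human: route it to PR review")
    return action in ALLOWED_ACTIONS

# authorize("summarize_pr")     -> True
# authorize("terraform_apply")  -> raises PermissionError
# authorize("anything_else")    -> False (deny by default)
```

Whether this lives in a CI job, a bot, or a thin wrapper around the agent matters less than the default: anything not explicitly allowed goes to a human.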

Start With PRs, Not Production

When a new engineer joins, we usually don't give them direct production access on day one. We ask them to start with:

  • small changes
  • pull requests
  • code reviews
  • guided feedback

This builds confidence and trust over time. The same model works extremely well with AI. Instead of letting AI operate directly on infrastructure, I treat it as a contributor to the PR workflow. It can:

  • generate changes
  • explain diffs
  • highlight potential issues
  • improve readability

But the final decision still goes through human review. This keeps the system safe while still benefiting from AI acceleration.
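To make "highlight potential issues" concrete: a small script can pre-scan terraform plan output for the risk signals listed in context.md before a human ever opens the PR. A sketch - the marker strings reflect Terraform's typical plan wording and may need adjusting for your version:

```python
import sys

# Risk signals taken from context.md's "Review Expectations".
# The plan-output phrases below are assumptions about Terraform's
# wording - verify them against your Terraform version.
RISK_MARKERS = {
    "resource replacement": ["must be replaced", "forces replacement"],
    "IAM change": ["aws_iam_"],
    "networking change": ["aws_vpc", "aws_subnet", "aws_security_group"],
    "wildcard permission": ['"*"'],
}

def flag_risks(plan_text: str) -> list[str]:
    """Return human-readable warnings for risk signals found in a plan."""
    return [
        f"warning: possible {risk}"
        for risk, markers in RISK_MARKERS.items()
        if any(marker in plan_text for marker in markers)
    ]

if __name__ == "__main__":
    # Usage: terraform plan -no-color | python flag_risks.py
    for finding in flag_risks(sys.stdin.read()):
        print(finding)
```

Nothing here replaces the human review - it just means the reviewer starts from the risk signals instead of hunting for them.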

Feedback Loops Make It Better

A junior engineer improves with feedback. AI systems also improve with iteration. When something is off, the answer is almost never:

"AI doesn't work"

More often, it means:

  • "The context was incomplete"
  • "The prompt didn't reflect constraints"
  • "The guardrails weren't clear"

Over time, refining context and expectations makes AI far more reliable. It starts behaving less like a random generator and more like a team member who understands the system.
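One lightweight way to close that loop is to fold each lesson back into context.md, so the next run inherits it. A trivial sketch, assuming a Guardrails section like the one in the sample above:

```python
from pathlib import Path

def add_guardrail(lesson: str, path: str = "context.md") -> None:
    """Fold a lesson from review feedback into the Guardrails section,
    so future agent runs inherit it automatically."""
    doc = Path(path).read_text()
    # Naive insertion: add the lesson right under the "## Guardrails" heading.
    updated = doc.replace("## Guardrails", f"## Guardrails\n- {lesson}", 1)
    Path(path).write_text(updated)

# Example: after the agent once proposed an unbounded autoscaling group.
# add_guardrail("Autoscaling groups must define an explicit upper limit")
```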

The Real Shift

Thinking of AI as a junior platform engineer changes how you design workflows. Instead of asking:

"What can this tool do?"

You start asking:

"How would I onboard someone into this system?"

That question naturally leads you to:

  • better context
  • clearer boundaries
  • safer workflows
  • more predictable outcomes

Closing Thought

AI in DevOps doesn't need to be treated as an autonomous operator. In many cases, it works best as a well-onboarded junior engineer:

  • guided by context
  • constrained by guardrails
  • contributing through safe workflows
  • improving over time

The goal is not to replace engineers. It is to make systems easier to understand, safer to operate, and faster to evolve. And sometimes, the best way to do that is not to give AI more power - but to onboard it more thoughtfully.

Curious to know what you think of this approach.
