Davide de Paolis

Setting Up Kiro for Your AI-Native SDLC

I didn't wake up one morning and decide, "Today I'm building an AI-native Software Development Life Cycle!" It took weeks of trial and error to figure out what actually works. This post is my attempt to save you that time, so you can hit the ground running from day one.

The Context: Coding Between Meetings

When you move from IC to Tech Lead or Engineering Manager, you face a different set of challenges: you write less code, but you're expected to stay technical. At this point, my hands-on coding is close to zero (though I know the EM role varies wildly depending on company and team). But I'm not just doing people stuff like 1:1s, roadmaps, vacation planning and agile ceremonies.
The real job is still:

  • Providing technical guidance and architecture decisions
  • Keeping an eye on delivery and quality
  • Mentoring the team
  • Evaluating technical contributions
  • Building a high-performing team where blockers get removed and quality, security, and automation are just how we work.

You know the difference between the maker's schedule and the manager's schedule? Yeah, that's my life now.

(Image: time management)

Still, I try to stay close to the code whenever I can. It keeps me sharp, and often dogfooding our own platform helps me understand what pains my team members, our internal users and newly onboarded teams are going through.

So I pick up tickets that are off the critical path. The grunt work. The nice-to-haves. Stuff that could take a day or a month and nobody's blocked waiting for it:

  • Documentation improvements
  • Pipeline fixes and optimizations
  • Small refactors that have been bugging people
  • Architecture diagrams that actually reflect reality
  • Raising the bar on code quality through better practices and conventions

Plus, there's the other stuff:

  • Code reviews (even in languages I'm rusty in)
  • Making sense of old, poorly documented repos we've inherited
  • Cutting through documentation overhead and decision paralysis
  • Exploring new services we might want to add to our platform

It's a lot of hats. And time is always scarce.
Trying to do all this between meetings, with constant context switching and maybe 30 minutes here and there? It was rough. Until recently, my AI usage was pretty basic: ChatGPT or Langdock for document reviews, text polishing, brainstorming. Basically a fancy Google search.

For coding, I had Copilot in VS Code and Amazon Q in my terminal, but I was barely scratching the surface. "Summarise this document", "Refactor that function", "Write me a quick shell script." The most useful thing was asking Q for AWS CLI commands in plain English.

Then Kiro happened. When it launched (and especially when Q became Kiro in November), I immediately upgraded to Kiro Plus. Perfect timing, too—my team got hit hard with parental leave, sick days, and vacation all at once. Suddenly, I had a chance to get my hands dirty again, yet I still can't get long, uninterrupted coding sessions.

Most EMs and Staff+ engineers I know are coding in the gaps between meetings. That's exactly why AI matters so much now. We can use it to amplify what we can do in those fragmented windows of time.
Turning hours into minutes.

What Changed: The Mindset Shift

These days, Kiro is my go-to for:

  • Code Reviews: Even in languages where I'm not the expert, I can provide meaningful guidance
  • Documentation: Making sense of legacy repos and actually documenting them properly
  • Problem Structuring: Taking vague requirements and turning them into something actionable
  • Architecture: Quickly exploring alternatives, surfacing trade-offs, challenging my own assumptions. And yeah, drawing diagrams (with Draw.io and Mermaid) that actually stay up to date
  • Decision Records: Writing RFCs and ADRs while the context is still in my head
  • Maintenance: Keeping docs and diagrams from going stale
  • Learning: Getting up to speed on unfamiliar (but not necessarily new) tech

Has it boosted my productivity? Absolutely. But I won't lie; the setup took time. Figuring out steering files, agents, hooks, MCP servers, powers... it was a lot. That's why I'm writing this: to save you the mental overhead I went through.

I won't bore you with the details of setting it up. It's really straightforward and well documented: Kiro How-To

But I want to briefly introduce Kiro features and how you can take advantage of them.

Quick Note on Cost and Pricing

Before we dive in, let's talk money. Kiro's pricing is refreshingly simple: monthly plans with credits. €20 gets you 1,000 credits, €40 gets you 2,000, and €200 gets you 10,000.

How fast do you burn through credits? That depends on which model you're using and how complex your interactions are. Bigger context means more credits. But what I appreciate is the full cost visibility. Every single interaction shows you exactly how many credits it used. No surprises at the end of the month.

If you do run out of credits mid-month, you can pay as you go at $0.04/credit. And from the org side, you can:

  • Toggle overage on/off to control costs
  • Enable detailed usage metrics (saved to S3) to see where credits are going

(Image: cost overages)

The cost isn't entirely predictable, but it's transparent. After a few weeks of use, I found the middle tier works well for our workflow. The time I save on documentation, code reviews, and understanding unfamiliar codebases more than justifies the cost. Your mileage may vary depending on how you work, but the visibility makes it easy to adjust.

Prompts: Reusable Instructions

Prompt: an input or instruction you give to the AI. It tells the AI what to do, what context to consider, and how to respond. Think of it as a mix between a command, a question, and a set of guidelines.

In our daily work, we have recurring tasks. Not the simple "refactor this" or "make this more professional" requests (though even these could benefit from better prompting!). I'm talking about complex, multi-step instructions that you'd otherwise repeat every time. That's where prompt files come in.

What Are Prompt Files?

Prompt files are markdown documents stored in your project that contain reusable instructions for the AI. They're written as behavior guides, not templates for users to fill in.

Where They Live

  • Source: .kiro/prompts/
  • Usage: Reference with @your-prompt in CLI, or mention naturally in the IDE

How to Structure Them

Write prompts TO the AI, not FOR the user. Include context, approach, and expected behavior. Here's a basic structure:

# Prompt Name

You are helping the user with [specific task].

## Your Approach

1. First, do this
2. Then, do that
3. Finally, conclude with this

## Communication Style

- Be clear and concise
- Ask clarifying questions
- Provide examples

## Example Interaction

User: "Help me debug this"
You: "Let me analyze the error. First, I'll check..."

Prompts We Actually Use

Here are a few we've built that save us time:

  • AI Review: Before opening a PR, we ask AI to review all committed changes. Catches obvious issues before bothering teammates.
  • PR Generation: The agent analyzes committed changes and creates a detailed summary for the PR description. Gives reviewers context upfront.
  • Project Review: When jumping into an unfamiliar repository, we ask Kiro for a project review. It summarizes structure, dependencies, main components, risks, and opportunities. Helps you decide where to focus first.
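
As a concrete sketch, here's roughly what the AI Review prompt could look like, following the structure above (the details are illustrative, not our exact file):

```
# AI Review

You are helping the user review committed changes before they open a PR.

## Your Approach

1. Collect the committed changes (e.g. the diff against the base branch)
2. Look for obvious issues: bugs, leftover debug code, missing tests, unclear naming
3. Summarise findings by severity, with file references

## Communication Style

- Be direct but constructive
- Only flag issues worth a teammate's attention
```

Notice it's written to the AI, not as a template for the user to fill in.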

Steering Files: Teaching Consistent Behaviour

If prompts let you manually instruct the AI for specific tasks, steering files teach it to behave consistently without repeating yourself every time.

Steering files are markdown documents that live next to your code and provide context, conventions, and guidelines to AI assistants. Think of them as persistent documentation that automatically shapes how Kiro behaves across all your sessions. Instead of telling the AI your team's conventions every time, you tell it once.

Other tools use similar concepts with different names (like .rules files). I must say I prefer the concept of "steering" because these files guide and influence behavior rather than enforce rigid constraints. They let the AI move fast within the boundaries you've defined.

What Goes in Steering Files?

Kiro can generate foundational steering files (product overview, tech stack, project structure) just by reviewing your project. After that, you can add custom steering for specific areas:

  • Security guidelines
  • Testing strategies
  • Deployment procedures
  • Code review standards
  • Naming conventions
  • Architecture principles
  • Team communication protocols

(Image: creating steering files)

The Human Benefit

I've read a lot of negative comments about the verbosity of AI and the sprawl of AI-context markdown files, but personally I believe steering files are just as useful for humans as they are for AI. They become:

  • Living documentation that stays close to the code
  • Onboarding guides for new team members
  • Quick reference for project conventions
  • Single source of truth for standards

The more I use them, the less I need those manually written READMEs with tables of contents and multiple subpages that go stale after a few weeks. Now I keep a main README for humans, the AGENTS.md file for AI Agents and a couple of other entry points for humans consuming or contributing to the repository. But they mostly point to the steering files.
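
To give you an idea, my AGENTS.md is mostly a thin index pointing elsewhere (the paths here are illustrative, not my exact layout):

```
# AGENTS.md

Start with these, then follow the pointers:

- Product overview: .kiro/steering/product.md
- Tech stack and conventions: .kiro/steering/tech.md
- Code quality standards: .agents/context/code-quality.md
- IaC baseline: .agents/context/iac-baseline.md
```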

Kiro's Smart Loading Strategy

The "always on" nature of context files could be a problem. Load too much, and you'll burn through credits. This is where Kiro stands out: it supports three loading strategies to keep context focused.

Always: Loaded in every conversation

---
inclusion: always
---

Use this for core project context that's always relevant.

File-Based: Loaded when specific files are open

---
inclusion: fileMatch
# a single glob:
fileMatchPattern: "*.tf"
# or an array of globs:
# fileMatchPattern: ["**/*.ts", "**/*.tsx", "**/tsconfig.*.json"]
---

Use this for domain-specific guidance. When you open a Terraform file, the Terraform guide loads automatically.

Manual: Loaded only when referenced

---
inclusion: manual
---

Use this for detailed guides you only need occasionally.
This on-demand loading keeps context focused and prevents overwhelming the AI with irrelevant information.
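
Putting frontmatter and content together, a file-based steering file for Terraform could look like this (the guidelines themselves are placeholders; yours will reflect your team's conventions):

```
---
inclusion: fileMatch
fileMatchPattern: "*.tf"
---

# Terraform Guidelines

- Pin provider versions in required_providers
- One module per logical component
- Run terraform fmt and terraform validate before committing
```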

Content Organisation

You can write content directly in the steering file or reference another file in your repo. I tend to put shared context under .agents/context/ so different AI tools can reference it. Each developer can then create pointers in their tool's settings (.cursorrules, .kiro/steering/, etc.). More on this approach in a later post.
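
For those pointer files, Kiro's steering syntax supports referencing live workspace files, so the steering file itself stays tiny. Assuming the shared-context layout above:

```
---
inclusion: manual
---

# IaC Baseline

#[[file:.agents/context/iac-baseline.md]]
```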

You can read more about steering files and their conventions in the Kiro documentation.

MCP Servers: Extending Capabilities

I covered what MCP servers are in more detail in a previous post. The short version: they're a great way to extend your AI tool's capabilities, and they're easy to integrate into your workflow.

Why They Matter

Of course, I can browse AWS docs in one tab, check GitHub in another, switch to our observability platform to look at the latest incident, then jump to Jira to update a ticket. But every tab/tool switch is a tiny context switch. And for me, that often leads to distractions: I open Slack to check something, then see another urgent message, reply to that, and suddenly I'm asking myself: wait, why did I leave the IDE? (Similarly to the Doorway Effect)

(Image: doorway effect)

If I can do all of that from the same window where I'm already working, I can stay focused and move faster (and there's less strain on fingers and wrists from reaching for the mouse or tab-switching shortcuts, too).

What I'm Using

I keep several MCP servers installed, but I make a point of enabling them only on demand. Here's what I use most:

  • GitHub
  • New Relic
  • Atlassian
  • AWS API
  • AWS Diagram
  • AWS Terraform
  • AWS EKS
  • AWS Pricing
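
For reference, MCP servers are configured in .kiro/settings/mcp.json (per workspace) or ~/.kiro/settings/mcp.json (global). Here's a sketch for the AWS documentation server; treat the exact package name as an assumption and check the server's own docs:

```
{
  "mcpServers": {
    "aws-docs": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "disabled": false,
      "autoApprove": []
    }
  }
}
```

Setting "disabled": true keeps a server installed but out of your context until you need it.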

Real Usage Examples

Instead of leaving the IDE to search AWS documentation:

You: "How do I configure Lambda function URLs?"
Kiro: *searches AWS docs*
      *provides relevant documentation*
      *suggests implementation*

Also, I recently discovered I can manage tasks generated by spec-driven development with Kiro using the Backlog.md MCP:

You: "Create a task for implementing XYZ"
Kiro: *creates task in Backlog.md*
      *links to relevant spec*
      *sets priority and acceptance criteria*

A Word of Caution

Be careful with how many MCP servers you have enabled. Too many can clutter your context and slow things down. The on-demand activation helps, but it's still worth being selective about what you install.

I'll cover Backlog.md and spec-driven development in another post. For now, let's look at how agents and powers build on these foundations.

Custom Agents: Specialised Configurations

(Image: custom agents)

Agents combine prompts, steering files, and MCP servers into specialised configurations for specific tasks. Think of them as roles you can switch between, each with its own context, tools, and behaviour.

How They Work

Agents are defined as JSON files stored in .kiro/agents/ (either globally or per project). You can view available agents with /agent list or switch between them with /agent swap (Kiro CLI only).

The magic is in the packaging. Instead of manually loading the right prompts, enabling the right MCP servers, and remembering which tools to use, you reference an agent and get everything configured automatically.

A Real Example

Here's our PlatformSupportAgent (redacted for brevity) that helps colleagues get answers about our platform offering:

{
  "$schema": "https://raw.githubusercontent.com/aws/amazon-q-developer-cli/refs/heads/main/schemas/agent-v1.json",
  "name": "cloudplatform-support",
  "description": "Answers questions about cloud platform using documentation, Terraform, GitHub, and AWS resources",
  "prompt": "quite descriptive prompt",
  "mcpServers": {
    "aws-documentation": {},
    "terraform": {},
    "github": {}
  },
  "tools": [
    "fsRead",
    "listDirectory",
    "fileSearch",
    "grepSearch",
    "@aws-documentation",
    "@terraform",
    "@github"
  ],
  "allowedTools": ["fsRead", "listDirectory", "fileSearch", "grepSearch"],
  "toolsSettings": {},
  "resources": [
    "file://.agents/context/iac-baseline.md",
    "file://.agents/context/code-quality.md",
    "file://README.md",
    "file://AGENTS.md",
    "file://docs/**/*.md"
  ]
}

This agent comes pre-configured with GitHub, AWS docs, and Terraform tools, has baseline standards always loaded, and maintains the support persona throughout the conversation.

Agents vs. Prompts

You might wonder: why not just use prompts? Here's the difference:

Prompts (.kiro/prompts/*.md):

  • Reference with @your-prompt
  • Injects instructions into the current conversation
  • Use when you want to add specific methodology to your current agent
  • More flexible, can combine with other prompts

Agents (.kiro/agents/*.json):

  • Activate with /agent swap your-agent
  • Switches to a completely different persona with its own tools and context
  • Use when the task requires specific resources, tools, and persistent context
  • More powerful for complex, recurring workflows

For the same task, I often create both. The prompt gives me flexibility to use it with any agent. The agent gives me a complete, pre-configured environment when I need the full setup.
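
In practice, the difference is just how you invoke them in a session. The prompt name here is illustrative; the agent is the one defined above:

```
@ai-review                          # injects the prompt into the current conversation
/agent swap cloudplatform-support   # switches to the pre-configured persona and tools
```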

The Maintenance Challenge

What I don't love about agents: the prompt field in the JSON can get hard to read and maintain. Because of this (and to improve reusability across AI tools), I often publish agents in both JSON and Markdown formats. The Markdown version is simply a human-readable copy of the same content, and it can be fed to other AI tools to achieve similar results (although without the MCP servers and tools).

I automate this with Kiro Hooks, which we'll cover next.

Hooks: Automation

Hooks execute commands or trigger the agent whenever specific events occur in your IDE. Think of them as automated workflows that respond to file changes, saves, or other IDE events.

Why Use Hooks?

I used to finish writing code and then remember I needed to update the documentation. Or I'd move files around and forget to update links. Hooks remove that mental overhead by automating the repetitive parts.

Available Triggers

Hooks can respond to several events:

  • fileEdited: When you save a file
  • fileCreated: When you create a new file
  • fileDeleted: When you delete a file
  • agentStop: When an agent execution completes
  • Plus more in Kiro CLI

What Hooks Can Do

When triggered, hooks can either:

  • Invoke the agent with a specific prompt
  • Run a shell command directly

A Simple Example

{
  "name": "Lint on Save",
  "when": { "type": "fileEdited", "patterns": ["*.ts"] },
  "then": { "type": "askAgent", "prompt": "Run linter and fix errors" }
}

My Real-World Hook

Here's the hook I mentioned above, which I use to keep agent definitions readable. Whenever I create or edit an agent JSON file, this hook automatically generates a readable Markdown version:

{
  "name": "Auto-convert Agent JSON to Markdown",
  "when": {
    "type": "fileEdited",
    "patterns": [
      "**/agents/**/*.json"
    ]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Convert the edited agent JSON file to a markdown format. Create a corresponding .md file with the same name in the same directory. Include all agent properties in a readable format with sections for name, description, instructions, and any other relevant fields. Explain what tools and resources the agent has available"
  }
}

This keeps my agent documentation up to date without any manual work.

(Image: creating a hook)

Why I'm Cautious with Automatic Triggers

I haven't gone all-in on automatic hooks yet. In these early iterations, I'm rarely happy with the first result. I don't want hooks updating all my documentation just because I created a new piece of code that I'll definitely iterate on.

But hooks are powerful for truly repetitive tasks. If you find yourself doing the same thing after every file save or commit, that's a good candidate for a hook. The Kiro documentation has more examples, including one for maintaining test coverage.

Kiro Powers: Composable Functionality

If Agents are a great way of preconfiguring prompts and MCP servers for specific scenarios, meet the next level: Kiro Powers (unfortunately, not available in the CLI yet). They package documentation, workflows, and MCP servers into reusable units. Like plugins that give your AI assistant specialised knowledge and capabilities.

Why They're Useful

Instead of seeing dozens of individual MCP tools cluttering your configuration, you see a list of installed Powers. You activate them on demand, and they automatically enable their embedded MCP servers. This keeps your context clean while providing full access when you need it.

The Activate-Then-Use Pattern

Powers use a structured discovery approach:

  1. Activate a Power to see what it offers
  2. Discover available tools, documentation, and guides
  3. Use specific capabilities as needed
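
In a session, that flow looks roughly like this (the Power name is ours; yours will differ):

```
You: "Activate the Platform Engineering Hub power"
Kiro: *activates the Power*
      *lists its tools, guides, and documentation*
You: "How do I add SQS to our project?"
Kiro: *uses the Power's embedded MCP servers and guides to answer*
```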

What's Inside a Power

Powers include:

  • Documentation explaining what the Power does
  • Steering files with step-by-step guides for common tasks
  • Best practices from the team that built it
  • Examples showing real usage patterns
  • MCP servers that provide the actual functionality

A Real Example

Without a Power:

"How do I find our customized SQS Terraform module?"
→ Search GitHub manually
→ Find repository
→ Read README
→ Copy module path
→ Check version tags

With our Platform Engineering Hub Power:

"I need to add SQS to our project, what should I do?"
→ AI uses GitHub MCP via the Power to search our private CustomTerraformModules repository.
→ Gets README, examples, and usage
→ Provides complete answer instantly

Building My Own
Building your own Power is very easy. And guess what: there's a Kiro Power to guide you through the process! Check it out here.

(Image: Power Builder)

So this is what I've just recently started playing around with: starting from the CloudPlatform Support Agent I mentioned above, I'm currently building a Platform Knowledge-Base Power for our team. The goal is to distribute guidelines and resources in a centralised place while providing MCP servers that know where to search in our Confluence and GitHub repositories. This way, engineers can ask the AI agent questions before reaching out to us on Slack.

Current Limitations

I've noticed that Powers can't embed agents, prompts, hooks, or scripts (for obvious security reasons). My original goal was to use Powers to distribute a baseline of AI tools, but unless I convert everything to Markdown and document it within the Power's steering files, users won't see it.

Kiro Powers give you a clean, composable system. Each Power stays focused on its domain, and you activate only what you need for the current task. I'm still experimenting with them and will likely have more to share in the coming weeks.

What About Skills?

Skills are a concept from Claude that have gained traction in the broader AI tooling community. Kiro doesn't have native skills support (that's what steering files are for), but you can still use them thanks to skills.sh.

Skills.sh makes it easy to browse, install, and update skills from a central repository. The installation process is straightforward, and skills get copied into your .kiro/skills/ folder (either globally or per project).

I've started experimenting with AWS Hero Anton Babenko's Terraform skill for infrastructure work, which is genuinely useful and well-crafted.
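
As a sketch of that flow (the exact command and skill slug are assumptions on my part; check skills.sh for the real invocation):

```
# browse skills, then install one into .kiro/skills/
npx skills add anton-babenko/terraform
```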

The Integration Challenge

Here's where I'm still figuring things out: skills can be quite large and often overlap with steering files. Unlike steering files, they don't have an inclusion method (always, fileMatch, or manual). This means they're always loaded, which can bloat your context.

I'm still working through how to integrate skills smoothly with my existing steering file setup. The content is valuable, but the lack of conditional loading makes it tricky to manage alongside everything else.

I'll likely write a follow-up post once I've spent more time with them and figured out a good workflow.

Wrapping Up

This post covered the core building blocks of an AI-powered workflow with Kiro:

  • Prompts for reusable instructions
  • Steering files for consistent behavior and context
  • MCP servers for extending capabilities
  • Agents for specialized configurations
  • Hooks for automation
  • Powers for composable functionality
  • Skills for community-driven guidance

Each piece serves a specific purpose, and together they create a system that amplifies what you can accomplish in those fragmented windows of time between meetings.

The setup takes effort. I won't pretend otherwise. But once you have these pieces in place, the productivity gains are real. Not because the AI writes perfect code (it doesn't), but because it helps you think faster, explore alternatives quicker, and stay focused when context switching would normally derail you.

What's Next

  • Team Collaboration: How to work effectively when everyone uses different AI tools (Kiro, Cursor, Claude, Copilot), and how skills.sh fits into that picture
  • AI-Native SDLC: What it actually looks like in practice, including spec-driven development and the Backlog.md workflow

If you're experimenting with AI in your development workflow, I'd love to hear what's working for you. What tools are you using? What problems are you trying to solve?
