Ibrahim Pima

I Stopped Fighting My AI: How Kiro's agent hooks and steering files fixed my biggest frustration with AI coding tools

I've been using AI coding assistants for about 8 months now. Cursor, GitHub Copilot, the usual suspects.

They're all impressive, but they share one fatal flaw that's been driving me insane:

They forget everything.

You explain your project architecture. The AI generates some code. Five minutes later, you're explaining the same architecture again. Every session starts from scratch. Every feature request requires re-explaining your context.

I found myself maintaining a text file of prompts to copy-paste at the start of each session:

  • "We use Zustand for state management, not Redux"
  • "All API calls go through our custom fetcher with retry logic"
  • "Components follow the compound component pattern"
  • "Test files go in __tests__ not next to the component"

I was spending 10% of my coding time just explaining my project to the AI. Again and again and again.

Then I tried Kiro, and it fundamentally changed how I think about AI-assisted development.

The Context Problem

Let me show you what I mean with a real example.

Last month I was building a dashboard for monitoring crypto wallets. Standard stuff: React frontend, Node backend, PostgreSQL database.

With my previous AI tool, here's how a typical session would go:

Me: "Add a new endpoint for fetching transaction history"

AI: *generates endpoint with inline SQL queries*

Me: "No, we use Prisma for database access"

AI: *rewrites with Prisma*

Me: "And all endpoints need rate limiting"

AI: *adds rate limiting*

Me: "And proper error handling with our custom error classes"

AI: *adds error handling*

Four back-and-forth messages just to get code that follows my project's patterns. Then 20 minutes later:

Me: "Add an endpoint for fetching wallet balances"

AI: *generates endpoint with inline SQL queries again*

We're back to square one.

Enter Steering Files

Kiro solves this with something called steering files - persistent markdown documents that live in your project at .kiro/steering/.

Here's what I put in my steering file for that dashboard project:

```
# Database Access

All database operations use Prisma. Never write raw SQL queries.

Example:

    const transactions = await prisma.transaction.findMany({
      where: { walletId },
      orderBy: { timestamp: 'desc' }
    });

# API Patterns

All endpoints must include:
- Rate limiting via express-rate-limit
- Error handling with AppError class
- Request validation with Zod schemas
- Proper HTTP status codes

# Testing Standards

- Test files in `__tests__/` directory
- Use Vitest, not Jest
- Mock Prisma with prisma-mock
- Minimum 80% coverage
```

You write this once. Kiro reads it. Forever.

Now when I ask for a new endpoint, Kiro generates code that already follows these patterns. No more re-explaining. No more "actually we do it this way."
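
To make that concrete, here's roughly the shape of an endpoint that satisfies those rules. This is my own illustrative sketch rather than Kiro's literal output; `AppError`, the shared Prisma client, and the import paths are placeholders for whatever your project actually uses:

```typescript
// Illustrative only: AppError, the shared prisma client, and the import paths
// are assumptions based on the steering file, not Kiro's literal output.
import { Router } from 'express';
import rateLimit from 'express-rate-limit';
import { z } from 'zod';
import { prisma } from '../lib/prisma';   // assumed shared Prisma client
import { AppError } from '../lib/errors'; // assumed custom error class

const router = Router();

// Zod schema validates and coerces the query string
const querySchema = z.object({
  walletId: z.string().min(1),
  limit: z.coerce.number().int().positive().max(100).default(50),
});

router.get(
  '/transactions',
  rateLimit({ windowMs: 60_000, max: 30 }), // per-route rate limiting
  async (req, res, next) => {
    try {
      const { walletId, limit } = querySchema.parse(req.query);
      const transactions = await prisma.transaction.findMany({
        where: { walletId },
        orderBy: { timestamp: 'desc' },
        take: limit,
      });
      res.status(200).json({ transactions });
    } catch (err) {
      // Wrap validation failures in the project's custom error class
      next(err instanceof z.ZodError ? new AppError('Invalid query', 400) : err);
    }
  }
);

export default router;
```

That whole shape (Prisma, rate limiting, Zod, the custom error class) comes from the steering file, not from me re-typing it in the prompt.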

The AI finally maintains context across sessions instead of holding it only for a single conversation.

Agent Hooks: The Game Changer

But steering files were just the beginning. The feature that actually changed my workflow was agent hooks.

Agent hooks are event-driven automations written in natural language. They're like GitHub Actions but for your local development environment, powered by AI.

Here's a hook I set up in about 30 seconds:

```
trigger: onSave
pattern: "**/*.tsx"
instructions: |
  When a React component is saved:
  1. Check if a corresponding test file exists in __tests__
  2. If not, create one with basic render tests
  3. If it exists, update it to cover any new props or functions
  4. Run the test to verify it passes
```

That's it. Plain English. No complex scripting.

Now every time I save a React component, Kiro automatically:

  • Creates or updates the test file
  • Adds tests for new props/functions
  • Runs the tests
  • Shows me the results

I haven't manually created a test file in a week. The hook just handles it.
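
For a typical component, the generated file looks something like this. It's a sketch only: `WalletCard` is a made-up component, and the actual tests Kiro writes depend on the props it finds. The steering file is what points it at Vitest and `__tests__/`.

```typescript
// __tests__/WalletCard.test.tsx -- roughly the kind of file the hook creates.
// WalletCard and its props are hypothetical placeholders for this example.
import { describe, it, expect } from 'vitest';
import { render, screen } from '@testing-library/react';
import { WalletCard } from '../WalletCard';

describe('WalletCard', () => {
  it('renders and shows the wallet address', () => {
    render(<WalletCard address="0xabc123" balance={1.5} />);
    // getByText throws if the text is missing, so this doubles as the assertion
    expect(screen.getByText(/0xabc123/)).toBeTruthy();
  });
});
```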

Real-World Hook Examples

Here are some other hooks I've set up:

Auto-update API documentation:

```
trigger: onSave
pattern: "src/api/**/*.ts"
instructions: |
  When an API route file changes:
  1. Extract endpoint, method, parameters, response types
  2. Update the corresponding section in docs/API.md
  3. Ensure the example requests are current
```

Security check before commit:

```
trigger: onManual
instructions: |
  Before committing, scan for:
  - Hardcoded API keys or secrets
  - Console.log statements in production code
  - TODO comments with no corresponding issue
  - Files larger than 500 lines that should be split
```

Keep dependencies in sync:

```
trigger: onSave
pattern: "package.json"
instructions: |
  When package.json changes:
  1. Check if any new dependencies need configuration
  2. Update the development setup docs
  3. Verify no conflicting versions
```

Each hook took less than a minute to set up. Together they've probably saved me 10+ hours over the past two weeks.

Specs: Planning Before Coding

The third piece of Kiro's workflow is specs - AI-generated specification documents that break down features before you code them.

Instead of going straight from idea to code, Kiro helps you:

  1. Define requirements in EARS notation (more on this later)
  2. Generate a design document with data flows and interfaces
  3. Break the feature into sequenced tasks
  4. Link each task back to requirements

Here's what this looked like for my transaction history feature:

I started with: "Add transaction history with filtering and export"

Kiro generated a spec that included:

  • User requirements ("User can filter by date range, amount, type")
  • System design (data flow diagrams, API contracts)
  • Task breakdown (8 sequential tasks with dependencies)
  • Test criteria for each task
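
On disk, that spec is just a few markdown files. In my project they live under `.kiro/specs/`, roughly like this (exact file names may vary with your Kiro version):

```
.kiro/specs/transaction-history/
├── requirements.md   # EARS-style user requirements
├── design.md         # data flow, interfaces, API contracts
└── tasks.md          # sequenced tasks linked back to requirements
```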

The spec caught issues I would've missed:

  • Need to handle pagination for large transaction lists
  • Export needs to work with filters applied
  • Date ranges need timezone handling
  • What happens with failed transactions?

Without the spec, I would've coded for 3 hours and then realized "oh crap, I forgot about timezones."

With the spec, these edge cases were handled upfront.

What EARS Notation Actually Is

Quick sidebar: EARS (Easy Approach to Requirements Syntax) is a format for writing clear, unambiguous requirements.

Instead of:

"The system should allow users to export transactions"

EARS format:

"When the user clicks export, the system shall generate a CSV file containing all displayed transactions"

It's more explicit about triggers, conditions, and actions. Makes it way harder for the AI (or human developers) to misinterpret what you want.

Kiro generates these automatically from your natural language description, then lets you refine them before coding starts.

The Development Flow

Here's how my workflow changed:

Before Kiro:

  1. Think of feature
  2. Start coding
  3. Realize I need to explain context to AI
  4. Copy-paste my project patterns
  5. Generate code
  6. Fix all the things that don't match my patterns
  7. Manually write tests
  8. Manually update docs
  9. Commit

With Kiro:

  1. Describe feature to Kiro
  2. Review/refine the generated spec
  3. Approve spec
  4. Kiro implements following the spec
  5. Hooks automatically update tests and docs
  6. Review and commit

Steps 3-4 and 6-8 from the old flow are just... gone. Handled by steering + hooks.

Real Impact: A Case Study

Last week I needed to add a notification system to the dashboard. Users wanted alerts for:

  • Large transactions
  • Failed transactions
  • Low wallet balances
  • Unusual activity

Traditional AI assistant approach:

  • Would've taken me ~6 hours
  • Would've required constant hand-holding
  • Would've missed edge cases
  • Tests written last (if at all)
  • Documentation updated manually (if I remembered)

With Kiro:

  • Created a spec: 15 minutes
  • Refined requirements: 10 minutes
  • Kiro implemented: 2 hours
  • Hooks kept tests/docs in sync automatically
  • Total: ~2.5 hours

But more importantly: the code was consistent with my existing patterns from the start. No "actually we use our custom notification service" corrections. No forgetting to add rate limiting. No missing tests.

The steering file had all the context. The hooks enforced the patterns. The spec caught the edge cases.

Setting Up Your First Hook

If you want to try this, here's the simplest possible hook to start with:

  1. Open Kiro (it's built on VS Code, so feels familiar)
  2. Click the Kiro ghost icon in the sidebar
  3. Go to "Agent Hooks"
  4. Click the + button
  5. Type in natural language what you want:
```
When I save a file, check for console.log statements
and comment them out with a note that they should
be removed before production
```

That's it. Kiro converts your natural language to a hook configuration and starts running it.
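
Behind the scenes, that sentence becomes a hook config like the ones earlier in this post. Something roughly like this (illustrative; the exact pattern and wording Kiro produces will differ):

```
trigger: onSave
pattern: "**/*.{ts,tsx}"
instructions: |
  When a file is saved:
  1. Find any console.log statements
  2. Comment them out
  3. Add a note that they must be removed before production
```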

Try it with something simple first:

  • "When I save, run prettier"
  • "When I create a .tsx file, add my standard imports"
  • "When I modify package.json, update the README dependencies section"

Once you see how easy it is, you'll start thinking of hooks for everything.

The MCP Integration

One more thing: Kiro has native Model Context Protocol (MCP) support.

This means you can connect external tools and data sources:

  • Your company's internal documentation
  • Your database schema
  • Your API specs from Postman/Swagger
  • Web search for looking up library docs
  • Slack for checking team decisions

I connected our company's Notion wiki as an MCP server. Now when Kiro generates code, it references our actual internal patterns and decisions, not just generic best practices.
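
Registering a server is just a small JSON config entry. Here's a rough sketch rather than the exact schema: the file location and fields are whatever Kiro's MCP settings expect, and the Notion server package and token below are placeholders:

```json
{
  "mcpServers": {
    "notion-wiki": {
      "command": "npx",
      "args": ["-y", "your-notion-mcp-server"],
      "env": { "NOTION_API_KEY": "<your token>" }
    }
  }
}
```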

What About Terminal Issues?

Full transparency: Kiro's terminal integration isn't perfect yet. Sometimes commands don't execute reliably, which can be frustrating if you're trying to run tests or start servers directly.

But honestly? The hooks + steering + specs workflow is so much better than what I was using before that the terminal issues are a minor annoyance, not a dealbreaker.

Plus, Kiro is still early (backed by AWS, in active development). These rough edges are getting smoothed out.

Should You Try It?

Here's my honest take:

Try Kiro if:

  • You're tired of re-explaining your project to your AI
  • You have patterns/standards you want enforced automatically
  • You work on projects that need consistent structure
  • You're building something more complex than a prototype

Stick with Cursor/Copilot if:

  • You just need quick code generation
  • You mostly do one-off scripts or small projects
  • You don't care about maintaining consistency
  • You need rock-solid stability right now

For me, the context persistence + automation was worth switching. I'm spending way less time fighting with my AI assistant and way more time actually building.

Try It Yourself

Kiro is free while in early access. Download it at kiro.dev.

Start with:

  1. Create one steering file with your project's patterns
  2. Set up one simple hook (like the console.log checker above)
  3. Try creating a spec for your next feature

See if it clicks for you like it did for me.


Have you tried Kiro? What's your biggest frustration with AI coding assistants? Drop a comment below.

