Every developer knows the sinking feeling. A “simple” change breaks production. You realize there were no tests catching the edge case. Poor test coverage means longer debugging sessions, more production incidents, and that nagging anxiety every time you deploy. But here’s the catch-22: going back to add tests to existing code feels like an overwhelming task. Where do you even start?
That’s where Coding AI changes the game. You don’t need to spend days writing tests for legacy code: Coding AI tools like Kiro can analyze your existing codebase, understand the logic, and generate comprehensive unit tests in minutes. You get the safety net of high test coverage without the manual grind, freeing you to focus on building new features instead of playing archaeological detective with old code.
In this blog post you’ll learn how to leverage Kiro, steering files, and subagents to dramatically improve the test coverage of existing projects, turning that technical debt into a competitive advantage.
The Demo Application: Bob’s Used Bookstore
For this blog post, I’m using Bob’s Used Bookstore—an open-source .NET sample application from AWS that demonstrates real-world eCommerce functionality. Originally built as a monolithic ASP.NET Core MVC application, Bob’s Used Bookstore is a fictional second-hand book marketplace with both customer and admin portals.
What makes Bob’s Used Bookstore ideal for demonstrating unit test coverage improvements is its realistic complexity without overwhelming scope. It includes actual business logic, like order processing, inventory management, and shopping carts. It also features service integrations and a well-structured codebase. The repository is available on GitHub.
The Complete Workflow
There are plenty of ways to improve test coverage with AI, and your mileage may vary depending on your codebase, team conventions, and testing philosophy. Here’s the workflow I used to go from minimal coverage to 96 comprehensive unit tests:
- Create steering files – Put together steering files that capture our testing standards
- Let Kiro help refine them – Use Kiro’s chat to polish both files based on what’s worked for us in the past
- Set them to auto-include – Add YAML headers with inclusion: auto so these rules kick in automatically whenever we’re creating or tweaking tests
- Build the test-gap-analyzer subagent – Create a custom agent that systematically scans the codebase to find all the missing unit tests and organizes them into units of work
- Build the unit-test-writer subagent – Create another agent that takes those gaps and generates comprehensive test files following our conventions
- Create an orchestration hook – Set up a manually triggered hook that runs the gap analyzer first, then spins up multiple unit-test-writer agents in parallel to knock out all the missing tests
- Run the workflow – Hit play on the hook and watch Kiro analyze the codebase, partition the work, and generate 96 new unit tests in just a few minutes
Teaching Kiro Unit Testing Best Practices
One of Kiro’s most powerful features is steering: the ability to guide its code generation with custom instructions that encode your team’s best practices. By using steering to “teach” Kiro your unit testing standards upfront, you save time, because there’s no need to manually review and correct every AI-generated test. For example, you can specify naming conventions like MethodName_Scenario_ExpectedBehavior, enforce patterns like Arrange-Act-Assert, require specific assertion libraries, or mandate edge case coverage.
So the first thing to do is to teach Kiro what an ideal unit test should look like, using steering files. I created two files, unit-tests.naming-conventions.md and unit-tests.xunit-assertion-rules.md, and used Kiro’s agentic chat to edit both files based on my preferences and experience.
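For illustration, here is a minimal sketch of what the body of unit-tests.naming-conventions.md could look like. The specific rules below are my own assumptions, not the exact content of the files used in this post; yours should encode your team’s standards:

```markdown
# Unit Test Naming and Structure

- Name every test method `MethodName_Scenario_ExpectedBehavior`,
  e.g. `CalculateTotal_EmptyCart_ReturnsZero`.
- Structure every test as Arrange-Act-Assert, separating the three
  sections with blank lines.
- One logical behavior per test method; add a separate test for each
  additional scenario or edge case.
- Cover edge cases explicitly: null inputs, empty collections, and
  boundary values.
```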
Steering files support several inclusion modes, which define when the steering file is used as part of the chat context. Both of our files are only relevant when writing unit tests, so add the following header to each of them. This ensures your unit testing standards are consistently enforced whenever you create or change tests, without manual activation.
```yaml
---
inclusion: auto
name: unit tests assertion rules
description: assertion rules for unit tests. Use when creating or modifying unit tests.
---
```
Once the unit testing rules, best practices, and preferences are defined in steering files, you can start working on unit tests.
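To make the target concrete, here is a hand-written sketch of the style these steering rules aim for. The OrderService, IBookRepository, and CartItem types are hypothetical stand-ins, not classes from Bob’s Used Bookstore, and the test uses xUnit with Moq for mocking:

```csharp
using System.Collections.Generic;
using System.Linq;
using Moq;
using Xunit;

// Hypothetical production types, stand-ins for real bookstore classes
public interface IBookRepository { decimal GetPrice(int bookId); }
public record CartItem(int BookId, int Quantity);

public class OrderService
{
    private readonly IBookRepository _books;
    public OrderService(IBookRepository books) => _books = books;

    public decimal CalculateTotal(IEnumerable<CartItem> items) =>
        items.Sum(i => _books.GetPrice(i.BookId) * i.Quantity);
}

public class OrderServiceTests
{
    // MethodName_Scenario_ExpectedBehavior, per the naming steering file
    [Fact]
    public void CalculateTotal_EmptyCart_ReturnsZero()
    {
        // Arrange: mock the external dependency so this stays a unit test
        var repository = new Mock<IBookRepository>();
        var service = new OrderService(repository.Object);

        // Act
        var total = service.CalculateTotal(new List<CartItem>());

        // Assert
        Assert.Equal(0m, total);
    }
}
```

With rules like these auto-included, every test Kiro generates should already match this shape, so review becomes a quick sanity check rather than a rewrite.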
Creating the Test Gap Analyzer Subagent
Instead of writing a simple prompt, I decided to use an agent. Kiro’s subagents transform code analysis by running specialized tasks in parallel, each with its own dedicated context window. Instead of overwhelming a single conversation with your entire codebase, you can analyze test coverage gaps, evaluate dependencies, and assess code quality, with each task handled by a subagent focused on a specific mission. This parallel architecture means faster insights and more precise recommendations: each analysis maintains a clean, isolated context, preventing the pollution that degrades results when multiple concerns are mixed in one thread.
You can define your own custom agent by creating a markdown (.md) file in ~/.kiro/agents (global) or <workspace>/.kiro/agents (workspace scope). Enter the prompt for the custom agent in the body of the markdown file, and define extra attributes as YAML front matter. After I had the first agent in place, I asked Kiro to review and improve it, which produced the following agent definition:
```markdown
---
name: test-gap-analyzer
description: Analyzes the codebase to identify missing unit tests by examining business logic classes and methods, mapping external dependencies, and producing a structured report of test gaps organized by units of work.
tools: ["read"]
---

You are a test gap analyzer. Your job is to analyze a codebase and identify missing unit tests.

Use the workspace steering files (in `.kiro/steering/`) to understand the project structure, tech stack, testing frameworks, and conventions before starting analysis. Do NOT assume any specific project layout — discover it from steering files and by exploring the codebase.

## Analysis Process

Follow these steps strictly:

### Step 1: Understand the Project
- Read all steering files in `.kiro/steering/` to learn the project structure, dependency flow, tech stack, test frameworks, and conventions.
- Identify the source directories, test directories, and how the project is organized.

### Step 2: Discover All Business Logic
Scan the solution to identify:
- All classes and methods that contain business logic (primary candidates for unit tests)
- All logic that interacts with external dependencies such as databases, HTTP clients, file systems, or message queues (candidates for integration tests)

### Step 3: Discover Existing Tests
Scan all test projects to catalog what is already tested. Map each existing test to the class/method it covers.

### Step 4: Identify Test Gaps
Compare Step 2 and Step 3 to find untested business logic. Focus on:
- Service classes with business rules
- Entity methods and computed properties
- Validation logic
- Controller/handler action methods with business logic
- Helper/utility methods with logic

### Step 5: Organize into Units of Work
Group the identified gaps into discrete units of work. Each unit of work should represent a logical grouping of related functionality. For each unit of work, determine:
1. **Target class and methods** — what specifically needs tests
2. **Test type** — unit test or integration test
3. **Dependencies to mock** — which interfaces/services need to be faked
4. **Test project** — which test project the tests belong in
5. **Priority** — High (core business logic, calculations, state changes), Medium (validation, filtering, mapping), Low (simple getters, pass-through methods)

## Output Format
Produce a structured report with:
1. **Summary** — total classes analyzed, total methods analyzed, existing test count, gap count
2. **Existing Test Coverage** — list of what's already tested
3. **Units of Work** — each unit formatted as:

### Unit of Work: [Name]
- **Target** : [Class.Method or Class (multiple methods)]
- **Test Type** : Unit Test | Integration Test
- **Test Project** : [project name]
- **Priority** : High | Medium | Low
- **Dependencies to Mock** : [list of interfaces]
- **What to Test** : [bullet list of specific behaviors/scenarios to verify]
- **Notes** : [any relevant context]

## Important Rules
- Tests must target business logic BEHAVIOR, not implementation details. Focus on what the code does, not how it does it internally.
- Do NOT suggest tests for trivial property getters/setters with no logic.
- Do NOT suggest tests for auto-generated code or migrations.
- Do NOT suggest integration tests for simple CRUD methods that just delegate to a data framework.
- DO suggest tests for any method that contains conditional logic, calculations, state transitions, or validation.
- DO suggest tests for data access methods that contain custom query logic beyond simple CRUD.
- When identifying dependencies to mock, list the interface name, not the concrete implementation.
```
This subagent definition creates a test gap analyzer that systematically identifies missing unit tests in a codebase. Here’s the breakdown by high-level structure:
Lines 1-4 – Header: agent definition, name, description, read-only tool access
Lines 6-8 – Purpose: audits test coverage by identifying missing unit tests
Lines 10-41 – Process: five steps to understand project structure, discover business logic, catalog existing tests, find gaps, and organize into prioritized units of work
Lines 43-60 – Output: structured report with summary, coverage list, and detailed units of work (target, type, project, priority, dependencies, test scenarios)
Lines 62-70 – Rules: focus on behavior over implementation, skip trivial code, emphasize logic with conditionals/calculations/validation, mock interfaces not implementations
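To make the report format concrete, a single reported unit of work might look like the following. The class and interface names here are illustrative, not actual analyzer output from the bookstore codebase:

```markdown
### Unit of Work: Order Total Calculation
- **Target** : OrderService.CalculateTotal
- **Test Type** : Unit Test
- **Test Project** : BobsBookstore.Tests
- **Priority** : High
- **Dependencies to Mock** : IBookRepository
- **What to Test** :
  - Empty cart returns zero
  - Line totals multiply unit price by quantity
- **Notes** : Pure calculation; mock the repository's price lookup
```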
To run the agent, start a new “Vibe” session and use /test-gap-analyzer from Kiro’s chat:
Creating Subagent to Add Missing Unit Tests
Now that you have the test gap analysis, it’s time to add the missing unit tests. I created another subagent to help add the missing tests and saved its definition in unit-test-writer.md (because naming is hard).
The subagent creates a unit test writer that systematically generates comprehensive test coverage based on gap analysis, while respecting existing project conventions and minimizing invasive changes to production code:
```markdown
---
name: unit-test-writer
description: Creates unit test files based on test gap analysis output. Reads steering files for project conventions, naming standards, and assertion rules before writing tests.
tools: ["read", "write", "shell"]
---

You are a unit test writer. Your job is to create unit test files based on units of work provided to you, typically from a test gap analysis.

## Process

### Step 1: Read Steering Files
- Read all steering files in `.kiro/steering/` to learn the project structure, tech stack, test frameworks, naming conventions, and assertion rules.
- Follow all conventions defined in steering files strictly. Do NOT redefine or override them.

### Step 2: Examine Existing Tests
- Look at existing test files to understand the established patterns: imports, class structure, test method style, how mocks are set up, how test data is created.
- Reuse existing builders, helpers, and shared infrastructure where available.

### Step 3: Read the Source Code
- For each unit of work, read the target class and its dependencies to fully understand the behavior being tested.
- Identify all code paths, edge cases, and boundary conditions.

### Step 4: Write the Tests
- Create test files in the appropriate test project following the discovered conventions.
- Each test file should cover one unit of work (one class or closely related group of methods).
- Write tests that verify behavior, not implementation details.

### Step 5: Validate Per Project
- After finishing all test files for a specific test project, run `dotnet test` on that project to verify tests compile and pass.
- Fix any failures before moving on to the next project.

### Step 6: Final Validation
- After all test files are written across all projects, run `dotnet test` on the entire solution to verify everything works together.
- Fix any failures found.

## Test Writing Rules
- Write UNIT TESTS only. Do not write integration tests.
- All naming conventions and assertion rules are defined in the steering files. Follow them — do not duplicate or redefine them here.
- Mock all external dependencies using the project's mocking framework.
- Each test method should verify one logical behavior/scenario.
- Use Arrange-Act-Assert pattern.
- Include edge cases: null inputs, empty collections, boundary values, error conditions.
- Do NOT test trivial getters/setters, constructors with no logic, or auto-generated code.
- Do NOT duplicate existing tests — check what already exists before writing.
- Reuse existing test builders and helpers rather than creating new ones when possible.
- Create new builders only when no suitable one exists for the class under test.
- Place test files in the correct test project following the project's organizational pattern.

## Source Code Modification Policy
- Do NOT modify existing source code files (code under test) except in the following specific cases:
  1. Extracting interfaces from existing classes to enable mocking of existing dependencies.
  2. Updating constructors to enable dependency injection only when needed for injecting mocks for testing.
  3. Updating project files (`.csproj`) to add `InternalsVisibleTo` attributes to allow mocking of internal classes.
- Any other changes to production source code are strictly forbidden. Tests must be written against the existing code as-is.
```
Lines 1-4 – Header section – Agent definition with name, description, and tools (read, write, shell access)
Lines 6-7 – Core purpose – The agent acts as a unit test creator that generates test files from test gap analysis output
Lines 9-42 – Process workflow – Six-step methodology:
- Step 1 : Read steering files to understand project conventions
- Step 2 : Examine existing tests to learn established patterns
- Step 3 : Read source code to understand behavior being tested
- Step 4 : Write tests following discovered conventions
- Step 5 : Validate per project using dotnet test
- Step 6 : Final validation across the entire solution
Lines 44-56 – Test writing rules – Enforce unit test best practices. Follow the AAA pattern. Mock dependencies. Cover edge cases. Avoid testing trivial code. Reuse existing test infrastructure. Prevent test duplication.
Lines 58-65 – Source code modification policy – Strictly limit changes to production code. Only allow interface extraction for mocking, constructor updates for dependency injection, and .csproj modifications for InternalsVisibleTo attributes. All other source modifications are forbidden.
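As an example of the third allowed change, exposing internal classes to a test project is a small .csproj edit. The test project name below is hypothetical, and the DynamicProxyGenAssembly2 entry is only needed if your mocking framework (e.g., Moq) must generate proxies for internal types:

```xml
<!-- In the production project's .csproj -->
<ItemGroup>
  <!-- Let the test assembly see internal classes -->
  <InternalsVisibleTo Include="BobsBookstore.Tests" />
  <!-- Let Moq's generated proxies mock internal interfaces -->
  <InternalsVisibleTo Include="DynamicProxyGenAssembly2" />
</ItemGroup>
```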
With the two agents created, it’s time to add the final piece of the puzzle: orchestration.
Orchestrating Subagents using Kiro’s Agent Hooks
Agent hooks are powerful automation tools. They automate your development workflow by executing predefined agent actions that trigger automatically when specific events occur in your IDE. With hooks, you remove the need to manually request routine tasks and guarantee consistency across your codebase.
But in this case, you do not need Kiro’s automatic hook triggering. Instead, we’ll use a manually triggered hook to define and store the workflow: it runs the first subagent, then runs multiple instances of the second subagent in parallel!
Creating a new Hook from the ‘+’ in Kiro’s menu:
Then choose Manually create a hook
Give it a title and description. Make sure that the event type is Manual Trigger, and paste the following code into the action:
```
Run the test-gap-analyzer sub-agent to analyze the codebase and identify all missing unit tests.

Once the analysis is complete, run 'dotnet build' to verify the solution builds successfully. Fix any errors before continuing to the next stage.

Then group the high-priority units of work by their target test project. Within each test project, further partition the units of work into non-overlapping batches so that no two batches create or modify the same test file. A good partitioning strategy is:
- Batch by class-under-test category (e.g., entity tests vs. service tests) so each batch writes to different test files.
- Each batch should list the exact unit-of-work names it must handle.

Then invoke multiple unit-test-writer sub-agents IN PARALLEL — one per batch. Each sub-agent prompt must:
1. Specify the exact units of work (by name) it is responsible for.
2. Specify which test files it should create (so there is zero overlap with other batches).
3. Include the full gap analysis context for those units.

IMPORTANT: No two parallel sub-agents may create or modify the same file. Partition so each agent owns distinct test files. After all parallel agents complete, run 'dotnet test' once on the full solution to validate everything compiles and passes.
```
After you save the hook you will see the new hook with a “play” button added under the Agent Hooks section, along with the steering files we’ve created:
With all the pieces in place, you can run the hook. Kiro runs a subagent to conduct a codebase-wide analysis. Then, it spins up multiple subagents to add the missing unit tests.
Kiro spins up a subagent to analyze and find the missing unit tests, then analyzes the results and breaks the work down across multiple subagents that write new unit tests throughout my codebase:
And finally, after a few minutes, Kiro has created 96 new unit tests. These tests will help catch bugs early, enable confident refactoring, and reduce deployment anxiety.
If you want to follow how Kiro tracked and then wrote the missing unit tests, I have published the full run to YouTube (no audio).
Conclusion
Poor test coverage leads to production bugs, longer debugging cycles, and deployment anxiety. This solution shows the transformative power of Kiro: by combining steering files with specialized subagents, it can turn technical debt into a competitive advantage, automating comprehensive test generation in minutes rather than weeks.
The test-gap-analyzer subagent systematically audits your codebase. It identifies untested business logic. Meanwhile, the unit-test-writer subagent generates tests that follow your team’s established conventions. Together, they remove the manual burden of writing tests for legacy code. They uphold quality standards through steering files. These files encode your naming conventions, assertion rules, and architectural patterns.
Start enhancing your test coverage today. Teach Kiro your team’s standards. Let AI manage the repetitive work of test generation.