<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dror Helper</title>
    <description>The latest articles on DEV Community by Dror Helper (@dhelper).</description>
    <link>https://dev.to/dhelper</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F111114%2F877fb98b-ecbc-403d-bb9d-3c24bd61d113.jpg</url>
      <title>DEV Community: Dror Helper</title>
      <link>https://dev.to/dhelper</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dhelper"/>
    <language>en</language>
    <item>
      <title>Using Kiro Subagents to Improve Test Coverage</title>
      <dc:creator>Dror Helper</dc:creator>
      <pubDate>Mon, 30 Mar 2026 12:07:47 +0000</pubDate>
      <link>https://dev.to/dhelper/using-kiro-subagents-to-improve-test-coverage-2b56</link>
      <guid>https://dev.to/dhelper/using-kiro-subagents-to-improve-test-coverage-2b56</guid>
      <description>&lt;p&gt;Every developer knows the sinking feeling. A “simple” change breaks production. You realize there were no tests catching the edge case. Poor test coverage means longer debugging sessions, more production incidents, and that nagging anxiety every time you deploy. But here’s the catch-22: going back to add tests to existing code feels like an overwhelming task. Where do you even start?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s where Coding AI changes the game.&lt;/strong&gt; You don’t need to spend days writing tests for legacy code. Coding AI tools like &lt;a href="https://kiro.dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt; can analyze your existing codebase, understand the logic, and generate comprehensive unit tests in minutes. You get the safety net of high test coverage without the manual grind, freeing you to focus on building new features instead of playing archaeological detective with old code.&lt;/p&gt;

&lt;p&gt;In this blog post you’ll learn how to leverage &lt;a href="https://kiro.dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt;, &lt;a href="https://kiro.dev/docs/steering/" rel="noopener noreferrer"&gt;Steering files&lt;/a&gt; and &lt;a href="https://kiro.dev/docs/cli/chat/subagents/" rel="noopener noreferrer"&gt;Subagents&lt;/a&gt; to dramatically improve test coverage in existing projects, turning that technical debt into a competitive advantage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Demo Application: Bob’s Used Bookstore
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l3vl1fpgktoqqinesf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l3vl1fpgktoqqinesf.jpeg" alt="Illustration of a cheerful man giving a thumbs up in front of a used bookstore labeled 'Bob's Used Bookstore', featuring signs for various books including '.NET', 'NUG s'Nú', 'Extremely Cloudy', and 'ASP NET GOSSIP'." width="800" height="960"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this blog post, I’m using &lt;strong&gt;&lt;a href="https://github.com/aws-samples/bobs-used-bookstore-sample" rel="noopener noreferrer"&gt;Bob’s Used Bookstore&lt;/a&gt;&lt;/strong&gt;—an open-source .NET sample application from AWS that demonstrates real-world eCommerce functionality. Originally built as a monolithic ASP.NET Core MVC application, Bob’s Used Bookstore is a fictional second-hand book marketplace with both customer and admin portals. &lt;/p&gt;

&lt;p&gt;What makes Bob’s Used Bookstore ideal for demonstrating unit test coverage improvements is its realistic complexity without overwhelming scope. It includes actual business logic, like order processing, inventory management, and shopping carts. It also features service integrations and a well-structured codebase. The repository is available at &lt;a href="https://github.com/aws-samples/bobs-used-bookstore-sample" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Complete Workflow
&lt;/h2&gt;

&lt;p&gt;There are plenty of ways to improve test coverage with AI, and your mileage may vary depending on your codebase, team conventions, and testing philosophy. Here’s the workflow I used to go from minimal coverage to 96 comprehensive unit tests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create&lt;/strong&gt;  &lt;strong&gt;steering files&lt;/strong&gt;  – Put together steering files to capture our testing standards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Let Kiro help refine them&lt;/strong&gt;  – Use Kiro’s chat to polish both files based on what’s worked for us in the past&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set them to auto-include&lt;/strong&gt;  – Add YAML headers with &lt;code&gt;inclusion: auto&lt;/code&gt; so these rules kick in automatically whenever we’re creating or tweaking tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build the test-gap-analyzer subagent&lt;/strong&gt;  – Create a custom agent that systematically scans the codebase to find all the missing unit tests and organizes them into units of work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build the unit-test-writer subagent&lt;/strong&gt;  – Create another agent that takes those gaps and generates comprehensive test files following our conventions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create an orchestration hook&lt;/strong&gt;  – Set up a manually-triggered hook that runs the gap analyzer first, then spins up multiple unit-test-writer agents in parallel to knock out all the missing tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the workflow&lt;/strong&gt;  – Hit play on the hook and watch Kiro analyze the codebase, partition the work, and generate 96 new unit tests in just a few minutes&lt;/li&gt;
&lt;/ol&gt;
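
&lt;p&gt;Before walking through each step, it helps to see where these pieces live. Once steps 1–5 are done, the workspace-level &lt;code&gt;.kiro&lt;/code&gt; directory contains roughly the following (the file names are the ones used in this post; the hook from step 6 is configured separately):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.kiro/
├── steering/
│   ├── unit-tests.naming-conventions.md
│   └── unit-tests.xunit-assertion-rules.md
└── agents/
    ├── test-gap-analyzer.md
    └── unit-test-writer.md
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;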

&lt;h2&gt;
  
  
  Teaching Kiro Unit Testing Best Practices
&lt;/h2&gt;

&lt;p&gt;One of Kiro’s most powerful features is &lt;strong&gt;steering&lt;/strong&gt;. This is the ability to guide its code generation with custom instructions. These instructions encode your team’s best practices. You can save time by using steering to “teach” Kiro your unit testing standards upfront. This way, there’s no need to manually review and correct every AI-generated test. For example, you can specify naming conventions, like &lt;code&gt;MethodName_Scenario_ExpectedBehavior&lt;/code&gt;. You can enforce patterns like Arrange-Act-Assert, require specific assertion libraries, or mandate edge case coverage.&lt;/p&gt;
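
&lt;p&gt;As a purely illustrative example, a rule inside a naming-conventions steering file might look like this (the &lt;code&gt;ShoppingCart&lt;/code&gt; class and its method are hypothetical stand-ins, not code from the sample application):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;## Test Naming and Structure

Name every test `MethodName_Scenario_ExpectedBehavior` and structure it
as Arrange-Act-Assert:

```csharp
[Fact]
public void CalculateTotal_EmptyCart_ReturnsZero()
{
    // Arrange (hypothetical class, used only to illustrate the convention)
    var cart = new ShoppingCart();

    // Act
    var total = cart.CalculateTotal();

    // Assert
    Assert.Equal(0m, total);
}
```
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;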

&lt;p&gt;So the first thing to do is to “teach” Kiro what an ideal unit test should look like – using steering files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3xgdd07hckebvmqx426.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3xgdd07hckebvmqx426.png" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created two files, &lt;a href="https://gist.github.com/dhelper/251b38a0e9f417f854c6f48f21db694c" rel="noopener noreferrer"&gt;unit-tests.naming-conventions.md&lt;/a&gt; and &lt;a href="https://gist.github.com/dhelper/fdcfb940d8b6bfe43222a9df7a77e2b7" rel="noopener noreferrer"&gt;unit-tests.xunit-assertion-rules.md&lt;/a&gt;, and used Kiro’s agentic chat to refine both based on my preferences and experience.&lt;/p&gt;

&lt;p&gt;Steering files support several &lt;a href="https://kiro.dev/docs/steering/" rel="noopener noreferrer"&gt;inclusion modes&lt;/a&gt;, which define when a steering file is added to the chat context. Both files are only relevant when writing unit tests, so add the following header to each of them. This ensures your unit testing standards are consistently enforced whenever you create or modify tests, without manual activation. &lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;inclusion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auto&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unit tests assertion rules&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;assertion rules for unit tests. Use when creating or modifying unit tests.&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the unit testing rules, best practices, and preferences are captured in steering files, you can start working on the unit tests themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Test Gap Analyzer Subagent
&lt;/h2&gt;

&lt;p&gt;Instead of writing a simple prompt, I decided to use an agent. Kiro’s &lt;a href="https://kiro.dev/docs/chat/subagents/" rel="noopener noreferrer"&gt;subagents&lt;/a&gt; transform code analysis by running specialized tasks in parallel, each with its own dedicated context window. Instead of overwhelming a single conversation with your entire codebase, you can hand off tasks such as analyzing test coverage gaps, evaluating dependencies, or assessing code quality to subagents, each focused on a specific mission. This parallel architecture means faster insights and more precise recommendations: each analysis keeps a clean, isolated context, preventing the pollution that degrades results when multiple concerns are mixed in one thread.&lt;/p&gt;

&lt;p&gt;You can define your own custom agent by creating a markdown (.md) file in &lt;code&gt;~/.kiro/agents&lt;/code&gt; (global) or &lt;code&gt;&amp;lt;workspace&amp;gt;/.kiro/agents&lt;/code&gt; (workspace scope). The body of the markdown file is the agent’s prompt, and extra attributes are defined as YAML front matter. Once I had a first version in place, I asked Kiro to review and improve it, which produced the following agent definition:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-gap-analyzer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Analyzes the codebase to identify missing unit tests by examining business logic classes and methods, mapping external dependencies, and producing a structured report of test gaps organized by units of work.&lt;/span&gt;

&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;read"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="s"&gt;You are a test gap analyzer. Your job is to analyze a codebase and identify missing unit tests.&lt;/span&gt;
&lt;span class="s"&gt;Use the workspace steering files (in `.kiro/steering/`) to understand the project structure, tech stack, testing frameworks, and conventions before starting analysis. Do NOT assume any specific project layout — discover it from steering files and by exploring the codebase.&lt;/span&gt;

&lt;span class="c1"&gt;## Analysis Process&lt;/span&gt;

&lt;span class="na"&gt;Follow these steps strictly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="c1"&gt;### Step 1: Understand the Project&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read all steering files in `.kiro/steering/` to learn the project structure, dependency flow, tech stack, test frameworks, and conventions.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Identify the source directories, test directories, and how the project is organized.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 2: Discover All Business Logic&lt;/span&gt;

&lt;span class="na"&gt;Scan the solution to identify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;All classes and methods that contain business logic (primary candidates for unit tests)&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;All logic that interacts with external dependencies such as databases, HTTP clients, file systems, or message queues (candidates for integration tests)&lt;/span&gt;

&lt;span class="c1"&gt;### Step 3: Discover Existing Tests&lt;/span&gt;

&lt;span class="s"&gt;Scan all test projects to catalog what is already tested. Map each existing test to the class/method it covers.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 4: Identify Test Gaps&lt;/span&gt;

&lt;span class="na"&gt;Compare Step 2 and Step 3 to find untested business logic. Focus on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Service classes with business rules&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Entity methods and computed properties&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Validation logic&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Controller/handler action methods with business logic&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Helper/utility methods with logic&lt;/span&gt;

&lt;span class="c1"&gt;### Step 5: Organize into Units of Work&lt;/span&gt;

&lt;span class="s"&gt;Group the identified gaps into discrete units of work. Each unit of work should represent a logical grouping of related functionality. For each unit of work, determine&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

&lt;span class="s"&gt;1. **Target class and methods** — what specifically needs tests&lt;/span&gt;

&lt;span class="s"&gt;2. **Test type** — unit test or integration test&lt;/span&gt;

&lt;span class="s"&gt;3. **Dependencies to mock** — which interfaces/services need to be faked&lt;/span&gt;

&lt;span class="s"&gt;4. **Test project** — which test project the tests belong in&lt;/span&gt;

&lt;span class="s"&gt;5. **Priority** — High (core business logic, calculations, state changes), Medium (validation, filtering, mapping), Low (simple getters, pass-through methods)&lt;/span&gt;

&lt;span class="c1"&gt;## Output Format&lt;/span&gt;

&lt;span class="na"&gt;Produce a structured report with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="s"&gt;1. **Summary** — total classes analyzed, total methods analyzed, existing test count, gap count&lt;/span&gt;

&lt;span class="s"&gt;2. **Existing Test Coverage** — list of what's already tested&lt;/span&gt;

&lt;span class="na"&gt;3. **Units of Work** — each unit formatted as&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Unit of Work: [Name]
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Target&lt;/strong&gt; : [Class.Method or Class (multiple methods)]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Type&lt;/strong&gt; : Unit Test | Integration Test&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Project&lt;/strong&gt; : [project name]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Priority&lt;/strong&gt; : High | Medium | Low&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies to Mock&lt;/strong&gt; : [list of interfaces]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What to Test&lt;/strong&gt; : [bullet list of specific behaviors/scenarios to verify]&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notes&lt;/strong&gt; : [any relevant context]
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;
&lt;span class="gu"&gt;## Important Rules&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Tests must target business logic BEHAVIOR, not implementation details. Focus on what the code does, not how it does it internally.
&lt;span class="p"&gt;
-&lt;/span&gt; Do NOT suggest tests for trivial property getters/setters with no logic.
&lt;span class="p"&gt;
-&lt;/span&gt; Do NOT suggest tests for auto-generated code or migrations.
&lt;span class="p"&gt;
-&lt;/span&gt; Do NOT suggest integration tests for simple CRUD methods that just delegate to a data framework.
&lt;span class="p"&gt;
-&lt;/span&gt; DO suggest tests for any method that contains conditional logic, calculations, state transitions, or validation.
&lt;span class="p"&gt;
-&lt;/span&gt; DO suggest tests for data access methods that contain custom query logic beyond simple CRUD.
&lt;span class="p"&gt;
-&lt;/span&gt; When identifying dependencies to mock, list the interface name, not the concrete implementation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This subagent definition creates a  &lt;strong&gt;test gap analyzer&lt;/strong&gt;  that systematically identifies missing unit tests in a codebase. Here’s the breakdown by high-level structure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 1-4&lt;/strong&gt;  – Header: agent definition, name, description, read-only tool access&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 6-8&lt;/strong&gt;  – Purpose: audits test coverage by identifying missing unit tests&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 10-41&lt;/strong&gt;  – Process: five steps to understand project structure, discover business logic, catalog existing tests, find gaps, and organize into prioritized units of work&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 43-60&lt;/strong&gt;  – Output: structured report with summary, coverage list, and detailed units of work (target, type, project, priority, dependencies, test scenarios)&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 62-70&lt;/strong&gt;  – Rules: focus on behavior over implementation, skip trivial code, emphasize logic with conditionals/calculations/validation, mock interfaces not implementations&lt;/p&gt;

&lt;p&gt;To run the agent, start a new “Vibe” session and invoke &lt;em&gt;/test-gap-analyzer&lt;/em&gt; from Kiro’s chat.&lt;/p&gt;
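
&lt;p&gt;A session that kicks off the analysis could look like this (the instruction text is illustrative; the agent replies with the structured gap report described in its definition):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/test-gap-analyzer

Analyze this solution and report all missing unit tests,
organized into units of work.
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;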


&lt;h2&gt;
  
  
  Creating Subagent to Add Missing Unit Tests
&lt;/h2&gt;

&lt;p&gt;Now that you have the test gap analysis, it’s time to add the missing unit tests. I created another subagent to help add them and saved its definition in &lt;em&gt;unit-test-writer.md&lt;/em&gt; (because naming is hard).&lt;br&gt;&lt;br&gt;
The subagent is a &lt;strong&gt;unit test writer&lt;/strong&gt; that systematically generates comprehensive test coverage based on the gap analysis, while respecting existing project conventions and minimizing invasive changes to production code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unit-test-writer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Creates unit test files based on test gap analysis output. Reads steering files for project conventions, naming standards, and assertion rules before writing tests.&lt;/span&gt;

&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;read"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;write"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shell"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="s"&gt;You are a unit test writer. Your job is to create unit test files based on units of work provided to you, typically from a test gap analysis.&lt;/span&gt;

&lt;span class="c1"&gt;## Process&lt;/span&gt;

&lt;span class="c1"&gt;### Step 1: Read Steering Files&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read all steering files in `.kiro/steering/` to learn the project structure, tech stack, test frameworks, naming conventions, and assertion rules.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Follow all conventions defined in steering files strictly. Do NOT redefine or override them.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 2: Examine Existing Tests&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Look at existing test files to understand the established patterns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;imports, class structure, test method style, how mocks are set up, how test data is created.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Reuse existing builders, helpers, and shared infrastructure where available.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 3: Read the Source Code&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;For each unit of work, read the target class and its dependencies to fully understand the behavior being tested.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Identify all code paths, edge cases, and boundary conditions.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 4: Write the Tests&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Create test files in the appropriate test project following the discovered conventions.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Each test file should cover one unit of work (one class or closely related group of methods).&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Write tests that verify behavior, not implementation details.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 5: Validate Per Project&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;After finishing all test files for a specific test project, run `dotnet test` on that project to verify tests compile and pass.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Fix any failures before moving on to the next project.&lt;/span&gt;

&lt;span class="c1"&gt;### Step 6: Final Validation&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;After all test files are written across all projects, run `dotnet test` on the entire solution to verify everything works together.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Fix any failures found.&lt;/span&gt;

&lt;span class="c1"&gt;## Test Writing Rules&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Write UNIT TESTS only. Do not write integration tests.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;All naming conventions and assertion rules are defined in the steering files. Follow them — do not duplicate or redefine them here.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Mock all external dependencies using the project's mocking framework.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Each test method should verify one logical behavior/scenario.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Use Arrange-Act-Assert pattern.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Include edge cases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="s"&gt; inputs, empty collections, boundary values, error conditions.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Do NOT test trivial getters/setters, constructors with no logic, or auto-generated code.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Do NOT duplicate existing tests — check what already exists before writing.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Reuse existing test builders and helpers rather than creating new ones when possible.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Create new builders only when no suitable one exists for the class under test.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Place test files in the correct test project following the project's organizational pattern.&lt;/span&gt;

&lt;span class="c1"&gt;## Source Code Modification Policy&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Do NOT modify existing source code files (code under test) except in the following specific cases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;1. Extracting interfaces from existing classes to enable mocking of existing dependencies.&lt;/span&gt;
  &lt;span class="s"&gt;2. Updating constructors to enable dependency injection only when needed for injecting mocks for testing.&lt;/span&gt;
  &lt;span class="s"&gt;3. Updating project files (`.csproj`) to add `InternalsVisibleTo` attributes to allow mocking of internal classes.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Any other changes to production source code are strictly forbidden. Tests must be written against the existing code as-is.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Lines 1-4&lt;/strong&gt;  –  &lt;strong&gt;Header section&lt;/strong&gt;  – Agent definition with name, description, and tools (read, write, shell access)&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 6-7&lt;/strong&gt;  –  &lt;strong&gt;Core purpose&lt;/strong&gt;  – The agent acts as a unit test creator that generates test files from test gap analysis output&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 9-42&lt;/strong&gt;  –  &lt;strong&gt;Process workflow&lt;/strong&gt;  – Six-step methodology:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1&lt;/strong&gt; : Read steering files to understand project conventions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 2&lt;/strong&gt; : Examine existing tests to learn established patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 3&lt;/strong&gt; : Read source code to understand behavior being tested&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 4&lt;/strong&gt; : Write tests following discovered conventions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 5&lt;/strong&gt; : Validate per project using &lt;code&gt;dotnet test&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 6&lt;/strong&gt; : Final validation across entire solution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lines 44-56&lt;/strong&gt;  –  &lt;strong&gt;Test writing rules&lt;/strong&gt;  – Enforce unit test best practices. Follow the AAA pattern. Mock dependencies. Cover edge cases. Avoid testing trivial code. Reuse existing test infrastructure. Prevent test duplication.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Lines 58-65&lt;/strong&gt;  –  &lt;strong&gt;Source code modification policy&lt;/strong&gt;  – Strictly limit changes to production code. Only allow interface extraction for mocking, constructor updates for dependency injection, and &lt;code&gt;.csproj&lt;/code&gt; modifications for &lt;code&gt;InternalsVisibleTo&lt;/code&gt; attributes. All other source modifications are forbidden.&lt;/p&gt;

&lt;p&gt;With the two agents created, it’s time to add the final piece of the puzzle – orchestration.&lt;/p&gt;
&lt;h2&gt;
  
  
  Orchestrating Subagents using Kiro’s Agent Hooks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kiro.dev/docs/hooks/" rel="noopener noreferrer"&gt;Agent hooks&lt;/a&gt;are powerful automation tools. They automate your development workflow by executing predefined agent actions. These actions turn on automatically when specific events occur in your IDE. With hooks, you remove the need to manually ask for routine tasks and guarantee consistency across your codebase.&lt;/p&gt;

&lt;p&gt;In this case, however, you don’t need Kiro’s automatic triggering capabilities. Instead, we’ll use a manually triggered hook to define and store the workflow: run the first subagent, then run multiple instances of the second subagent in parallel!&lt;/p&gt;

&lt;p&gt;Create a new hook from the ‘+’ in Kiro’s menu:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhw5ih5uslh5rvbyyxa2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhw5ih5uslh5rvbyyxa2.png" width="433" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then choose &lt;em&gt;Manually create a hook&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kwynbw1z6a469pted23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kwynbw1z6a469pted23.png" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give the hook a title and description, make sure the event type is &lt;em&gt;Manual Trigger&lt;/em&gt;, and paste the following prompt into the action:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run the test-gap-analyzer sub-agent to analyze the codebase and identify all missing unit tests.

Once the analysis is complete, run 'dotnet build' to verify the solution builds successfully. Fix any errors before continuing to the next stage.

Then group the high-priority units of work by their target test project. Within each test project, further partition the units of work into non-overlapping batches so that no two batches create or modify the same test file. A good partitioning strategy is:

- Batch by class-under-test category (e.g., entity tests vs. service tests) so each batch writes to different test files.

- Each batch should list the exact unit-of-work names it must handle.

Then invoke multiple unit-test-writer sub-agents IN PARALLEL — one per batch. Each sub-agent prompt must:

1. Specify the exact units of work (by name) it is responsible for.

2. Specify which test files it should create (so there is zero overlap with other batches).

3. Include the full gap analysis context for those units.

IMPORTANT: No two parallel sub-agents may create or modify the same file. Partition so each agent owns distinct test files. After all parallel agents complete, run 'dotnet test' once on the full solution to validate everything compiles and passes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you save the hook you will see the new hook with a “play” button added under the &lt;em&gt;Agent Hooks&lt;/em&gt; section, along with the steering files we’ve created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1tnmtx3bc3m1f9wcydb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1tnmtx3bc3m1f9wcydb.png" width="523" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all the pieces in place, it’s time to run the hook.&lt;/p&gt;

&lt;p&gt;Kiro spins up a subagent to analyze the codebase and find the missing unit tests, then analyzes the results and divides the work between multiple subagents that write the new unit tests across my codebase:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpell4cybz91ay53k0zr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpell4cybz91ay53k0zr.png" width="800" height="913"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, after a few minutes, Kiro has created &lt;strong&gt;96 new unit tests&lt;/strong&gt;. These tests will help catch bugs early. They allow confident refactoring and reduce deployment anxiety.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufrte7rkv1zgthpmk4mn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufrte7rkv1zgthpmk4mn.png" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to watch how Kiro tracked down and then wrote the missing unit tests, I have published the &lt;a href="https://youtu.be/j62hqG6DzTM" rel="noopener noreferrer"&gt;full run to YouTube&lt;/a&gt; (no audio).&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Poor test coverage leads to production bugs, longer debugging cycles, and deployment anxiety. This solution shows the transformative power of Kiro: by combining steering files with specialized subagents, it can turn technical debt into a competitive advantage, automating comprehensive test generation in minutes rather than weeks.&lt;/p&gt;

&lt;p&gt;The  &lt;strong&gt;test-gap-analyzer&lt;/strong&gt;  subagent systematically audits your codebase. It identifies untested business logic. Meanwhile, the  &lt;strong&gt;unit-test-writer&lt;/strong&gt;  subagent generates tests that follow your team’s established conventions. Together, they remove the manual burden of writing tests for legacy code. They uphold quality standards through steering files. These files encode your naming conventions, assertion rules, and architectural patterns.&lt;/p&gt;

&lt;p&gt;Start enhancing your test coverage today. Teach &lt;a href="https://kiro.dev" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt; your team’s standards. Let AI manage the repetitive work of test generation.&lt;/p&gt;

</description>
      <category>agenticcoding</category>
      <category>howto</category>
      <category>kiro</category>
      <category>tools</category>
    </item>
    <item>
      <title>Kiro for Test-Driven Development (TDD)</title>
      <dc:creator>Dror Helper</dc:creator>
      <pubDate>Mon, 16 Mar 2026 14:12:31 +0000</pubDate>
      <link>https://dev.to/dhelper/kiro-for-test-driven-development-tdd-11me</link>
      <guid>https://dev.to/dhelper/kiro-for-test-driven-development-tdd-11me</guid>
      <description>&lt;p&gt;You’ve spent years mastering TDD’s red-green-refactor rhythm. Now AI coding assistants write code instantly. Should you abandon the discipline that made you a better developer, or do TDD and AI work together?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this post I will use &lt;a href="https://kiro.dev/" rel="noopener noreferrer"&gt;Kiro&lt;/a&gt; – an AI-powered IDE that enables developers to build software from prototype to production through spec-driven development, intelligent agent assistance, and automated workflows.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Before we see how AI handles TDD, let’s revisit the cycle that makes it work:  &lt;strong&gt;Red&lt;/strong&gt;  (write a failing test),  &lt;strong&gt;Green&lt;/strong&gt;  (make it pass with minimal code),  &lt;strong&gt;Refactor&lt;/strong&gt;  (clean up while keeping tests green). This rhythm isn’t just methodology—it’s the discipline that prevents us from writing code we don’t need.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For my experiment I’d planned to use Roy Osherove’s “&lt;a href="https://osherove.com/tdd-kata-1" rel="noopener noreferrer"&gt;String Calculator TDD Kata&lt;/a&gt;”. However, using such a well-known kata would result in the LLM solving it with one of the many implementations that already exist out there. So, before starting, I asked my trusty AI to suggest a similar problem that hadn’t been tried before – and so I present to you the &lt;strong&gt;Number Parser Kata&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Objective:&lt;/strong&gt; Create a function that parses written numbers in English and returns their numeric sum.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parse written numbers (“one”, “two”, “three”) and return their sum&lt;/li&gt;
&lt;li&gt;Progressively add: handling “and” connectors, compound numbers (“twenty-three”), negatives (“minus five”)&lt;/li&gt;
&lt;li&gt;Examples

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;parse("one")&lt;/code&gt; → &lt;code&gt;1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;parse("five")&lt;/code&gt; → &lt;code&gt;5&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;parse("one two")&lt;/code&gt; → &lt;code&gt;3&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;parse("one and two")&lt;/code&gt; → &lt;code&gt;3&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;parse("thirteen and fifteen")&lt;/code&gt; → &lt;code&gt;28&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;parse("minus five")&lt;/code&gt; → &lt;code&gt;-5&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;You get the point. As a starting point, I wanted to see whether providing a clear task would enable Kiro to write the solution test-first.&lt;/p&gt;
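&lt;p&gt;For reference, here is one way a finished parser satisfying all the examples above could look. This is my own Python sketch, not Kiro’s output – the function name &lt;code&gt;parse&lt;/code&gt; simply follows the kata description:&lt;/p&gt;

```python
# Hypothetical end-state of the Number Parser kata (my own sketch, not Kiro's).

UNITS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
    "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14,
    "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
    "nineteen": 19,
}
TENS = {
    "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
    "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90,
}


def parse(numbers: str) -> int:
    total = 0
    negate = False
    for word in numbers.split():
        if word == "and":        # connector words are ignored
            continue
        if word == "minus":      # negate the next number
            negate = True
            continue
        if "-" in word:          # compound numbers like "twenty-three"
            tens, _, unit = word.partition("-")
            value = TENS[tens] + UNITS[unit]
        elif word in TENS:
            value = TENS[word]
        else:
            value = UNITS[word]
        total += -value if negate else value
        negate = False
    return total
```

&lt;p&gt;The point of the kata, of course, is not this final code but arriving at it one failing test at a time.&lt;/p&gt;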

&lt;h2&gt;
  
  
  “Teaching” Kiro about TDD using Steering files
&lt;/h2&gt;

&lt;p&gt;For this TDD experiment, I created a steering file that explicitly taught Kiro the red-green-refactor cycle, ensuring it would write minimal failing tests first rather than jumping ahead to complete solutions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://kiro.dev/docs/steering/" rel="noopener noreferrer"&gt;&lt;strong&gt;Steering files&lt;/strong&gt;  &lt;/a&gt;are instruction documents you place in your project that teach Kiro your team’s coding practices and workflows—steering files encode your development methodology directly into the AI’s behavior.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since I’m going to work on a “green field” application, I’ve also added steering files outlining the tech used (Python), the project structure, and testing best practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiifce0nzro7preyp80h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsiifce0nzro7preyp80h.png" alt="A screenshot of a software interface titled 'KIRO', featuring navigational elements such as 'SPECS', 'AGENT HOOKS', and 'AGENT STEERING &amp;amp; SKILLS', along with a workspace section listing items like 'project-structure', 'tdd-workflow', 'tech', and 'test-standards'." width="541" height="362"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kiro AI IDE workspace showcasing the organization of project structure, TDD workflow, tech, and test standards for streamlined development.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here’s the key steering file that teaches Kiro TDD principles. Notice lines 20-35: this part was crucial in preventing Kiro from jumping ahead, writing one test and then the complete implementation instead of working step by step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;inclusion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="c1"&gt;# TDD Workflow&lt;/span&gt;

&lt;span class="c1"&gt;## Philosophy&lt;/span&gt;
&lt;span class="s"&gt;This project follows strict Test-Driven Development (TDD). All code must be written in response to a failing test. No production code exists without a corresponding test that drove its creation.&lt;/span&gt;

&lt;span class="c1"&gt;## The Red-Green-Refactor Cycle&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;**Red** &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Write a failing test that describes the desired behavior in plain English&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;**Green** &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Write the minimum production code to make the test pass — nothing more&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;**Refactor** &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Clean up the code while keeping all tests green&lt;/span&gt;

&lt;span class="c1"&gt;## Rules&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Never write production code before a failing test exists&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Tests must fail for the right reason before implementing&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Implement only what is needed to pass the current failing test&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;After each green phase, consider if refactoring is needed&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;All behavior must be described in plain English before generating a test&lt;/span&gt;

&lt;span class="c1"&gt;## What "Minimum" Means&lt;/span&gt;
&lt;span class="na"&gt;**CRITICAL** &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Minimum"&lt;/span&gt; &lt;span class="s"&gt;means the simplest possible code that makes ONLY the current test pass.&lt;/span&gt;

&lt;span class="na"&gt;Examples of what NOT to do&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;❌ Test checks "one" returns 1 → Don't implement a dictionary with "one" through "ten"&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;❌ Test checks addition of two numbers → Don't implement multiplication, division, etc.&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;❌ Test checks parsing a single word → Don't implement comma-separated parsing&lt;/span&gt;

&lt;span class="na"&gt;Examples of correct minimal implementations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;✅ Test checks "one" returns 1 → Use `if numbers == "one"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;return 1`&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;✅ Test checks "two" returns 2 → Add `elif numbers == "two"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;return 2`&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;✅ After 3+ similar cases → Refactor to use a dictionary (driven by duplication, not anticipation)&lt;/span&gt;

&lt;span class="na"&gt;**The Golden Rule** &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;If you can delete code and the test still passes, you wrote too much code.&lt;/span&gt;

&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="nv"&gt;*Resist&lt;/span&gt; &lt;span class="s"&gt;the urge to be "clever" or "complete"**. Let the tests drive every single line of production code. Premature generalization violates TDD principles.&lt;/span&gt;

&lt;span class="c1"&gt;## Cycle Prompt Pattern&lt;/span&gt;
&lt;span class="s"&gt;When asked to implement a feature, always&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;1. First generate a failing test for ONE specific behavior&lt;/span&gt;
&lt;span class="s"&gt;2. Confirm the test fails&lt;/span&gt;
&lt;span class="s"&gt;3. Then generate the minimal implementation (see "What Minimum Means" above)&lt;/span&gt;
&lt;span class="s"&gt;4. Confirm all tests pass&lt;/span&gt;
&lt;span class="s"&gt;5. Suggest refactoring opportunities (only if duplication exists)&lt;/span&gt;
&lt;span class="s"&gt;6. Review request and existing tests to find if additional tests are needed&lt;/span&gt;
   &lt;span class="s"&gt;- If at least one more test is needed, start another cycle by writing a failing test&lt;/span&gt;
   &lt;span class="s"&gt;- If not, declare that requirement was met&lt;/span&gt;

&lt;span class="s"&gt;Break each feature into individual tasks for each step in the TDD lifecycle&lt;/span&gt;

&lt;span class="c1"&gt;## Self-Check Before Implementing&lt;/span&gt;
&lt;span class="s"&gt;Before writing production code, ask yourself&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;1. What is the EXACT assertion in the failing test?&lt;/span&gt;
&lt;span class="s"&gt;2. What is the SIMPLEST code that makes that assertion pass?&lt;/span&gt;
&lt;span class="s"&gt;3. Am I implementing anything the test doesn't verify?&lt;/span&gt;
&lt;span class="s"&gt;4. If I remove this line, does the test still pass? (If yes, delete it)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
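&lt;p&gt;To make the “minimum” rule concrete, here is how the first few cycles might evolve in Python – my own illustration of the steering file’s examples, not Kiro’s actual output:&lt;/p&gt;

```python
# Cycle 1 -- Green: the simplest code that passes only `add("one") == 1`
def add(numbers):
    if numbers == "one":
        return 1

# Cycle 2 -- Green: add only what the new test `add("two") == 2` demands
def add(numbers):
    if numbers == "one":
        return 1
    elif numbers == "two":
        return 2

# Cycle 3+ -- Refactor: three similar cases create duplication,
# and that duplication (not anticipation) justifies a dictionary
WORDS = {"one": 1, "two": 2, "three": 3}

def add(numbers):
    return WORDS[numbers]
```

&lt;p&gt;Each redefinition here stands in for one red-green-refactor cycle: no branch exists before a test demands it, and the dictionary appears only once duplication does.&lt;/p&gt;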



&lt;h2&gt;
  
  
  Running the Red-Green-Refactor cycle
&lt;/h2&gt;

&lt;p&gt;Now that I had my initial setup – I started with the first requirement:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a function that parses written numbers in English and returns their numeric sum. the method signature will be: int Add(string numbers).&lt;/p&gt;

&lt;p&gt;If a single number is written then the output should be that number numeric value&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And it worked! Kiro jumped in and created a first failing test, followed by a trivial implementation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihynad8vjo6b05732657.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihynad8vjo6b05732657.png" alt="Code snippet showing a test for the Add function in a number parsing module, checking if the input 'one' returns 1." width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Test function for the Number Parser’s Add method, validating that the input ‘one’ returns the correct output of 1.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad6yiqucffyjkef8m5mb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad6yiqucffyjkef8m5mb.png" alt="Screenshot of a coding environment showing a Python file named 'parser.py' with a function definition comment for a number parser module, along with test session details on the side." width="800" height="448"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kiro’s setup for a Number Parser in Python, from left to right: files, tests, implementation, and agentic chat&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It then continued to implement all numbers between 1 and 10, as well as tests for the sum of two numbers – exactly the test I expected, trivial and simple:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n6ykyk9nafqdrtc5ujx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n6ykyk9nafqdrtc5ujx.png" alt="Code snippet showing a test function for adding numbers represented as words, asserting that the sum of 'one' and 'two' equals 3." width="774" height="271"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Python test function for adding multiple written number words, asserting that ‘one two’ equals 3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;All the while, Kiro continued the Red-Green-Refactor cycle.&lt;/p&gt;

&lt;p&gt;As Kiro ran, I noticed that while the code was refactored and improved, the tests remained the same. After Kiro finished, I asked for a test refactor and updated Kiro’s Steering file to ensure both tests and code would be refactored going forward.&lt;/p&gt;

&lt;p&gt;Once all cycles were complete, I verified no additional tests were needed by requesting a quick test review. Kiro did jump ahead at one point—creating a parameterized test for all numbers between 4 and 10—but as someone who’s trained countless developers in TDD, this is a common “human behavior” if I ever saw one.&lt;/p&gt;
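&lt;p&gt;A parameterized test of that kind might look like the following pytest sketch. This is hypothetical, not Kiro’s actual test file, and it includes a minimal stand-in &lt;code&gt;add&lt;/code&gt; so the example is self-contained:&lt;/p&gt;

```python
import pytest

# Minimal stand-in implementation so the sketch runs on its own;
# in the kata, this would be the real parser under test.
WORDS = {"four": 4, "five": 5, "six": 6, "seven": 7,
         "eight": 8, "nine": 9, "ten": 10}

def add(numbers):
    return WORDS[numbers]

# One test body, many cases: pytest generates a test per (word, expected) pair.
@pytest.mark.parametrize("word,expected", sorted(WORDS.items()))
def test_add_single_word(word, expected):
    assert add(word) == expected
```

&lt;p&gt;Collapsing seven near-identical tests into one table is exactly the kind of shortcut an experienced human would also reach for – even when strict TDD says one case at a time.&lt;/p&gt;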

&lt;p&gt;If you want to see the whole run (6min 50sec) you can check it on YouTube:&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This experiment answered my initial question:  &lt;strong&gt;Yes, TDD remains valuable in the age of AI coding assistants&lt;/strong&gt; —not despite AI’s capabilities, but because of them.&lt;/p&gt;

&lt;p&gt;Kiro followed the red-green-refactor cycle when guided by steering files that encoded TDD principles. It wrote failing tests first, implemented minimal solutions, and refactored code while keeping tests green. The Number Parser Kata demonstrated that AI practices disciplined development when properly instructed.&lt;/p&gt;

&lt;p&gt;Kiro occasionally “jumped ahead” with parameterized tests—a behavior I’ve seen countless times when training human developers in TDD. This reinforced an important insight:  &lt;strong&gt;we don’t abandon the practices that made us better developers in the world of agentic coding. Instead, we encode them into how we guide our AI tools.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some argue AI’s ability to generate complex code with tests makes TDD obsolete. &lt;strong&gt;This experiment suggests otherwise:&lt;/strong&gt; TDD provides the verification layer that ensures AI-generated code actually implements the specified behavior correctly. As AI coding assistants increase in capability, the discipline of TDD becomes more important, not less—it’s the guardrail that keeps us from accepting code that compiles but doesn’t solve the right problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try this yourself:&lt;/strong&gt;  use the steering file from earlier in this post and give TDD with AI a spin.&lt;/p&gt;

</description>
      <category>agenticcoding</category>
      <category>kiro</category>
      <category>unittestingtips</category>
      <category>refactoring</category>
    </item>
    <item>
      <title>Easily create builders for your tests using Intellij IDEA</title>
      <dc:creator>Dror Helper</dc:creator>
      <pubDate>Mon, 26 Oct 2020 11:42:59 +0000</pubDate>
      <link>https://dev.to/dhelper/easily-create-builders-for-your-tests-using-intellij-idea-5f3b</link>
      <guid>https://dev.to/dhelper/easily-create-builders-for-your-tests-using-intellij-idea-5f3b</guid>
      <description>&lt;p&gt;The builder pattern is one of the more useful patterns out there when creation unit tests.&lt;/p&gt;

&lt;p&gt;Instead of having a huge initialization such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="nd"&gt;@Test&lt;/span&gt;
&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;validateUser_userNameIsEmpty_returnFalse&lt;/span&gt;&lt;span class="o"&gt;(){&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setId&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id-1"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setPhoneNumber&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"555-1234"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// additional user initialization&lt;/span&gt;

   &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;userService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;validate&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

   &lt;span class="n"&gt;assertFalse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can instead create a simple user builder for your tests that initializes all of the object’s properties with valid (non-empty) default values – since you do not really care what the values are as long as they are valid – and write the following test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="nd"&gt;@Test&lt;/span&gt;
&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;validateUser_userNameIsEmpty_returnFalse&lt;/span&gt;&lt;span class="o"&gt;(){&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UserBuilder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
                      &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
                      &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

   &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;userService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;validate&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

   &lt;span class="n"&gt;assertFalse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Builders help you reduce test code and avoid duplication, and if the user class changes in some way you do not need to fix 100+ tests. But the real benefit is that builders focus the test reader on what’s important in this test. This matters because the test reader might be you, trying to understand why a test you wrote three months ago started failing at 8pm on the day before a major release.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If you want to learn more about object initialization you can check my post on the subject: "[On object creation and unit tests](https://helpercode.com/2013/12/22/on-object-creation-and-unit-tests/)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The problem with builders
&lt;/h2&gt;

&lt;p&gt;But it’s not all rainbows and sunshine when creating builders for your tests – it can be a painful experience. You need to create a new class, add all of the fields you need to set on that object, and implement the setter methods and a build method. It’s not fun, and more often than not it has caused me to avoid creating builders until my test code became too painful to maintain.&lt;/p&gt;

&lt;p&gt;But that was in the past. I recently found a cool feature in IntelliJ IDEA: when generating setters for a class, I can choose a builder template for those setters.&lt;/p&gt;

&lt;p&gt;Now creating builders for my tests is easy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new builder class&lt;/li&gt;
&lt;li&gt;Copy the fields you need from the original class&lt;/li&gt;
&lt;li&gt;Set default values for all fields&lt;/li&gt;
&lt;li&gt;Create a &lt;strong&gt;build&lt;/strong&gt; method using a constructor – or, if you must, setter methods&lt;/li&gt;
&lt;li&gt;Generate all of the setter methods as builder methods with a single keyboard shortcut (Alt+Insert)&lt;/li&gt;
&lt;li&gt;Write tests using your new and shiny class.&lt;/li&gt;
&lt;/ol&gt;
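&lt;p&gt;The result of the steps above might look something like this – the &lt;em&gt;User&lt;/em&gt; and &lt;em&gt;UserBuilder&lt;/em&gt; shapes here are a hand-written sketch, not IntelliJ’s exact generated code:&lt;/p&gt;

```java
// Hand-written sketch of a test-data builder; the field names and default
// values are illustrative, not IntelliJ's exact generated output.
public class UserBuilder {

    // A minimal stand-in for the production User class used in the examples.
    public static class User {
        public String id;
        public String name;
        public String phoneNumber;
    }

    // Valid (non-empty) defaults - tests only override what they care about.
    private String id = "id-1";
    private String name = "default-name";
    private String phoneNumber = "555-1234";

    public UserBuilder withId(String id) {
        this.id = id;
        return this;
    }

    public UserBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public UserBuilder withPhoneNumber(String phoneNumber) {
        this.phoneNumber = phoneNumber;
        return this;
    }

    public User build() {
        User user = new User();
        user.id = id;
        user.name = name;
        user.phoneNumber = phoneNumber;
        return user;
    }

    public static void main(String[] args) {
        // Only the name matters for this scenario; everything else keeps
        // its valid default value.
        User user = new UserBuilder().withName(null).build();
        if (user.name != null) throw new AssertionError("name should be null");
        if (user.id == null) throw new AssertionError("id should keep its default");
        System.out.println("builder ok");
    }
}
```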

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhelpercode.com%2Fwp-content%2Fuploads%2F2020%2F10%2Fcreatebuilder.gif%3Fw%3D640" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhelpercode.com%2Fwp-content%2Fuploads%2F2020%2F10%2Fcreatebuilder.gif%3Fw%3D640" width="600" height="858"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quick and simple, and relatively painless.&lt;/p&gt;

&lt;p&gt;If you’re a .NET developer (I know I am) you might be wondering whether the same feature exists in R# and/or Visual Studio (I know I did). Unfortunately, it does not – I guess because it was never requested, and because properties in .NET are implemented differently. You’ll have to ask JetBrains to add this feature or create your own code template.&lt;/p&gt;

&lt;p&gt;And until then… Stay healthy and happy coding…&lt;/p&gt;

</description>
      <category>tools</category>
      <category>unittestingtips</category>
      <category>intellij</category>
      <category>java</category>
    </item>
    <item>
      <title>Better tests names using JUnit’s display names generators</title>
      <dc:creator>Dror Helper</dc:creator>
      <pubDate>Thu, 27 Aug 2020 08:57:59 +0000</pubDate>
      <link>https://dev.to/dhelper/better-tests-names-using-junits-display-names-generators-2h25</link>
      <guid>https://dev.to/dhelper/better-tests-names-using-junits-display-names-generators-2h25</guid>
      <description>&lt;p&gt;Writing unit tests can be challenging, but there is one thing that can get you on the right track – &lt;strong&gt;the test name&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you manage to give your test a good name – you will write a good test.&lt;/p&gt;

&lt;p&gt;Unfortunately, in some (&lt;em&gt;read: many&lt;/em&gt;) unit testing frameworks the test name must be a valid method name – because those “unit tests” are actually methods inside a class – and so they look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CalculatorTests&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Test&lt;/span&gt;
    &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;add_passTwoPositiveNumbers_returnSum&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Calculator&lt;/span&gt; &lt;span class="n"&gt;calculator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Calculator&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;calculator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="n"&gt;assertEquals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test Names
&lt;/h2&gt;

&lt;p&gt;In my tests I’ve been using the naming scheme created by &lt;a href="https://osherove.com/blog/2005/4/3/naming-standards-for-unit-tests.html" rel="noopener noreferrer"&gt;Roy Osherove&lt;/a&gt;. It forces you to think about the test before you write it and keeps you from writing horrible unit tests.&lt;/p&gt;

&lt;p&gt;The test names are built from three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The &lt;strong&gt;method&lt;/strong&gt; running the test – this is the play button that would be used to execute the “experiment”&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;scenario&lt;/strong&gt; we test – what is the system state before running the method, what input is used during the test – in other words &lt;em&gt;“what makes this test different from any other test ever written”.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;expected result&lt;/strong&gt; – what we expect to happen when we run the method (1) with the specific state (2) .&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The good thing about structured test names is that when a test fails we understand immediately what went wrong. The test name tells us what we’re testing and what we expect to happen, and together with the error message (from an assertion) we should quickly understand the failure, fix it, and have the code back to running smoothly in no time.&lt;/p&gt;

&lt;h2&gt;
  
  
  However – there is a problem
&lt;/h2&gt;

&lt;p&gt;JUnit and its successors in the xUnit family of testing frameworks use methods and classes to host the “tests”, so the test “name” must be a valid method name – which is why I find myself using underscores and camelCase/PascalCase to help the reader locate and understand the words I’m using.&lt;/p&gt;

&lt;p&gt;It seems that in 2020 we still haven’t grasped the idea that test names &lt;strong&gt;do not have to be method names&lt;/strong&gt; – at least not in mainstream unit testing frameworks. I know some unit testing frameworks enable writing free text as test names, but usually when I get to a company they are using one of the popular unit testing frameworks that does not.&lt;/p&gt;

&lt;h2&gt;
  
  
  The JUnit5 solution – test name generators
&lt;/h2&gt;

&lt;p&gt;JUnit 5 did try to solve this issue by adding the ability to mark your tests with a display name generator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="nd"&gt;@DisplayNameGeneration&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;TestNameGenerator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CalculatorTests&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Test&lt;/span&gt;
    &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;add_passTwoPositiveNumbers_returnSum&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Calculator&lt;/span&gt; &lt;span class="n"&gt;calculator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Calculator&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;calculator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="n"&gt;assertEquals&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The display name generator can be one of the pre-packaged generators, such as &lt;em&gt;DisplayNameGenerator.ReplaceUnderscores&lt;/em&gt;, which automatically replaces the underscores in your test names with spaces. Alternatively, you can write your own by extending one of the &lt;em&gt;DisplayNameGenerator&lt;/em&gt; classes or by implementing the &lt;em&gt;DisplayNameGenerator&lt;/em&gt; interface.&lt;/p&gt;

&lt;p&gt;Then you can either use the &lt;em&gt;@DisplayNameGeneration&lt;/em&gt; annotation on your test classes, or you can create a &lt;em&gt;junit-platform.properties&lt;/em&gt; file and add the line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;junit.jupiter.displayname.generator.default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your DN generator&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
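&lt;p&gt;For example, assuming your generator class lives in a package called &lt;em&gt;com.example.tests&lt;/em&gt; (the package name here is hypothetical), the properties file would contain its fully qualified name:&lt;/p&gt;

```properties
# src/test/resources/junit-platform.properties
# The package name below is an assumption - use your generator's actual package.
junit.jupiter.displayname.generator.default = com.example.tests.TestNameGenerator
```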



&lt;h2&gt;
  
  
  My solution
&lt;/h2&gt;

&lt;p&gt;I wanted to split the test name into its three parts, add brackets around the method tested, and end up with a test name that looks like (&lt;strong&gt;method&lt;/strong&gt;): &lt;strong&gt;scenario&lt;/strong&gt; -&amp;gt; &lt;strong&gt;expected result&lt;/strong&gt;, so I wrote the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TestNameGenerator&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;DisplayNameGenerator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;Standard&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;splitToParts&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;getTestNameParts&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"(%s): Always %s"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"(%s): %s -&amp;gt; %s"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Exception&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;console&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;writer&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed parsing test name"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getTestNameParts&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
        &lt;span class="nc"&gt;StringBuilder&lt;/span&gt; &lt;span class="n"&gt;sb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StringBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;charAt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sc"&gt;'('&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sc"&gt;'_'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
                &lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setLength&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Character&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isUpperCase&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;" "&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Character&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toLowerCase&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;stringParts&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="nd"&gt;@Override&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;generateDisplayNameForMethod&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Class&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;?&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;testClass&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Method&lt;/span&gt; &lt;span class="n"&gt;testMethod&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;splitToParts&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                &lt;span class="kd"&gt;super&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;generateDisplayNameForMethod&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;testClass&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;testMethod&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s quite a lot of code, but it basically replaces underscores (‘_’) with spaces and splits words on upper-case letters; I also wanted to handle cases in which the test name has only two parts.&lt;/p&gt;
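&lt;p&gt;To make the transformation concrete, here is a simplified, self-contained re-implementation of the same idea (split on underscores, then de-camel-case the scenario and result parts), together with the name it produces:&lt;/p&gt;

```java
// Simplified sketch of the name transformation described above:
// underscores separate the three parts, camel case becomes spaced words.
public class DisplayNameDemo {

    // Turn "passTwoPositiveNumbers" into "pass two positive numbers".
    static String toWords(String part) {
        StringBuilder sb = new StringBuilder();
        for (char ch : part.toCharArray()) {
            if (Character.isUpperCase(ch)) {
                sb.append(' ').append(Character.toLowerCase(ch));
            } else {
                sb.append(ch);
            }
        }
        return sb.toString();
    }

    // Build "(method): scenario -> expected result" from a three-part name.
    static String displayName(String methodName) {
        String[] parts = methodName.split("_");
        if (parts.length != 3) {
            return methodName;
        }
        return "(" + parts[0] + "): " + toWords(parts[1]) + " -> " + toWords(parts[2]);
    }

    public static void main(String[] args) {
        System.out.println(displayName("add_passTwoPositiveNumbers_returnSum"));
        // prints: (add): pass two positive numbers -> return sum
    }
}
```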

&lt;p&gt;Now when I run the tests I see the following results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh09m51joeaup0ltv35o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh09m51joeaup0ltv35o.png" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which is exactly what I want. Having readable test names helps me write better test names – it’s hard to hide when it’s written in plain English – and writing good test names helps me write better tests. But we’ve already covered that &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf7sq83i3kati2941k7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf7sq83i3kati2941k7k.png" alt="🙂" width="72" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy coding…&lt;/p&gt;

</description>
      <category>howto</category>
      <category>tools</category>
      <category>junit</category>
      <category>unittests</category>
    </item>
  </channel>
</rss>
