Mohamed Afiq

Building ResuMatch AI with TDD and AI-Assisted Development (Claude)

🎯 Why I Started Experimenting with This

While building ResuMatch AI, I ran into a problem I didn’t expect:

AI could generate code extremely fast… but it could also confidently generate the wrong implementation.

At first, I was treating AI like an autopilot, blindly accepting all the changes.

Eventually I realized something important:

If I couldn’t clearly define the expected behavior first, I couldn’t properly review the AI’s output either.

That pushed me into learning Test-Driven Development (TDD) more seriously while building actual features in my project.

This article isn’t a guide on “the best way” to build AI systems. It’s mostly a reflection on what I learned while combining TDD, ASP.NET Core, and AI-assisted development in a real application.

🧠 The Mental Model That Changed Everything

One idea from my mentorship sessions really stuck with me:

You are the architect of intent.
The AI is the implementation engine.

That completely changed how I worked with AI.

Instead of asking AI to “build the feature,” I started:

  1. Defining the expected behavior first
  2. Writing failing tests
  3. Letting AI implement against those tests
  4. Reviewing whether the implementation actually satisfied the contract

The tests, not the AI, became the control mechanism.

🏗️ The Feature I Used to Practice TDD

One of the first features I implemented this way in ResuMatch AI was a daily generation limit system.

The idea was simple:

  • Free users can only generate 3 tailored applications per day
  • Usage resets daily
  • Backend should block requests once the limit is reached
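Expressed as code, the rule I was targeting is essentially a single guard clause. This is an illustrative sketch only — `_usageRepository`, `GetForDateAsync`, `ApplicationRequest`, and the repository shape are assumed names for this article, not the literal production code:

```csharp
// Sketch of the daily-limit rule, assuming a usage repository keyed by (userId, date).
private const int FreeDailyLimit = 3;

public async Task<Guid> CreateApplicationAsync(Guid userId, ApplicationRequest request)
{
    var today = DateOnly.FromDateTime(DateTime.UtcNow);

    // Look up today's usage row only; yesterday's rows are ignored,
    // which is what makes the count "reset" daily without a scheduled job.
    var usage = await _usageRepository.GetForDateAsync(userId, today);

    if (usage is not null && usage.GenerationCount >= FreeDailyLimit)
        throw new DailyLimitExceededException(userId, today);

    // ... generate the tailored application, then create or increment the usage row
    return Guid.NewGuid();
}
```

Keeping the reset implicit (by querying only today's row) avoids needing a background job to zero out counters at midnight.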

Instead of jumping straight into implementation, I started with test scenarios first.

🔴 Red → 🟢 Green → 🔵 Refactor

I followed the classic TDD cycle:

Write failing test
↓
Run tests (RED)
↓
Implement minimum code
↓
Run tests again (GREEN)
↓
Clean up implementation (REFACTOR)

What surprised me was how useful this became when working with AI-generated code.

Without tests, it was easy to accept code that “looked correct.”

With tests, incorrect assumptions surfaced immediately.

✍️ Writing the Behaviors First

Before implementation, I wrote the feature scenarios as test method names:

[Fact]
public async Task CreateApplication_WhenUserHasThreeGenerationsToday_ShouldThrowDailyLimitExceededException()

[Fact]
public async Task CreateApplication_WhenUserHadThreeGenerationsYesterday_ShouldSucceed()

[Fact]
public async Task CreateApplication_WhenNoUsageRowExists_ShouldCreateUsageRowWithCountOne()

This was probably the biggest learning moment for me.

The test names themselves became executable requirements.

If I couldn’t clearly name the scenario, I usually didn’t fully understand the business rule yet.
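To make the Arrange/Act/Assert concrete, here is roughly what the first scenario looked like once filled in. I'm using Moq-style mocks for illustration; `IUserUsageRepository`, `UserUsage`, and the `ApplicationService` constructor shown here are assumptions for this sketch, not my exact codebase:

```csharp
using System;
using System.Threading.Tasks;
using Moq;
using Xunit;

public class DailyLimitTests
{
    [Fact]
    public async Task CreateApplication_WhenUserHasThreeGenerationsToday_ShouldThrowDailyLimitExceededException()
    {
        // Arrange: a usage row for today that is already at the free limit
        var userId = Guid.NewGuid();
        var today = DateOnly.FromDateTime(DateTime.UtcNow);

        var usageRepo = new Mock<IUserUsageRepository>();
        usageRepo
            .Setup(r => r.GetForDateAsync(userId, today))
            .ReturnsAsync(new UserUsage { UserId = userId, Date = today, GenerationCount = 3 });

        var sut = new ApplicationService(usageRepo.Object /*, other dependencies */);

        // Act + Assert: hitting the limit should surface as a domain exception
        await Assert.ThrowsAsync<DailyLimitExceededException>(
            () => sut.CreateApplicationAsync(userId, new ApplicationRequest()));
    }
}
```

Note how the test name, the Arrange comment, and the asserted exception all state the same business rule — that redundancy is what makes a wrong AI implementation fail loudly.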

🤖 Where AI Actually Helped

Once the test structure was clear, AI became much more useful.

I used it to:

  • Fill in repetitive Arrange/Act/Assert sections
// File: Unit/Services/ApplicationServiceTests.cs

using Xunit;

namespace ResuMatch.Tests.Unit.Services;

public class ApplicationServiceTests
{
    // SCENARIO 1: Happy path — user is under the limit
    [Fact]
    public async Task CreateApplication_WhenUserHasZeroGenerationsToday_ShouldSucceed()
    {
        // YOU write this comment structure:
        // Arrange: user exists, no UserUsage row for today
        // Act: call CreateApplicationAsync
        // Assert: returns valid Guid, no exception
    }

    // SCENARIO 2: Edge case — last allowed generation (2 out of 3 used)
    [Fact]
    public async Task CreateApplication_WhenUserHasTwoGenerationsToday_ShouldSucceed()
    {
        // Arrange: UserUsage row exists with GenerationCount = 2
        // Act: call CreateApplicationAsync
        // Assert: succeeds, GenerationCount becomes 3
    }

    // ... many more scenarios
}
  • Generate boilerplate EF Core setup
  • Implement DTOs and exception classes
  • Suggest minimal production code changes
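For the "DTOs and exception classes" bullet, the generated output was typically small, self-contained types. A representative sketch (the property set and message are my illustration, not the literal generated code):

```csharp
using System;

// A small domain exception that the API layer can later map to an HTTP 429.
public class DailyLimitExceededException : Exception
{
    public Guid UserId { get; }
    public DateOnly Date { get; }

    public DailyLimitExceededException(Guid userId, DateOnly date)
        : base($"User {userId} has reached the daily generation limit for {date}.")
    {
        UserId = userId;
        Date = date;
    }
}
```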

For example, after defining the expected behavior, I could prompt the AI with very targeted instructions:

  • Modify ApplicationService to make these tests pass.
  • Do not change method names.
  • Do not modify unrelated logic.
  • Use DateOnly.FromDateTime(DateTime.UtcNow).
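The "backend should block requests" behavior ultimately lives at the API boundary. One way such an exception could be translated into a proper HTTP status in ASP.NET Core is exception-handling middleware — this middleware is my illustration of the pattern, not necessarily what ResuMatch AI ships:

```csharp
using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

// Illustrative middleware that maps the domain exception to HTTP 429 Too Many Requests.
public class DailyLimitExceptionMiddleware
{
    private readonly RequestDelegate _next;

    public DailyLimitExceptionMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (DailyLimitExceededException)
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            await context.Response.WriteAsJsonAsync(
                new { error = "Daily generation limit reached. Try again tomorrow." });
        }
    }
}
```

Keeping the limit check in the service and the HTTP mapping in middleware means the tests above never need to know about status codes.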

That produced far better results than vague prompts like:

“Build a rate limiting feature for this application and only allow 3 attempts”

In conclusion, TDD doesn't make AI deterministic. It makes your integration reliable, your refactoring safe, and your debugging sane. If you're building with AI-assisted development, don't skip the tests. Your future self will thank you.

💬 Let's Connect!

Have you tried TDD with AI agents? What challenges did you face?
Drop a comment below or connect with me:

GitHub: https://github.com/mafiqqq
LinkedIn: https://www.linkedin.com/in/afiqqqx/
