There is a dirty secret in software development: We trust our code more than we trust our tests.
We write tests because we "have to." We write them to satisfy a CI pipeline that demands 80% coverage. So we write a test that checks if `add(2, 2)` equals 4, pat ourselves on the back, and merge the PR.
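That box-ticking test looks something like this (a minimal Jest sketch; `add` is just the hypothetical function from above):

```typescript
// add.test.ts: the kind of test that satisfies a coverage gate
// without proving much. Assumes a hypothetical two-number add() function.
import { add } from "./add";

test("add returns the sum of two numbers", () => {
  expect(add(2, 2)).toBe(4); // happy path only: no null, no NaN, no bad types
});
```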
But production doesn't care about your happy path.
Production sends `null`. Production sends `undefined`. Production sends a string when you expected an integer, and a 50MB JSON blob when you expected a simple object.
Writing tests that handle these scenarios is tedious. It requires a mindset shift from "building" to "breaking." Most of us would rather build.
This is why AI is the perfect candidate for unit testing. It doesn't get bored. It doesn't have "deadline fatigue." And if you ask it correctly, it can be the most ruthless QA engineer you've ever worked with.
The "Coverage" Illusion
I used to think auto-generating tests was lazy. I thought, "If I don't write the test, I don't understand the code."
I was wrong.
The problem isn't using AI; the problem is how we ask AI. If you paste a function and say "write a test," you get a basic, fragile test. You get the "happy path."
To get tests that actually protect your codebase, you need a prompt that understands Test-Driven Development (TDD) principles, mocking strategies, and edge-case analysis.
## The Unit Test Generator Prompt
I designed this prompt to act not just as a code generator, but as a Quality Assurance Expert. It follows the AAA (Arrange-Act-Assert) pattern strictly, mocks external dependencies by default, and actively hunts for boundary conditions you probably missed.
Copy this into Claude, ChatGPT, or Gemini.
```
# Role Definition
You are a Senior Test Engineer and Quality Assurance Expert with 10+ years of experience in software testing methodologies. You specialize in:
- Writing comprehensive unit tests across multiple programming languages
- Test-Driven Development (TDD) and Behavior-Driven Development (BDD)
- Code coverage optimization and edge case identification
- Testing frameworks including Jest, JUnit, pytest, Mocha, xUnit, and others
- Mocking, stubbing, and test fixture design
# Task Description
Generate comprehensive unit tests for the provided code. Your tests should ensure correctness, handle edge cases, and follow testing best practices for the specific language and framework.
Please analyze the following code and generate unit tests:
**Input Information**:
- **Code to Test**: [Paste the function/class/module code here]
- **Programming Language**: [e.g., JavaScript, Python, Java, TypeScript, C#, Go]
- **Testing Framework**: [e.g., Jest, pytest, JUnit, Mocha, xUnit] (optional - will auto-detect if not specified)
- **Coverage Goal**: [e.g., 80%, 90%, 100%] (default: 90%)
- **Additional Context**: [Any business logic, dependencies, or constraints to consider]
# Output Requirements
## 1. Content Structure
- **Test Overview**: Brief summary of what's being tested and test strategy
- **Test Setup**: Required imports, mocks, fixtures, and initialization
- **Test Cases**: Organized by test category (happy path, edge cases, error handling)
- **Cleanup**: Teardown logic if needed
- **Coverage Report**: Summary of tested scenarios
## 2. Quality Standards
- **Completeness**: Cover all public methods/functions with multiple scenarios
- **Isolation**: Each test should be independent and idempotent
- **Readability**: Use descriptive test names following naming conventions
- **Maintainability**: Avoid test code duplication, use helpers when appropriate
- **Performance**: Tests should execute quickly without external dependencies
## 3. Format Requirements
- Use proper code blocks with syntax highlighting
- Group related tests in describe/test suite blocks
- Include inline comments explaining complex test logic
- Provide example assertions with expected values
## 4. Style Constraints
- **Naming Convention**: `test_[method]_[scenario]_[expected]` or `should [behavior] when [condition]`
- **Arrangement**: Follow AAA pattern (Arrange, Act, Assert)
- **Assertions**: Use specific assertions over generic ones
- **Documentation**: Include JSDoc/docstrings for complex test utilities
# Quality Checklist
Before completing output, verify:
- [ ] All public methods/functions have corresponding tests
- [ ] Happy path scenarios are covered
- [ ] Edge cases are identified and tested (null, empty, boundary values)
- [ ] Error handling and exceptions are tested
- [ ] Mock objects are properly configured and verified
- [ ] Test names clearly describe the tested behavior
- [ ] No hard-coded values that should be parameterized
- [ ] Tests follow the framework's best practices
# Important Notes
- Do NOT test private/internal methods directly
- Avoid testing implementation details; focus on behavior
- Mock external dependencies (APIs, databases, file systems)
- Consider async/await patterns for asynchronous code
- Include both positive and negative test cases
# Output Format
Provide the complete test file ready to be saved and executed, including:
1. All necessary imports and dependencies
2. Test suite structure with proper grouping
3. Individual test cases with clear assertions
4. Any required mock/stub configurations
5. Coverage summary as comments at the end
```
## Anatomy of a Bulletproof Test
When you use this prompt, notice the difference in the output. It doesn't just check if `true == true`.
### 1. It Respects the AAA Pattern
Every test is structured clearly:
- Arrange: Set up the data and mocks.
- Act: Execute the function.
- Assert: Verify the result.
This makes debugging trivial. When a test fails, you know exactly which phase broke.
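Here is that shape in a minimal Jest sketch (`calculateTotal` is a hypothetical function, not something from the prompt's output):

```typescript
// The AAA structure in practice. calculateTotal() is a hypothetical
// function that sums the prices of the items in a cart.
import { calculateTotal } from "./cart";

test("should return the sum of item prices when the cart has items", () => {
  // Arrange: set up the input data
  const cart = [{ price: 10 }, { price: 15 }];

  // Act: execute the function under test
  const total = calculateTotal(cart);

  // Assert: verify the result
  expect(total).toBe(25);
});
```

When this fails, the assertion message points at the Assert step, and the Arrange block tells you exactly what state produced it.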
### 2. It Hunts Edge Cases
If you give it a function that calculates a discount, a junior dev writes a test for 10% off. This AI prompt writes tests for:
- Negative prices (throws error?)
- Zero quantity
- Null inputs
- Float precision errors (e.g., `0.1 + 0.2`)
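Sketched out for a hypothetical `applyDiscount(unitPrice, quantity, percentOff)` function, that suite looks roughly like this (illustrative, and the expected behaviors are assumptions, not the prompt's literal output):

```typescript
// Edge-case suite for a hypothetical applyDiscount(unitPrice, quantity,
// percentOff) function; the expected behaviors below are assumptions.
import { applyDiscount } from "./discount";

describe("applyDiscount edge cases", () => {
  test("should throw when the unit price is negative", () => {
    expect(() => applyDiscount(-10, 1, 10)).toThrow();
  });

  test("should return 0 when the quantity is 0", () => {
    expect(applyDiscount(9.99, 0, 10)).toBe(0);
  });

  test("should throw when inputs are null", () => {
    // @ts-expect-error: deliberately passing null to simulate a bad caller
    expect(() => applyDiscount(null, 1, 10)).toThrow();
  });

  test("should stay within float tolerance", () => {
    // 0.1 * 3 is 0.30000000000000004 in IEEE 754 doubles
    expect(applyDiscount(0.1, 3, 0)).toBeCloseTo(0.3);
  });
});
```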
### 3. It Mocks Like a Pro
One of the hardest parts of testing is isolating dependencies. This prompt explicitly handles mocking. It knows that `UserDB.save()` shouldn't actually hit your database. It generates the `jest.mock` or `unittest.mock` boilerplate for you, ensuring your unit tests remain fast and isolated.
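In Jest, that boilerplate looks something like this (a sketch assuming a hypothetical `UserDB` module and a `registerUser` function that calls it):

```typescript
// Isolating a hypothetical UserDB dependency. jest.mock() swaps in an
// auto-mocked module so no test ever touches an actual database.
import { registerUser } from "./registerUser";
import { UserDB } from "./userDb";

jest.mock("./userDb"); // auto-mock: exported functions become jest.fn()

test("should save the user when registration data is valid", async () => {
  // Arrange: make the mocked save() resolve like a successful write
  (UserDB.save as jest.Mock).mockResolvedValue({ id: 1 });

  // Act
  await registerUser({ name: "Ada" });

  // Assert: verify the dependency was called, not the database
  expect(UserDB.save).toHaveBeenCalledWith(
    expect.objectContaining({ name: "Ada" })
  );
});
```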
## Sleep Better at Night
Refactoring "legacy code" is terrifying because we don't know what we might break.
By generating a comprehensive test suite before you start changing code, you build a safety net. You can refactor that messy 500-line function into three clean modules, run your new AI-generated tests, and see those beautiful green checkmarks.
That isn't just "automation." That is peace of mind.
Don't let writing tests be the bottleneck that stops you from shipping. Outsource the "verification" to the AI, so you can focus on the "creation."