ChatGPT for Testing: Prompts That Write Tests You'd Actually Write Yourself
I've written a lot of tests I'm not proud of. Happy path only. One assertion per function. The kind of tests that pass CI but don't catch anything real. A few months ago I started using ChatGPT as a testing collaborator, not to write the tests for me blindly, but to surface the cases I was lazily skipping.
The results were better than I expected — not because ChatGPT is magic, but because having to describe your function to it forces you to think clearly about what it's supposed to do.
Here's how I actually use it.
Starting From a Function Signature
The fastest workflow I've found is pasting in a function signature plus a one-sentence description and asking for a full test suite. The key is being specific about what kind of tests you want.
Prompt: "Here's a Python function that validates user-submitted email addresses before account creation. It takes a string and returns a boolean. Write a pytest test suite for it. Cover the happy path, invalid formats, edge cases with special characters, empty strings, and anything else that could cause a false positive or false negative. Include comments explaining why each test matters."
What you get back is usually 80% there. Some tests will be trivially obvious; a few will be genuinely useful cases you hadn't thought of. The comments are what I find most valuable — they force the model to articulate the intent behind each test, which I can then verify or reject.
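For the email-validation prompt above, the suite you get back looks roughly like this. To keep the sketch self-contained I've included a hypothetical `is_valid_email` (a simple regex check — not real model output, and not how I'd validate email in production):

```python
import re

def is_valid_email(address: str) -> bool:
    # Hypothetical implementation for illustration: a basic regex check.
    pattern = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"
    return bool(re.fullmatch(pattern, address))

def test_happy_path():
    # Baseline: a plainly valid address must pass.
    assert is_valid_email("alice@example.com")

def test_empty_string():
    # Empty input should never validate.
    assert not is_valid_email("")

def test_missing_at_sign():
    # No "@" at all -- a common malformed input.
    assert not is_valid_email("alice.example.com")

def test_trailing_whitespace_rejected():
    # Whitespace must not silently pass validation (false positive).
    assert not is_valid_email("alice@example.com ")

def test_plus_addressing_accepted():
    # "+tag" local parts are legal and common; rejecting them is a false negative.
    assert is_valid_email("alice+tag@example.com")
```

The last two tests are the kind of thing the "false positive or false negative" phrasing in the prompt tends to surface — they encode a decision about behavior, not just format.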
The Edge Case Interrogation
This one has saved me more than once. You paste in a function and ask ChatGPT to think like an attacker — or just a very careless user.
Prompt: "Here's a JavaScript function that processes a payment amount from user input and applies a discount code. What inputs would cause this function to behave unexpectedly, return wrong results, or throw an error? List them as specific test cases I should write, not as general categories."
The "specific test cases, not general categories" instruction is important. Without it you get back a list like "test with null values" — useless. With it, you get "test with amount = '10.00' as a string instead of a number" and "test with discount code that has trailing whitespace" — things you can actually write assertions for.
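Those two suggestions translate directly into assertions. Here's a sketch in Python rather than JavaScript, against a hypothetical `apply_discount` that normalizes its inputs (the function and the `SAVE10` code are invented for illustration):

```python
def apply_discount(amount, code):
    # Hypothetical payment function: coerces a string amount,
    # trims/uppercases the code, then applies a flat 10% discount.
    if isinstance(amount, str):
        amount = float(amount)      # handles '10.00' arriving as a string
    code = code.strip().upper()     # handles trailing whitespace and casing
    if code == "SAVE10":
        return round(amount * 0.9, 2)
    return amount

# The specific cases the interrogation prompt surfaced:
assert apply_discount("10.00", "SAVE10") == 9.0   # string amount, not a number
assert apply_discount(10.0, "save10 ") == 9.0     # trailing whitespace + lowercase
assert apply_discount(10.0, "BOGUS") == 10.0      # unknown code is a no-op
```

Each assertion maps one-to-one to a line from the model's list, which is exactly what the "specific test cases" instruction buys you.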
Integration Tests From User Stories
My team writes user stories in the format "users should be able to..." before we write code. These are actually great seeds for integration test scenarios.
Prompt: "Convert these user stories into integration test scenarios for a React/Node.js app using Jest and Supertest. For each scenario, describe the setup state, the action to test, and the expected outcome. Don't write the code yet — just give me the test plan in plain English so I can review it first. User stories: [paste stories]"
Separating the planning from the implementation is deliberate. I want to review the test logic before the code gets generated, because ChatGPT will confidently write tests for the wrong behavior if the user story is ambiguous.
Generating Realistic Mock Data
This is unglamorous work, but it eats real time: hand-writing realistic nested mock objects for TypeScript types, especially ones with optional fields and union types, is tedious.
Prompt: "Here's a TypeScript interface for an order object in our e-commerce system. Generate 5 mock objects that cover different realistic scenarios: a standard order, an order with multiple line items and a discount, an order with a partially fulfilled status, an order with a foreign shipping address, and a failed payment order. Make the data look real — realistic names, product names, prices, and timestamps."
The "make it look real" instruction matters. If you skip it, you get name: "Test User" and price: 99.99 everywhere, which makes test failures harder to read and debug.
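To make the contrast concrete, here are two mocks in the "looks real" style, written as Python dicts standing in for the TypeScript order objects (every name, SKU, price, and timestamp here is invented for illustration):

```python
# Hypothetical "standard order" mock -- realistic values make a failing
# assertion readable at a glance, unlike name: "Test User" everywhere.
standard_order = {
    "id": "ord_8f2c1a",
    "customer": {"name": "Priya Raman", "email": "priya.raman@gmail.com"},
    "items": [
        {"sku": "MUG-CERAMIC-12OZ", "name": "Ceramic Mug, 12 oz",
         "qty": 1, "unit_price": 14.50},
    ],
    "status": "confirmed",
    "created_at": "2024-03-11T09:42:17Z",
}

# Hypothetical "multiple line items plus a discount" mock.
discounted_order = {
    "id": "ord_9b7e44",
    "customer": {"name": "Marcus Webb", "email": "m.webb@outlook.com"},
    "items": [
        {"sku": "TEE-BLK-M", "name": "Black T-Shirt (M)",
         "qty": 2, "unit_price": 19.00},
        {"sku": "CAP-NVY", "name": "Navy Cap",
         "qty": 1, "unit_price": 12.00},
    ],
    "discount_code": "SPRING15",
    "status": "confirmed",
    "created_at": "2024-03-12T14:05:03Z",
}
```

When a test fails on "Marcus Webb's order with two line items" instead of "Test User's order", you can usually spot which scenario broke without opening the fixture file.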
Turning Requirements Into Test Cases
Product requirements are usually written for humans, not test suites. This prompt bridges that gap.
Prompt: "Here's a requirements document section describing how our file upload feature should work. Translate each requirement into a specific test case title and a one-sentence description of what it asserts. Format it as a numbered list. Flag any requirements that are ambiguous or untestable as written: [paste requirements]"
The "flag ambiguous requirements" instruction is the most useful part. It turns ChatGPT into a QA reviewer who reads the requirements critically, which usually surfaces gaps before any code is written.
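Once you have the numbered list, each entry becomes a test with a descriptive name and one focused assertion. A sketch of what one requirement ("uploads over 10 MB must be rejected with a clear error") might turn into — `validate_upload_size` is a hypothetical helper standing in for the real upload validator:

```python
import pytest

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # the 10 MB limit from the requirement

def validate_upload_size(size_bytes):
    # Hypothetical validator standing in for the real upload code path.
    if size_bytes > MAX_UPLOAD_BYTES:
        raise ValueError("file exceeds 10 MB upload limit")

def test_upload_at_exactly_the_limit_is_accepted():
    # Boundary case: "over 10 MB" means exactly 10 MB is still fine.
    validate_upload_size(MAX_UPLOAD_BYTES)  # should not raise

def test_upload_over_the_limit_is_rejected_with_clear_error():
    # The requirement also promises a *clear* error, so assert on the message.
    with pytest.raises(ValueError, match="10 MB"):
        validate_upload_size(MAX_UPLOAD_BYTES + 1)
```

Note how the boundary test fell straight out of the wording: "over 10 MB" is exactly the kind of phrase the ambiguity-flagging step makes you pin down before writing code.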
Coverage Gap Analysis
This one I run periodically on existing code. Paste in a function and the tests you already have, then ask what's missing.
Prompt: "Here's a Go function that parses configuration files, and here are the existing tests for it. What scenarios are not covered by these tests? Specifically call out any error paths, boundary conditions, or interaction effects that could cause bugs in production but would pass the current test suite."
I've found actual bugs this way — cases where my tests were passing but the function had silent failure modes I hadn't considered.
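A toy version of that pattern, in Python for consistency (the parser and its tests are hypothetical, not the Go code from the prompt): a key=value config parser whose happy-path test passes while a malformed line fails silently.

```python
def parse_config(text):
    # Hypothetical parser: "key=value" lines into a dict,
    # skipping blanks and "#" comments.
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

# The existing test: happy path only, and it passes.
assert parse_config("host=localhost\nport=8080") == {
    "host": "localhost", "port": "8080",
}

# The gap the prompt surfaced: a line with no "=" doesn't raise -- it
# silently becomes a key with an empty value. Passes today's suite,
# bites in production.
assert parse_config("hostlocalhost") == {"hostlocalhost": ""}
```

The second assertion documents the silent failure mode; the real fix is deciding whether malformed lines should raise, warn, or be ignored, and then writing the test for that decision.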
What ChatGPT Can't Do for You
It won't know your domain. It doesn't know that in your system, a "pending" order means something different from a "processing" order, or that order ID 0 is reserved. You still have to review everything it generates. Think of it as a junior tester who is very fast and never gets bored, but who needs clear instructions and your sign-off on the output.
The prompts above give you a starting template. The real skill is iterating on them until they match your codebase's patterns and vocabulary.
If you want a full set of tested prompts across more testing scenarios — property-based testing, snapshot testing, test refactoring, flaky test diagnosis — I've put together a 200-prompt pack that covers the full developer workflow.
Get 200 ChatGPT Prompts for Developers — $19 instant download