When writing unit tests, I follow a quick multi-step prompting method that hasn't failed me yet: it consistently helps me produce high-quality, maintainable tests.
Here's the approach:
🔍Ask: "### {function} ### Can you explain what this function does?"
This forces a model (or even you) to clearly articulate the purpose and behavior of the function.
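For instance, the `{function}` slot might hold a small method like this (a hypothetical example; `MathUtil.divide` is not from any particular codebase):

```java
// Hypothetical method pasted into the ### {function} ### slot of the prompt.
public final class MathUtil {

    /** Returns numerator / denominator, rejecting a zero denominator. */
    public static int divide(int numerator, int denominator) {
        if (denominator == 0) {
            throw new IllegalArgumentException("denominator must not be zero");
        }
        return numerator / denominator;
    }
}
```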
🧠Plan: "Can you plan a set of unit tests for this function?"
This step breaks down edge cases, success paths, and failure modes.
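For the hypothetical `divide` method above, a reasonable plan would cover, at minimum: the happy path (10 / 5), negative operands, a zero denominator (the declared failure mode), and the integer-overflow edge case `Integer.MIN_VALUE / -1`.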
📈Refine if Needed: If the test plan is too small or too generic, prompt again (or switch models) until you get a comprehensive set.
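A follow-up prompt as simple as "What boundary values or invalid inputs are still uncovered by this plan?" can surface cases the first pass missed.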
💻Implement: Ask for the test class in Java. Use the naming convention:
UnitOfWork_StateUnderTest_ExpectedBehavior
(This makes your tests self-documenting and easier to maintain.)
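Continuing the hypothetical `divide` example, the output of this step might look like the sketch below (JUnit 5 assumed; class and method names are illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class MathUtilTest {

    // Naming convention: UnitOfWork_StateUnderTest_ExpectedBehavior

    @Test
    void divide_positiveOperands_returnsQuotient() {
        assertEquals(2, MathUtil.divide(10, 5));
    }

    @Test
    void divide_negativeDenominator_returnsNegativeQuotient() {
        assertEquals(-2, MathUtil.divide(10, -5));
    }

    @Test
    void divide_zeroDenominator_throwsIllegalArgumentException() {
        assertThrows(IllegalArgumentException.class, () -> MathUtil.divide(10, 0));
    }

    @Test
    void divide_minValueByMinusOne_overflowsToMinValue() {
        // Java int division wraps here (JLS 15.17.2): MIN_VALUE / -1 == MIN_VALUE.
        assertEquals(Integer.MIN_VALUE, MathUtil.divide(Integer.MIN_VALUE, -1));
    }
}
```

A name like divide_zeroDenominator_throwsIllegalArgumentException tells you the unit, the state under test, and the expected behavior without even opening the test body.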
This structured prompt-based method has consistently helped me cover more edge cases and write cleaner tests with less mental overhead.
💡 Please note that you will still have to modify some things here and there; you cannot, and should not, expect LLMs to generate everything exactly as you imagine. For example, file names and object creation may need to be adapted to your project.