I built a CLI tool that analyzes pytest failures and proposes fixes automatically.
## How It Works
- Runs your test suite
- Analyzes failure patterns
- Proposes a fix with diff preview
- You approve or reject
- Re-runs to verify no regressions
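The loop above can be sketched in a few lines. This is a minimal illustration, not the tool's actual code: `run_pytest` shells out to pytest, and `propose_fix`, `apply_fix`, `revert_fix`, and `approve` are hypothetical callbacks standing in for the analysis, patching, and approval steps.

```python
import subprocess

def run_pytest(args=()):
    """Run the test suite; returns (exit_code, combined output)."""
    proc = subprocess.run(["pytest", *args], capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

def fix_cycle(run_tests, propose_fix, apply_fix, revert_fix, approve):
    """One test -> fix -> validate cycle.

    All arguments are callbacks: propose_fix(output) returns a diff or
    None, approve(diff) shows the diff preview and returns the user's
    decision, apply_fix / revert_fix patch and un-patch the code.
    """
    code, output = run_tests()
    if code == 0:
        return "passing"
    diff = propose_fix(output)
    if diff is None:
        return "no-fix"
    if not approve(diff):
        return "rejected"
    apply_fix(diff)
    code, _ = run_tests()  # re-run to verify no regressions
    if code != 0:
        revert_fix(diff)   # the fix broke something else; roll it back
        return "regression-reverted"
    return "fixed"
```

Keeping the loop's control flow separate from the fix-generation callbacks also makes the rollback-on-regression behavior easy to test in isolation.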
## Demo

https://drive.google.com/file/d/1Uv79v47-ZVC6xLv1TZL2cvEbUuLcy5FU/view?usp=drivesdk
## Current Approach
Right now it's mostly pattern-based: it analyzes pytest output and looks for common bug classes like wrong operators, flipped logic, and off-by-one errors.
There's an LLM fallback for trickier cases, but the core is deterministic to keep it predictable and safe.
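To give a feel for what "pattern-based" means here, this is a toy classifier over pytest failure text. The rules and labels are made up for illustration; the real rule set would be much richer.

```python
import re

# Hypothetical rule table: a regex over pytest failure output plus a
# function that turns the match into a bug label (or None to pass).
PATTERNS = [
    # Two integers asserted equal but differing by 1 often signals off-by-one.
    (re.compile(r"assert (\d+) == (\d+)"),
     lambda m: "off-by-one" if abs(int(m.group(1)) - int(m.group(2))) == 1 else None),
    # A failing negated assertion suggests inverted logic.
    (re.compile(r"assert not "), lambda m: "flipped-logic"),
    # A failing comparison suggests the wrong operator was used.
    (re.compile(r"assert .* [<>]=? "), lambda m: "wrong-operator"),
]

def classify_failure(pytest_output):
    """Return the first matching bug category, or None if nothing matches."""
    for pattern, label in PATTERNS:
        m = pattern.search(pytest_output)
        if m:
            result = label(m)
            if result:
                return result
    return None
```

A deterministic table like this is cheap to run on every failure, and anything that falls through to `None` is what the LLM fallback would pick up.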
The goal is to automate the test → fix → validate loop, not just explain errors like ChatGPT would.
## What's Next
Looking to handle more complex failure patterns and improve the global optimization approach (scoring multiple candidate fixes across the full test suite).
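One way to sketch that scoring step, under the assumption that each candidate diff can be applied in a sandbox and the full suite run against it (`run_suite_with` is a hypothetical callback returning a `(passed, total)` pair):

```python
def score_candidates(candidates, run_suite_with):
    """Pick the candidate fix that passes the most tests.

    candidates: list of diffs, in the order they were proposed.
    run_suite_with(diff): hypothetical callback that applies the diff in
    a sandbox, runs the full suite, and returns (passed, total).
    Ties break toward the earlier (first-proposed) candidate.
    """
    best, best_passed = None, -1
    for diff in candidates:
        passed, total = run_suite_with(diff)
        if passed == total:
            return diff  # a full pass wins immediately
        if passed > best_passed:
            best, best_passed = diff, passed
    return best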
I'm also recruiting beta testers: DM me if you want to try it on a real project.
## Feedback Welcome
What kind of test failures would you find most useful to auto-fix?
---
### **2. Twitter Post**
Built a CLI that auto-fixes failing Python tests
Analyzes pytest output → proposes fix → shows diff → validates no regressions
Demo: https://drive.google.com/file/d/1Uv79v47-ZVC6xLv1TZL2cvEbUuLcy5FU/view?usp=drivesdk
Pattern-based core + LLM fallback for edge cases
Looking for beta testers. Thoughts?