Let’s be honest: debugging is the "Dark Souls" of software engineering. You spend 10 minutes writing a feature and two hours wondering why `undefined` is not a function.
I got tired of the "print-statement-and-pray" workflow, so I started building codetrace-ai.
Why another dev tool? 🛠️
Most tools tell you where the code broke. codetrace-ai tells you how it got there—and then fixes it for you.
Imagine a tool that:
- Summarizes a massive execution log into 3 actionable bullet points.
- Actually edits your files: Found a logic error in the trace? Just ask codetrace-ai to fix it. It refactors your code in real time based on the actual execution data.
- Understands your async flows without you having to set 40 breakpoints.
- Privacy-First by Design: Your code and execution traces stay local. We don't use your proprietary data to train third-party models; what happens on your machine stays on your machine (via local Llama support).
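If "execution trace" sounds abstract: Python's standard `sys.settrace` hook can record every line a program actually executes, which is the kind of raw data a trace-based debugger works from. This is a minimal, self-contained sketch of the idea, not codetrace-ai's actual implementation (the `record_trace` helper and the buggy example function are made up for illustration):

```python
import sys

def record_trace(func, *args):
    """Run func(*args) and record each (function name, line number) executed."""
    events = []

    def tracer(frame, event, arg):
        if event == "line":
            events.append((frame.f_code.co_name, frame.f_lineno))
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always uninstall the hook
    return result, events

def buggy_discount(price, rate):
    # Logic error: subtracts the rate as an absolute amount, not a percentage.
    discounted = price - rate
    return discounted

result, events = record_trace(buggy_discount, 100, 0.2)
print(result)   # 99.8 instead of 80.0 -- and the trace shows exactly which lines ran
print(events)   # e.g. [('buggy_discount', 22), ('buggy_discount', 23)]
```

A tool can then feed a trace like this (plus the source) to an LLM to explain not just where the bad value appeared, but the path that produced it.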
The "Trace" Vibe 🎨
We went with a high-energy theme (Red, Orange, and Yellow) because debugging should feel like you're solving a high-stakes puzzle, not filling out a spreadsheet.
I need your feedback!
I'm currently in the early stages of this project and would love to hear from the community:
- What's the most annoying bug you've had to trace recently?
- Would you trust an AI to suggest a fix for a logic error, or just to explain the error?
- Local AI: How important is it to you that the LLM processing your traces runs entirely on your local hardware?
Drop a comment below! I'm actively looking for contributors and early testers.
👉 Repo: codetrace-ai
👉 PyPI: pip install codetrace-ai