tdSEVN
I got tired of copy-pasting scattered logs to AI, so I built an open-source Go daemon that traces E2E (React to SQL)

Hi DEV Community! 👋

Like many of you, I rely heavily on AI (Claude/Cursor) for debugging. But recently, I noticed a huge pain point in my workflow: AI constantly hallucinates or gives generic answers when it lacks actual runtime context.

Think about this common scenario:
Your frontend crashes and throws a cryptic error in the browser console. However, the backend silently returns a 200 OK, or maybe there's a deep SQL constraint failing that the frontend never sees.

To give the AI enough context to actually fix the bug, I had to:

Manually collect logs from the browser console.

Dig through the backend terminal to find the executed SQL.

Check the Network tab for HTTP headers.

Paste everything together into a massive prompt.

It’s incredibly tedious, context-switching is draining, and it burns through tokens.

💡 The Solution: Trace2Prompt
I decided to scratch my own itch and built Trace2Prompt—an open-source tool written in Go.

GitHub: thuanDaoSE / trace2prompt

Zero-Config AI Debugging Assistant. Auto-capture End-to-End context (Frontend to DB) into perfect LLM prompts.

🚀 Trace2Prompt

"Zero-Config" AI Debug Assistant - Automatically Collects Runtime Context & Distributed Logs


😩 The Pain: "Hey AI, why did my app crash?"

Read this in other languages: 🇻🇳 Tiếng Việt.

You open ChatGPT/Claude and type:

"Hey AI, I clicked button A, then filled out form B, and suddenly the project stopped working. Why is there a business logic error here? Why is the system so slow?"

And the result? The AI gives generic, cliché answers, or worse, makes up incorrect code. The reason is simple: the AI is blind to the runtime environment. It can read the static code, but it has no idea what the actual data looked like at execution time.

Furthermore, in modern systems, logs are scattered everywhere: the frontend reports errors in the browser console, the backend throws exceptions in the terminal, and SQL failures are buried in the database.

Instead of making you gather logs by hand, Trace2Prompt runs as a lightweight background daemon that speaks the standard OpenTelemetry protocol (OTLP). It automatically connects the dots across your entire stack:

Frontend Clicks & Console errors ➡️ Backend API execution (Flame Graph) ➡️ Actual SQL queries.

It aggregates all this E2E context into one single, standardized Prompt. You just click "Copy", send it to your AI, and it instantly knows the exact line of code causing the bug.

⚙️ How it works under the hood
I wanted this to be as frictionless as possible, so the core principles are:

Zero-Config Code: You don't need to change your business logic. Just attach the standard OTel agent to your app (works natively with Node.js, Java, Python, Go, etc.).

Ultra-lightweight: Written in Go, the daemon idles at near-zero CPU and consumes only a few dozen MB of RAM.

Security First: Sensitive values such as passwords, JWT tokens, and emails are automatically replaced with [REDACTED] before anything is sent to the AI. And because devs (rightfully) don't trust random executables, I deliberately didn't ship pre-built binaries. It's 100% open-source, and you spin it up safely with one simple Docker command.
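To make the redaction idea concrete, here is a minimal Go sketch of regex-based masking. The patterns below are examples I wrote for illustration, not Trace2Prompt's actual rule set:

```go
package main

import (
	"fmt"
	"regexp"
)

// Example patterns: mask values of known sensitive JSON keys and
// sensitive HTTP headers. Illustrative only, not the tool's real rules.
var sensitive = []*regexp.Regexp{
	regexp.MustCompile(`(?i)("(?:password|token|email)"\s*:\s*)"[^"]*"`),
	regexp.MustCompile(`(?i)((?:Authorization|Cookie):\s*).*`),
}

// redact rewrites any matched sensitive value to [REDACTED],
// preserving the key or header name via the capture group.
func redact(payload string) string {
	for _, re := range sensitive {
		payload = re.ReplaceAllString(payload, `$1"[REDACTED]"`)
	}
	return payload
}

func main() {
	fmt.Println(redact(`{"user":"ann","password":"hunter2"}`))
	// → {"user":"ann","password":"[REDACTED]"}
}
```

Running the redaction before prompt assembly means the raw secrets never leave the daemon.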

🎬 See it in action

(I recorded a quick 27s demo showing how it catches a silent React crash and extracts the exact SQL query).

Trace2Prompt Example Output

👉 Watch the full 27s demo video in the GitHub README!

🤝 I'd love your feedback!
I just released it and am looking for developers to try it out. Whether you want to tear my Go code apart, suggest architectural improvements, or just use it to debug your own side projects—all feedback and PRs are highly appreciated!

Let me know what you think in the comments below! 👇

Top comments (2)

klement Gunndu

The pain point is dead-on — manually stitching browser console + backend logs + SQL traces into a prompt is the worst part of AI-assisted debugging. Using OTel as the collection layer is a solid call since most stacks already have it instrumented. Does the auto-redaction catch custom sensitive fields beyond the standard PII patterns, or is it regex-based?

tdSEVN

Hey, thanks for checking it out! Yeah, gathering that context manually is the absolute worst.

Right now in v1, it's just hardcoded/regex-based targeting the usual suspects (Headers like Authorization/Cookie, and JSON keys like password, token, email). It replaces them with [REDACTED].

It won't catch custom sensitive fields yet. But adding a trace2prompt.yaml config so you can define your own regex rules or keys to mask is next on my roadmap.
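For reference, a user-defined rule file along those lines might look something like this — a hypothetical sketch of the planned `trace2prompt.yaml`, not a released feature:

```yaml
# Hypothetical trace2prompt.yaml (roadmap item, not yet implemented)
redact:
  keys:            # JSON keys whose values get masked
    - password
    - api_key
    - ssn
  headers:         # HTTP headers whose values get masked
    - Authorization
    - X-Internal-Token
  patterns:        # free-form regexes applied to payload bodies
    - "(?i)secret[-_ ]?\\w+"
```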

Would a config file approach work for your setup, or do you have a different solution in mind?