Param Shah

We built traceAI, an open-source tool for tracing LLM calls in production

If you have ever tried to debug an LLM app in production, you know
how painful it gets: you have no idea what prompt actually went out,
what the model returned, how long it took, or why it failed.

That is exactly why we built traceAI.

traceAI is an open-source observability tool that traces every LLM
call in your application. It captures:

  • Inputs and outputs
  • Latency and token usage
  • Costs
  • Errors and failures

All with minimal setup.
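To make "tracing" concrete, here is a minimal sketch of the idea in plain Python: a decorator that records inputs, output, latency, and errors for each call. This is a generic illustration only, not traceAI's actual API; the names (`trace_llm_call`, `TRACES`, `fake_llm`) are all hypothetical.

```python
import functools
import time

# In a real tool, spans would be exported to a backend instead of a list.
TRACES = []

def trace_llm_call(func):
    """Capture inputs, output, latency, and errors for each call (sketch)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        span = {
            "function": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": None,
            "error": None,
        }
        start = time.perf_counter()
        try:
            span["output"] = func(*args, **kwargs)
            return span["output"]
        except Exception as exc:
            span["error"] = repr(exc)
            raise
        finally:
            # Record latency whether the call succeeded or failed.
            span["latency_ms"] = (time.perf_counter() - start) * 1000
            TRACES.append(span)
    return wrapper

@trace_llm_call
def fake_llm(prompt):
    # Stand-in for a real model call; a real span would also
    # record token usage and cost from the provider's response.
    return {"text": "hello", "tokens": 5}
```

A real instrumentation layer wraps the provider SDK for you so you don't hand-decorate every call, but the shape of the captured data is the same.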

We are launching our full platform next week, but the traceAI repo
is already live on GitHub.

Check it out: https://github.com/future-agi/traceAI

Would love feedback from devs who are running LLMs in production.
What does your current observability stack look like? What is
missing?
