<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Param Shah</title>
    <description>The latest articles on DEV Community by Param Shah (@param_shah_e2b).</description>
    <link>https://dev.to/param_shah_e2b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3879310%2F4f2140ed-1169-4194-ba87-e37fa6e87f21.png</url>
      <title>DEV Community: Param Shah</title>
      <link>https://dev.to/param_shah_e2b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/param_shah_e2b"/>
    <language>en</language>
    <item>
      <title>We built traceAI, an open-source tool for tracing LLM calls in production</title>
      <dc:creator>Param Shah</dc:creator>
      <pubDate>Sat, 18 Apr 2026 11:27:11 +0000</pubDate>
      <link>https://dev.to/param_shah_e2b/we-built-traceai-an-open-source-tool-for-tracing-llm-calls-in-production-4b3m</link>
      <guid>https://dev.to/param_shah_e2b/we-built-traceai-an-open-source-tool-for-tracing-llm-calls-in-production-4b3m</guid>
      <description>&lt;p&gt;If you have ever tried to debug an LLM app in production, you know how painful it gets. You have no idea what prompt actually went out, what the model returned, how long it took, or why it failed.&lt;/p&gt;

&lt;p&gt;That is exactly why we built traceAI.&lt;/p&gt;

&lt;p&gt;traceAI is an open-source observability tool that traces every LLM call in your application. It captures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inputs and outputs&lt;/li&gt;
&lt;li&gt;Latency and token usage&lt;/li&gt;
&lt;li&gt;Costs&lt;/li&gt;
&lt;li&gt;Errors and failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All with minimal setup.&lt;/p&gt;
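
&lt;p&gt;To make that concrete, here is a rough sketch of the kind of data a tracing tool like this records around a single LLM call. Note this is plain OpenTelemetry plus the OpenAI SDK, not the traceAI API itself, and the attribute names are made up for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative sketch, NOT the traceAI API: plain OpenTelemetry
# showing the kind of span data a tool like traceAI captures.
from opentelemetry import trace
from openai import OpenAI

tracer = trace.get_tracer("llm-app")
client = OpenAI()

def traced_completion(prompt):
    # The span duration gives you latency; exceptions raised inside
    # the block are recorded on the span automatically.
    with tracer.start_as_current_span("llm.call") as span:
        span.set_attribute("llm.prompt", prompt)  # input
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        text = resp.choices[0].message.content
        span.set_attribute("llm.completion", text)  # output
        span.set_attribute("llm.tokens.total", resp.usage.total_tokens)
        return text
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Cost then falls out of token counts and your provider's pricing. A real instrumentor wires all of this up for you instead of making you hand-roll the spans.&lt;/p&gt;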

&lt;p&gt;We are launching our full platform next week, but the traceAI repo is already live on GitHub.&lt;/p&gt;

&lt;p&gt;Check it out: &lt;a href="https://github.com/future-agi/traceAI" rel="noopener noreferrer"&gt;https://github.com/future-agi/traceAI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love feedback from devs who are running LLMs in production. What does your current observability stack look like? What is missing?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>opensource</category>
      <category>devtool</category>
    </item>
  </channel>
</rss>
