<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kilo Spark</title>
    <description>The latest articles on DEV Community by Kilo Spark (@kilospark).</description>
    <link>https://dev.to/kilospark</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3773420%2F736cc910-bf09-4092-acb8-ffafdd5d03f8.png</url>
      <title>DEV Community: Kilo Spark</title>
      <link>https://dev.to/kilospark</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kilospark"/>
    <language>en</language>
    <item>
      <title>Debug Stripe Webhooks Without Guessing What Your Server Sends Back</title>
      <dc:creator>Kilo Spark</dc:creator>
      <pubDate>Sun, 15 Feb 2026 20:57:03 +0000</pubDate>
      <link>https://dev.to/kilospark/debug-stripe-webhooks-without-guessing-what-your-server-sends-back-5e7</link>
      <guid>https://dev.to/kilospark/debug-stripe-webhooks-without-guessing-what-your-server-sends-back-5e7</guid>
      <description>&lt;p&gt;You get a Stripe webhook. Your server processes it. Something breaks — maybe the charge succeeds but your database doesn't update, or the customer gets double-charged.&lt;/p&gt;

&lt;p&gt;The problem isn't Stripe. It's that you can't see what your server is actually sending &lt;em&gt;to&lt;/em&gt; Stripe during that webhook flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Blind Spot
&lt;/h2&gt;

&lt;p&gt;When a Stripe webhook fires, your handler probably:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verifies the signature&lt;/li&gt;
&lt;li&gt;Reads the event&lt;/li&gt;
&lt;li&gt;Makes API calls back to Stripe (update subscription, create invoice, refund, etc.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Steps 1 and 2 are easy to log. Step 3? You're flying blind unless you dig through Stripe's dashboard and match timestamps manually.&lt;/p&gt;
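&lt;p&gt;The three steps above can be sketched as plain code. This is a minimal sketch with the Stripe pieces stubbed out as injected callables; &lt;code&gt;verify_signature&lt;/code&gt;, &lt;code&gt;parse_event&lt;/code&gt;, and &lt;code&gt;call_stripe&lt;/code&gt; are hypothetical stand-ins, not real SDK functions:&lt;/p&gt;

```python
# Minimal sketch of a webhook handler's three steps. The three
# callables are hypothetical stand-ins; in real code they'd be
# stripe.Webhook.construct_event and SDK calls back to Stripe.
def handle_webhook(payload, signature, verify_signature, parse_event, call_stripe):
    if not verify_signature(payload, signature):  # step 1: easy to log
        return 400
    event = parse_event(payload)                  # step 2: easy to log
    # step 3: outbound calls back to Stripe -- the blind spot
    call_stripe("POST /v1/subscriptions", event)
    return 200
```

&lt;p&gt;Steps 1 and 2 happen entirely in your process, so logging them is trivial. Step 3 disappears into the network, which is why it needs its own visibility.&lt;/p&gt;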

&lt;h2&gt;
  
  
  One-Line Fix
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://toran.sh" rel="noopener noreferrer"&gt;toran.sh&lt;/a&gt; lets you see every outbound API call by swapping a base URL. Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;stripe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_base&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.stripe.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;stripe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_base&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://stripe.toran.sh&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No SDK. No middleware. No code beyond changing one string.&lt;/p&gt;
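&lt;p&gt;If you want that one string to be flippable per environment, one assumed pattern is an environment-variable toggle (the &lt;code&gt;STRIPE_API_BASE&lt;/code&gt; variable name here is my own convention, not Stripe's or toran's):&lt;/p&gt;

```python
import os

# Default to Stripe directly; point STRIPE_API_BASE at the proxy
# (e.g. https://stripe.toran.sh) only when you want to watch traffic.
api_base = os.getenv("STRIPE_API_BASE", "https://api.stripe.com")
# stripe.api_base = api_base  # the one-line change from above
```

&lt;p&gt;Set the variable in a debugging session, leave it unset everywhere else, and normal traffic never touches the proxy.&lt;/p&gt;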

&lt;h2&gt;
  
  
  What You See
&lt;/h2&gt;

&lt;p&gt;Every request your server makes to Stripe now shows up in real time at toran.sh:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Full request body&lt;/strong&gt; — see exactly what parameters you're sending&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response&lt;/strong&gt; — see what Stripe sent back (including error details)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timing&lt;/strong&gt; — spot slow calls that might be causing timeouts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headers&lt;/strong&gt; — verify you're sending the right API version, idempotency keys, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real Example: Debugging a Double-Charge
&lt;/h2&gt;

&lt;p&gt;A user reported being charged twice. Here's how you'd debug it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set &lt;code&gt;stripe.api_base = "https://stripe.toran.sh"&lt;/code&gt; in your webhook handler&lt;/li&gt;
&lt;li&gt;Re-send the webhook from Stripe's dashboard (they have a "Resend" button)&lt;/li&gt;
&lt;li&gt;Watch toran.sh — you see your handler makes &lt;strong&gt;two&lt;/strong&gt; &lt;code&gt;POST /v1/charges&lt;/code&gt; calls&lt;/li&gt;
&lt;li&gt;The bug is obvious: your idempotency check runs &lt;em&gt;after&lt;/em&gt; the first charge instead of before&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Without seeing the outbound calls, you'd be reading logs and guessing for hours.&lt;/p&gt;
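&lt;p&gt;Once the diff shows the ordering bug, the fix is mechanical. Here's a toy sketch of the corrected order, using an in-memory set for illustration; a real handler would persist processed event ids, and Stripe's API also accepts &lt;code&gt;Idempotency-Key&lt;/code&gt; headers as a second line of defense:&lt;/p&gt;

```python
# Record the event id *before* charging, so a redelivered
# webhook becomes a no-op instead of a second charge.
processed = set()

def handle_charge_event(event_id, create_charge):
    if event_id in processed:   # idempotency check runs first now
        return "skipped"
    processed.add(event_id)
    create_charge()             # at most one charge per event id
    return "charged"
```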

&lt;h2&gt;
  
  
  Works With Any API
&lt;/h2&gt;

&lt;p&gt;Stripe is just one example. The pattern works with any API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;openai.toran.sh&lt;/code&gt; for OpenAI calls&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;api.anthropic.toran.sh&lt;/code&gt; for Anthropic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;api.github.toran.sh&lt;/code&gt; for GitHub API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Swap the base URL, see what your code actually sends.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhook debugging&lt;/strong&gt; — see what your handler sends back to the API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDK debugging&lt;/strong&gt; — see what the SDK actually sends vs. what the docs say&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration testing&lt;/strong&gt; — verify your code sends the right payloads before going to production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teaching&lt;/strong&gt; — show a junior dev exactly what happens when they call &lt;code&gt;stripe.Charge.create()&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://toran.sh" rel="noopener noreferrer"&gt;toran.sh&lt;/a&gt; is free — no signup required for basic usage. Swap a URL, see your API calls.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://kilospark.com" rel="noopener noreferrer"&gt;Kilo Spark&lt;/a&gt;. We build dev tools that show you what's actually happening.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>api</category>
      <category>stripe</category>
      <category>debugging</category>
    </item>
    <item>
      <title>What Open Source Maintainers Miss in Large PRs (And How to Catch It)</title>
      <dc:creator>Kilo Spark</dc:creator>
      <pubDate>Sun, 15 Feb 2026 08:01:23 +0000</pubDate>
      <link>https://dev.to/kilospark/what-open-source-maintainers-miss-in-large-prs-and-how-to-catch-it-35nj</link>
      <guid>https://dev.to/kilospark/what-open-source-maintainers-miss-in-large-prs-and-how-to-catch-it-35nj</guid>
      <description>&lt;p&gt;Large pull requests are where bugs, security issues, and breaking changes go to hide.&lt;/p&gt;

&lt;p&gt;If you maintain an open source project, you've been there: a 400-line PR lands in your review queue. You scroll through it, skim the obvious parts, approve it, and move on. Two weeks later, something breaks — and the culprit was buried on line 312 of that diff.&lt;/p&gt;

&lt;p&gt;You're not lazy. You're human. And large PRs are designed (unintentionally) to exploit exactly how humans review code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Science of Skimming
&lt;/h2&gt;

&lt;p&gt;Studies on code review consistently show that review effectiveness drops sharply after ~200 lines of diff. Beyond that threshold, reviewers start skimming. They focus on the parts they understand and gloss over the rest.&lt;/p&gt;

&lt;p&gt;For open source maintainers, this is especially painful. You're reviewing code from contributors you may not know, in your spare time, often on a phone between meetings. The incentive structure practically guarantees things slip through.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Gets Missed
&lt;/h2&gt;

&lt;p&gt;After looking at hundreds of open source PRs and the issues that followed them, patterns emerge. Here's what reviewers consistently miss in large diffs:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. New Dependencies
&lt;/h3&gt;

&lt;p&gt;A contributor adds a helper library — seems reasonable. But buried in &lt;code&gt;package.json&lt;/code&gt; or &lt;code&gt;go.mod&lt;/code&gt;, there's now a new transitive dependency tree you didn't audit. That tree might include packages with known vulnerabilities, packages with minimal maintenance, or packages with licenses incompatible with yours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real example pattern:&lt;/strong&gt; A PR that refactors a utility module also adds &lt;code&gt;lodash&lt;/code&gt; as a dependency — when the project previously had zero runtime dependencies. The refactor gets reviewed; the dependency change gets a rubber stamp.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Permission and Scope Changes
&lt;/h3&gt;

&lt;p&gt;CI config changes are the ultimate "eyes glaze over" zone. A &lt;code&gt;.github/workflows/&lt;/code&gt; modification that adds &lt;code&gt;write&lt;/code&gt; permissions to a workflow, or a Dockerfile change that switches from a non-root user to root — these are high-impact, low-visibility changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real example pattern:&lt;/strong&gt; A PR updates a GitHub Actions workflow to fix a build issue. Buried in the YAML diff, a &lt;code&gt;permissions&lt;/code&gt; block granting &lt;code&gt;contents: write&lt;/code&gt; was added. The contributor needed it for their fix, but now that workflow can push to your repo.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Configuration Modifications
&lt;/h3&gt;

&lt;p&gt;Changes to &lt;code&gt;.env.example&lt;/code&gt;, &lt;code&gt;config.yaml&lt;/code&gt;, &lt;code&gt;nginx.conf&lt;/code&gt;, or infrastructure-as-code files often ride along with feature PRs. Reviewers focus on the application logic and skip the config changes entirely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real example pattern:&lt;/strong&gt; A PR adding a new API endpoint also modifies the CORS configuration to allow &lt;code&gt;*&lt;/code&gt; origins. The endpoint code gets thorough review. The config change? Nobody notices.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Secrets and Credentials
&lt;/h3&gt;

&lt;p&gt;Hardcoded API keys, tokens, or internal URLs sometimes appear in test fixtures, example configs, or debug code. In a large diff, they blend into the noise.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. License and Legal Changes
&lt;/h3&gt;

&lt;p&gt;A new file with a different license header gets added. A &lt;code&gt;LICENSE&lt;/code&gt; file is modified. A vendored dependency brings its own licensing terms. These changes are invisible to most reviewers unless they're specifically looking for them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Tips for Reviewing Large PRs
&lt;/h2&gt;

&lt;p&gt;If you maintain an open source project, here are concrete things you can do today:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Break the scroll habit.&lt;/strong&gt; Don't review a large PR top-to-bottom in one pass. Start with the files that matter most: config files, CI/CD, dependency manifests, permission-related code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use GitHub's file filter.&lt;/strong&gt; Filter the diff to show only &lt;code&gt;.yml&lt;/code&gt;, &lt;code&gt;.json&lt;/code&gt;, &lt;code&gt;.toml&lt;/code&gt;, &lt;code&gt;.lock&lt;/code&gt; files first. Review those separately from application code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check the dependency diff explicitly.&lt;/strong&gt; If &lt;code&gt;package-lock.json&lt;/code&gt; or &lt;code&gt;go.sum&lt;/code&gt; changed, don't just collapse it. At minimum, check what was &lt;em&gt;added&lt;/em&gt;.&lt;/p&gt;
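&lt;p&gt;A tiny helper makes the "what was added" check concrete: feed it the output of a manifest-scoped &lt;code&gt;git diff&lt;/code&gt; and read only the additions (a sketch; the helper name is mine):&lt;/p&gt;

```python
# Reduce a manifest diff (e.g. from `git diff origin/main... -- package.json`)
# to only the lines that were added, skipping the "+++" file header.
def added_lines(diff_text):
    return [
        line for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]
```

&lt;p&gt;Ten seconds of scanning those lines catches a surprise dependency far more reliably than scrolling a collapsed lockfile.&lt;/p&gt;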

&lt;p&gt;&lt;strong&gt;Ask contributors to split large PRs.&lt;/strong&gt; This is the single most effective policy. A 400-line PR should usually be 3-4 smaller ones. Make this a documented expectation in your &lt;code&gt;CONTRIBUTING.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up automated guardrails.&lt;/strong&gt; Manual review doesn't scale, especially for the boring-but-critical stuff like dependency changes and permission modifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating What Humans Miss
&lt;/h2&gt;

&lt;p&gt;This is where tooling matters. Not as a replacement for human review, but as a safety net for the things humans predictably miss.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://axiomo.app" rel="noopener noreferrer"&gt;Axiomo&lt;/a&gt; was built specifically for this problem. It's a GitHub App that scans every PR and surfaces structured risk signals — new dependencies, permission changes, config modifications, and more — so you don't have to read 500 lines of diff hoping you'll catch the important parts.&lt;/p&gt;

&lt;p&gt;It doesn't replace your judgment. It tells you where to focus it.&lt;/p&gt;

&lt;p&gt;What Axiomo flags automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;New dependencies&lt;/strong&gt; added to your project (and their risk profile)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD and workflow changes&lt;/strong&gt; that modify permissions or secrets access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration changes&lt;/strong&gt; that affect security posture&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License modifications&lt;/strong&gt; across your dependency tree&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sensitive file patterns&lt;/strong&gt; — env files, credentials, infrastructure config&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For open source maintainers, Axiomo is &lt;strong&gt;free and unlimited for public repositories&lt;/strong&gt;. You install the GitHub App, and it starts working on every PR — no configuration required.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Maintainer's Dilemma
&lt;/h2&gt;

&lt;p&gt;Open source maintainers are asked to be security experts, dependency auditors, license lawyers, and CI/CD specialists — on top of actually reviewing code logic. That's not sustainable.&lt;/p&gt;

&lt;p&gt;The answer isn't "review harder." It's having structured signals that point you to what matters, so you can spend your limited review time on the things that actually need human judgment.&lt;/p&gt;

&lt;p&gt;Large PRs will always exist. But missing what's hidden inside them doesn't have to be inevitable.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;a href="https://axiomo.app" rel="noopener noreferrer"&gt;Axiomo&lt;/a&gt; is a free GitHub App that surfaces risk signals in every PR. Unlimited for public repos. Install it in 30 seconds → &lt;a href="https://axiomo.app" rel="noopener noreferrer"&gt;axiomo.app&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>github</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Monitor Every API Call Your LangChain Agent Makes</title>
      <dc:creator>Kilo Spark</dc:creator>
      <pubDate>Sun, 15 Feb 2026 08:01:20 +0000</pubDate>
      <link>https://dev.to/kilospark/how-to-monitor-every-api-call-your-langchain-agent-makes-382h</link>
      <guid>https://dev.to/kilospark/how-to-monitor-every-api-call-your-langchain-agent-makes-382h</guid>
      <description>&lt;h1&gt;
  
  
  How to Monitor Every API Call Your LangChain Agent Makes
&lt;/h1&gt;

&lt;p&gt;Your LangChain agent is making API calls you never see. Here's how to watch them.&lt;/p&gt;




&lt;p&gt;You built a LangChain agent. It works. It calls OpenAI, fetches from APIs, queries vector stores. But do you actually know what it's sending? What headers, what payloads, what tokens it's burning through on each request?&lt;/p&gt;

&lt;p&gt;Most developers don't. LangChain abstracts away the HTTP layer — that's the point. But when something breaks, when costs spike, or when you're debugging a hallucination, you need to see the raw traffic.&lt;/p&gt;

&lt;p&gt;In this tutorial, I'll show you how to use &lt;a href="https://toran.sh" rel="noopener noreferrer"&gt;toran.sh&lt;/a&gt; to intercept and inspect every outbound API call from a LangChain agent — in real time, with zero code changes beyond swapping a URL.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: LangChain's Black Box
&lt;/h2&gt;

&lt;p&gt;Here's a typical LangChain agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;load_tools&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_tools&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;serpapi&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm-math&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zero-shot-react-description&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the population of Tokyo divided by the population of Paris?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even with &lt;code&gt;verbose=True&lt;/code&gt;, you see the chain-of-thought — not the actual HTTP requests. You don't see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many API calls were made to OpenAI&lt;/li&gt;
&lt;li&gt;The exact prompt tokens sent in each request&lt;/li&gt;
&lt;li&gt;Response latencies for each call&lt;/li&gt;
&lt;li&gt;Whether retry logic kicked in&lt;/li&gt;
&lt;li&gt;What your SerpAPI calls actually looked like&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangSmith exists for tracing chains, but it operates at the framework level. Sometimes you need to see the raw HTTP — the actual bytes on the wire.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: Route Through toran.sh
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://toran.sh" rel="noopener noreferrer"&gt;toran.sh&lt;/a&gt; works by giving you a unique URL that proxies your API calls. You swap your base URL, and every request flows through toran's dashboard where you can inspect it in real-time.&lt;/p&gt;

&lt;p&gt;No SDK. No proxy configuration. No code beyond changing one URL string.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Get Your toran.sh Endpoint
&lt;/h3&gt;

&lt;p&gt;Head to &lt;a href="https://toran.sh" rel="noopener noreferrer"&gt;toran.sh&lt;/a&gt; and create a channel. You'll get a slug like &lt;code&gt;my-langchain-debug&lt;/code&gt;. No signup required for basic use.&lt;/p&gt;

&lt;p&gt;Your proxy URL becomes: &lt;code&gt;https://my-langchain-debug.toran.sh&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Point LangChain at toran
&lt;/h3&gt;

&lt;p&gt;Here's the key change — swap the &lt;code&gt;base_url&lt;/code&gt; on your OpenAI client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="c1"&gt;# Before: calls api.openai.com directly
# llm = ChatOpenAI(model="gpt-4o")
&lt;/span&gt;
&lt;span class="c1"&gt;# After: routes through toran.sh
&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://my-langchain-debug.toran.sh/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. One line. LangChain sends all OpenAI requests through toran, which forwards them to OpenAI and logs everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Run Your Agent and Watch
&lt;/h3&gt;

&lt;p&gt;Open your toran.sh dashboard in one window. Run your agent in another:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;load_tools&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://my-langchain-debug.toran.sh/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_tools&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;serpapi&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm-math&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zero-shot-react-description&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the population of Tokyo divided by the population of Paris?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check your toran dashboard. You'll see every request in real time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;POST /v1/chat/completions&lt;/strong&gt; — the initial reasoning call&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST /v1/chat/completions&lt;/strong&gt; — after the agent gets SerpAPI results and thinks again&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST /v1/chat/completions&lt;/strong&gt; — the math step&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST /v1/chat/completions&lt;/strong&gt; — the final answer synthesis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click any request to see the full payload: the system prompt, the conversation history that LangChain assembled, the function definitions, token counts, and the complete response.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Discover
&lt;/h2&gt;

&lt;p&gt;Once you start watching, you'll notice things:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Agents make more calls than you think
&lt;/h3&gt;

&lt;p&gt;A simple ReAct agent answering one question might make 4-6 OpenAI calls. An agent with multiple tools can easily hit 10+. Each one costs tokens.&lt;/p&gt;
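&lt;p&gt;That multiplies cost quickly. A back-of-envelope sketch makes it tangible; the per-token rates below are illustrative assumptions, not current OpenAI pricing:&lt;/p&gt;

```python
# Rough cost of an agent run, given (prompt_tokens, completion_tokens)
# per API call. Rates are invented for illustration only.
RATE_IN = 2.50 / 1_000_000    # dollars per input token (assumed)
RATE_OUT = 10.00 / 1_000_000  # dollars per output token (assumed)

def run_cost(calls):
    return sum(p * RATE_IN + c * RATE_OUT for p, c in calls)

# five ReAct steps, each wrapped in ~2,000 tokens of scaffolding
cost = run_cost([(2000, 150)] * 5)
```

&lt;p&gt;Multiply that by every user question, and the scaffolding overhead dominates the bill long before the answers do.&lt;/p&gt;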

&lt;h3&gt;
  
  
  2. The prompts are massive
&lt;/h3&gt;

&lt;p&gt;LangChain injects tool descriptions, formatting instructions, and conversation history into every call. Your "simple question" might be wrapped in 2,000 tokens of scaffolding.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Retries happen silently
&lt;/h3&gt;

&lt;p&gt;If OpenAI returns a 429 or 500, the client retries. Without toran, you'd never know. With it, you see the failed request, the delay, and the retry.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Response times vary wildly
&lt;/h3&gt;

&lt;p&gt;Some calls return in 500ms. Others take 8 seconds. The dashboard shows you exactly which calls are slow, so you can optimize the right thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Multiple Services
&lt;/h2&gt;

&lt;p&gt;LangChain agents often call more than just OpenAI. If you're using other LLM providers or APIs that support base URL configuration, you can route them through toran too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_anthropic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatAnthropic&lt;/span&gt;

&lt;span class="c1"&gt;# Monitor OpenAI calls
&lt;/span&gt;&lt;span class="n"&gt;openai_llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://my-langchain-debug.toran.sh/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# For any HTTP-based tool or API, swap the base URL
# to route through your toran channel
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every call from every service shows up in one dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Production Use: Keep It Running
&lt;/h2&gt;

&lt;p&gt;toran.sh isn't just for debugging. Keep it in your staging or production pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="c1"&gt;# Toggle monitoring via environment variable
&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TORAN_BASE_URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.openai.com/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set &lt;code&gt;TORAN_BASE_URL&lt;/code&gt; in staging to monitor. Remove it in production for direct calls. Or leave it on — the proxy adds minimal latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;toran.sh has a free tier that requires no signup — perfect for debugging sessions. If you need persistent logging and team features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free&lt;/strong&gt; — No signup, basic real-time inspection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro ($29/mo)&lt;/strong&gt; — Extended history, team access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro Plus ($99/mo)&lt;/strong&gt; — Full retention, priority support&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;LangChain is powerful, but abstraction comes at a cost: you lose visibility into what's actually happening at the network level. toran.sh gives it back with a one-line change.&lt;/p&gt;

&lt;p&gt;Next time your agent burns through $5 of tokens on a single question, or takes 30 seconds to respond, or returns something bizarre — don't guess. Watch the calls.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://toran.sh" rel="noopener noreferrer"&gt;toran.sh&lt;/a&gt; — start monitoring in 30 seconds.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>langchain</category>
      <category>debugging</category>
    </item>
    <item>
      <title>The 5 Riskiest PRs Merged to Popular Open Source Projects This Week</title>
      <dc:creator>Kilo Spark</dc:creator>
      <pubDate>Sun, 15 Feb 2026 03:22:03 +0000</pubDate>
      <link>https://dev.to/kilospark/the-5-riskiest-prs-merged-to-popular-open-source-projects-this-week-3kmj</link>
      <guid>https://dev.to/kilospark/the-5-riskiest-prs-merged-to-popular-open-source-projects-this-week-3kmj</guid>
      <description>&lt;p&gt;Every week, thousands of pull requests get merged to major open source projects. Most are routine — dependency bumps, typo fixes, small refactors. But some are high-risk changes that touch critical paths, come from first-time contributors, or modify security-sensitive code.&lt;/p&gt;

&lt;p&gt;I analyzed recent PRs across several popular repos to find the ones that deserved the most scrutiny. Here's what I found, and what it tells us about how we review code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes a PR "risky"?
&lt;/h2&gt;

&lt;p&gt;Before diving in, let's define what I mean by risk. It's not about the code being bad — it's about the &lt;em&gt;probability that something important was missed during review&lt;/em&gt;. A few signals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;First-time contributor&lt;/strong&gt; to the repo — they don't know the conventions yet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large diff touching multiple subsystems&lt;/strong&gt; — hard to review thoroughly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Changes to auth, payments, or data handling&lt;/strong&gt; — high blast radius if wrong&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Insufficient test coverage for the change&lt;/strong&gt; — the safety net has holes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick merge with minimal discussion&lt;/strong&gt; — maybe the reviewers were busy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these mean the PR is bad. They mean it deserved more attention than average.&lt;/p&gt;
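
&lt;p&gt;Those signals are simple enough to sketch as a toy scorer. Field names and weights below are hypothetical, not axiomo's actual model — the point is just that the output is a triage hint, never a verdict:&lt;/p&gt;

```python
# Toy risk heuristic over PR metadata. Field names and weights are
# made up for illustration; the score is a triage hint, not a verdict.
SENSITIVE_PATHS = ("auth/", "payments/", "billing/")

def risk_score(pr: dict) -> int:
    score = 0
    if pr.get("author_prior_commits", 0) == 0:
        score += 3   # first-time contributor
    if pr.get("lines_changed", 0) > 400:
        score += 2   # large diff, hard to review thoroughly
    if any(f.startswith(SENSITIVE_PATHS) for f in pr.get("files", [])):
        score += 3   # auth/payments/data handling: high blast radius
    if not pr.get("adds_tests", False):
        score += 1   # change ships without new tests
    if pr.get("review_comments", 0) in (0, 1):
        score += 1   # merged with minimal discussion
    return score

print(risk_score({"author_prior_commits": 0, "lines_changed": 800,
                  "files": ["auth/session.py"], "review_comments": 0}))  # 10
```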

&lt;h2&gt;
  
  
  How I analyzed them
&lt;/h2&gt;

&lt;p&gt;I used &lt;a href="https://axiomo.app" rel="noopener noreferrer"&gt;axiomo.app&lt;/a&gt; to generate structured signals for each PR. Instead of reading through hundreds of lines of diff, axiomo produces a breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contributor context&lt;/strong&gt; — is this person a regular committer or brand new?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk score&lt;/strong&gt; with specific drivers — what exactly makes this PR risky?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Focus areas&lt;/strong&gt; — which files/changes deserve the most review attention?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evidence&lt;/strong&gt; — links to the specific lines that triggered each signal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key difference from AI code review tools: axiomo doesn't try to tell you if the code is "good" or "bad." It tells you &lt;em&gt;where to look&lt;/em&gt; and &lt;em&gt;why it matters&lt;/em&gt;. The human reviewer still makes the call.&lt;/p&gt;

&lt;h2&gt;
  
  
  What patterns emerge
&lt;/h2&gt;

&lt;p&gt;After analyzing hundreds of PRs across different repos, some patterns stand out:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The "drive-by refactor"
&lt;/h3&gt;

&lt;p&gt;Someone new to the project opens a PR that "cleans up" a module they don't fully understand. The refactor looks reasonable line-by-line, but it subtly changes behavior that downstream code depends on. These PRs are especially dangerous because each individual change looks fine — the risk is in the aggregate.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The "dependency cascade"
&lt;/h3&gt;

&lt;p&gt;A dependency update that looks like a simple version bump, but the new version has breaking changes in edge cases. The test suite passes because it doesn't cover those edges. These are hard to catch because the diff is small and looks innocuous.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The "Friday afternoon merge"
&lt;/h3&gt;

&lt;p&gt;A large PR that's been open for weeks finally gets merged with a quick "LGTM" when the maintainer is trying to clear their review queue. The conversation thread shows early concerns that were never fully resolved, but the PR got approved anyway.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. The "config change with code implications"
&lt;/h3&gt;

&lt;p&gt;Changes to configuration files, environment variables, or feature flags that seem harmless but actually change runtime behavior in production. These often fly under the review radar because they're "just config."&lt;/p&gt;

&lt;h2&gt;
  
  
  Why structured signals matter
&lt;/h2&gt;

&lt;p&gt;Traditional code review is human-powered, which is both its strength and its weakness. Reviewers bring context, judgment, and domain knowledge that no tool can replace. But they also get tired, skip files, and miss patterns across large diffs.&lt;/p&gt;

&lt;p&gt;Structured signals don't replace the reviewer — they make the reviewer faster and more focused. Instead of scanning a 500-line diff wondering "what should I pay attention to?", you get a prioritized list of focus areas with explanations.&lt;/p&gt;

&lt;p&gt;Think of it like a triage nurse in an ER. The nurse doesn't diagnose or treat — they make sure the most urgent cases get seen first. That's what structured PR signals do for code review.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it on your own repos
&lt;/h2&gt;

&lt;p&gt;If you maintain an open source project (or work on any repo with regular PRs), you can try this yourself:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://axiomo.app" rel="noopener noreferrer"&gt;axiomo.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Paste any public GitHub PR URL&lt;/li&gt;
&lt;li&gt;Get a structured signal breakdown in seconds&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's free for public repos, no signup required. There's also a &lt;a href="https://github.com/apps/axiomo-app" rel="noopener noreferrer"&gt;GitHub App&lt;/a&gt; that auto-generates signals on new PRs.&lt;/p&gt;

&lt;p&gt;The signal URLs are permanent — you can share them in PR discussions, link them in review comments, or reference them later.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this isn't
&lt;/h2&gt;

&lt;p&gt;This is not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI code review&lt;/strong&gt; — axiomo doesn't leave line-by-line comments or suggest fixes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A replacement for human review&lt;/strong&gt; — it's a lens, not a judge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static analysis&lt;/strong&gt; — it doesn't run your code or check for bugs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A CI gate&lt;/strong&gt; — it's informational, not blocking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's a structured, explainable breakdown of &lt;em&gt;what deserves attention&lt;/em&gt; in a PR and &lt;em&gt;why&lt;/em&gt;. The reviewer still does the thinking.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're curious about the methodology or want to see signals for specific repos, drop a comment. I'm happy to analyze any public PR.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>github</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
    <item>
      <title>How I Debug OpenAI API Calls Without Any SDK</title>
      <dc:creator>Kilo Spark</dc:creator>
      <pubDate>Sun, 15 Feb 2026 03:21:58 +0000</pubDate>
      <link>https://dev.to/kilospark/how-i-debug-openai-api-calls-without-any-sdk-43g1</link>
      <guid>https://dev.to/kilospark/how-i-debug-openai-api-calls-without-any-sdk-43g1</guid>
      <description>&lt;p&gt;You're building with the OpenAI API. Maybe directly, maybe through LangChain, maybe through an MCP tool. Something isn't working — the response is wrong, the model is ignoring your system prompt, or you're getting rate limited and don't know why.&lt;/p&gt;

&lt;p&gt;Your options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add &lt;code&gt;print(response)&lt;/code&gt; everywhere&lt;/li&gt;
&lt;li&gt;Set up LangSmith or Helicone&lt;/li&gt;
&lt;li&gt;Stare at the OpenAI dashboard usage page&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of these suck for quick debugging. Here's what I actually do.&lt;/p&gt;

&lt;h2&gt;
  
  
  The env var trick
&lt;/h2&gt;

&lt;p&gt;Most OpenAI client libraries let you override the base URL. The official Python client uses &lt;code&gt;OPENAI_BASE_URL&lt;/code&gt;. LangChain respects it. So does nearly every wrapper.&lt;/p&gt;

&lt;p&gt;The trick: point that env var at an inspector that forwards to the real API while showing you everything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Go to toran.sh/try&lt;/span&gt;
&lt;span class="c"&gt;# 2. Enter: https://api.openai.com&lt;/span&gt;
&lt;span class="c"&gt;# 3. Get your unique URL, e.g.: https://abc123.toran.sh&lt;/span&gt;
&lt;span class="c"&gt;# 4. Set it:&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://abc123.toran.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run your code normally. Every request to OpenAI flows through your inspector URL, which forwards to the real API. You see the full request and response in your browser — live, as it happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you can see
&lt;/h2&gt;

&lt;p&gt;Once you're watching the traffic, some things become immediately obvious:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Token usage per request.&lt;/strong&gt; Not the aggregate dashboard number — the actual &lt;code&gt;usage&lt;/code&gt; object in each response. You can see exactly which call is burning through your quota.&lt;/p&gt;
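
&lt;p&gt;The numbers below are illustrative, but the shape of the &lt;code&gt;usage&lt;/code&gt; object is what you'll see in every non-streaming chat completion response:&lt;/p&gt;

```python
# Per-request usage object embedded in each chat completion response.
# Values are illustrative; the shape matches the API's response body.
response_body = {
    "model": "gpt-4o",
    "usage": {
        "prompt_tokens": 2412,      # includes any framework-injected system prompt
        "completion_tokens": 180,
        "total_tokens": 2592,
    },
}
u = response_body["usage"]
print(f"{u['prompt_tokens']} in, {u['completion_tokens']} out, {u['total_tokens']} total")
```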

&lt;p&gt;&lt;strong&gt;System prompts hitting the API.&lt;/strong&gt; If you're using a framework, you might not realize what system prompt it's actually sending. I've caught frameworks injecting 2,000-token system prompts I didn't write.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retry behavior.&lt;/strong&gt; Is your client retrying on 429s? How many times? With what backoff? You can see every retry as a separate request.&lt;/p&gt;
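
&lt;p&gt;What that usually looks like under the hood — a generic sketch of jittered exponential backoff, not any particular client's implementation. On the wire, each attempt shows up as a separate request:&lt;/p&gt;

```python
import time, random

def with_retries(fn, max_retries=2, base_delay=0.05):
    # Generic jittered exponential backoff, the pattern most API
    # clients use internally. Each attempt is one wire request.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) in (1, 2):   # fail twice, then succeed
        raise TimeoutError("slow upstream")
    return "ok"

print(with_retries(flaky), len(attempts))  # ok 3
```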

&lt;p&gt;&lt;strong&gt;Streaming chunks.&lt;/strong&gt; If you're using streaming, you can see the actual SSE chunks as they arrive. Useful when debugging why your streaming UI is stuttering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Headers you didn't expect.&lt;/strong&gt; Some wrappers add custom headers. Some send your API key in unexpected ways. Now you can see exactly what's going over the wire.&lt;/p&gt;

&lt;h2&gt;
  
  
  A real example
&lt;/h2&gt;

&lt;p&gt;I was debugging why an agent kept giving wrong answers for a specific type of query. The logs showed the right tool was being called, the right function was executing, but the final answer was wrong.&lt;/p&gt;

&lt;p&gt;I pointed the base URL at toran and watched the requests. Turned out the framework was sending the conversation history in the wrong order — tool results were appearing &lt;em&gt;before&lt;/em&gt; the tool call in the messages array. The model was confused because it was seeing an answer before the question.&lt;/p&gt;

&lt;p&gt;I would never have caught this from application logs. The logs showed "tool called, result returned, completion generated." Everything looked fine. But the actual HTTP request body told the real story.&lt;/p&gt;
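
&lt;p&gt;For reference, this is the ordering the Chat Completions API expects: the assistant message carrying &lt;code&gt;tool_calls&lt;/code&gt; comes first, then the matching &lt;code&gt;tool&lt;/code&gt; result. The broken framework was emitting these two in the opposite order:&lt;/p&gt;

```python
# Minimal sketch of a correct tool-call exchange in the Chat
# Completions messages array: the tool result must follow the
# assistant's tool_calls entry that requested it.
messages = [
    {"role": "user", "content": "What is the weather in Paris?"},
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    }]},
    {"role": "tool", "tool_call_id": "call_1", "content": "14C, cloudy"},
]
roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'tool']
```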

&lt;h2&gt;
  
  
  Works with any OpenAI-compatible API
&lt;/h2&gt;

&lt;p&gt;The same trick works with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Anthropic&lt;/strong&gt; — set &lt;code&gt;ANTHROPIC_BASE_URL&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure OpenAI&lt;/strong&gt; — override the endpoint URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local models (Ollama, vLLM)&lt;/strong&gt; — point at the local server through toran&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Any OpenAI-compatible API&lt;/strong&gt; — if it takes a base URL, it works
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Anthropic
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://abc123.toran.sh&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# OpenAI
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://abc123.toran.sh/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When to use this vs. proper observability
&lt;/h2&gt;

&lt;p&gt;This isn't a replacement for LangSmith, Helicone, or OpenTelemetry. Those are production monitoring tools. This is for when you're sitting at your desk going "why the hell isn't this working" and you need to see the raw request &lt;em&gt;right now&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Think of it as &lt;code&gt;curl -v&lt;/code&gt; for your LLM calls. You don't leave &lt;code&gt;curl -v&lt;/code&gt; in production, but you reach for it constantly during development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use toran when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Something is broken and you need to see the actual request&lt;/li&gt;
&lt;li&gt;You want to verify what a framework is sending on your behalf&lt;/li&gt;
&lt;li&gt;You're debugging streaming behavior&lt;/li&gt;
&lt;li&gt;You need to check retry/error handling logic&lt;/li&gt;
&lt;li&gt;You want to see real token counts per request&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use proper observability when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need historical data and dashboards&lt;/li&gt;
&lt;li&gt;You're monitoring production traffic&lt;/li&gt;
&lt;li&gt;You need alerts on cost/latency&lt;/li&gt;
&lt;li&gt;You want traces across multiple services&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://toran.sh/try" rel="noopener noreferrer"&gt;toran.sh/try&lt;/a&gt;. Enter &lt;code&gt;https://api.openai.com&lt;/code&gt;. Get your URL. Set &lt;code&gt;OPENAI_BASE_URL&lt;/code&gt;. Run your code.&lt;/p&gt;

&lt;p&gt;Takes 30 seconds. You'll see things you didn't know your code was doing.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>debugging</category>
      <category>webdev</category>
    </item>
    <item>
      <title>What Your AI Agent Actually Sends to APIs (And Why You Should Care)</title>
      <dc:creator>Kilo Spark</dc:creator>
      <pubDate>Sun, 15 Feb 2026 02:46:03 +0000</pubDate>
      <link>https://dev.to/kilospark/what-your-ai-agent-actually-sends-to-apis-and-why-you-should-care-1j3g</link>
      <guid>https://dev.to/kilospark/what-your-ai-agent-actually-sends-to-apis-and-why-you-should-care-1j3g</guid>
      <description>&lt;p&gt;You built an MCP tool that creates Stripe charges. You tested it. It works. A customer gets double-charged. Now what?&lt;/p&gt;

&lt;p&gt;You open your logs. The LLM decided to call your &lt;code&gt;create_charge&lt;/code&gt; tool twice. Or maybe once with weird parameters. Or maybe your tool called Stripe's API three times because of retry logic you forgot about. You don't actually know, because you never saw the raw HTTP requests that left your machine.&lt;/p&gt;

&lt;p&gt;This is the reality of building AI agent tooling right now: &lt;strong&gt;you're shipping code where the most important part — what actually goes over the wire — is invisible to you.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The black box problem
&lt;/h2&gt;

&lt;p&gt;When you write a normal REST integration, you control the inputs. You write the fetch call, you set the headers, you know exactly what's being sent because you typed it.&lt;/p&gt;

&lt;p&gt;AI agents flip this. The LLM decides &lt;em&gt;when&lt;/em&gt; to call your tool, &lt;em&gt;what arguments&lt;/em&gt; to pass, and sometimes &lt;em&gt;how many times&lt;/em&gt; to call it. Your MCP tool or function-calling handler takes those arguments and fires off HTTP requests to third-party APIs. The gap between "what the LLM decided" and "what bytes hit Stripe's servers" is where bugs live.&lt;/p&gt;

&lt;p&gt;And it's not a small gap.&lt;/p&gt;

&lt;p&gt;Consider a typical MCP tool that manages Shopify products:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@mcp.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update_product&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;product_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AsyncClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://your-store.myshopify.com/admin/api/2024-01/products/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;product_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-Shopify-Access-Token&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;TOKEN&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;product&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;variants&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="p"&gt;)}]}}&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks simple. But when an agent calls this, you can't see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Did the agent pass a valid &lt;code&gt;product_id&lt;/code&gt;, or did it hallucinate one?&lt;/li&gt;
&lt;li&gt;What did the full request body actually look like after serialization?&lt;/li&gt;
&lt;li&gt;Did Shopify return a 200, or a 422 that your tool swallowed?&lt;/li&gt;
&lt;li&gt;How long did the request take? Was it retried?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You could add logging. You could add print statements. You could wire up OpenTelemetry. But now you're instrumenting every tool, and you still can't see the actual bytes on the wire — just what your code &lt;em&gt;thinks&lt;/em&gt; it sent.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you actually want
&lt;/h2&gt;

&lt;p&gt;You want &lt;code&gt;curl -v&lt;/code&gt; for your AI agent's API calls. Something that shows you the real request and response, in real time, without changing your tool's code.&lt;/p&gt;

&lt;p&gt;The simplest approach I've found: &lt;strong&gt;swap a base URL and watch everything in your browser.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The idea is dead simple. Instead of your tool hitting &lt;code&gt;https://api.stripe.com&lt;/code&gt;, it hits &lt;code&gt;https://abc123.toran.sh&lt;/code&gt;. Toran forwards the request to the real API, records everything, and streams it to a live dashboard you can watch in your browser.&lt;/p&gt;

&lt;p&gt;No install. No signup. No SDK to integrate. You change a base URL, and suddenly you can see everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like in practice
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://toran.sh/try" rel="noopener noreferrer"&gt;toran.sh/try&lt;/a&gt;, enter the upstream API you want to inspect (e.g. &lt;code&gt;https://your-store.myshopify.com&lt;/code&gt;), and you get a unique toran URL like &lt;code&gt;https://abc123.toran.sh&lt;/code&gt;. Now swap it in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before
&lt;/span&gt;&lt;span class="n"&gt;BASE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://your-store.myshopify.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# After — point at your toran URL instead
&lt;/span&gt;&lt;span class="n"&gt;BASE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SHOPIFY_BASE_URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://your-store.myshopify.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set the env var to your toran URL&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SHOPIFY_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://abc123.toran.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Now every request your MCP tool makes to Shopify flows through toran, and you see the full request and response live in your browser.&lt;/p&gt;

&lt;p&gt;No code changes beyond the base URL. When you're done debugging, unset the env var and you're back to production mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters more for AI agents than regular code
&lt;/h2&gt;

&lt;p&gt;In traditional software, a bug in an API call is usually deterministic. Same input, same output, same broken request. You can reproduce it.&lt;/p&gt;

&lt;p&gt;AI agents are stochastic. The LLM might call your tool differently every time. The arguments might be slightly wrong in ways you'd never think to test. The failure mode might be "it worked, but it sent the wrong thing" — the hardest kind of bug.&lt;/p&gt;

&lt;p&gt;Being able to watch the actual HTTP traffic in real time turns debugging from archaeology into observation. You see the problem as it happens, not after your user reports it.&lt;/p&gt;

&lt;p&gt;Some things I've caught this way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An agent passing &lt;code&gt;amount: 1000&lt;/code&gt; (cents) vs &lt;code&gt;amount: 10.00&lt;/code&gt; (dollars) to Stripe — both "work," one costs 100x more&lt;/li&gt;
&lt;li&gt;Retry logic firing three times because the first response was slow, not failed&lt;/li&gt;
&lt;li&gt;Auth tokens being sent to the wrong endpoint because the agent mixed up two similar tools&lt;/li&gt;
&lt;li&gt;A tool silently falling back to a default value when the agent passed &lt;code&gt;null&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
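
&lt;p&gt;The first of those is cheap to guard against at the tool boundary. A hypothetical helper — not part of Stripe's SDK — that refuses ambiguous float amounts instead of guessing units:&lt;/p&gt;

```python
# Hypothetical guard at the tool boundary (not part of any SDK):
# accept only integer cents, and fail loudly on float amounts so
# 1000 (cents) and 10.00 (dollars) can't both silently "work".
def to_cents(amount):
    if isinstance(amount, bool):
        raise TypeError("amount must be an integer number of cents")
    if isinstance(amount, int):
        return amount
    raise ValueError(f"ambiguous amount {amount!r}: pass integer cents")

print(to_cents(1000))  # 1000 cents, i.e. $10.00
```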

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;If you're building MCP tools or any AI agent that talks to external APIs, go to &lt;a href="https://toran.sh/try" rel="noopener noreferrer"&gt;toran.sh/try&lt;/a&gt;. Enter your upstream URL, get a toran URL, swap it in, and see what your agent is actually doing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Go to toran.sh/try, enter: https://api.openai.com&lt;/span&gt;
&lt;span class="c"&gt;# 2. Get your URL: https://abc123.toran.sh&lt;/span&gt;
&lt;span class="c"&gt;# 3. Swap it in:&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://abc123.toran.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No install. No signup. You'll probably be surprised by what you find.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>debugging</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
