<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hima Reddy</title>
    <description>The latest articles on DEV Community by Hima Reddy (@hbkandhi12).</description>
    <link>https://dev.to/hbkandhi12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3896324%2Fcee3fe95-6fe1-4c79-855f-a11dc7ac1cf7.png</url>
      <title>DEV Community: Hima Reddy</title>
      <link>https://dev.to/hbkandhi12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hbkandhi12"/>
    <language>en</language>
    <item>
      <title>Turning Production Incidents Into Testing Postmortems — With a Local LLM and No API Key</title>
      <dc:creator>Hima Reddy</dc:creator>
      <pubDate>Wed, 13 May 2026 13:51:31 +0000</pubDate>
      <link>https://dev.to/hbkandhi12/turning-production-incidents-into-testing-postmortems-with-a-local-llm-and-no-api-key-dif</link>
      <guid>https://dev.to/hbkandhi12/turning-production-incidents-into-testing-postmortems-with-a-local-llm-and-no-api-key-dif</guid>
      <description>&lt;h2&gt;
  
  
  Your team raised a P1. The dev postmortem is done. But where's the testing perspective?
&lt;/h2&gt;

&lt;p&gt;Most incident postmortems answer: &lt;em&gt;what broke and how do we fix it?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;They rarely answer: &lt;em&gt;what should have caught this? What test coverage was missing? What signals did we have that we ignored?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That gap is where this tool lives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prod Incident Test Analyzer&lt;/strong&gt; takes raw incident data — logs, alerts, Slack threads, error dumps — and generates a structured postmortem from a tester's perspective, then narrates it as audio using a free neural TTS engine. No API key. No paid service. The analysis runs entirely on your machine with LLaMA 3 via Ollama; only the narration step calls Microsoft's free TTS endpoint.&lt;/p&gt;

&lt;p&gt;Here's exactly how it works under the hood.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Standard Postmortems
&lt;/h2&gt;

&lt;p&gt;A production incident happens. The dev team writes the RCA. It covers infrastructure failures, deployment mistakes, config drift. The testing section, if it exists at all, says something like: &lt;em&gt;"Add more tests."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's not useful. What tests? Covering what? At which layer?&lt;/p&gt;

&lt;p&gt;The tool simulates a senior Test Engineer independently investigating the same incident — one who wasn't in the room when it happened, has no ego invested in the decisions, and is specifically looking for what the testing and observability layer missed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture at a Glance
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Incident Text
     │
     ▼
build_prompt()  ──►  Ollama (LLaMA 3, local)
                           │
                           ▼
              Structured Markdown Report
              ┌────────────────────────┐
              │ # Incident Summary     │
              │ # Investigation        │
              │ # Root Cause           │
              │ # Prevention Plan      │
              │ # Recommended Tests    │
              │ # Voice Summary        │
              └────────────────────────┘
                           │
               extract_voice_summary()
                           │
                           ▼
                  edge-tts (free, no API key)
                           │
                           ▼
                      Audio Playback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three components: a prompt, a local LLM, and a TTS engine. No external services.&lt;/p&gt;
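
&lt;p&gt;Stitched together, the pipeline is a few lines of glue. The sketch below is illustrative rather than the app verbatim: &lt;code&gt;run_llm&lt;/code&gt; is a hypothetical wrapper around the Ollama call shown in Step 2, and the voice name is just an example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative glue; run_llm is a hypothetical wrapper, the other
# functions match the pieces explained in the steps below.
def analyze_incident(incident_text: str, model: str = "llama3") -&amp;gt; tuple:
    prompt = build_prompt(incident_text)                 # Step 1: testing-focused prompt
    report = run_llm(prompt, model)                      # Step 2: local LLM via Ollama
    summary = get_voice_summary(report, model, 0.3)      # Step 3: extract or fall back
    audio = generate_audio(summary, "en-US-AriaNeural")  # Step 4: TTS to MP3 bytes
    return report, audio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;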




&lt;h2&gt;
  
  
  Step 1: The Prompt — Where the Testing Perspective Comes From
&lt;/h2&gt;

&lt;p&gt;The most important piece of the whole tool isn't the LLM — it's the prompt. This is what makes the output useful to a tester rather than generic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;build_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;incident_text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
You are a senior Test Engineer leading a production incident postmortem.

Turn the input into a structured engineering conversation.

Rules:
- Two engineers discussing the incident
- Focus on debugging steps, investigation, and root cause analysis
- Mention what tests or signals should have caught this earlier
- Include assumptions, mistakes, and validation steps
- Be technical and realistic
- Keep conversation natural like an engineering meeting

- YOU MUST identify a single most likely root cause
- If multiple causes exist, rank them and pick the primary one
- Include at least one concrete technical failure
- Avoid filler phrases

You MUST end your response with this section using exactly this heading:
# Voice Summary
Write 150-200 words summarising the incident, root cause, and prevention steps 
in a natural, conversational tone with no markdown.

Structure:
# Incident Summary
# Investigation Discussion
# Root Cause
# Prevention Plan
# Recommended Tests
# Voice Summary

Incident:
&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;incident_text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few deliberate decisions here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Two engineers discussing the incident"&lt;/strong&gt; — a conversation surfaces assumptions and disagreements that a single-voice summary would flatten. You get &lt;em&gt;"wait, did anyone check the connection pool exhaustion alerts?"&lt;/em&gt; rather than &lt;em&gt;"connection pool exhaustion was observed."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"YOU MUST identify a single most likely root cause"&lt;/strong&gt; — without this constraint, LLMs hedge. They give you five equally-weighted causes and call it analysis. Forcing a single primary cause mirrors how real RCAs work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Voice Summary section&lt;/strong&gt; — this is intentionally separate from the rest of the report. The full report is for reading. The Voice Summary is written in plain prose specifically for audio narration — no markdown, no bullet points, no headings that would sound bizarre when spoken aloud.&lt;/p&gt;

&lt;p&gt;The system prompt does the heavy lifting on domain coverage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are an expert Test Engineer, production incident investigator, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;performance bottleneck analyst, and distributed systems debugging assistant. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You specialize in root cause analysis, scalability failures, observability gaps, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;resource exhaustion, race conditions, retry storms, caching failures, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;database bottlenecks, thread starvation, concurrency bugs, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Kubernetes incidents, and CI/CD failures. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Prioritize evidence-based reasoning, rank possible causes, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;identify the single most likely root cause, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;and prefer concrete technical explanations over vague summaries.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matters. A generic &lt;em&gt;"you are a helpful assistant"&lt;/em&gt; system prompt gets you generic output. Naming the failure modes explicitly — retry storms, thread starvation, resource exhaustion, caching failures — primes the model to look for these patterns in your incident text.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Connecting to Ollama — Local LLM, OpenAI-Compatible API
&lt;/h2&gt;

&lt;p&gt;Ollama exposes an OpenAI-compatible API at &lt;code&gt;localhost:11434&lt;/code&gt;. This means you can use the standard OpenAI Python client unchanged — just point the base URL at Ollama and pass a placeholder key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ollama_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:11434/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ollama&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;   &lt;span class="c1"&gt;# dummy value, Ollama doesn't use auth
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Calling the model is then identical to any OpenAI call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# "llama3", "mistral", "qwen2.5"
&lt;/span&gt;    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model runs entirely on your machine. No metered tokens. No incident data leaving your network. For incident analysis — where logs often contain sensitive infrastructure details, credentials in error messages, internal service names — this matters.&lt;/p&gt;
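
&lt;p&gt;Since everything hinges on the local server being up, a quick preflight check saves confusing client errors. This is a hypothetical helper, not part of the tool; &lt;code&gt;/api/tags&lt;/code&gt; is the Ollama endpoint that lists installed models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import requests

# Hypothetical preflight helper: GET /api/tags lists installed models,
# so any failure means Ollama isn't reachable on that port.
def ollama_ready(base_url: str = "http://localhost:11434") -&amp;gt; bool:
    try:
        return requests.get(f"{base_url}/api/tags", timeout=2).ok
    except requests.RequestException:
        return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;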




&lt;h2&gt;
  
  
  Step 3: Extracting the Voice Summary Reliably
&lt;/h2&gt;

&lt;p&gt;LLMs are inconsistent with heading formatting. Sometimes you get &lt;code&gt;# Voice Summary&lt;/code&gt;, sometimes &lt;code&gt;## Voice Summary&lt;/code&gt;, sometimes &lt;code&gt;**Voice Summary**&lt;/code&gt;. A naive &lt;code&gt;split("# Voice Summary")&lt;/code&gt; fails silently on half your runs.&lt;/p&gt;

&lt;p&gt;The solution is a regex that handles all common variants:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;extract_voice_summary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#{1,3}\s*\*{0,2}Voice Summary\*{0,2}\s*\n+(.*?)(\n#{1,3}\s|\Z)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DOTALL&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IGNORECASE&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This handles &lt;code&gt;#&lt;/code&gt;, &lt;code&gt;##&lt;/code&gt;, &lt;code&gt;###&lt;/code&gt;, or no hash at all (so a bare &lt;code&gt;**Voice Summary**&lt;/code&gt; still matches), with or without surrounding &lt;code&gt;**&lt;/code&gt;, case-insensitive, and stops at the next heading or end of string.&lt;/p&gt;
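
&lt;p&gt;A quick throwaway check, not part of the tool, shows the variants extracting cleanly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Every common heading variant should yield the same summary text.
variants = ["# Voice Summary", "## Voice Summary", "### voice summary", "**Voice Summary**"]
for heading in variants:
    report = f"# Root Cause\nPool exhaustion.\n\n{heading}\nThe outage began after a deploy."
    assert extract_voice_summary(report) == "The outage began after a deploy."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;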

&lt;p&gt;But extraction can still fail — especially with smaller models or unusual output. Rather than failing silently, there's a fallback that makes a second LLM call asking specifically for a 150-200 word plain-prose summary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_voice_summary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;summary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;extract_voice_summary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;summary&lt;/span&gt;

    &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Voice Summary section not found — running fallback summarisation...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;fallback&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;temp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarise this incident report in 150-200 words. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use a conversational tone as if explaining to a colleague. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No markdown, no bullet points, plain prose only:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;
                &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two-stage extraction: try the structured section first, fall back to a targeted summarisation call. You always get audio.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Text-to-Speech With edge-tts — No API Key, Neural Quality
&lt;/h2&gt;

&lt;p&gt;Most TTS integrations in hobby projects hit a paid API. &lt;code&gt;edge-tts&lt;/code&gt; uses Microsoft's neural voices and is completely free — it piggybacks on the same infrastructure that powers the Edge browser's Read Aloud feature, so it needs an internet connection but no key or account.&lt;/p&gt;

&lt;p&gt;The generation API is async, so the wrapper runs it with &lt;code&gt;asyncio.run&lt;/code&gt; and writes to a temp file rather than streaming:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_audio&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;voice_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;communicate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;edge_tts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Communicate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;voice&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;voice_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tempfile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;NamedTemporaryFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;suffix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.mp3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;delete&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;tmp_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;communicate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tmp_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tmp_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;finally&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;unlink&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tmp_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;_run&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save to a temp &lt;code&gt;.mp3&lt;/code&gt;, read back the bytes, clean up. The bytes go straight into Streamlit's &lt;code&gt;st.audio()&lt;/code&gt;. The temp file keeps the code simple — &lt;code&gt;Communicate.save()&lt;/code&gt; writes to a path, and while edge-tts can also stream raw audio chunks, assembling them into a buffer buys nothing here.&lt;/p&gt;
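
&lt;p&gt;Wiring the bytes into the UI is then a single call (&lt;code&gt;voice_summary&lt;/code&gt; being the extracted text; the voice name is one example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Feed the MP3 bytes straight to Streamlit's audio player.
audio_bytes = generate_audio(voice_summary, "en-GB-SoniaNeural")
st.audio(audio_bytes, format="audio/mp3")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;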

&lt;p&gt;Available voices cover US, British, and Australian accents, male and female. For postmortem review on a commute, the British voices tend to sound most natural for technical content.&lt;/p&gt;
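
&lt;p&gt;To browse what's on offer, recent versions of the library expose a &lt;code&gt;list_voices&lt;/code&gt; helper; a small discovery snippet might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio
import edge_tts

# Print the English neural voices edge-tts can use.
voices = asyncio.run(edge_tts.list_voices())
for v in voices:
    if v["Locale"].startswith("en-"):
        print(v["ShortName"], v["Gender"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;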




&lt;h2&gt;
  
  
  What the Output Looks Like
&lt;/h2&gt;

&lt;p&gt;Paste in a real incident — even a rough description:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Payments API went down at 02:13 UTC after a Kubernetes deployment.
HTTP 500s, DB CPU spike, Kafka consumer lag grew, HikariPool timeouts,
Redis cache miss rate jumped from 5% to 68%.
Async reconciliation workers retried failed jobs aggressively.
K6 load tests only covered steady-state traffic.
Playwright checkout tests were failing intermittently but marked as flaky.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get back five structured sections plus audio:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Incident Summary&lt;/strong&gt; — what happened, timeline, blast radius&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Investigation Discussion&lt;/strong&gt; — two engineers walking through observations, questioning assumptions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Root Cause&lt;/strong&gt; — a single primary cause with evidence, secondary causes ranked&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prevention Plan&lt;/strong&gt; — concrete steps, not generic advice&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recommended Tests&lt;/strong&gt; — specific test cases: load tests with burst scenarios, connection pool exhaustion alerts, retry backoff tests, synthetic checkout monitoring with failure thresholds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Recommended Tests section is the one that justifies building this. It's the section no standard postmortem includes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Running It Locally
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Prerequisites: Python 3.9+, Ollama installed and running&lt;/span&gt;

git clone https://github.com/hbkandhi12/prod-incident-test-analyzer.git
&lt;span class="nb"&gt;cd &lt;/span&gt;prod-incident-test-analyzer

python &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate

pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
ollama pull llama3

streamlit run prod_incident_to_podcast_agent.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;localhost:8501&lt;/code&gt;. Paste your incident. Hit Generate.&lt;/p&gt;

&lt;p&gt;The sidebar lets you swap models — &lt;code&gt;llama3:70b&lt;/code&gt; gives noticeably better root cause analysis if you have the VRAM; &lt;code&gt;mistral&lt;/code&gt; is faster for quick iterations.&lt;/p&gt;
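
&lt;p&gt;A sketch of what those sidebar controls might look like; labels and defaults are illustrative, not lifted from the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative sidebar wiring; the values feed the calls shown earlier.
model_name = st.sidebar.selectbox("Model", ["llama3", "llama3:70b", "mistral", "qwen2.5"])
temperature = st.sidebar.slider("Temperature", 0.0, 1.0, 0.3)
voice_name = st.sidebar.selectbox("Voice", ["en-US-AriaNeural", "en-GB-SoniaNeural", "en-AU-NatashaNeural"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;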




&lt;h2&gt;
  
  
  What This Catches That Standard Postmortems Miss
&lt;/h2&gt;

&lt;p&gt;The value isn't in the audio — that's just a delivery mechanism. The value is in forcing a structured testing perspective on every incident:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load tests that only covered steady state, not burst traffic&lt;/li&gt;
&lt;li&gt;Synthetic tests marked as flaky that were actually signalling real failures&lt;/li&gt;
&lt;li&gt;Missing alerts for resource exhaustion that would have given earlier warning&lt;/li&gt;
&lt;li&gt;Aggressive retry mechanisms that nobody stress-tested under failure conditions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These patterns repeat across incidents at different companies. The root cause changes. The testing gaps are always the same.&lt;/p&gt;




&lt;h2&gt;
  
  
  Source
&lt;/h2&gt;

&lt;p&gt;Full code on GitHub: &lt;a href="https://github.com/hbkandhi12/prod-incident-test-analyzer" rel="noopener noreferrer"&gt;github.com/hbkandhi12/prod-incident-test-analyzer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're a tester who's ever read an incident report and thought &lt;em&gt;"we had signals for this"&lt;/em&gt; — this is for you.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ollama</category>
      <category>python</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
