<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Peter</title>
    <description>The latest articles on DEV Community by Peter (@peterinfranexis).</description>
    <link>https://dev.to/peterinfranexis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3898245%2F82b9abff-b622-4cf1-8fc0-f2941a004c29.png</url>
      <title>DEV Community: Peter</title>
      <link>https://dev.to/peterinfranexis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/peterinfranexis"/>
    <language>en</language>
    <item>
      <title>The Spot Instance That Killed Our Payments Service (And Why It Took Us 47 Minutes to Find It)</title>
      <dc:creator>Peter</dc:creator>
      <pubDate>Sun, 26 Apr 2026 03:09:54 +0000</pubDate>
      <link>https://dev.to/peterinfranexis/the-spot-instance-that-killed-our-payments-service-and-why-it-took-us-47-minutes-to-find-it-2ehp</link>
      <guid>https://dev.to/peterinfranexis/the-spot-instance-that-killed-our-payments-service-and-why-it-took-us-47-minutes-to-find-it-2ehp</guid>
      <description>&lt;p&gt;It started at 1:49 AM.&lt;/p&gt;

&lt;p&gt;PagerDuty fired — payments-service entering CrashLoopBackOff, 3 replicas simultaneously. On-call engineer paged. I joined the incident bridge 4 minutes later.                                                &lt;/p&gt;

&lt;p&gt;By 2:36 AM, we had the fix deployed. 47 minutes of debugging for a 2-line YAML change.                                                                                                                        &lt;/p&gt;

&lt;p&gt;This is the postmortem. Not of the incident itself — those exist internally — but of the &lt;em&gt;investigation&lt;/em&gt;. Every wrong turn, every wasted minute, and the exact signals that eventually cracked it.            &lt;/p&gt;




&lt;h2&gt;The First 10 Minutes: The Obvious Wrong Answer&lt;/h2&gt;

&lt;p&gt;When pods crash simultaneously right after a deployment, the deployment is guilty until proven innocent. That's the right instinct most of the time. So the first 10 minutes were spent here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl rollout &lt;span class="nb"&gt;history &lt;/span&gt;deployment/payments-service &lt;span class="nt"&gt;-n&lt;/span&gt; production
  kubectl describe deployment/payments-service &lt;span class="nt"&gt;-n&lt;/span&gt; production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last deployment had gone out at 7:52 PM — nearly 6 hours earlier. The pods had been healthy the entire time since that deploy. This should have ruled out the deployment immediately, but we didn't internalize it fast enough. We spent another 5 minutes reviewing the deployment diff anyway, looking for a subtle config change that could have caused a delayed failure.&lt;/p&gt;
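
&lt;p&gt;A sketch of what that review looks like as a command, in case it's useful (the revision numbers are placeholders; take the real ones from the &lt;code&gt;rollout history&lt;/code&gt; output above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Diff the pod templates of the two most recent revisions
  # (41/42 are illustrative revision numbers)
  diff &lt;(kubectl rollout history deployment/payments-service -n production --revision=41) \
       &lt;(kubectl rollout history deployment/payments-service -n production --revision=42)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;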

&lt;p&gt;Nothing.        &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time lost: ~12 minutes.&lt;/strong&gt;                                                                                                                                                                                   &lt;/p&gt;



&lt;h2&gt;The Next 15 Minutes: Log Archaeology&lt;/h2&gt;

&lt;p&gt;With deployment ruled out, we went to logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl logs payments-service-7d9f8b-xkp2q &lt;span class="nt"&gt;--previous&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; production
  kubectl logs payments-service-7d9f8b-mn4lw &lt;span class="nt"&gt;--previous&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logs showed the service starting up normally, attempting a Redis connection, and then… nothing. Process killed. No error, no panic, no stack trace. Just termination.                                     &lt;/p&gt;
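
&lt;p&gt;In hindsight, the pod object itself records why the previous container died. Something along these lines (same pod name as above) would have shown the exit code and reason without any log digging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Prints exitCode, reason, signal and timestamps for the previous container run
  kubectl get pod payments-service-7d9f8b-xkp2q -n production \
    -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;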

&lt;p&gt;This looked like an OOMKill at first. So we checked resource limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl describe pod payments-service-7d9f8b-xkp2q &lt;span class="nt"&gt;-n&lt;/span&gt; production | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A5&lt;/span&gt; Limits
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Memory usage was at 180Mi against a 512Mi limit. Not OOM.                                                                                                                                                     &lt;/p&gt;

&lt;p&gt;We pulled metrics from Prometheus looking for a memory spike. Nothing unusual.                                                                                                                                &lt;/p&gt;
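
&lt;p&gt;A quicker spot check alongside Prometheus, assuming metrics-server is installed in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Live per-container memory and CPU, straight from metrics-server
  kubectl top pod payments-service-7d9f8b-xkp2q -n production --containers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;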

&lt;p&gt;&lt;strong&gt;Time lost: ~15 more minutes. Now 27 minutes in.&lt;/strong&gt;                                                                                                                                                           &lt;/p&gt;




&lt;h2&gt;Minute 27: The Event Log (Should Have Started Here)&lt;/h2&gt;

&lt;p&gt;This is the moment in every postmortem where I think: &lt;em&gt;why didn't we start with events?&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl get events &lt;span class="nt"&gt;-n&lt;/span&gt; production &lt;span class="nt"&gt;--sort-by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'.lastTimestamp'&lt;/span&gt; | &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-30&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There it was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;  LAST SEEN   TYPE      REASON    OBJECT                        MESSAGE
  2m          Warning   BackOff   pod/payments-service-7d9f8b   Back-off restarting failed container                                                                                                            
  4m          Warning   Unhealthy pod/payments-service-7d9f8b   Liveness probe failed: Get "http://10.0.1.12:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)          
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Liveness probe failure. The probe was timing out.                                                                                                                                                             &lt;/p&gt;
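
&lt;p&gt;The probe configuration that was actually in effect is right there on the deployment spec. Assuming a single container in the pod, something like this pulls it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Shows path, port, timeoutSeconds, failureThreshold etc. for the liveness probe
  kubectl get deployment payments-service -n production \
    -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;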

&lt;p&gt;But here's where we made our second mistake: we assumed the liveness endpoint itself was broken. Maybe the service wasn't starting correctly. Maybe there was a dependency it couldn't reach. We spent another 10 minutes trying to curl the healthz endpoint manually, exec'ing into a running pod to see if the endpoint responded.&lt;/p&gt;
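
&lt;p&gt;The manual check was roughly this (it assumes curl is available in the container image):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Hit the health endpoint from inside a (briefly) running pod and time it
  kubectl exec payments-service-7d9f8b-mn4lw -n production -- \
    curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' http://localhost:8080/healthz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;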

&lt;p&gt;It did. The endpoint worked fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time lost: ~10 more minutes. Now 37 minutes in.&lt;/strong&gt;                                                                                                                                                           &lt;/p&gt;



&lt;h2&gt;Minute 37: The Node Event Nobody Checked&lt;/h2&gt;

&lt;p&gt;One of the engineers on the call said something offhand: &lt;em&gt;"Wait, when did this start? 1:49? Can you check if anything happened on the node around then?"&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  kubectl get events &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--sort-by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'.lastTimestamp'&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"Node|node"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;  47m   Normal   NodeReady    node/ip-10-0-1-5   Node ip-10-0-1-5 status is now: NodeReady
  49m   Normal   Starting     node/ip-10-0-1-5   Starting kubelet                                                                                                                                               
  52m   Normal   NodeNotReady node/ip-10-0-1-5   Node ip-10-0-1-5 status is now: NodeNotReady                                                                                                                   
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A node had cycled at 1:47 AM — 2 minutes before the crash loop started. That was the spot instance recycle from the title.&lt;/p&gt;

&lt;p&gt;That node was running &lt;code&gt;redis-cache-0&lt;/code&gt;.                                                                                                                                                                        &lt;/p&gt;

&lt;p&gt;Redis had restarted when the node recycled. And Redis, spinning up fresh, takes about 3-4 seconds before it starts accepting connections.                                                                     &lt;/p&gt;
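
&lt;p&gt;Confirming that correlation took one more lookup: which node the Redis pod was scheduled on, and when its container last started (the namespace shown here is an assumption):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # Node name and last container start time for redis-cache-0
  kubectl get pod redis-cache-0 -n production \
    -o jsonpath='{.spec.nodeName}{"\n"}{.status.containerStatuses[0].state.running.startedAt}{"\n"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;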

&lt;p&gt;The payments-service liveness probe hits &lt;code&gt;/healthz&lt;/code&gt;, which internally checks Redis connectivity. The probe has a 2-second timeout. Redis is taking 3-4 seconds to warm up. The probe fails. Kubernetes kills the pod. Repeat.&lt;/p&gt;

&lt;p&gt;The deployment was innocent. The pods were healthy. The liveness probe was doing exactly what it was configured to do. It was just configured too aggressively for the actual startup behavior of its dependencies.&lt;/p&gt;

&lt;p&gt;The fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/healthz&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;        &lt;span class="c1"&gt;# was: 2&lt;/span&gt;
    &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;   &lt;span class="c1"&gt;# was: 0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy. Pods stabilize. Incident resolved at 2:36 AM.                                                                                                                                                         &lt;/p&gt;
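
&lt;p&gt;A &lt;code&gt;startupProbe&lt;/code&gt; would be another reasonable way to express the same intent: it holds off liveness checks until the container has passed startup once, without loosening the steady-state timeout as much. A sketch of that option, not what we shipped that night:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  startupProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 2
    failureThreshold: 15   # allows up to ~30s for dependencies to come up before liveness kicks in
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;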




&lt;h2&gt;The Real Postmortem: Why Did This Take 47 Minutes?&lt;/h2&gt;

&lt;p&gt;The fix was 2 lines of YAML. The investigation took 47 minutes. That ratio is the actual problem.                                                                                                             &lt;/p&gt;

&lt;p&gt;Here's the map of where the time went:                                                                                                                                                                        &lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;&lt;th&gt;Phase&lt;/th&gt;&lt;th&gt;Time Spent&lt;/th&gt;&lt;th&gt;Why&lt;/th&gt;&lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;Deployment investigation&lt;/td&gt;&lt;td&gt;12 min&lt;/td&gt;&lt;td&gt;Correct first instinct, but over-indexed&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Log archaeology&lt;/td&gt;&lt;td&gt;15 min&lt;/td&gt;&lt;td&gt;Logs showed the symptom (killed process), not the cause&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Healthz endpoint testing&lt;/td&gt;&lt;td&gt;10 min&lt;/td&gt;&lt;td&gt;Found the mechanism but not the root cause&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Node event discovery&lt;/td&gt;&lt;td&gt;7 min&lt;/td&gt;&lt;td&gt;The actual signal — found last&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Diagnosis + fix&lt;/td&gt;&lt;td&gt;3 min&lt;/td&gt;&lt;td&gt;Trivial once cause was known&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The signals that cracked it were all there from minute zero:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;k8s events&lt;/strong&gt; showing &lt;code&gt;Liveness probe failed: context deadline exceeded&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node events&lt;/strong&gt; in kube-system showing the recycle at 1:47 AM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis pod restart time&lt;/strong&gt; correlating with the node event
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment history&lt;/strong&gt; showing the last deploy was 6 hours prior (early exoneration)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Read in that order, those four signals make this a 5-minute investigation, not a 47-minute one. The problem is that the signals live in three different places, and under pressure at 2 AM, humans don't naturally start with the most diagnostic view. We start with the most familiar one (logs) and dig deeper instead of wider.&lt;/p&gt;



&lt;h2&gt;What Changes With Automated Investigation&lt;/h2&gt;

&lt;p&gt;We've been building &lt;a href="https://causa.infranexis.com" rel="noopener noreferrer"&gt;Causa&lt;/a&gt; partly because of incidents exactly like this one.                                                                                                  &lt;/p&gt;

&lt;p&gt;When this alert fired, Causa's investigation loop would have:                                                                                                                                                 &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Received the PagerDuty signal&lt;/li&gt;
&lt;li&gt;Pulled the k8s events for the affected pods — immediately seeing the liveness probe timeout&lt;/li&gt;
&lt;li&gt;Correlated the pod restart timestamps against node events in kube-system — finding the 1:47 recycle
&lt;/li&gt;
&lt;li&gt;Identified redis-cache-0 as running on the recycled node&lt;/li&gt;
&lt;li&gt;Checked recent deployments and exonerated them (6 hours ago, no correlation)
&lt;/li&gt;
&lt;li&gt;Ran eBPF traces to confirm Redis connection latency in the 3-4 second range
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And posted this to Slack in under 60 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ROOT CAUSE: Liveness probe timeout (2s) on payments-service is failing
  because redis-cache-0 requires 3-4s to accept connections after the
  spot instance recycle at 01:47 UTC.

  EVIDENCE:
  • Node ip-10-0-1-5 recycled at 01:47 UTC (2 min before crash loop start)
  • redis-cache-0 was on ip-10-0-1-5 — restarted at same time
  • Liveness probe timeout: 2s — insufficient for Redis warmup
  • Last deployment: 6h ago — NOT correlated

  FIX: Increase livenessProbe.timeoutSeconds to 10, add initialDelaySeconds: 15

  CONFIDENCE: High (4 corroborating signals)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's not guesswork and it's not a summary of the alert we already saw. It's the actual investigation — reading the right signals in the right order, without the human bottleneck.                          &lt;/p&gt;

&lt;p&gt;The on-call engineer still makes the call. They still deploy the fix. But they do it with the full picture in front of them in minute one, not minute 47.                                                     &lt;/p&gt;




&lt;h2&gt;Three Changes We Made After This Incident&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Start every investigation with events, not logs.&lt;/strong&gt;&lt;br&gt;
  &lt;code&gt;kubectl get events --sort-by='.lastTimestamp'&lt;/code&gt; is now the first command in our runbook. Logs show what happened to the process. Events show what Kubernetes did about it. Start wider, then drill down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Node events are infrastructure events — check kube-system.&lt;/strong&gt;&lt;br&gt;
  Pod-level debugging often misses infrastructure-level causes. If you're not checking &lt;code&gt;kubectl get events -n kube-system&lt;/code&gt;, you're missing a category of signals.                                               &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Timestamp correlation before hypothesis formation.&lt;/strong&gt;&lt;br&gt;
  Before you start testing a hypothesis (broken deployment, bad code, OOM), check when things happened. The exact timing often rules out 80% of likely causes before you investigate them.                      &lt;/p&gt;
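
&lt;p&gt;For what it's worth, the first block of that runbook now reads roughly like this (namespace and &lt;code&gt;tail&lt;/code&gt; depth are just our defaults):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  # 1. What has Kubernetes done recently? Workload namespace first, then infra.
  kubectl get events -n production --sort-by='.lastTimestamp' | tail -30
  kubectl get events -n kube-system --sort-by='.lastTimestamp' | tail -30

  # 2. When did things last (re)start, and which nodes are they on?
  kubectl get pods -n production -o wide --sort-by='.status.startTime'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;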




&lt;h2&gt;Final Word&lt;/h2&gt;

&lt;p&gt;The best SREs I know aren't necessarily the fastest debuggers in isolation. They're the ones who've internalized a &lt;em&gt;sequence&lt;/em&gt; — who know which signals to read in which order, and who don't get anchored on the first hypothesis.&lt;/p&gt;

&lt;p&gt;That sequence can be automated. Not to replace the engineer's judgment, but to surface the right information before the judgment is needed.&lt;/p&gt;

&lt;p&gt;If your team runs Kubernetes in production and you've had an incident that took longer than it should have, &lt;a href="https://causa.infranexis.com" rel="noopener noreferrer"&gt;Causa is worth 5 minutes of your time&lt;/a&gt;. Free tier, one install command, works with whatever alerting you already have.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have a war story like this one? I'd genuinely like to hear it — drop it in the comments.&lt;/em&gt;                                                                                                                    &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>sre</category>
      <category>postmortem</category>
    </item>
  </channel>
</rss>
