<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Neeraja Khanapure</title>
    <description>The latest articles on DEV Community by Neeraja Khanapure (@neeraja_khanapure_4a33a5f).</description>
    <link>https://dev.to/neeraja_khanapure_4a33a5f</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3594314%2Feb8f6250-03b3-4528-af8b-17a146fe27c2.png</url>
      <title>DEV Community: Neeraja Khanapure</title>
      <link>https://dev.to/neeraja_khanapure_4a33a5f</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/neeraja_khanapure_4a33a5f"/>
    <language>en</language>
    <item>
      <title>A hard-earned rule from incident retrospectives:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Tue, 07 Apr 2026 09:48:10 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/a-hard-earned-rule-from-incident-retrospectives-40jp</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/a-hard-earned-rule-from-incident-retrospectives-40jp</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Workflow (2026-04-07)
&lt;/h1&gt;

&lt;p&gt;A hard-earned rule from incident retrospectives:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incident RCA without a data-backed timeline is just a story you told yourself&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most post-mortems produce lessons that don't stick. The root cause is almost always the same: the timeline was built from memory, not from data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Memory-based timeline:     Data-backed timeline:

T+0  "Deploy happened"     T+0:00  Deploy (Argo event)
T+?  "Errors started"      T+0:07  Error rate +0.3% (Prometheus)
T+?  "Someone noticed"     T+0:12  P95 latency 340ms→2.1s (trace)
T+?  "We rolled back"      T+0:19  Alert fired (PD)
                           T+0:31  Rollback complete (Argo)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where it breaks:&lt;br&gt;
▸ Log timestamps across services diverge by seconds without NTP — your timeline is wrong before you begin.&lt;br&gt;
▸ Correlation between a deploy event and a metric spike gets missed when dashboards lack deployment markers.&lt;br&gt;
▸ Contributing factors vanish from the narrative because they're hard to prove — and the same incident repeats.&lt;/p&gt;

&lt;p&gt;The rule I keep coming back to:&lt;br&gt;
→ Build the timeline from data only before the RCA meeting begins. If you can't source an event, mark it 'unverified' — not assumed.&lt;/p&gt;

&lt;p&gt;How I sanity-check it:&lt;br&gt;
▸ OpenTelemetry trace IDs as the timeline spine — spans cross service boundaries and carry precise, directly comparable timestamps.&lt;br&gt;
▸ Grafana annotations on every deploy, config change, and scaling event — visible on every dashboard automatically.&lt;/p&gt;
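
&lt;p&gt;The annotation piece is one CI step. A minimal sketch of what I mean — posting a deploy marker via Grafana's annotation API (the URL, token, tags, and version string are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CI step after a successful deploy (GRAFANA_URL / GRAFANA_TOKEN are placeholders)
NOW_MS=$(date +%s%3N)   # GNU date: epoch milliseconds
curl -s -X POST "$GRAFANA_URL/api/annotations" \
  -H "Authorization: Bearer $GRAFANA_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"time\": $NOW_MS, \"tags\": [\"deploy\",\"payments-service\"], \"text\": \"Deploy v1.42.0\"}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Tag-based annotations show up on every dashboard that subscribes to the tag — which is what makes the deploy/metric correlation visible by default.&lt;/p&gt;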

&lt;p&gt;Systems that are hard to debug were designed without the debugger in mind. Build observability in, not on.&lt;/p&gt;

&lt;p&gt;Deep dive: &lt;a href="https://neeraja-portfolio-v1.vercel.app/workflows/incident-rca-without-a-data-backed-timeline-is-just-a-story-you-told-yourself" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/workflows/incident-rca-without-a-data-backed-timeline-is-just-a-story-you-told-yourself&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where most runbooks stop — what's your next step after this?&lt;/p&gt;

&lt;p&gt;#sre #reliability #observability #devops&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Something I keep explaining in architecture reviews:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:55:28 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/something-i-keep-explaining-in-architecture-reviews-43n2</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/something-i-keep-explaining-in-architecture-reviews-43n2</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Workflow (2026-04-05)
&lt;/h1&gt;

&lt;p&gt;Something I keep explaining in architecture reviews:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets management: designing for rotation, not just storage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most orgs solve 'where do we store secrets securely.' The teams that get paged at 2am are the ones who never solved 'how do we rotate them without downtime.'&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Storage-only design:        Rotation-aware design:

Secret ──▶ Vault            Secret ──▶ Vault ──▶ Agent Injector
              │                                        │
         Pod (env var)                           Pod (file mount)
              │                                        │
         Restart to           Auto-reload ◀────── Lease renewer
         get new value        (zero downtime)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where it breaks:&lt;br&gt;
▸ Secrets as env vars require pod restarts on rotation — making rotation a deployment event with blast radius.&lt;br&gt;
▸ Vault leases expiring in long-running jobs produce auth errors that look like app bugs, not infra failures.&lt;br&gt;
▸ Secret sprawl across namespaces means rotation happens in 12 places — and one always gets missed.&lt;/p&gt;

&lt;p&gt;The rule I keep coming back to:&lt;br&gt;
→ Design rotation before you design storage. If you can't rotate a secret in under 10 minutes with no downtime, the design isn't production-ready.&lt;/p&gt;

&lt;p&gt;How I sanity-check it:&lt;br&gt;
▸ Vault Agent Injector or External Secrets Operator — decouple secret delivery from pod lifecycle.&lt;br&gt;
▸ Monthly secret access log audit — stale consumers are how you discover forgotten service accounts before attackers do.&lt;/p&gt;
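
&lt;p&gt;Concretely, the injector side is a handful of pod-template annotations — this is a sketch, with the role, secret path, and app name as placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Vault Agent Injector pod annotations (role and path are placeholders)
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "payments-app"
vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/payments"

# The secret lands at /vault/secrets/db-creds as a file the app re-reads
# on change; the agent keeps the lease renewed — no pod restart on rotation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;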

&lt;p&gt;Reliability is a product feature. The engineers who treat it that way are the ones who get asked into the room.&lt;/p&gt;

&lt;p&gt;Deep dive: &lt;a href="https://neeraja-portfolio-v1.vercel.app/workflows/secrets-management-designing-for-rotation-not-just-storage" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/workflows/secrets-management-designing-for-rotation-not-just-storage&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this triggered a war story, I'd genuinely love to hear it.&lt;/p&gt;

&lt;p&gt;#security #devops #kubernetes #sre&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Something I wish someone had told me five years earlier:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Fri, 03 Apr 2026 09:39:52 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/something-i-wish-someone-had-told-me-five-years-earlier-4lo7</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/something-i-wish-someone-had-told-me-five-years-earlier-4lo7</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Insight (2026-04-03)
&lt;/h1&gt;

&lt;p&gt;Something I wish someone had told me five years earlier:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero-downtime deployments: what 'zero' actually requires most teams don't have&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most teams say they do zero-downtime deploys and mean 'we haven't gotten a complaint in a while.' Actually measuring it reveals the truth: connection drops, in-flight request failures, and cache invalidation spikes during rollouts that nobody's tracking because nobody defined what zero means.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;What 'zero downtime' actually requires:

✓ Health checks reflect REAL readiness (not just 'process started')
✓ Graceful shutdown drains in-flight requests (SIGTERM handling)
✓ Connection draining at the load balancer (not just the pod)
✓ Rollback faster than the deploy (&amp;lt; 5 min, automated)
✓ SLI measurement during the rollout window (not just after)

Missing any one of these = not zero downtime. Just unmonitored downtime.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The non-obvious part:&lt;br&gt;
→ The most common failure mode is passing health checks before the app is actually ready — DB connections not pooled, caches not warm, background workers not started. The pod is 'Ready' and the app is still initializing. Users see errors. Nobody's dashboard shows it because nobody's measuring error rate during the rollout window.&lt;/p&gt;

&lt;p&gt;My rule:&lt;br&gt;
→ Define 'zero downtime' with a measurable SLI: error rate &amp;lt; 0.1% during any 5-minute deploy window. Validate this in staging before calling it done. Measure it in production on every release.&lt;/p&gt;

&lt;p&gt;Worth reading:&lt;br&gt;
▸ Kubernetes deployment strategies — rolling, blue/green, canary with traffic splitting&lt;br&gt;
▸ AWS ALB / GCP Cloud Load Balancing — connection draining configuration and health check tuning&lt;/p&gt;

&lt;p&gt;&lt;a href="https://neeraja-portfolio-v1.vercel.app/insights/zero-downtime-deployments-what-zero-actually-requires-most-teams-dont-have" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/insights/zero-downtime-deployments-what-zero-actually-requires-most-teams-dont-have&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're a manager reading this — it's worth asking your team where they are on this.&lt;/p&gt;

&lt;p&gt;#devops #sre #observability #platformengineering&lt;/p&gt;

</description>
      <category>observability</category>
      <category>sre</category>
      <category>devops</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>End of week. Here's the thing I kept coming back to:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Thu, 02 Apr 2026 14:42:22 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/end-of-week-heres-the-thing-i-kept-coming-back-to-3hi9</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/end-of-week-heres-the-thing-i-kept-coming-back-to-3hi9</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Insight (2026-04-02)
&lt;/h1&gt;

&lt;p&gt;End of week. Here's the thing I kept coming back to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SLOs work when they create conversations, not when they create compliance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most SLOs are set once, filed in a doc, and forgotten until an incident. The teams getting real value from error budgets use them as a weekly forcing function — a number that makes the reliability vs. velocity tradeoff visible to engineers and product managers in the same room.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SLO as compliance (common):     SLO as conversation (effective):

Set SLO ──▶ Monitor              Set SLO ──▶ Weekly budget review
     │                                │          │
  Incident ──▶ Check SLO         Budget OK  Budget low
     │              │                │          │
   Blame       Finger-pointing    Ship fast  Freeze features
                                   │          │
                               Engineering + Product aligned
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The non-obvious part:&lt;br&gt;
→ An SLO that's never violated is almost always a problem. Either it's too loose (you're over-investing in reliability) or it's not being measured honestly. Both cost money in different ways. The goal is a number that occasionally creates productive tension.&lt;/p&gt;

&lt;p&gt;My rule:&lt;br&gt;
→ Review error budgets in sprint planning alongside features. If engineering and product aren't having an uncomfortable conversation once a quarter, your SLO isn't tight enough.&lt;/p&gt;
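
&lt;p&gt;The arithmetic that makes the weekly review concrete — a 99.9% monthly SLO gives you a small, countable budget:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SLO 99.9% over 30 days:
  budget = 0.1% of 30d = 43m 12s of "bad minutes" per month

Weekly review = budget burned so far vs. days elapsed:
  burn rate 1.0  → budget lasts exactly 30 days (on track)
  burn rate 14.4 → budget gone in ~2 days (page, freeze features)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;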

&lt;p&gt;Worth reading:&lt;br&gt;
▸ Alex Hidalgo — 'Implementing Service Level Objectives' (O'Reilly, 2020)&lt;br&gt;
▸ Google SRE Workbook — Alerting on SLOs (ch. 5, free at sre.google)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://neeraja-portfolio-v1.vercel.app/insights/slos-work-when-they-create-conversations-not-when-they-create-compliance" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/insights/slos-work-when-they-create-conversations-not-when-they-create-compliance&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's the version of this that your org gets wrong? Drop it below.&lt;/p&gt;

&lt;p&gt;#devops #sre #observability #platformengineering&lt;/p&gt;

</description>
      <category>observability</category>
      <category>sre</category>
      <category>devops</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>This pattern has saved production twice in the last year:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Tue, 31 Mar 2026 09:44:19 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/this-pattern-has-saved-production-twice-in-the-last-year-38md</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/this-pattern-has-saved-production-twice-in-the-last-year-38md</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Workflow (2026-03-31)
&lt;/h1&gt;

&lt;p&gt;This pattern has saved production twice in the last year:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service mesh adoption: the operational debt lands before the value does&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Service meshes promise mTLS, traffic splitting, and deep observability. What arrives first is a new category of production failures your team has never debugged before.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Adoption curve reality:

Value
  │                              ╱ mTLS + traffic control
  │                         ╱
  │              ╱╲  complexity trough
  │         ╱╲╱
  │    ╱╲╱   ← sidecar failures, upgrade pain
  │╱
  └──────────────────────────────▶ Time
     Week 1     Month 3     Month 9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where it breaks:&lt;br&gt;
▸ Sidecar injection failures look like app bugs — hours spent debugging the wrong layer.&lt;br&gt;
▸ mTLS policy rollout in a live cluster requires namespace-by-namespace phasing — one mistake stops traffic.&lt;br&gt;
▸ Mesh upgrades require coordinated sidecar restarts across the cluster — on large deployments, that's everything.&lt;/p&gt;

&lt;p&gt;The rule I keep coming back to:&lt;br&gt;
→ Start mesh in observability-only mode (no policy enforcement). Prove value in one namespace first. Earn the rollout, don't mandate it.&lt;/p&gt;

&lt;p&gt;How I sanity-check it:&lt;br&gt;
▸ Linkerd for latency-sensitive workloads — its lightweight proxy has lower per-sidecar resource overhead than Istio's Envoy.&lt;br&gt;
▸ Namespace-level feature flags for mesh policy — lets you roll back one team without affecting others.&lt;/p&gt;
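
&lt;p&gt;Observability-only adoption can be as small as one annotation — inject the proxy into a single namespace and leave policy enforcement off entirely (namespace name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: checkout                 # the one team that opted in
  annotations:
    linkerd.io/inject: enabled   # sidecars + golden metrics, no policy yet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;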

&lt;p&gt;The difference between a senior engineer and a principal is knowing which guardrails to build before you need them.&lt;/p&gt;

&lt;p&gt;Deep dive: &lt;a href="https://neeraja-portfolio-v1.vercel.app/workflows/service-mesh-adoption-the-operational-debt-lands-before-the-value-does" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/workflows/service-mesh-adoption-the-operational-debt-lands-before-the-value-does&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this triggered a war story, I'd genuinely love to hear it.&lt;/p&gt;

&lt;p&gt;#kubernetes #devops #sre #platformengineering&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Something every senior engineer learns the expensive way:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Sat, 28 Mar 2026 20:48:45 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/something-every-senior-engineer-learns-the-expensive-way-a8h</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/something-every-senior-engineer-learns-the-expensive-way-a8h</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Workflow (2026-03-28)
&lt;/h1&gt;

&lt;p&gt;Something every senior engineer learns the expensive way:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform DAGs at scale: when the graph becomes the hazard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform's dependency graph is elegant at small scale. At 500+ resources across a mono-repo, it becomes the most dangerous part of your infrastructure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SAFE (small module):              DANGEROUS (at scale):

[vpc] ──▶ [subnet] ──▶ [ec2]     [shared-net] ──▶ [team-a-infra]
                                          │         [team-b-infra]
                                          │         [team-c-infra]
                                          │         [data-layer]
                                  One change → fan-out destroy/create
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where it breaks:&lt;br&gt;
▸ Implicit ordering assumptions survive until a refactor exposes them — usually as an unplanned destroy chain in prod.&lt;br&gt;
▸ Fan-out graphs make blast radius review near-impossible. 'What does this change affect?' has no fast answer.&lt;br&gt;
▸ &lt;code&gt;depends_on&lt;/code&gt; papers over bad module interfaces — it fixes the symptom while permanently coupling the modules.&lt;/p&gt;

&lt;p&gt;The rule I keep coming back to:&lt;br&gt;
→ If a module needs &lt;code&gt;depends_on&lt;/code&gt; to be safe, the module boundary is wrong. Redesign the interface — don't paper over it.&lt;/p&gt;

&lt;p&gt;How I sanity-check it:&lt;br&gt;
▸ &lt;code&gt;terraform graph | dot -Tsvg &amp;gt; graph.svg&lt;/code&gt; — visualize fan-out and cycles before every major refactor.&lt;br&gt;
▸ Gate all applies with OPA/Conftest + mandatory human review on any planned destroy operations.&lt;/p&gt;
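
&lt;p&gt;The destroy gate can be a short CI step over Terraform's JSON plan output — a sketch (the plan file name is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out=tfplan
# Fail the pipeline if any planned change includes a delete action;
# jq -e exits non-zero when the expression evaluates to false
terraform show -json tfplan \
  | jq -e '[.resource_changes[] | select(.change.actions | index("delete"))] | length == 0'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;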

&lt;p&gt;The difference between a senior engineer and a principal is knowing which guardrails to build before you need them.&lt;/p&gt;

&lt;p&gt;Deep dive: &lt;a href="https://neeraja-portfolio-v1.vercel.app/workflows/terraform-dags-at-scale-when-the-graph-becomes-the-hazard" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/workflows/terraform-dags-at-scale-when-the-graph-becomes-the-hazard&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious what guardrails you've built around this. Drop your pattern below.&lt;/p&gt;

&lt;p&gt;#terraform #iac #devops #sre&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A hard-earned rule from incident retrospectives:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Sat, 28 Mar 2026 17:11:20 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/a-hard-earned-rule-from-incident-retrospectives-1pj1</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/a-hard-earned-rule-from-incident-retrospectives-1pj1</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Workflow (2026-03-28)
&lt;/h1&gt;

&lt;p&gt;A hard-earned rule from incident retrospectives:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitOps drift: the silent accumulation that makes clusters unmanageable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitOps promises Git as the source of truth. The reality: every manual &lt;code&gt;kubectl&lt;/code&gt; during an incident is a lie you told your cluster and forgot to retract.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GitOps truth gap over time:

Week 1:  Git ══════════ Cluster  (clean)
Week 4:  Git ══════╌╌╌╌ Cluster  (2 manual patches)
Week 12: Git ════╌╌╌╌╌╌╌╌╌╌╌╌╌  (drift accumulates)
                         Cluster  (unknown state)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where it breaks:&lt;br&gt;
▸ Manual patches during incidents create cluster state Git doesn't know about — Argo/Flux will overwrite it silently.&lt;br&gt;
▸ Secrets managed outside GitOps (sealed-secrets, Vault agent) drift independently — invisible in sync status.&lt;br&gt;
▸ Multi-cluster setups multiply drift: each cluster diverges at its own pace once human intervention happens.&lt;/p&gt;

&lt;p&gt;The rule I keep coming back to:&lt;br&gt;
→ Treat every manual cluster change as a 5-minute loan. Commit it back to Git before the incident closes — or it's gone.&lt;/p&gt;

&lt;p&gt;How I sanity-check it:&lt;br&gt;
▸ Argo CD drift detection dashboard — surface out-of-sync resources before they become incident contributors.&lt;br&gt;
▸ Weekly diff job: live cluster state vs Git. Opens a PR for anything untracked. Makes drift visible before it's painful.&lt;/p&gt;
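
&lt;p&gt;The weekly diff job doesn't need to be fancy — with Argo CD, the core of it is one command per application (the app name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Exits non-zero when live state diverges from Git — wire into a cron job
argocd app diff payments-app

# Without Argo: render manifests and let the API server compute the diff
kubectl diff -f rendered-manifests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;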

&lt;p&gt;The best platform teams I've seen measure success by how rarely product teams have to think about infrastructure.&lt;/p&gt;

&lt;p&gt;Deep dive: &lt;a href="https://neeraja-portfolio-v1.vercel.app/workflows" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/workflows&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious what guardrails you've built around this. Drop your pattern below.&lt;/p&gt;

&lt;p&gt;#gitops #kubernetes #devops #platformengineering&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>One insight that changed how I design systems:</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Sat, 28 Mar 2026 16:48:55 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/one-insight-that-changed-how-i-design-systems-1b1m</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/one-insight-that-changed-how-i-design-systems-1b1m</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Insight (2026-03-28)
&lt;/h1&gt;

&lt;p&gt;One insight that changed how I design systems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runbook quality decays silently — and that decay kills MTTR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runbooks that haven't been run recently are wrong. Not outdated — wrong. The service changed. The tool was deprecated. The endpoint moved. Nobody updated the doc because nobody reads it until 3am. And at 3am, a wrong runbook is worse than no runbook — it sends engineers down confident paths that dead-end.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Runbook decay curve:

Quality
  │▓▓▓▓▓▓▓▓▓▓
  │         ▓▓▓▓▓
  │              ▓▓▓▓
  │                  ▓▓▓▓▓
  │                       ▓▓▓░░░░░░░
  │                             ░░░░░░░░ ← "last validated 8 months ago"
  └────────────────────────────────────▶
  Write   Month 1  Month 3  Month 6  Month 9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The non-obvious part:&lt;br&gt;
→ The highest-leverage runbook improvement isn't better writing — it's a validation date and a quarterly review reminder. A runbook with 'last validated: 2 weeks ago' that's 70% accurate is worth more than a beautifully written one from 8 months ago that's 40% accurate.&lt;/p&gt;

&lt;p&gt;My rule:&lt;br&gt;
→ Every runbook gets a 'last validated' date. Anything older than 3 months is assumed broken until proven otherwise. Review is part of the on-call rotation, not optional.&lt;/p&gt;

&lt;p&gt;Worth reading:&lt;br&gt;
▸ PagerDuty Incident Response guide — runbook standards and validation cadence&lt;br&gt;
▸ Post-incident review template: 'Did the runbook help, mislead, or was it missing?' (standard question)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://neeraja-portfolio-v1.vercel.app/insights/runbook-quality-decays-silently-and-that-decay-kills-mttr" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/insights/runbook-quality-decays-silently-and-that-decay-kills-mttr&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's the version of this that your org gets wrong? Drop it below.&lt;/p&gt;

&lt;p&gt;#devops #sre #observability #platformengineering&lt;/p&gt;

</description>
      <category>observability</category>
      <category>sre</category>
      <category>devops</category>
      <category>platformengineering</category>
    </item>
    <item>
      <title>Insight of the Week</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Sat, 28 Mar 2026 16:06:06 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/insight-of-the-week-125o</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/insight-of-the-week-125o</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Insight (2026-03-28)
&lt;/h1&gt;

&lt;p&gt;{{opener}}&lt;/p&gt;

&lt;p&gt;Observability is a label strategy problem disguised as a tooling problem&lt;/p&gt;

&lt;p&gt;You can’t debug what you can’t slice. Most “noisy dashboards” are really missing ownership labels, consistent dimensions, and SLI intent.&lt;/p&gt;

&lt;p&gt;What I’ve seen go wrong:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;{{pitfall1}}&lt;/li&gt;
&lt;li&gt;{{pitfall2}}&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My rule:&lt;br&gt;
Define SLIs first, then design labels that let you isolate (service, env, version, tenant) without blowing up cardinality.&lt;/p&gt;
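
&lt;p&gt;What "design labels for slicing" looks like in practice — an error-rate SLI you can cut by the dimensions you committed to up front (metric and label names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Error-rate SLI, sliceable by the labels chosen at design time
sum by (service, env, version) (rate(http_requests_total{code=~"5.."}[5m]))
/
sum by (service, env, version) (rate(http_requests_total[5m]))

# tenant stays OUT of metric labels when tenant count is unbounded —
# that's the cardinality blowup; isolate tenants via exemplars/traces instead
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;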

&lt;p&gt;If you want to go deeper: &lt;a href="https://neeraja-portfolio-v1.vercel.app/projects/prometheus-scaling" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/projects/prometheus-scaling&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;{{closer}}&lt;/p&gt;

&lt;p&gt;#devops #sre #observability #platformengineering&lt;/p&gt;

</description>
      <category>observability</category>
      <category>aiops</category>
      <category>mlops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Insight of the Week</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Fri, 27 Mar 2026 09:39:22 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/insight-of-the-week-2kem</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/insight-of-the-week-2kem</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Insight (2026-03-27)
&lt;/h1&gt;

&lt;p&gt;{{opener}}&lt;/p&gt;

&lt;p&gt;Observability is a label strategy problem disguised as a tooling problem&lt;/p&gt;

&lt;p&gt;You can’t debug what you can’t slice. Most “noisy dashboards” are really missing ownership labels, consistent dimensions, and SLI intent.&lt;/p&gt;

&lt;p&gt;What I’ve seen go wrong:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;{{pitfall1}}&lt;/li&gt;
&lt;li&gt;{{pitfall2}}&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My rule:&lt;br&gt;
Define SLIs first, then design labels that let you isolate (service, env, version, tenant) without blowing up cardinality.&lt;/p&gt;

&lt;p&gt;If you want to go deeper: &lt;a href="https://neeraja-portfolio-v1.vercel.app/projects/prometheus-scaling" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/projects/prometheus-scaling&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;{{closer}}&lt;/p&gt;

&lt;p&gt;#devops #sre #observability #platformengineering&lt;/p&gt;

</description>
      <category>observability</category>
      <category>aiops</category>
      <category>mlops</category>
      <category>automation</category>
    </item>
    <item>
      <title>Workflow Deep Dive</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Tue, 24 Mar 2026 09:36:48 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/workflow-deep-dive-2o29</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/workflow-deep-dive-2o29</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Workflow (2026-03-24)
&lt;/h1&gt;

&lt;p&gt;{{opener}}&lt;/p&gt;

&lt;p&gt;End‑to‑end MLOps retraining loop: reliability is in the guardrails&lt;/p&gt;

&lt;p&gt;Auto‑retraining is easy to wire. Making it safe in production is the hard part: data drift, silent label shifts, and rollback semantics.&lt;/p&gt;

&lt;p&gt;What usually bites later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A “better” offline model can degrade live KPIs due to skew (training vs serving features) and traffic shift.&lt;/li&gt;
&lt;li&gt;Unversioned data/labels make incident RCA impossible — you can’t reproduce what trained the model.&lt;/li&gt;
&lt;li&gt;Promotion without canary + rollback turns retraining into a weekly outage generator.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My default rule:&lt;br&gt;
No model ships without: dataset/version lineage, shadow/canary evaluation, and a one‑click rollback path.&lt;/p&gt;

&lt;p&gt;When I’m sanity-checking this, I usually do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track dataset + features with DVC/LakeFS + model registry (MLflow/SageMaker Registry) for auditable promotion.&lt;/li&gt;
&lt;li&gt;Monitor drift + performance slices with Prometheus/Grafana + alert on &lt;em&gt;trend&lt;/em&gt;, not single spikes.&lt;/li&gt;
&lt;/ul&gt;
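
&lt;p&gt;"Alert on trend, not single spikes" mostly means smoothing over a window before comparing — a hedged PromQL sketch (the metric name and threshold are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Fire only when drift stays elevated for a day, not on one noisy batch
avg_over_time(feature_drift_score[24h]) &amp;gt; 0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;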

&lt;p&gt;Deep dive (stable link): &lt;a href="https://neeraja-portfolio-v1.vercel.app/workflows/resilient-architecture" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/workflows/resilient-architecture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;{{closer}}&lt;/p&gt;

&lt;p&gt;#mlops #aiops #automation #python&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>kubernetes</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Insight of the Week</title>
      <dc:creator>Neeraja Khanapure</dc:creator>
      <pubDate>Fri, 20 Mar 2026 09:25:33 +0000</pubDate>
      <link>https://dev.to/neeraja_khanapure_4a33a5f/insight-of-the-week-28ac</link>
      <guid>https://dev.to/neeraja_khanapure_4a33a5f/insight-of-the-week-28ac</guid>
      <description>&lt;h1&gt;
  
  
  LinkedIn Draft — Insight (2026-03-20)
&lt;/h1&gt;

&lt;p&gt;{{opener}}&lt;/p&gt;

&lt;p&gt;CI/CD isn’t speed — it’s predictable change under load&lt;/p&gt;

&lt;p&gt;Most pipelines fail not because tests are slow, but because &lt;strong&gt;rollout risk isn’t modeled&lt;/strong&gt; (blast radius, rollback, and observability gates).&lt;/p&gt;

&lt;p&gt;What I’ve seen go wrong:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;{{pitfall1}}&lt;/li&gt;
&lt;li&gt;{{pitfall2}}&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My rule:&lt;br&gt;
If you can’t explain rollback + SLO gates in one slide, the pipeline is not production‑ready.&lt;/p&gt;

&lt;p&gt;If you want to go deeper: &lt;a href="https://neeraja-portfolio-v1.vercel.app/insights/cicd-isnt-speed-its-predictable-change-under-load" rel="noopener noreferrer"&gt;https://neeraja-portfolio-v1.vercel.app/insights/cicd-isnt-speed-its-predictable-change-under-load&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;{{closer}}&lt;/p&gt;

&lt;p&gt;#devops #sre #observability #platformengineering&lt;/p&gt;

</description>
      <category>observability</category>
      <category>aiops</category>
      <category>mlops</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
