<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Naveen Karasu</title>
    <description>The latest articles on DEV Community by Naveen Karasu (@thinkkun).</description>
    <link>https://dev.to/thinkkun</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2911102%2Fa69591e1-4e13-492f-a849-6588a8dd2be0.png</url>
      <title>DEV Community: Naveen Karasu</title>
      <link>https://dev.to/thinkkun</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thinkkun"/>
    <language>en</language>
    <item>
      <title>Day 9/60: Configuration management - Go Backend Engineering</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:26:18 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-960-configuration-management-go-backend-engineering-2978</link>
      <guid>https://dev.to/thinkkun/day-960-configuration-management-go-backend-engineering-2978</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/60: Configuration management
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;60 Day Go Backend Engineering Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on making configuration management improve production behavior, not just local developer comfort. The strongest checkpoints were: emit logs, config reads, and shutdown signals in a way that tells the real runtime story; make test seams deliberate so handlers and services can be checked without ceremony; and treat observability and operational hygiene as part of the product behavior, not as cleanup.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was adding logs and tests that make noise without proving anything important. The day felt better once the service boundary stayed visible and each component had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next service still feels related to the same design idea. I also want to keep tracing one request path end to end because it exposes weak assumptions faster than a larger demo.&lt;/p&gt;

</description>
      <category>go</category>
      <category>backend</category>
      <category>api</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Day 9/60: Sliding window patterns - JavaScript DSA</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:26:00 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-960-sliding-window-patterns-javascript-dsa-53fp</link>
      <guid>https://dev.to/thinkkun/day-960-sliding-window-patterns-javascript-dsa-53fp</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/60: Sliding window patterns
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;60 Day DSA in JavaScript Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on using a stable invariant so sliding window patterns feel like a process instead of a trick. The strongest checkpoints were: name the exact window, prefix, or pointer region each variable owns; reuse prior work instead of recomputing the same range each iteration; and test boundary sizes first, because they expose weak invariants quickly.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was moving boundaries before stating what region they actually represent. The day felt better once the invariant stayed visible and each update had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next variation still feels related to the same idea. I also want to keep tracing tiny examples because they expose weak assumptions faster than large random tests.&lt;/p&gt;
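&lt;p&gt;To make "reuse prior work" concrete, here is a minimal fixed-size window in JavaScript. This is my own sketch of the usual max-sum-of-k-elements shape, not code from the day's problems.&lt;/p&gt;

```javascript
// Maximum sum over any k consecutive elements.
// Invariant: windowSum owns exactly the last min(right + 1, k) elements.
function maxWindowSum(nums, k) {
  let windowSum = 0;
  let best = -Infinity;
  for (let right = 0; right !== nums.length; right++) {
    windowSum += nums[right];                      // element entering the window
    if (right >= k) windowSum -= nums[right - k];  // element leaving the window
    if (right >= k - 1 && windowSum > best) best = windowSum;
  }
  return best;
}

console.log(maxWindowSum([2, 1, 5, 1, 3, 2], 3)); // 9
```

&lt;p&gt;Each step is O(1) because the previous sum is reused; the boundary size worth tracing first is k equal to the array length.&lt;/p&gt;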

</description>
      <category>javascript</category>
      <category>dsa</category>
      <category>algorithms</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Day 9/60: Sliding window patterns - Rust DSA</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:25:41 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-960-sliding-window-patterns-rust-dsa-49fp</link>
      <guid>https://dev.to/thinkkun/day-960-sliding-window-patterns-rust-dsa-49fp</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/60: Sliding window patterns
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;60 Day Rust DSA Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on using a stable invariant so sliding window patterns feel like a process instead of a trick. The strongest checkpoints were: name the exact window, prefix, or pointer region each variable owns; reuse prior work instead of recomputing the same range each iteration; and test boundary sizes first, because they expose weak invariants quickly.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was moving boundaries before stating what region they actually represent. The day felt better once the invariant stayed visible and each update had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next variation still feels related to the same idea. I also want to keep tracing tiny examples because they expose weak assumptions faster than large random tests.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>dsa</category>
      <category>algorithms</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Day 9/75: Sliding window - fixed size - Go DSA</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:25:23 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-go-dsa-4k3c</link>
      <guid>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-go-dsa-4k3c</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/75: Sliding window - fixed size
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;75 Day DSA in Go Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on using a stable invariant so the fixed-size sliding window feels like a process instead of a trick. The strongest checkpoints were: name the exact window, prefix, or pointer region each variable owns; reuse prior work instead of recomputing the same range each iteration; and test boundary sizes first, because they expose weak invariants quickly.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was moving boundaries before stating what region they actually represent. The day felt better once the invariant stayed visible and each update had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next variation still feels related to the same idea. I also want to keep tracing tiny examples because they expose weak assumptions faster than large random tests.&lt;/p&gt;
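&lt;p&gt;A minimal Go sketch of the idea (mine, not from the day's problems): the running sum owns exactly the current window, so each step adds the entering element and subtracts the leaving one instead of re-summing the range.&lt;/p&gt;

```go
package main

import "fmt"

// maxWindowSum returns the maximum sum over any k consecutive elements.
// Invariant: after processing index right, windowSum owns exactly the
// last min(right+1, k) elements.
func maxWindowSum(nums []int, k int) int {
	windowSum, best := 0, 0
	for right, v := range nums {
		windowSum += v // element entering the window
		if right >= k {
			windowSum -= nums[right-k] // element leaving the window
		}
		// First full window sets best unconditionally; later ones compare.
		if right == k-1 || (right >= k && windowSum > best) {
			best = windowSum
		}
	}
	return best
}

func main() {
	fmt.Println(maxWindowSum([]int{2, 1, 5, 1, 3, 2}, 3)) // 9
}
```

&lt;p&gt;Tracing a tiny input like &lt;code&gt;[2 1 5 1 3 2]&lt;/code&gt; with k = 3 shows the invariant holding at every step.&lt;/p&gt;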

</description>
      <category>go</category>
      <category>dsa</category>
      <category>algorithms</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Day 9/75: Sliding window - fixed size - C++ DSA</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:25:05 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-c-dsa-3onb</link>
      <guid>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-c-dsa-3onb</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/75: Sliding window - fixed size
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;75 Day DSA in C++ Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on using a stable invariant so the fixed-size sliding window feels like a process instead of a trick. The strongest checkpoints were: name the exact window, prefix, or pointer region each variable owns; reuse prior work instead of recomputing the same range each iteration; and test boundary sizes first, because they expose weak invariants quickly.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was moving boundaries before stating what region they actually represent. The day felt better once the invariant stayed visible and each update had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next variation still feels related to the same idea. I also want to keep tracing tiny examples because they expose weak assumptions faster than large random tests.&lt;/p&gt;

</description>
      <category>cpp</category>
      <category>dsa</category>
      <category>algorithms</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Day 9/75: Sliding window - fixed size - Java DSA</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:24:47 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-java-dsa-1dbe</link>
      <guid>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-java-dsa-1dbe</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/75: Sliding window - fixed size
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;75 Day DSA in Java Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on using a stable invariant so the fixed-size sliding window feels like a process instead of a trick. The strongest checkpoints were: name the exact window, prefix, or pointer region each variable owns; reuse prior work instead of recomputing the same range each iteration; and test boundary sizes first, because they expose weak invariants quickly.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was moving boundaries before stating what region they actually represent. The day felt better once the invariant stayed visible and each update had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next variation still feels related to the same idea. I also want to keep tracing tiny examples because they expose weak assumptions faster than large random tests.&lt;/p&gt;

</description>
      <category>java</category>
      <category>dsa</category>
      <category>algorithms</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Day 9/75: Sliding window - fixed size - Python DSA</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:24:29 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-python-dsa-eb1</link>
      <guid>https://dev.to/thinkkun/day-975-sliding-window-fixed-size-python-dsa-eb1</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/75: Sliding window - fixed size
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;75 Day DSA in Python Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today I focused on using a stable invariant so the fixed-size sliding window feels like a process instead of a trick. The strongest checkpoints were: name the exact window, prefix, or pointer region each variable owns; reuse prior work instead of recomputing the same range each iteration; and test boundary sizes first, because they expose weak invariants quickly.&lt;/p&gt;

&lt;p&gt;The mistake I wanted to avoid was moving boundaries before stating what region they actually represent. The day felt better once the invariant stayed visible and each update had one job.&lt;/p&gt;

&lt;p&gt;The goal for tomorrow is simple: keep the rule clear enough that the next variation still feels related to the same idea. I also want to keep tracing tiny examples because they expose weak assumptions faster than large random tests.&lt;/p&gt;
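&lt;p&gt;Here is the shape I mean, as a small Python sketch of my own (not code from the day's problems): the running sum owns exactly the current window, so each step reuses prior work.&lt;/p&gt;

```python
def max_window_sum(nums, k):
    """Maximum sum over any k consecutive elements.

    Invariant: window_sum owns exactly the last min(right + 1, k) elements.
    """
    window_sum = 0
    best = None
    for right, value in enumerate(nums):
        window_sum += value                # element entering the window
        if right >= k:
            window_sum -= nums[right - k]  # element leaving the window
        # The window is full once right reaches k - 1.
        if right >= k - 1 and (best is None or window_sum > best):
            best = window_sum
    return best


print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9
```

&lt;p&gt;Each step does constant work; the boundary size worth tracing first is k equal to the list length.&lt;/p&gt;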

</description>
      <category>python</category>
      <category>dsa</category>
      <category>algorithms</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Day 9/365: Basic Array Operations -- DSA &amp; System Design</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:24:11 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-9365-basic-array-operations-dsa-system-design-52ba</link>
      <guid>https://dev.to/thinkkun/day-9365-basic-array-operations-dsa-system-design-52ba</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/365: Basic Array Operations -- DSA &amp;amp; System Design
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;365 Day DSA &amp;amp; System Design Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Day 9 was about making simple array tasks do real teaching work.&lt;/p&gt;

&lt;p&gt;What I wanted clear today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reverse-array problems compare extra-space and in-place thinking cleanly&lt;/li&gt;
&lt;li&gt;rotation rewards structure over repeated shifting&lt;/li&gt;
&lt;li&gt;max/min should usually become a clean one-pass scan&lt;/li&gt;
&lt;li&gt;second-largest trains running-state discipline&lt;/li&gt;
&lt;li&gt;edge cases show up fast even in simple operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest shift was realizing that these tasks are not trivial drills. They are a great place to build habits that show up again in much harder problems.&lt;/p&gt;

&lt;p&gt;That made Day 9 much more useful than a warm-up day: it turned basic operations into a way to practice cleaner algorithm judgment.&lt;/p&gt;
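&lt;p&gt;As one example of that running-state discipline, a one-pass second-largest scan (my own sketch in Python; the challenge does not prescribe a language):&lt;/p&gt;

```python
def second_largest(nums):
    """One-pass scan tracking two pieces of running state."""
    largest = second = None
    for x in nums:
        if largest is None or x > largest:
            largest, second = x, largest  # old max demotes to runner-up
        elif x != largest and (second is None or x > second):
            second = x
    return second  # None when no distinct second value exists
```

&lt;p&gt;The edge cases surface immediately: a single element or an all-equal array returns &lt;code&gt;None&lt;/code&gt;, and duplicates of the maximum never count as second.&lt;/p&gt;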




&lt;p&gt;&lt;em&gt;Day 9/365 of the 365 Day DSA &amp;amp; System Design Challenge.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>dsa</category>
      <category>arrays</category>
      <category>algorithms</category>
      <category>interview</category>
    </item>
    <item>
      <title>Day 9/60: Alerting Strategies -- Production Engineering</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:23:52 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-960-alerting-strategies-production-engineering-2n5h</link>
      <guid>https://dev.to/thinkkun/day-960-alerting-strategies-production-engineering-2n5h</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/60: Alerting Strategies -- Production Engineering
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;60 Day Production Engineering Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Alert fatigue is the number one reason on-call rotations burn people out. Today I am covering the strategies that cut noise while keeping signal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Symptom-Based Alerting with PromQL
&lt;/h2&gt;

&lt;p&gt;Page on what users feel, not what servers report internally. Here is a burn rate alert that fires when your error budget is burning at 14.4x the allowed rate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Critical burn rate: will exhaust monthly budget in 1 hour
(
  sum(rate(http_requests_total{code=~"5.."}[1h]))
  / sum(rate(http_requests_total[1h]))
) &amp;gt; (14.4 * 0.001)
and
(
  sum(rate(http_requests_total{code=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))
) &amp;gt; (14.4 * 0.001)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dual window (1h AND 5m) means you only page when the problem has statistical significance AND is actively happening right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alertmanager Inhibition Rules
&lt;/h2&gt;

&lt;p&gt;When a node dies, you do not need fifty alerts for every pod that was on it. Inhibition suppresses the cascade:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# alertmanager.yml inhibition config&lt;/span&gt;
&lt;span class="na"&gt;inhibit_rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;critical&lt;/span&gt;
    &lt;span class="na"&gt;target_match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warning&lt;/span&gt;
    &lt;span class="na"&gt;equal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;alertname&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;alertname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodeDown&lt;/span&gt;
    &lt;span class="na"&gt;target_match_re&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;alertname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Pod.+"&lt;/span&gt;
    &lt;span class="na"&gt;equal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;node&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One NodeDown critical alert. Zero PodCrashLoopBackOff warnings until the node recovers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Catching Silent Failures with absent()
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Alert when a target stops reporting entirely&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TargetVanished&lt;/span&gt;
  &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;absent(up{job="payment-service"} == 1)&lt;/span&gt;
  &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;critical&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;payment-service&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;target&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;missing&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Prometheus"&lt;/span&gt;
    &lt;span class="na"&gt;runbook&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://runbooks.internal/target-vanished"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the one alert that catches what every other alert misses: the silent failure where metrics just stop arriving.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Alert on symptoms. Use burn rates. Configure inhibition. Link runbooks. Test your pipeline.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Day 9/60 of the 60 Day Production Engineering Challenge&lt;/em&gt;&lt;/p&gt;

</description>
      <category>sre</category>
      <category>prometheus</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Day 9/90: Remote state management</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:23:34 +0000</pubDate>
      <link>https://dev.to/thinkkun/day-990-remote-state-management-4kmn</link>
      <guid>https://dev.to/thinkkun/day-990-remote-state-management-4kmn</guid>
      <description>&lt;h1&gt;
  
  
  Day 9/90: Remote state management
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;90 Day Security Infrastructure Challenge&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I am writing this one as if a teammate opened the pull request and asked what actually matters. My answer: keeping Terraform honest when the state file, provider behavior, and module boundaries are all capable of hiding drift. Good infrastructure content should make the operational boundary visible, not bury it behind screenshots or one happy-path command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Local State Breaks in Teams
&lt;/h2&gt;

&lt;p&gt;The practical reason to spend time on why local state breaks in teams is simple: it is one of the places where infrastructure drift hides behind a clean-looking diff. The job stays the same: keep Terraform honest when the state file, provider behavior, and module boundaries are all capable of hiding drift.&lt;/p&gt;

&lt;p&gt;The repo earns trust when terraform plan output, remote state configuration, provider aliases, variables, outputs, and CI checks wired into the repo tell the same story as the PR summary. That is how you get to a Terraform workflow where the plan, state backend, and module interfaces explain exactly what the change will touch. If the state file is trusted, it needs ownership, locking, and review discipline. If the module boundary is trusted, it needs readable inputs and outputs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform &lt;span class="nb"&gt;fmt&lt;/span&gt; &lt;span class="nt"&gt;-check&lt;/span&gt; &lt;span class="nt"&gt;-recursive&lt;/span&gt;
terraform init &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"key=remote_state_management/terraform.tfstate"&lt;/span&gt;
terraform validate
terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt; remote_state_management.tfplan
tfsec &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building a Hardened S3 Backend Module
&lt;/h2&gt;

&lt;p&gt;Building a Hardened S3 Backend Module sounds narrow until it fails under pressure. Then you find out whether the infrastructure repo can explain its own behavior. The real work is still keeping Terraform honest when the state file, provider behavior, and module boundaries are all capable of hiding drift.&lt;/p&gt;

&lt;p&gt;I do not want a magical success message. I want terraform plan output, remote state configuration, provider aliases, variables, outputs, and CI checks wired into the repo, because that evidence is what turns the work into a Terraform workflow where the plan, state backend, and module interfaces explain exactly what the change will touch.&lt;/p&gt;
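&lt;p&gt;For reference, the shape of the backend block I mean. The bucket and lock-table names here are placeholders, not the real ones:&lt;/p&gt;

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"  # placeholder name
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"     # placeholder lock table
    # key is intentionally omitted and supplied per stack at init time
    # via -backend-config="key=...", matching the commands above.
  }
}
```

&lt;p&gt;Versioning and a bucket policy that denies unencrypted writes belong on the bucket itself; the backend block only consumes them.&lt;/p&gt;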

&lt;h2&gt;
  
  
  Migrating Local State
&lt;/h2&gt;

&lt;p&gt;Migrating Local State is where remote state management becomes operational. On a real team, the argument is rarely about syntax. It is about keeping Terraform honest when the state file, provider behavior, and module boundaries are all capable of hiding drift.&lt;/p&gt;

&lt;p&gt;What I want back from this day is a Terraform workflow where the plan, state backend, and module interfaces explain exactly what the change will touch. That only happens when the change leaves evidence in terraform plan output, remote state configuration, provider aliases, variables, outputs, and CI checks wired into the repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Review posture
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I want the pull request to name the affected environment, the rollback path, and the state or inventory boundary touched by remote state management.&lt;/li&gt;
&lt;li&gt;The review should show exactly how the lesson on why local state breaks in teams changes behavior, not just that the file format is valid.&lt;/li&gt;
&lt;li&gt;If a pipeline gate, policy check, or drift report disagrees with the proposed change, that disagreement belongs in the review thread instead of hidden in logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The outcome I care about is a Terraform workflow where the plan, state backend, and module interfaces explain exactly what the change will touch. That is only believable when terraform plan output, remote state configuration, provider aliases, variables, outputs, and CI checks wired into the repo are easy to find and consistent with the explanation in the pull request.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this belongs in a security infrastructure track
&lt;/h2&gt;

&lt;p&gt;IaC, Ansible, policy as code, GitOps, and infrastructure testing all share the same responsibility: make change review safer than console drift. That is why I keep bringing the lesson back to Terraform, IaC boundaries, pipeline evidence, and the operational story behind the diff.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>ansible</category>
      <category>devsecops</category>
      <category>security</category>
    </item>
    <item>
      <title>Burp Suite Advanced Features: Intruder Attack Types Explained</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:23:16 +0000</pubDate>
      <link>https://dev.to/thinkkun/burp-suite-advanced-features-intruder-attack-types-explained-5451</link>
      <guid>https://dev.to/thinkkun/burp-suite-advanced-features-intruder-attack-types-explained-5451</guid>
      <description>&lt;h1&gt;
  
  
  Burp Intruder Attack Types: When to Use Each One
&lt;/h1&gt;

&lt;p&gt;Day 9 of my pentesting challenge. Intruder's four attack types confuse people, so here is the cheat sheet for the three I reach for most. (The fourth, Battering ram, sends the same payload into every position at once; it is mainly useful when one value repeats across positions.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Sniper -- Independent Parameter Testing
&lt;/h2&gt;

&lt;p&gt;One list, multiple positions, tested one at a time. Use for finding which parameter is injectable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="nf"&gt;POST&lt;/span&gt; &lt;span class="nn"&gt;/search&lt;/span&gt; &lt;span class="k"&gt;HTTP&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;1.1&lt;/span&gt;
&lt;span class="na"&gt;Content-Type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application/x-www-form-urlencoded&lt;/span&gt;

query=$$test$$&amp;amp;category=$$all$$&amp;amp;sort=$$date$$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With an XSS payload list, Sniper tests &lt;code&gt;query&lt;/code&gt; with all payloads while &lt;code&gt;category&lt;/code&gt; and &lt;code&gt;sort&lt;/code&gt; stay default, then moves to &lt;code&gt;category&lt;/code&gt;, then &lt;code&gt;sort&lt;/code&gt;. Three positions, 50 payloads = 150 requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pitchfork -- Paired Credential Testing
&lt;/h2&gt;

&lt;p&gt;Multiple lists in parallel. Position 1 gets list 1, position 2 gets list 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;List 1 (emails): alice@corp.com, bob@corp.com
List 2 (passwords): Spring2026!, Welcome1

Request 1: alice@corp.com / Spring2026!
Request 2: bob@corp.com / Welcome1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use when you have matched pairs from OSINT or breach data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster Bomb -- Full Combination
&lt;/h2&gt;

&lt;p&gt;Every value in list 1 x every value in list 2. Expensive but thorough. 100 x 100 = 10,000 requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reading Results
&lt;/h2&gt;

&lt;p&gt;Sort by &lt;strong&gt;response length&lt;/strong&gt;, not status code. When testing privilege escalation on admin endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="nf"&gt;GET&lt;/span&gt; &lt;span class="nn"&gt;/admin/$$path$$&lt;/span&gt; &lt;span class="k"&gt;HTTP&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="m"&gt;1.1&lt;/span&gt;
&lt;span class="na"&gt;Cookie&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;session=&amp;lt;regular_user_token&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most responses: 403 at ~90 bytes. A few 200s at 2,000+ bytes? Those admin pages have no server-side role checks. Real bug, found in seconds.&lt;/p&gt;

</description>
      <category>security</category>
      <category>pentesting</category>
      <category>burpsuite</category>
      <category>webdev</category>
    </item>
    <item>
      <title>CloudWatch Metric Filters for AWS Security Monitoring</title>
      <dc:creator>Naveen Karasu</dc:creator>
      <pubDate>Tue, 12 May 2026 18:22:58 +0000</pubDate>
      <link>https://dev.to/thinkkun/cloudwatch-metric-filters-for-aws-security-monitoring-12n4</link>
      <guid>https://dev.to/thinkkun/cloudwatch-metric-filters-for-aws-security-monitoring-12n4</guid>
      <description>&lt;h2&gt;
  
  
  Day 9: CloudWatch Security Filters
&lt;/h2&gt;

&lt;p&gt;CloudTrail records API calls. CloudWatch makes them actionable. Here's a quick setup for the most critical security detection -- catching someone disabling your security controls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_cloudwatch_log_metric_filter"&lt;/span&gt; &lt;span class="s2"&gt;"config_tampering"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws-config-changes"&lt;/span&gt;
  &lt;span class="nx"&gt;log_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/aws/cloudtrail/security"&lt;/span&gt;
  &lt;span class="nx"&gt;pattern&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;PATTERN&lt;/span&gt;&lt;span class="sh"&gt;
{ ($.eventSource = config.amazonaws.com) &amp;amp;&amp;amp;
  (($.eventName = StopConfigurationRecorder) ||
   ($.eventName = DeleteDeliveryChannel)) }
&lt;/span&gt;&lt;span class="no"&gt;PATTERN

&lt;/span&gt;  &lt;span class="nx"&gt;metric_transformation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ConfigTamperingCount"&lt;/span&gt;
    &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Security/CIS"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_cloudwatch_metric_alarm"&lt;/span&gt; &lt;span class="s2"&gt;"config_tampering"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;alarm_name&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CRITICAL-ConfigTampering"&lt;/span&gt;
  &lt;span class="nx"&gt;namespace&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Security/CIS"&lt;/span&gt;
  &lt;span class="nx"&gt;metric_name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ConfigTamperingCount"&lt;/span&gt;
  &lt;span class="nx"&gt;statistic&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Sum"&lt;/span&gt;
  &lt;span class="nx"&gt;period&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
  &lt;span class="nx"&gt;evaluation_periods&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="nx"&gt;threshold&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="nx"&gt;comparison_operator&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"GreaterThanOrEqualToThreshold"&lt;/span&gt;
  &lt;span class="nx"&gt;treat_missing_data&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"notBreaching"&lt;/span&gt;
  &lt;span class="nx"&gt;alarm_actions&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_sns_topic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;security&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why this matters: an attacker with admin access often disables Config to stop recording evidence. A 60-second alarm period means you know within a minute.&lt;/p&gt;

&lt;p&gt;Key tip: always set &lt;code&gt;treat_missing_data = "notBreaching"&lt;/code&gt;. Metric-filter metrics emit no data points during quiet periods, and the default (&lt;code&gt;missing&lt;/code&gt;) pushes the alarm into INSUFFICIENT_DATA instead of keeping it OK.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloudwatch</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
