<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aditya singh</title>
    <description>The latest articles on DEV Community by Aditya singh (@aditya_singh_172b37651201).</description>
    <link>https://dev.to/aditya_singh_172b37651201</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3684405%2Fc778d84f-db56-4c44-a434-4e8e2357b759.webp</url>
      <title>DEV Community: Aditya singh</title>
      <link>https://dev.to/aditya_singh_172b37651201</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aditya_singh_172b37651201"/>
    <language>en</language>
    <item>
      <title>30 Core Algorithm: Ep-08: Round Robin Scheduling</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Tue, 30 Dec 2025 07:00:29 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithmep-08round-robin-scheduling-5ei6</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithmep-08round-robin-scheduling-5ei6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dn2tm8os7ag8h80pdec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dn2tm8os7ag8h80pdec.png" alt="Round Robin Image" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Round Robin Scheduling Is Really About Preventing Invisible Starvation
&lt;/h2&gt;

&lt;p&gt;Some systems don’t fail loudly.&lt;/p&gt;

&lt;p&gt;They fail quietly by letting certain work wait forever while others keep moving.&lt;/p&gt;

&lt;p&gt;Round Robin scheduling exists because of that failure mode.&lt;/p&gt;

&lt;p&gt;It isn’t about fairness as a moral idea.&lt;br&gt;
It’s about making starvation visible and impossible in time-shared systems.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem With Letting One Task Run Too Long
&lt;/h2&gt;

&lt;p&gt;In a system where multiple tasks compete for CPU time, letting a task run until completion feels efficient.&lt;/p&gt;

&lt;p&gt;Context switches are expensive.&lt;br&gt;
State changes are costly.&lt;br&gt;
Letting momentum continue seems logical.&lt;/p&gt;

&lt;p&gt;But this creates an imbalance.&lt;/p&gt;

&lt;p&gt;Long-running tasks dominate execution.&lt;br&gt;
Short tasks get delayed.&lt;br&gt;
Some tasks may never run at all.&lt;/p&gt;

&lt;p&gt;The system appears busy — yet parts of it are effectively frozen.&lt;/p&gt;
&lt;h2&gt;
  
  
  Round Robin: Time as the Only Equalizer
&lt;/h2&gt;

&lt;p&gt;Round Robin introduces a simple constraint:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No task is allowed to hold the CPU indefinitely.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Each task receives a fixed slice of time.&lt;br&gt;
When that slice expires, control moves on — regardless of whether the task is finished.&lt;/p&gt;

&lt;p&gt;Progress becomes bounded.&lt;br&gt;
Waiting becomes visible.&lt;/p&gt;

&lt;p&gt;This is not about optimizing throughput.&lt;br&gt;
It’s about guaranteeing &lt;em&gt;eventual execution&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Core Mechanism
&lt;/h2&gt;

&lt;p&gt;At a structural level, Round Robin looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;queue = all_ready_tasks

while system_is_running:
task = queue.pop_front()
run task for time_quantum
if task is not finished:
queue.push_back(task)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing about this is clever.&lt;/p&gt;

&lt;p&gt;The power comes from repetition.&lt;br&gt;
Every task re-enters the queue.&lt;br&gt;
No task disappears unless it completes.&lt;/p&gt;
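
&lt;p&gt;The loop above can be turned into a short runnable sketch (Python, with a deque standing in for the ready queue; the task names and burst times here are made up):&lt;/p&gt;

```python
from collections import deque

def round_robin(tasks, quantum):
    # tasks maps task name to remaining burst time (hypothetical units).
    queue = deque(tasks.items())
    order = []                        # which task ran in each time slice
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(quantum, remaining)
        order.append(name)
        if remaining != 0:            # unfinished work re-enters at the back
            queue.append((name, remaining))
    return order
```

&lt;p&gt;For tasks A (3 units), B (5) and C (2) with a quantum of 2, the trace is A, B, C, A, B, B: every task keeps re-entering the queue until it completes.&lt;/p&gt;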

&lt;h2&gt;
  
  
  Why Time Quantum Is a Trade-Off, Not a Detail
&lt;/h2&gt;

&lt;p&gt;The time quantum is often taught as a tuning parameter.&lt;/p&gt;

&lt;p&gt;In reality, it encodes a trade-off.&lt;/p&gt;

&lt;p&gt;A large quantum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reduces context switching&lt;/li&gt;
&lt;li&gt;increases response time&lt;/li&gt;
&lt;li&gt;makes short tasks wait far longer behind long ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A small quantum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;improves responsiveness&lt;/li&gt;
&lt;li&gt;increases overhead&lt;/li&gt;
&lt;li&gt;fragments execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Round Robin doesn’t solve this trade-off.&lt;br&gt;
It exposes it.&lt;/p&gt;
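
&lt;p&gt;One way to feel this trade-off is to count time slices for the same hypothetical workload under different quanta (a sketch, not a real scheduler):&lt;/p&gt;

```python
from collections import deque

def count_slices(bursts, quantum):
    # Number of time slices a workload needs under a given quantum;
    # each slice boundary is a potential context switch.
    queue = deque(bursts)
    slices = 0
    while queue:
        remaining = queue.popleft()
        remaining -= min(quantum, remaining)
        if remaining != 0:
            queue.append(remaining)
        slices += 1
    return slices
```

&lt;p&gt;Two 10-unit tasks need 20 slices at quantum 1 but only 2 at quantum 10; responsiveness and switching overhead move in opposite directions.&lt;/p&gt;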

&lt;h2&gt;
  
  
  Round Robin as a Liveness Guarantee
&lt;/h2&gt;

&lt;p&gt;Many scheduling algorithms optimize for speed or priority.&lt;/p&gt;

&lt;p&gt;Round Robin optimizes for &lt;em&gt;liveness&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It guarantees that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;every runnable task will execute&lt;/li&gt;
&lt;li&gt;delays are bounded&lt;/li&gt;
&lt;li&gt;starvation is impossible by design&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why Round Robin appears in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;operating systems&lt;/li&gt;
&lt;li&gt;network packet schedulers&lt;/li&gt;
&lt;li&gt;request handling systems&lt;/li&gt;
&lt;li&gt;cooperative multitasking environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Wherever waiting indefinitely is unacceptable, Round Robin becomes necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Round Robin Breaks Down
&lt;/h2&gt;

&lt;p&gt;Round Robin assumes tasks are cooperative with time slicing.&lt;/p&gt;

&lt;p&gt;If tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;block frequently&lt;/li&gt;
&lt;li&gt;perform heavy I/O&lt;/li&gt;
&lt;li&gt;depend on cache locality&lt;/li&gt;
&lt;li&gt;require real-time guarantees&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then Round Robin alone isn’t enough.&lt;/p&gt;

&lt;p&gt;That’s why real systems layer it with priorities, aging, and feedback queues.&lt;/p&gt;

&lt;p&gt;Round Robin isn’t a complete scheduler.&lt;br&gt;
It’s a foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-off It Makes Explicit
&lt;/h2&gt;

&lt;p&gt;Round Robin trades efficiency for predictability.&lt;/p&gt;

&lt;p&gt;It accepts more context switches.&lt;br&gt;
It fragments long-running work.&lt;br&gt;
It gives up optimal throughput.&lt;/p&gt;

&lt;p&gt;In return, it guarantees that no task becomes invisible.&lt;/p&gt;

&lt;p&gt;That trade-off is intentional.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Round Robin scheduling isn’t about sharing CPU time evenly.&lt;/p&gt;

&lt;p&gt;It exists to prevent silent starvation — to ensure that every task, no matter how small, eventually gets a turn.&lt;/p&gt;

&lt;p&gt;In systems where time itself is the contested resource, Round Robin isn’t optional.&lt;/p&gt;

&lt;p&gt;It’s the baseline for fairness that systems can actually enforce.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>architecture</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>30 Core Algorithm: Ep-07: Kadane’s Algorithm</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Tue, 30 Dec 2025 04:37:47 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-07kadanes-algorithm-jgd</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-07kadanes-algorithm-jgd</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts5goa4grjzbb7odfdo8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts5goa4grjzbb7odfdo8.png" alt="Kadane's Algorithm" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Kadane’s Algorithm Is Really About Knowing When to Let Go
&lt;/h2&gt;

&lt;p&gt;Many systems don’t fail because they lack momentum.&lt;/p&gt;

&lt;p&gt;They fail because they refuse to drop it.&lt;/p&gt;

&lt;p&gt;Kadane’s Algorithm exists to answer a deceptively hard question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When does continuing cost more than restarting?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It isn’t about arrays.&lt;br&gt;
It’s about recognizing when past effort has become a liability.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Hidden Cost of Carrying History
&lt;/h2&gt;

&lt;p&gt;Imagine tracking performance over time.&lt;/p&gt;

&lt;p&gt;Some segments contribute positively.&lt;br&gt;
Others introduce loss, noise, or drag.&lt;/p&gt;

&lt;p&gt;A naive approach tries to evaluate every possible segment independently, hoping to find the best outcome somewhere inside the data.&lt;/p&gt;

&lt;p&gt;That works.&lt;br&gt;
But it ignores something important.&lt;/p&gt;

&lt;p&gt;Not all history deserves to be remembered.&lt;/p&gt;

&lt;p&gt;Sometimes the best decision is to stop carrying the past forward.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kadane’s Insight: Momentum Is Conditional
&lt;/h2&gt;

&lt;p&gt;Kadane’s Algorithm makes a strict rule:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the past hurts the present, drop it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At every step, the algorithm decides whether extending the current sequence is beneficial — or whether starting fresh produces a better outcome.&lt;/p&gt;

&lt;p&gt;This is not a global comparison.&lt;br&gt;
It’s a continuous judgment.&lt;/p&gt;

&lt;p&gt;Progress is earned.&lt;br&gt;
History is optional.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Idea in Code Form
&lt;/h2&gt;

&lt;p&gt;At a structural level, Kadane’s logic looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
best = -infinity
current = 0

for value in sequence:
current = max(value, current + value)
best = max(best, current)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s no memory of subarrays.&lt;br&gt;
No backtracking.&lt;br&gt;
No lookahead.&lt;/p&gt;

&lt;p&gt;Each step asks a single question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Does keeping what I have still help?”&lt;/p&gt;
&lt;/blockquote&gt;
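
&lt;p&gt;That single question is the whole algorithm. As a runnable Python sketch:&lt;/p&gt;

```python
def kadane(values):
    # Maximum subarray sum in one pass with constant extra memory.
    best = values[0]
    current = values[0]
    for v in values[1:]:
        current = max(v, current + v)   # extend the run, or let go and restart
        best = max(best, current)
    return best
```

&lt;p&gt;On the classic input [-2, 1, -3, 4, -1, 2, 1, -5, 4] the answer is 6, from the run 4, -1, 2, 1; every prefix that turned negative was dropped along the way.&lt;/p&gt;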

&lt;h2&gt;
  
  
  Why Kadane’s Works Without Looking Back
&lt;/h2&gt;

&lt;p&gt;Kadane’s Algorithm relies on a simple but powerful observation:&lt;/p&gt;

&lt;p&gt;If a running sum becomes negative, it will only reduce the value of anything that follows.&lt;/p&gt;

&lt;p&gt;So the algorithm doesn’t hesitate.&lt;br&gt;
It resets.&lt;/p&gt;

&lt;p&gt;That willingness to abandon effort is what keeps the solution optimal — without ever revisiting past decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kadane’s as a Signal Filter
&lt;/h2&gt;

&lt;p&gt;Outside of arrays, Kadane’s logic appears wherever systems must detect meaningful runs amid noise.&lt;/p&gt;

&lt;p&gt;Performance streaks.&lt;br&gt;
Profit and loss windows.&lt;br&gt;
Sensor signals.&lt;br&gt;
User engagement bursts.&lt;/p&gt;

&lt;p&gt;In each case, the goal isn’t to preserve continuity.&lt;br&gt;
It’s to isolate periods where momentum is genuinely positive.&lt;/p&gt;

&lt;p&gt;Kadane’s doesn’t smooth noise.&lt;br&gt;
It cuts it off.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Kadane’s Stops Applying
&lt;/h2&gt;

&lt;p&gt;Kadane’s assumes something critical:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The cost of restarting is low.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If resetting state is expensive, or if history carries long-term dependencies, Kadane’s logic breaks down.&lt;/p&gt;

&lt;p&gt;That’s why it struggles with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multi-dimensional constraints&lt;/li&gt;
&lt;li&gt;delayed penalties&lt;/li&gt;
&lt;li&gt;systems where recovery cost matters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kadane’s is decisive — but not cautious.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-off It Makes Explicit
&lt;/h2&gt;

&lt;p&gt;Kadane’s Algorithm trades completeness for immediacy.&lt;/p&gt;

&lt;p&gt;It doesn’t explore all segments.&lt;br&gt;
It doesn’t remember alternatives.&lt;br&gt;
It commits fully to the present.&lt;/p&gt;

&lt;p&gt;In return, it delivers optimal results in linear time with constant memory.&lt;/p&gt;

&lt;p&gt;That trade-off is intentional.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Kadane’s Algorithm isn’t about finding the maximum subarray.&lt;/p&gt;

&lt;p&gt;It’s about knowing when past effort stops being an asset — and having the discipline to let it go immediately.&lt;/p&gt;

&lt;p&gt;That idea shows up everywhere systems must distinguish real momentum from accumulated drag.&lt;/p&gt;

&lt;p&gt;And that’s why Kadane’s remains a core algorithm, not just a clever trick.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>learning</category>
    </item>
    <item>
      <title>30 Core Algorithm: Ep-06: Prefix Sum</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Tue, 30 Dec 2025 04:28:08 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-06-prefix-sum-1ii0</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-06-prefix-sum-1ii0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn04dcy0m09qiytj4zd61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn04dcy0m09qiytj4zd61.png" alt="Prefix Sum" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Why Prefix Sum Is Really About Making Accumulated Cost Explicit
&lt;/h1&gt;

&lt;p&gt;Many performance problems don’t come from doing too much work.&lt;/p&gt;

&lt;p&gt;They come from doing the &lt;em&gt;same work again&lt;/em&gt;, without realizing it has already been paid for.&lt;/p&gt;

&lt;p&gt;Prefix Sum exists to expose that hidden cost.&lt;/p&gt;

&lt;p&gt;It isn’t about faster range queries.&lt;br&gt;
It’s about acknowledging accumulation early, instead of rediscovering it repeatedly.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem With Recomputing the Past
&lt;/h2&gt;

&lt;p&gt;Consider a system that repeatedly asks questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“What happened between these two points?”&lt;/li&gt;
&lt;li&gt;“How much has accumulated so far?”&lt;/li&gt;
&lt;li&gt;“What’s the total impact up to now?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A naive approach recomputes everything inside the range every time. Each query starts fresh, walking through history as if nothing has been learned before.&lt;/p&gt;

&lt;p&gt;The result is predictable.&lt;/p&gt;

&lt;p&gt;The system keeps paying for past work again and again, even though nothing about that past has changed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prefix Sum: Paying the Cost Once
&lt;/h2&gt;

&lt;p&gt;Prefix Sum changes the economics.&lt;/p&gt;

&lt;p&gt;Instead of treating every query as independent, it makes accumulation explicit upfront.&lt;/p&gt;

&lt;p&gt;Each position stores the total effect of everything before it. Once that’s done, the past becomes cheap to reference.&lt;/p&gt;

&lt;p&gt;The system stops &lt;em&gt;recounting&lt;/em&gt; history.&lt;br&gt;
It starts &lt;em&gt;indexing&lt;/em&gt; it.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Core Idea in Code
&lt;/h2&gt;

&lt;p&gt;Structurally, Prefix Sum looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
prefix[0] = 0
for i in range(1, n + 1):
prefix[i] = prefix[i - 1] + value[i]

range_sum(l, r):
return prefix[r] - prefix[l - 1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The computation isn’t clever.&lt;br&gt;
The discipline is.&lt;/p&gt;

&lt;p&gt;Accumulation happens once.&lt;br&gt;
Queries become simple subtraction.&lt;/p&gt;
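
&lt;p&gt;A runnable, zero-based version of the same idea (a sketch, not a library):&lt;/p&gt;

```python
def build_prefix(values):
    # prefix[i] is the sum of the first i values, so prefix[0] == 0.
    prefix = [0]
    for v in values:
        prefix.append(prefix[-1] + v)
    return prefix

def range_sum(prefix, l, r):
    # Inclusive sum of values[l..r] (zero-based) by simple subtraction.
    return prefix[r + 1] - prefix[l]
```

&lt;p&gt;The accumulation pass runs once; after that, any range query is two array reads and one subtraction.&lt;/p&gt;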

&lt;h2&gt;
  
  
  Why Prefix Sum Depends on Immutability
&lt;/h2&gt;

&lt;p&gt;Prefix Sum assumes something critical:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The past does not change.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once values are accumulated, they’re treated as fixed truth. That assumption is what allows fast lookups without re-evaluation.&lt;/p&gt;

&lt;p&gt;This is why Prefix Sum works naturally with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;static arrays&lt;/li&gt;
&lt;li&gt;logs&lt;/li&gt;
&lt;li&gt;historical metrics&lt;/li&gt;
&lt;li&gt;time-series snapshots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When data is stable, accumulation becomes an asset.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prefix Sum as a Lens on Work Already Done
&lt;/h2&gt;

&lt;p&gt;Most Prefix Sum applications aren’t about numbers.&lt;/p&gt;

&lt;p&gt;They’re about recognizing that work has already been performed.&lt;/p&gt;

&lt;p&gt;Energy consumed over time.&lt;br&gt;&lt;br&gt;
Requests processed so far.&lt;br&gt;&lt;br&gt;
Costs accumulated up to a checkpoint.  &lt;/p&gt;

&lt;p&gt;Prefix Sum turns these from repeated calculations into constant-time lookups.&lt;/p&gt;

&lt;p&gt;It doesn’t speed up thinking.&lt;br&gt;
It prevents redundant thinking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Prefix Sum Stops Being Enough
&lt;/h2&gt;

&lt;p&gt;Prefix Sum breaks the moment the past becomes mutable.&lt;/p&gt;

&lt;p&gt;If values change frequently, accumulated sums become stale. Corrections ripple forward, invalidating everything that follows.&lt;/p&gt;

&lt;p&gt;That’s where more complex structures step in.&lt;/p&gt;

&lt;p&gt;Prefix Sum doesn’t fail silently.&lt;br&gt;
It clearly signals when its assumptions no longer hold.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-off It Makes Explicit
&lt;/h2&gt;

&lt;p&gt;Prefix Sum trades flexibility for clarity.&lt;/p&gt;

&lt;p&gt;It commits to the past being fixed.&lt;br&gt;
It commits to upfront computation.&lt;br&gt;
It removes the ability to cheaply change history.&lt;/p&gt;

&lt;p&gt;In return, it makes repeated questions about that history almost free.&lt;/p&gt;

&lt;p&gt;That trade-off is intentional.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Prefix Sum isn’t a trick for range queries.&lt;/p&gt;

&lt;p&gt;It’s a way of making accumulated cost visible, explicit, and reusable — so systems stop rediscovering the same information over and over.&lt;/p&gt;

&lt;p&gt;Whenever the past is stable and queried often, Prefix Sum isn’t an optimization.&lt;/p&gt;

&lt;p&gt;It’s the correct representation.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>performance</category>
    </item>
    <item>
      <title>30 Core Algorithm: Ep-05: Sliding Window Algorithm</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Tue, 30 Dec 2025 04:16:20 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithmep-05-sliding-window-algorithm-924</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithmep-05-sliding-window-algorithm-924</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iv1voysw2mlcyxsnvfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iv1voysw2mlcyxsnvfz.png" alt="Sliding Window " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Sliding Window Algorithm Is Really About Preserving Context Over Time
&lt;/h2&gt;

&lt;p&gt;Many systems don’t fail because they lack information.&lt;/p&gt;

&lt;p&gt;They fail because they keep forgetting what just happened.&lt;/p&gt;

&lt;p&gt;Sliding Window exists to solve that problem.&lt;/p&gt;

&lt;p&gt;It isn’t about optimizing loops.&lt;br&gt;
It isn’t about fixed-size subarrays.&lt;br&gt;
It’s about maintaining &lt;em&gt;relevant context&lt;/em&gt; as the system moves forward.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Cost of Forgetting Too Much
&lt;/h2&gt;

&lt;p&gt;Consider a system evaluating a condition repeatedly over a sequence.&lt;/p&gt;

&lt;p&gt;A naive approach recalculates everything from scratch each time. Every step starts fresh, ignoring the fact that most of the context hasn’t changed.&lt;/p&gt;

&lt;p&gt;That approach works.&lt;br&gt;
It’s also wasteful.&lt;/p&gt;

&lt;p&gt;When input shifts gradually, recomputation destroys continuity. The system keeps answering the same questions again, paying the full cost every time.&lt;/p&gt;

&lt;p&gt;Progress happens.&lt;br&gt;
But memory is lost.&lt;/p&gt;
&lt;h2&gt;
  
  
  Sliding Window: Carrying Context Forward
&lt;/h2&gt;

&lt;p&gt;Sliding Window changes the unit of work.&lt;/p&gt;

&lt;p&gt;Instead of recomputing from zero, it asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What changed since the last step?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One element enters the window.&lt;br&gt;
One element leaves.&lt;br&gt;
Everything else stays.&lt;/p&gt;

&lt;p&gt;The algorithm preserves just enough state to keep decisions accurate — without carrying the entire history.&lt;/p&gt;

&lt;p&gt;This isn’t optimization by cleverness.&lt;br&gt;
It’s optimization by restraint.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Core Mechanism
&lt;/h2&gt;

&lt;p&gt;At a structural level, Sliding Window looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;initialize window state

for each new element:
add new element to window
while window violates constraint:
remove old element from window
evaluate window
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The details vary.&lt;br&gt;
The principle doesn’t.&lt;/p&gt;

&lt;p&gt;State is updated incrementally. Nothing is recalculated unless it &lt;em&gt;has&lt;/em&gt; to be.&lt;/p&gt;
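
&lt;p&gt;For a fixed-size window, that incremental update is one addition and one subtraction per step. A minimal sketch:&lt;/p&gt;

```python
def rolling_sums(values, k):
    # Sums of every k-sized window: one element enters, one leaves.
    window_sum = sum(values[:k])
    sums = [window_sum]
    for i in range(k, len(values)):
        window_sum += values[i] - values[i - k]   # incremental update
        sums.append(window_sum)
    return sums
```

&lt;p&gt;Recomputing each window from scratch would cost k work per step; carrying the context forward costs two operations regardless of k.&lt;/p&gt;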

&lt;h2&gt;
  
  
  Why Sliding Window Depends on Time
&lt;/h2&gt;

&lt;p&gt;Sliding Window assumes something fundamental:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Order matters.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The window moves forward, never backward. Decisions are based on &lt;em&gt;recent&lt;/em&gt; information, not global knowledge.&lt;/p&gt;

&lt;p&gt;That’s why the technique fits naturally in problems involving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;streams&lt;/li&gt;
&lt;li&gt;time-series data&lt;/li&gt;
&lt;li&gt;rate limits&lt;/li&gt;
&lt;li&gt;rolling averages&lt;/li&gt;
&lt;li&gt;bounded histories&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When time flows in one direction, sliding windows become inevitable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sliding Window as a Stability Mechanism
&lt;/h2&gt;

&lt;p&gt;Most sliding window problems aren’t about finding something.&lt;/p&gt;

&lt;p&gt;They’re about keeping a condition stable over a moving boundary.&lt;/p&gt;

&lt;p&gt;A maximum that must stay within a range.&lt;br&gt;&lt;br&gt;
A frequency that must not exceed a limit.&lt;br&gt;&lt;br&gt;
A latency that must remain acceptable.&lt;/p&gt;

&lt;p&gt;The window expands when it’s safe.&lt;br&gt;
It contracts when it isn’t.&lt;/p&gt;

&lt;p&gt;That constant adjustment prevents sudden spikes and silent drift.&lt;/p&gt;
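
&lt;p&gt;That expand-and-contract behaviour can be sketched for one concrete constraint: the longest window whose sum stays within a limit. This sketch assumes non-negative values, and the problem itself is illustrative:&lt;/p&gt;

```python
def longest_within(values, limit):
    # Grow the window on the right; shrink it from the left while the
    # constraint is violated. Assumes non-negative values.
    left = 0
    window_sum = 0
    best = 0
    for right, v in enumerate(values):
        window_sum += v                  # window expands when it is safe
        while window_sum > limit:        # it contracts when it is not
            window_sum -= values[left]
            left += 1
        best = max(best, right - left + 1)
    return best
```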

&lt;h2&gt;
  
  
  Fixed vs Variable Windows Isn’t the Point
&lt;/h2&gt;

&lt;p&gt;Fixed-size windows and variable-size windows are usually taught as different patterns.&lt;/p&gt;

&lt;p&gt;They aren’t.&lt;/p&gt;

&lt;p&gt;They’re the same idea applied under different constraints.&lt;/p&gt;

&lt;p&gt;Fixed windows assume stability is guaranteed by size.&lt;br&gt;
Variable windows assume stability must be enforced dynamically.&lt;/p&gt;

&lt;p&gt;In both cases, the window exists to protect context from being lost as the system advances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Sliding Window Breaks Down
&lt;/h2&gt;

&lt;p&gt;Sliding Window relies on locality.&lt;/p&gt;

&lt;p&gt;Recent information must be more relevant than distant information. If that assumption fails, the window loses meaning.&lt;/p&gt;

&lt;p&gt;That’s why the technique struggles with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;problems requiring global history&lt;/li&gt;
&lt;li&gt;non-linear dependencies&lt;/li&gt;
&lt;li&gt;conditions that depend on distant past states&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sliding Window doesn’t fail silently.&lt;br&gt;
It simply stops being applicable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-off It Makes Explicit
&lt;/h2&gt;

&lt;p&gt;Sliding Window trades completeness for continuity.&lt;/p&gt;

&lt;p&gt;It doesn’t remember everything.&lt;br&gt;
It doesn’t explore all possibilities.&lt;br&gt;
It commits to what’s &lt;em&gt;recently relevant&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That trade-off is intentional.&lt;/p&gt;

&lt;p&gt;In systems where time matters, forgetting is not a bug.&lt;br&gt;
It’s a requirement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;The Sliding Window algorithm isn’t about subarrays or pointer tricks.&lt;/p&gt;

&lt;p&gt;It exists to preserve just enough context as time moves forward — preventing recomputation, stabilizing decisions, and keeping systems responsive without losing accuracy.&lt;/p&gt;

&lt;p&gt;That idea shows up everywhere data arrives continuously.&lt;/p&gt;

&lt;p&gt;And that’s why Sliding Window remains a core technique, not a coding shortcut.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxjzklat5ty7b0i93fxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxjzklat5ty7b0i93fxf.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>performance</category>
    </item>
    <item>
      <title>30 Core Algorithm: Ep-03: Depth-First Search (DFS)</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Tue, 30 Dec 2025 04:04:09 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-03-depth-first-search-dfs-2lcn</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-03-depth-first-search-dfs-2lcn</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1drkumyuzzqqi2vqjclo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1drkumyuzzqqi2vqjclo.png" alt="Depth First Search" width="286" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Depth-First Search Is Really About Commitment Under Uncertainty
&lt;/h2&gt;

&lt;p&gt;Depth-First Search is often described as a way to traverse trees or graphs.&lt;/p&gt;

&lt;p&gt;That description is accurate and mostly unhelpful.&lt;/p&gt;

&lt;p&gt;DFS doesn’t exist because we needed another traversal order.&lt;br&gt;&lt;br&gt;
It exists because some systems must &lt;em&gt;commit to a direction&lt;/em&gt; before they can know whether it was the right one.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Real Problem DFS Solves
&lt;/h2&gt;

&lt;p&gt;Imagine exploring a space where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The number of possibilities is large
&lt;/li&gt;
&lt;li&gt;Memory is limited
&lt;/li&gt;
&lt;li&gt;A valid solution may exist deep inside one path
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exploring everything fairly isn’t an option. Waiting for the best choice isn’t possible. You have to move.&lt;/p&gt;

&lt;p&gt;DFS makes a very explicit decision:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Go deep, accept the risk, and recover later if needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That philosophy shows up far beyond algorithms.&lt;/p&gt;
&lt;h2&gt;
  
  
  DFS: Choosing Progress Over Certainty
&lt;/h2&gt;

&lt;p&gt;DFS doesn’t try to minimize distance.&lt;br&gt;&lt;br&gt;
It doesn’t try to guarantee optimality.&lt;br&gt;&lt;br&gt;
It doesn’t try to be fair.&lt;/p&gt;

&lt;p&gt;It prioritizes &lt;em&gt;momentum&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Once a path is chosen, DFS follows it fully. Alternatives are remembered only enough to return to them later. Everything else is ignored for now.&lt;/p&gt;

&lt;p&gt;That choice dramatically reduces what the system needs to hold in memory.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Code Reflects the Philosophy
&lt;/h2&gt;

&lt;p&gt;At a code level, DFS looks almost boring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
def dfs(node):
visit(node)
for neighbor in node.neighbors:
if not seen(neighbor):
dfs(neighbor)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s no global coordination here.&lt;br&gt;&lt;br&gt;
No comparison between paths.&lt;br&gt;&lt;br&gt;
No awareness of what might be better elsewhere.&lt;/p&gt;

&lt;p&gt;The algorithm commits, and trusts backtracking to correct mistakes.&lt;/p&gt;

&lt;p&gt;That’s not accidental — it’s the design.&lt;/p&gt;
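
&lt;p&gt;The same shape works as runnable Python over an adjacency-list dictionary (the graph here is illustrative):&lt;/p&gt;

```python
def dfs(graph, start):
    # Recursive depth-first traversal over an adjacency-list dict;
    # returns nodes in the order they were first visited.
    seen = set()
    order = []
    def walk(node):
        seen.add(node)
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                walk(neighbor)
    walk(start)
    return order
```

&lt;p&gt;Notice that the only memory is the recursion stack plus the seen set: the current path is held fully, and everything off the path is just a mark.&lt;/p&gt;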

&lt;h2&gt;
  
  
  Why Backtracking Is the Real Feature
&lt;/h2&gt;

&lt;p&gt;DFS is rarely valuable because it reaches the end of a path.&lt;/p&gt;

&lt;p&gt;It’s valuable because it can &lt;em&gt;undo&lt;/em&gt; that path cleanly.&lt;/p&gt;

&lt;p&gt;Every recursive return represents a controlled retreat — restoring state, reversing assumptions, and trying the next option without losing context.&lt;/p&gt;

&lt;p&gt;This is why DFS dominates problems involving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Constraint satisfaction
&lt;/li&gt;
&lt;li&gt;Parsing and validation
&lt;/li&gt;
&lt;li&gt;Puzzle solving
&lt;/li&gt;
&lt;li&gt;Dependency resolution
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In these systems, failure is expected. Recovery is the requirement.&lt;/p&gt;

&lt;p&gt;DFS treats failure as part of normal execution.&lt;/p&gt;
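
&lt;p&gt;A small backtracking sketch makes the controlled retreat concrete: search for a subset of values that hits a target, committing to each value and undoing the commitment when the branch fails (a toy example, not a general solver):&lt;/p&gt;

```python
def find_subset(values, target):
    # DFS with backtracking: commit to a value, undo on failure.
    chosen = []
    def explore(i, remaining):
        if remaining == 0:
            return True
        if i == len(values):
            return False
        chosen.append(values[i])              # commit to one branch
        if explore(i + 1, remaining - values[i]):
            return True
        chosen.pop()                          # controlled retreat
        return explore(i + 1, remaining)      # try the other branch
    return chosen if explore(0, target) else None
```

&lt;p&gt;Every failed branch ends in a pop: the state is restored, and the search continues as if the wrong commitment never happened.&lt;/p&gt;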

&lt;h2&gt;
  
  
  Where DFS Quietly Powers Systems
&lt;/h2&gt;

&lt;p&gt;DFS shows up wherever depth matters more than order.&lt;/p&gt;

&lt;p&gt;Compilers resolving nested scopes.&lt;br&gt;&lt;br&gt;
File systems walking directory trees.&lt;br&gt;&lt;br&gt;
Configuration engines applying layered overrides.&lt;br&gt;&lt;br&gt;
Security checks validating permission inheritance.&lt;/p&gt;

&lt;p&gt;In these cases, reaching &lt;em&gt;any&lt;/em&gt; consistent resolution is more important than reaching the best one quickly.&lt;/p&gt;

&lt;p&gt;DFS fits naturally.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Risk DFS Makes You Accept
&lt;/h2&gt;

&lt;p&gt;DFS assumes something dangerous:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That depth is bounded, or failure will eventually occur.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When that assumption is wrong, systems break loudly.&lt;/p&gt;

&lt;p&gt;Stack overflows.&lt;br&gt;&lt;br&gt;
Infinite recursion.&lt;br&gt;&lt;br&gt;
Unbounded exploration.&lt;/p&gt;

&lt;p&gt;Most DFS bugs aren’t logical errors. They’re missing constraints.&lt;/p&gt;

&lt;p&gt;The algorithm will keep going exactly as instructed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-Off DFS Chooses Explicitly
&lt;/h2&gt;

&lt;p&gt;DFS optimizes memory by sacrificing guarantees.&lt;/p&gt;

&lt;p&gt;It reaches deep solutions quickly.&lt;br&gt;&lt;br&gt;
It uses minimal space.&lt;br&gt;&lt;br&gt;
It offers no promise that the solution is optimal — or even close.&lt;/p&gt;

&lt;p&gt;That’s not a flaw.&lt;/p&gt;

&lt;p&gt;DFS is chosen when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory matters more than time
&lt;/li&gt;
&lt;li&gt;Solutions are rare
&lt;/li&gt;
&lt;li&gt;Any valid answer is acceptable
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s an algorithm built for commitment, not comparison.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Depth-First Search isn’t about traversal order.&lt;/p&gt;

&lt;p&gt;It’s about making progress when certainty isn’t available, committing fully to a path, and relying on controlled backtracking to recover from wrong decisions.&lt;/p&gt;

&lt;p&gt;That mindset is why DFS keeps reappearing in systems that value focus over fairness — long after the algorithm itself stops being visible.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>30 Core Algorithm Ep:04- Two Pointers Technique</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Mon, 29 Dec 2025 16:55:24 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep04-two-pointers-technique-2nao</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep04-two-pointers-technique-2nao</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknndyy51j01mgtarkf7t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknndyy51j01mgtarkf7t.jpeg" alt="Two Pointer" width="429" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Why the Two Pointers Technique Is Really About Preventing Drift
&lt;/h1&gt;

&lt;p&gt;Systems that rely on a single moving reference tend to drift.&lt;/p&gt;

&lt;p&gt;They overshoot.&lt;br&gt;
They oscillate.&lt;br&gt;
They keep making locally correct decisions that slowly move them away from what actually matters.&lt;/p&gt;

&lt;p&gt;The Two Pointers technique exists because of that failure mode.&lt;/p&gt;

&lt;p&gt;It isn’t about scanning arrays faster.&lt;br&gt;
It’s about keeping movement stable when decisions are constrained from multiple sides.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Cost of Moving in One Direction
&lt;/h2&gt;

&lt;p&gt;Imagine searching for a condition in a large, ordered structure.&lt;/p&gt;

&lt;p&gt;One obvious approach is to move step by step from one side, checking everything. It’s straightforward. It’s reliable.&lt;/p&gt;

&lt;p&gt;But it ignores something important.&lt;/p&gt;

&lt;p&gt;The structure itself already tells you where &lt;em&gt;not&lt;/em&gt; to look.&lt;/p&gt;

&lt;p&gt;A single pointer only reacts to what it sees locally. Each move is made in isolation, without feedback from the other boundary.&lt;/p&gt;

&lt;p&gt;Progress happens.&lt;br&gt;
But it drifts.&lt;/p&gt;
&lt;h2&gt;
  
  
  Two Pointers: Movement With Feedback
&lt;/h2&gt;

&lt;p&gt;The moment you introduce a second pointer, the problem changes.&lt;/p&gt;

&lt;p&gt;Now movement is no longer absolute.&lt;br&gt;
It’s relative.&lt;/p&gt;

&lt;p&gt;One pointer advances based on what the other reveals.&lt;br&gt;
Decisions are made in pairs, not alone.&lt;/p&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What should I do next?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;the system asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How do these two positions need to move together to stay valid?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That feedback loop is what collapses possibilities quickly — without extra memory or global bookkeeping.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Idea in Code Form
&lt;/h2&gt;

&lt;p&gt;At a structural level, Two Pointers usually looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;left = start
right = end

while left &amp;lt; right:
    if constraint is violated:
        move one pointer inward
    else:
        move the other pointer inward
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logic itself isn’t interesting.&lt;/p&gt;

&lt;p&gt;What matters is that &lt;strong&gt;no pointer moves without being checked against the other&lt;/strong&gt;. Every step corrects for potential drift.&lt;/p&gt;
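&lt;p&gt;A concrete version of that loop, assuming a sorted list and a pair-sum target (a standard illustration, not tied to any particular source; &lt;code&gt;lt(a, b)&lt;/code&gt; tests whether &lt;code&gt;a&lt;/code&gt; is less than &lt;code&gt;b&lt;/code&gt;):&lt;/p&gt;

```python
# Two pointers on a sorted list: each move is decided by feedback from the
# current pair, never by one pointer alone. lt(a, b) means a is less than b.
from operator import lt

def pair_with_sum(arr, target):
    left, right = 0, len(arr) - 1
    while left != right:
        current = arr[left] + arr[right]
        if current == target:
            return (arr[left], arr[right])
        if lt(current, target):
            left += 1    # sum too small: only the left boundary can raise it
        else:
            right -= 1   # sum too large: only the right boundary can lower it
    return None
```

&lt;p&gt;Each step discards every pair that the move makes impossible, which is why the scan finishes in a single pass.&lt;/p&gt;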

&lt;h2&gt;
  
  
  Why Ordering Matters So Much
&lt;/h2&gt;

&lt;p&gt;Two Pointers only works when movement has meaning.&lt;/p&gt;

&lt;p&gt;Sorted arrays.&lt;br&gt;&lt;br&gt;
Monotonic sequences.&lt;br&gt;&lt;br&gt;
Structured boundaries.&lt;/p&gt;

&lt;p&gt;Without order, pointer movement becomes guesswork. With order, each move eliminates an entire class of invalid states.&lt;/p&gt;

&lt;p&gt;That’s why the technique feels almost magical in the right context — and completely useless in the wrong one.&lt;/p&gt;

&lt;p&gt;The power doesn’t come from the pointers.&lt;br&gt;
It comes from the guarantees the data provides.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Pointers as a Constraint Stabilizer
&lt;/h2&gt;

&lt;p&gt;Most Two Pointer problems aren’t really about searching.&lt;/p&gt;

&lt;p&gt;They’re about maintaining an invariant.&lt;/p&gt;

&lt;p&gt;A sum that must stay within bounds.&lt;br&gt;&lt;br&gt;
A window that must satisfy a condition.&lt;br&gt;&lt;br&gt;
A distance that must not exceed a limit.&lt;/p&gt;

&lt;p&gt;The pointers don’t explore.&lt;br&gt;
They rebalance.&lt;/p&gt;

&lt;p&gt;When one side violates the constraint, the other compensates. The system continuously nudges itself back toward validity.&lt;/p&gt;

&lt;p&gt;This isn’t exploration.&lt;br&gt;
It’s stabilization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sliding Windows Are a Negotiation
&lt;/h2&gt;

&lt;p&gt;Sliding window techniques are grouped with Two Pointers for a reason.&lt;/p&gt;

&lt;p&gt;The left pointer tightens the constraint.&lt;br&gt;&lt;br&gt;
The right pointer relaxes it.&lt;/p&gt;

&lt;p&gt;Neither move makes sense alone.&lt;/p&gt;

&lt;p&gt;Together, they negotiate — expanding, contracting, and settling around the condition that matters.&lt;/p&gt;

&lt;p&gt;That negotiation is the algorithm.&lt;/p&gt;
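&lt;p&gt;The same negotiation in runnable form, as a shortest-window-with-sum sketch (the problem choice is illustrative; &lt;code&gt;lt(a, b)&lt;/code&gt; again stands for "a is less than b"):&lt;/p&gt;

```python
# Sliding window: the right pointer relaxes the constraint by admitting
# elements; the left pointer tightens it by giving them back.
from operator import lt

def shortest_window_with_sum(arr, target):
    best = None
    window_sum = 0
    left = 0
    for right in range(len(arr)):
        window_sum += arr[right]           # relax: admit one more element
        while not lt(window_sum, target):  # condition currently satisfied
            length = right - left + 1
            best = length if best is None else min(best, length)
            window_sum -= arr[left]        # tighten: shrink from the left
            left += 1
    return best
```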

&lt;h2&gt;
  
  
  Where Two Pointers Break Down
&lt;/h2&gt;

&lt;p&gt;Two Pointers assumes cooperation.&lt;/p&gt;

&lt;p&gt;The data must respond predictably when pointers move. If the structure doesn’t change monotonically, the feedback loop collapses.&lt;/p&gt;

&lt;p&gt;That’s why the technique struggles with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unsorted data
&lt;/li&gt;
&lt;li&gt;Non-monotonic conditions
&lt;/li&gt;
&lt;li&gt;Problems requiring global context
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When local moves don’t reliably improve or worsen a condition, pointer movement loses its signal.&lt;/p&gt;

&lt;p&gt;The technique isn’t fragile.&lt;br&gt;
It’s explicit about its limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-off It Makes Explicit
&lt;/h2&gt;

&lt;p&gt;Two Pointers optimizes space and time by sacrificing generality.&lt;/p&gt;

&lt;p&gt;It doesn’t explore everything.&lt;br&gt;
It doesn’t backtrack.&lt;br&gt;
It doesn’t recover from bad assumptions.&lt;/p&gt;

&lt;p&gt;But when the constraints hold, it reaches valid answers with almost no overhead.&lt;/p&gt;

&lt;p&gt;That trade-off is deliberate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;The Two Pointers technique isn’t a shortcut.&lt;/p&gt;

&lt;p&gt;It exists to prevent uncontrolled drift when decisions are constrained from multiple sides. Every movement is checked, every step balanced against another boundary.&lt;/p&gt;

&lt;p&gt;That idea shows up far beyond arrays — anywhere systems must move forward without losing stability.&lt;/p&gt;

&lt;p&gt;And that’s why Two Pointers remains a core technique, not a clever trick.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>30 Core Algorithm: EP-02:Breadth-First Search (BFS)</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Mon, 29 Dec 2025 14:05:23 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-02breadth-first-search-bfs-2nb8</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithm-ep-02breadth-first-search-bfs-2nb8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zvoyctyljuxrnnj3bv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zvoyctyljuxrnnj3bv0.png" alt="BFS Image" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Breadth-First Search Is Really About Fairness
&lt;/h2&gt;

&lt;p&gt;Breadth-First Search is often introduced as a traversal technique.&lt;br&gt;&lt;br&gt;
Queues, levels, neighbors: the mechanics are clear, the examples are simple.&lt;/p&gt;

&lt;p&gt;That framing misses what BFS is actually good at.&lt;/p&gt;

&lt;p&gt;BFS isn’t important because it visits everything.&lt;br&gt;&lt;br&gt;
It’s important because it decides &lt;em&gt;when&lt;/em&gt; things deserve attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Problem BFS Solves
&lt;/h2&gt;

&lt;p&gt;Imagine a system exploring possibilities.&lt;/p&gt;

&lt;p&gt;It could be states in a game.&lt;br&gt;&lt;br&gt;
Pages on the web.&lt;br&gt;&lt;br&gt;
Routes in a network.&lt;br&gt;&lt;br&gt;
Messages in a distributed system.&lt;/p&gt;

&lt;p&gt;There are many directions to go, and each step opens up even more options.&lt;/p&gt;

&lt;p&gt;The naive approach is to follow one path until it ends.&lt;/p&gt;

&lt;p&gt;That feels efficient. It’s focused. It’s easy to implement.&lt;/p&gt;

&lt;p&gt;It’s also unfair.&lt;/p&gt;

&lt;p&gt;Some paths get all the attention early.&lt;br&gt;&lt;br&gt;
Others wait indefinitely, even if they were just as promising.&lt;/p&gt;

&lt;p&gt;BFS exists to prevent that kind of starvation.&lt;/p&gt;

&lt;h2&gt;
  
  
  BFS: Progress Without Favoritism
&lt;/h2&gt;

&lt;p&gt;Breadth-First Search makes a simple promise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every option gets a turn, in the order it becomes available.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of diving deep, BFS expands outward.&lt;br&gt;&lt;br&gt;
One layer at a time. One step further for everyone.&lt;/p&gt;

&lt;p&gt;This changes the nature of exploration.&lt;/p&gt;

&lt;p&gt;You’re no longer asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“How far can I go down this path?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You’re asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What can I reach with one more step, no matter where I started?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That shift guarantees something subtle but powerful.&lt;/p&gt;

&lt;p&gt;The first time you reach a node, you have reached it &lt;em&gt;as early as possible&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Shortest Paths Fall Out Naturally
&lt;/h2&gt;

&lt;p&gt;BFS doesn’t chase optimality.&lt;br&gt;&lt;br&gt;
It doesn’t calculate costs.&lt;br&gt;&lt;br&gt;
It doesn’t compare alternatives.&lt;/p&gt;

&lt;p&gt;And yet, in unweighted graphs, it reliably finds the shortest path.&lt;/p&gt;

&lt;p&gt;Not because it’s smart — but because it’s disciplined.&lt;/p&gt;

&lt;p&gt;By exploring level by level, BFS ensures that a longer path can never sneak in before a shorter one. Depth never gets a chance to dominate breadth.&lt;/p&gt;

&lt;p&gt;Shortest paths aren’t a goal.&lt;br&gt;&lt;br&gt;
They’re a consequence of fairness.&lt;/p&gt;
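&lt;p&gt;The discipline fits in a few lines. A minimal sketch, assuming an adjacency-list graph (names are illustrative):&lt;/p&gt;

```python
# Level-by-level expansion via a FIFO queue. The first time a node is
# reached, its recorded distance is already the minimum hop count.
from collections import deque

def bfs_distances(graph, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:  # first arrival wins, and it is earliest
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist
```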

&lt;h2&gt;
  
  
  BFS as a Scheduling Strategy
&lt;/h2&gt;

&lt;p&gt;Outside of algorithms, BFS shows up as behavior.&lt;/p&gt;

&lt;p&gt;Task schedulers giving equal time slices.&lt;br&gt;&lt;br&gt;
Message queues processing in arrival order.&lt;br&gt;&lt;br&gt;
Network broadcasts spreading hop by hop.&lt;br&gt;&lt;br&gt;
Web crawlers expanding outward from seed URLs.&lt;/p&gt;

&lt;p&gt;In each case, the system isn’t trying to be clever.&lt;/p&gt;

&lt;p&gt;It’s trying to be predictable.&lt;/p&gt;

&lt;p&gt;BFS says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Everyone who’s waiting gets served before anyone goes deeper.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That property matters more than speed in many real systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where BFS Breaks Down
&lt;/h2&gt;

&lt;p&gt;BFS is honest about its costs.&lt;/p&gt;

&lt;p&gt;It remembers everything it has seen.&lt;br&gt;&lt;br&gt;
Frontiers grow wide. Memory usage climbs quickly.&lt;/p&gt;

&lt;p&gt;That’s the trade-off.&lt;/p&gt;

&lt;p&gt;BFS spends space to guarantee fairness and minimal delay.&lt;br&gt;&lt;br&gt;
Depth-first approaches spend fairness to save space.&lt;/p&gt;

&lt;p&gt;When BFS fails in production, it’s rarely because the algorithm is wrong.&lt;/p&gt;

&lt;p&gt;It’s because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The state space exploded faster than expected&lt;/li&gt;
&lt;li&gt;Memory limits were underestimated&lt;/li&gt;
&lt;li&gt;The graph wasn’t as shallow as assumed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, the failure isn’t subtle. BFS exposes system constraints early.&lt;/p&gt;

&lt;h2&gt;
  
  
  BFS Isn’t About Traversal
&lt;/h2&gt;

&lt;p&gt;Traversal is just the surface.&lt;/p&gt;

&lt;p&gt;What BFS really provides is a guarantee:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No path is ignored&lt;/li&gt;
&lt;li&gt;No shortcut cuts the line&lt;/li&gt;
&lt;li&gt;No outcome arrives earlier than it deserves to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That guarantee is why BFS keeps reappearing — not just in textbooks, but in systems that care about order, latency, and fairness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Breadth-First Search isn’t about visiting nodes.&lt;/p&gt;

&lt;p&gt;It’s about refusing to prioritize depth over opportunity.&lt;/p&gt;

&lt;p&gt;That mindset shows up wherever systems must explore, schedule, or expand without bias.&lt;/p&gt;

&lt;p&gt;And that’s why BFS remains relevant long after people stop thinking of it as just another graph algorithm.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>30 Core Algorithm:EP-01: Binary Search</title>
      <dc:creator>Aditya singh</dc:creator>
      <pubDate>Mon, 29 Dec 2025 13:52:54 +0000</pubDate>
      <link>https://dev.to/aditya_singh_172b37651201/30-core-algorithmep-01-binary-search-4blf</link>
      <guid>https://dev.to/aditya_singh_172b37651201/30-core-algorithmep-01-binary-search-4blf</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpy7otkhi59aq2ocba5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpy7otkhi59aq2ocba5h.png" alt=" " width="275" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Binary Search Is Really About Reducing Uncertainty
&lt;/h2&gt;

&lt;p&gt;The first time Binary Search shows up in a developer’s life, it’s usually framed as a warm-up problem. Something simple. Something solved once and forgotten.&lt;/p&gt;

&lt;p&gt;That framing undersells it.&lt;/p&gt;

&lt;p&gt;Binary Search isn’t important because it’s fast.&lt;br&gt;&lt;br&gt;
It’s important because it captures a way of reasoning under uncertainty that real systems depend on.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Naive Way: Learning Too Slowly
&lt;/h2&gt;

&lt;p&gt;Imagine a system holding a large, ordered dataset. You’re looking for a single value.&lt;/p&gt;

&lt;p&gt;The most straightforward approach is to scan sequentially and compare each element until you either find what you’re looking for or reach the end. This works. It’s also easy to reason about.&lt;/p&gt;

&lt;p&gt;But there’s a hidden cost.&lt;/p&gt;

&lt;p&gt;Each comparison gives you almost no new information. After checking thousands of values, you still don’t know where the target lies relative to the remaining data. You’ve only ruled out what you’ve already seen.&lt;/p&gt;

&lt;p&gt;The problem isn’t correctness.&lt;br&gt;&lt;br&gt;
It’s how slowly certainty improves.&lt;/p&gt;
&lt;h2&gt;
  
  
  Binary Search: Discarding Possibilities Aggressively
&lt;/h2&gt;

&lt;p&gt;Binary Search changes the nature of the question.&lt;/p&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Is this the value I want?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;it asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Which half can I safely ignore?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That single shift makes every comparison meaningful.&lt;/p&gt;

&lt;p&gt;By checking the midpoint, the algorithm doesn’t just test a value — it eliminates an entire range of possibilities. One decision cuts uncertainty in half. Repeating that decision compounds quickly.&lt;/p&gt;

&lt;p&gt;After a handful of comparisons, the search space collapses.&lt;/p&gt;

&lt;p&gt;At a code level, this idea is almost uninteresting. The logic itself is plain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
python
low, high = 0, len(arr) - 1

while low &amp;lt;= high:
    mid = (low + high) // 2

    if arr[mid] == target:
        break
    elif arr[mid] &amp;lt; target:
        low = mid + 1
    else:
        high = mid - 1 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s nothing clever here.&lt;/p&gt;

&lt;p&gt;The entire advantage comes from what the loop removes from consideration each time it runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sorted Data Is the Real Constraint
&lt;/h2&gt;

&lt;p&gt;Binary Search doesn’t require speed.&lt;br&gt;&lt;br&gt;
It requires order.&lt;/p&gt;

&lt;p&gt;The algorithm only works because the data obeys a reliable rule: values increase (or decrease) consistently. That guarantee lets the system draw conclusions from a single comparison.&lt;/p&gt;

&lt;p&gt;In real systems, this shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time-ordered events
&lt;/li&gt;
&lt;li&gt;Versioned records
&lt;/li&gt;
&lt;li&gt;Thresholds and limits
&lt;/li&gt;
&lt;li&gt;Ranges and partitions
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whenever a system can confidently say &lt;em&gt;“everything on this side is irrelevant”&lt;/em&gt;, Binary Search becomes possible — even if no array is involved.&lt;/p&gt;
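&lt;p&gt;For example, Python’s standard &lt;code&gt;bisect&lt;/code&gt; module applies exactly this idea to time-ordered events (the data here is made up for illustration):&lt;/p&gt;

```python
# bisect_left finds the first event at or after a cutoff without scanning:
# the sort order lets it discard half the remaining range per comparison.
from bisect import bisect_left

timestamps = [100, 205, 310, 420, 530]  # sorted event times
cutoff = 300

first_index = bisect_left(timestamps, cutoff)
recent = timestamps[first_index:]       # everything earlier is irrelevant
```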

&lt;h2&gt;
  
  
  Binary Search as a Boundary Finder
&lt;/h2&gt;

&lt;p&gt;In production systems, Binary Search rarely appears as an explicit loop.&lt;/p&gt;

&lt;p&gt;Instead, it shows up as behavior.&lt;/p&gt;

&lt;p&gt;Databases narrowing down index ranges.&lt;br&gt;&lt;br&gt;
Schedulers probing capacity limits.&lt;br&gt;&lt;br&gt;
Rate limiters adjusting thresholds.&lt;br&gt;&lt;br&gt;
Feature rollouts determining cutoff points.&lt;/p&gt;

&lt;p&gt;In each case, the system isn’t searching for a value.&lt;/p&gt;

&lt;p&gt;It’s searching for a boundary — the point where behavior should change.&lt;/p&gt;

&lt;p&gt;Binary Search is the simplest reliable way to find that boundary without testing every option.&lt;/p&gt;
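&lt;p&gt;A hedged sketch of that boundary search, phrased over a monotone check rather than an array (the check shown is hypothetical; real systems probe capacity, versions, or rollout cutoffs):&lt;/p&gt;

```python
# Binary search over behavior: find the smallest value where a monotone
# check flips from False to True, using a half-open range [low, high).
def find_boundary(check, low, high):
    # Returns the first value where check is True, or high if it never is.
    # Assumes check is monotone: a run of False, then a run of True.
    while low != high:
        mid = (low + high) // 2
        if check(mid):
            high = mid      # boundary is at mid or before it
        else:
            low = mid + 1   # boundary is strictly after mid
    return low
```

&lt;p&gt;The half-open form avoids the classic off-by-one traps: the loop ends exactly when the range is empty, and the answer is whatever &lt;code&gt;low&lt;/code&gt; settled on.&lt;/p&gt;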

&lt;h2&gt;
  
  
  Why Binary Search Bugs Are Usually System Bugs
&lt;/h2&gt;

&lt;p&gt;Most Binary Search failures aren’t algorithmic.&lt;/p&gt;

&lt;p&gt;They happen because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The data isn’t actually sorted
&lt;/li&gt;
&lt;li&gt;Boundary conditions changed silently
&lt;/li&gt;
&lt;li&gt;Assumptions about ordering were violated
&lt;/li&gt;
&lt;li&gt;Concurrent updates broke invariants
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When Binary Search gives the wrong result, it’s often exposing a deeper system issue.&lt;/p&gt;

&lt;p&gt;The algorithm is unforgiving in that way. It assumes the rules are true — and fails loudly when they aren’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-off Hidden Behind the Speed
&lt;/h2&gt;

&lt;p&gt;Binary Search optimizes reads, not writes.&lt;/p&gt;

&lt;p&gt;Keeping data ordered costs something. Inserts become slower. Rebalancing is required. Indexes consume memory. Systems pay that cost upfront so that queries can make decisions quickly later.&lt;/p&gt;

&lt;p&gt;This trade-off appears everywhere in backend design:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Spend effort maintaining structure so decisions can be cheap.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Binary Search is the smallest expression of that idea.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Binary Search isn’t just about finding elements efficiently.&lt;/p&gt;

&lt;p&gt;It’s about structuring information so uncertainty collapses quickly.&lt;/p&gt;

&lt;p&gt;That’s why it survives beyond arrays.&lt;br&gt;&lt;br&gt;
That’s why it appears in systems that don’t look algorithmic at all.&lt;br&gt;&lt;br&gt;
And that’s why it remains relevant long after developers stop thinking of it as a beginner problem.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>beginners</category>
      <category>computerscience</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
