<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cfir Aguston</title>
    <description>The latest articles on DEV Community by Cfir Aguston (@cfir_aguston_f751a11907c2).</description>
    <link>https://dev.to/cfir_aguston_f751a11907c2</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3550257%2Fd5dd5c0a-cff3-4418-bedd-8ef3c70fd08f.jpeg</url>
      <title>DEV Community: Cfir Aguston</title>
      <link>https://dev.to/cfir_aguston_f751a11907c2</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cfir_aguston_f751a11907c2"/>
    <language>en</language>
    <item>
      <title>A Date That Never Existed</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sat, 14 Mar 2026 22:00:00 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/a-date-that-never-existed-3pag</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/a-date-that-never-existed-3pag</guid>
      <description>&lt;p&gt;If you enter February 29, 1900 into Microsoft Excel, the program will accept it without complaint.&lt;/p&gt;

&lt;p&gt;It will sort it, compare it with other dates, and use it in calculations just like any other value.&lt;/p&gt;

&lt;p&gt;There is only one problem.&lt;/p&gt;

&lt;p&gt;That day never existed.&lt;/p&gt;

&lt;p&gt;According to the Gregorian calendar, 1900 was not a leap year. Century years are leap years only if they are divisible by 400. The year 2000 qualifies. The year 1900 does not. The calendar moved directly from February 28 to March 1.&lt;/p&gt;
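
&lt;p&gt;The rule is short enough to write down. A minimal sketch (illustrative code, not Excel’s implementation):&lt;/p&gt;

```python
def is_gregorian_leap(year):
    # century years are leap years only when divisible by 400
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

print(is_gregorian_leap(2000))  # True
print(is_gregorian_leap(1900))  # False: February 1900 had 28 days
```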

&lt;p&gt;Yet inside Excel, February 29, 1900 exists as a valid day.&lt;/p&gt;

&lt;p&gt;So how did this happen?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3drlcq1k1h0i60i3ebi3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3drlcq1k1h0i60i3ebi3.webp" alt="Excel’s serial date system in action. February 29, 1900 appears as a real day with its own serial number, quietly shifting every date that follows."&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;The Original Spreadsheet Mistake&lt;/h2&gt;

&lt;p&gt;Early spreadsheets needed an efficient way to handle dates. Instead of storing them as text, they stored them as numbers representing days since a starting point.&lt;/p&gt;

&lt;p&gt;This made calculations simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding days became addition&lt;/li&gt;
&lt;li&gt;Finding differences became subtraction&lt;/li&gt;
&lt;li&gt;Sorting dates became numeric sorting&lt;/li&gt;
&lt;/ul&gt;
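
&lt;p&gt;A minimal sketch of the idea, assuming an illustrative epoch of December 31, 1899 (Python’s calendar follows the real Gregorian rules, so its serial numbers diverge from Excel’s at the phantom day):&lt;/p&gt;

```python
from datetime import date

# dates stored as day counts since an epoch (illustrative, not Excel's scheme)
epoch = date(1899, 12, 31)

def serial(d):
    return (d - epoch).days        # a date becomes a plain integer

feb28 = serial(date(1900, 2, 28))  # 59
mar01 = serial(date(1900, 3, 1))   # 60: the real calendar has no Feb 29, 1900
print(mar01 - feb28)               # 1: finding differences is subtraction
print(feb28 + 1 == mar01)          # True: adding days is plain addition
# Excel, by contrast, assigns March 1, 1900 serial 61, because it reserves
# serial 60 for the nonexistent February 29, 1900.
```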

&lt;p&gt;When Lotus 1-2-3, the dominant spreadsheet of the early 1980s, implemented this system, it made a small mistake.&lt;/p&gt;

&lt;p&gt;It treated 1900 as a leap year. That meant its internal calendar included a day that never existed.&lt;/p&gt;

&lt;h2&gt;When Compatibility Beats Correctness&lt;/h2&gt;

&lt;p&gt;When Microsoft later created Excel, Lotus 1-2-3 already dominated the spreadsheet world.&lt;/p&gt;

&lt;p&gt;Businesses had enormous numbers of spreadsheets built with Lotus. Models, templates, and financial calculations all depended on Lotus behavior.&lt;/p&gt;

&lt;p&gt;If Excel had fixed the leap-year logic, imported spreadsheets would produce different results. Dates would shift. Calculations might silently change.&lt;/p&gt;

&lt;p&gt;So Microsoft made a deliberate decision: Excel would reproduce the Lotus bug exactly.&lt;/p&gt;

&lt;p&gt;The result is a strange artifact that still exists today: a phantom day in February 1900 that only exists inside spreadsheets.&lt;/p&gt;

&lt;h2&gt;The Real Lesson&lt;/h2&gt;

&lt;p&gt;This story is not really about calendars. It is about how software evolves.&lt;/p&gt;

&lt;p&gt;A small implementation decision in the early 1980s became:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a compatibility requirement.&lt;/li&gt;
&lt;li&gt;a permanent behavior.&lt;/li&gt;
&lt;li&gt;a contract that millions of spreadsheets depend on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Software remembers its past, even when that past includes mistakes.&lt;/p&gt;

&lt;p&gt;Sometimes the safest fix is not fixing it at all.&lt;/p&gt;

&lt;p&gt;You can read the full story, including technical details and lessons, here:&lt;br&gt;
&lt;a href="https://medium.com/gitconnected/a-date-that-never-existed-ab3fac907cb9" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;A Date That Never Existed&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>software</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>How “Reliable” Systems Almost Started a Nuclear War</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sat, 14 Mar 2026 05:58:26 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/how-reliable-systems-almost-started-a-nuclear-war-56gg</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/how-reliable-systems-almost-started-a-nuclear-war-56gg</guid>
      <description>&lt;p&gt;In June 1980, shortly after midnight, computers at a U.S. strategic command center suddenly showed something terrifying: incoming nuclear missiles.&lt;/p&gt;

&lt;p&gt;More launches kept appearing on the screen. Crews rushed to their B-52 bombers. Missile units were placed on higher alert.&lt;/p&gt;

&lt;p&gt;For a few tense minutes, the early steps of nuclear war had already begun.&lt;/p&gt;

&lt;p&gt;But at NORAD, the central warning hub, radar and satellites saw nothing.&lt;/p&gt;

&lt;p&gt;After comparing the data, commanders realized: the attack did not exist.&lt;/p&gt;

&lt;h2&gt;The Real Cause&lt;/h2&gt;

&lt;p&gt;The problem turned out to be a failed integrated circuit in a Data General computer that handled communication between command centers.&lt;/p&gt;

&lt;p&gt;To keep links healthy, the system constantly sent test messages that mimicked real alerts but always reported zero missiles detected.&lt;/p&gt;

&lt;p&gt;When the chip failed, those zeros became random numbers. The system interpreted them as incoming missiles.&lt;/p&gt;

&lt;p&gt;Worse, the messages had no error checks, so the corrupted data was accepted as valid.&lt;/p&gt;
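
&lt;p&gt;A toy illustration of what even a trivial integrity check would have caught (the message layout and checksum here are hypothetical; the real format is not public):&lt;/p&gt;

```python
# hypothetical message: a missile count plus the integrity field the
# real system lacked
def checksum(count):
    return count ^ 0xFFFF          # trivial ones'-complement-style check

msg = {"count": 0, "check": checksum(0)}   # healthy "zero missiles" message

msg["count"] ^= 0x0200             # a failing chip flips a bit in transit

if checksum(msg["count"]) != msg["check"]:
    print("rejected: corrupted message")   # even a simple check catches the flip
else:
    print("accepted:", msg["count"], "missiles inbound")
```

Without the check, the flipped bit reads as a nonzero missile count and the message is indistinguishable from a real alert.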

&lt;h2&gt;When “Reliable” Systems Fail&lt;/h2&gt;

&lt;p&gt;What makes this story interesting is that the system wasn’t broken in the usual sense. It was working exactly according to its specification.&lt;/p&gt;

&lt;p&gt;But the real world did not match the assumptions the system was built on.&lt;/p&gt;

&lt;p&gt;And in systems where decisions must happen in less than a second, that gap can become dangerous.&lt;/p&gt;

&lt;p&gt;You can read the full story, including technical details and lessons, here:&lt;br&gt;
&lt;a href="https://medium.com/gitconnected/how-reliable-systems-almost-started-a-nuclear-war-5ccc6c5012b2" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;How “Reliable” Systems Almost Started a Nuclear War&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>systems</category>
      <category>history</category>
    </item>
    <item>
      <title>How Developers Really Learn: 10 Lessons Backed by Science</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sat, 15 Nov 2025 09:03:46 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/how-developers-really-learn-10-lessons-backed-by-science-57pa</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/how-developers-really-learn-10-lessons-backed-by-science-57pa</guid>
      <description>&lt;p&gt;As developers we often assume learning means “look it up → copy → done”. But real learning is deeper and science shows how we learn best. Here are 10 evidence-based lessons tailored for developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Human Memory Is Not Made Of Bits&lt;/strong&gt;&lt;br&gt;
Unlike a computer’s memory, our brains don’t simply store and retrieve fixed chunks. When we recall something, memory can change (“reconsolidation”). Also, memories are connected in networks (“spreading activation”) so recall isn’t isolated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Human Memory Is Composed Of One Limited And One Unlimited System&lt;/strong&gt;&lt;br&gt;
We have working memory (small capacity) and long-term memory (effectively huge). Experts free up working memory by “chunking” knowledge (e.g., recognizing patterns).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Experts Recognise, Beginners Reason&lt;/strong&gt;&lt;br&gt;
Beginners often step through code line-by-line. Experts recognise patterns and jump straight to solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Understanding A Concept Goes From Abstract To Concrete And Back&lt;/strong&gt;&lt;br&gt;
Learning works best when you start with the concept, dive into concrete examples (good &amp;amp; bad), then return to the abstract view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Spacing And Repetition Matter&lt;/strong&gt;&lt;br&gt;
Last-minute studying feels productive, but it doesn’t work. Spaced practice plus revisiting topics over time beats marathon sessions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. The Internet Has Not Made Learning Obsolete&lt;/strong&gt;&lt;br&gt;
Just because you “can Google it” doesn’t mean you’ve learned it. Searching is not the same as understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Problem-Solving Is Not A Generic Skill&lt;/strong&gt;&lt;br&gt;
Doing puzzles or “brain teasers” doesn’t necessarily make you better at programming. The domain matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Expertise Can Be Problematic In Some Situations&lt;/strong&gt;&lt;br&gt;
What makes you an expert also can blind you (you assume patterns that newcomers don’t see). Teaching beginners or learning new domains can suffer from expert assumptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. The Predictors Of Programming Ability Are Unclear&lt;/strong&gt;&lt;br&gt;
There’s no reliable shortcut (years of experience, IQ, etc.) that guarantees programming ability. Good practitioners come from diverse paths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Your Mindset Matters&lt;/strong&gt;&lt;br&gt;
Believing you can improve (growth mindset) helps. But mindset alone isn’t enough; you still need practice, feedback, and reflection.&lt;/p&gt;

&lt;p&gt;As developers, our learning toolkit should include code reading, active practice, spaced revisits, abstract/concrete toggling, and a realistic view of memory &amp;amp; expertise. The Internet and AI are tools, not shortcuts to understanding. Use them, but don’t rely on them.&lt;/p&gt;

&lt;p&gt;The full breakdown, including all lessons and examples, is available here:&lt;br&gt;
&lt;a href="https://medium.com/gitconnected/how-developers-really-learn-10-lessons-backed-by-science-23a0cf6c9c3a" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;How Developers Really Learn: 10 Lessons Backed by Science&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>learning</category>
      <category>softwaredevelopment</category>
      <category>productivity</category>
      <category>developers</category>
    </item>
    <item>
      <title>When Floating-Point Failed Catastrophically</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sun, 12 Oct 2025 11:10:56 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/when-floating-point-failed-catastrophically-pid</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/when-floating-point-failed-catastrophically-pid</guid>
      <description>&lt;p&gt;IEEE-754 gave us a common way to handle decimals on computers. But even with a standard, tiny numeric mistakes can grow into big failures. Here are four famous ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) The Patriot Missile Failure (1991)&lt;/strong&gt;&lt;br&gt;
A U.S. Patriot missile system failed to intercept an Iraqi Scud missile, killing 28 soldiers. The software tracked time in tenths of a second but stored it as an integer.&lt;br&gt;
To convert it into seconds, the code used a floating-point calculation that wasn’t exact. Over 100 hours of operation, the tiny rounding error accumulated into a 0.34-second drift, enough for the missile to miss its target.&lt;/p&gt;
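
&lt;p&gt;The arithmetic of the drift can be sketched directly, assuming the fixed-point layout that matches the widely cited chopping error of about 9.5e-8 seconds per tick:&lt;/p&gt;

```python
# 0.1 has no finite binary representation; chopping it to the register's
# precision (assumed layout matching the cited ~9.5e-8 figure) loses a
# tiny amount on every tick.
stored = int(0.1 * 2**21) / 2**21      # 0.1 truncated, not rounded
error_per_tick = 0.1 - stored          # ~9.54e-8 seconds lost per 0.1 s tick
ticks = 100 * 3600 * 10                # tenth-of-second ticks in 100 hours
drift = ticks * error_per_tick
print(round(drift, 2))                 # ~0.34 seconds of clock drift
```

At Scud speeds, a third of a second of clock error moves the expected intercept point by hundreds of meters.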

&lt;p&gt;&lt;strong&gt;2) The Pentium FDIV Bug (1994)&lt;/strong&gt;&lt;br&gt;
Intel’s Pentium had a flaw in its floating-point division hardware. A few entries in a lookup table were missing, so certain divisions returned slightly wrong answers (often far out in the decimal places). For most users the errors were rare, but for scientific and financial work they mattered. After public pressure, Intel replaced affected chips and took a huge loss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Ariane 5 Flight 501 (1996)&lt;/strong&gt;&lt;br&gt;
Ariane 5 reused code from Ariane 4. During flight, a 64-bit floating value (horizontal velocity) was converted to a 16-bit integer and overflowed. The guidance system crashed, the backup did the same, and the rocket self-destructed 37 seconds after launch.&lt;/p&gt;
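
&lt;p&gt;A sketch of the failure mode and of the guard that was missing (names and values are illustrative; the original flight code was Ada):&lt;/p&gt;

```python
INT16_MAX = 32767.0
INT16_MIN = -32768.0

def to_int16_unchecked(x):
    # wraps around like a raw narrowing conversion, instead of failing safely
    return (int(x) + 32768) % 65536 - 32768

def to_int16_checked(x):
    # if clamping x into the 16-bit range changes it, it did not fit
    if min(max(x, INT16_MIN), INT16_MAX) == x:
        return int(x)
    raise OverflowError("value does not fit in a 16-bit integer")

velocity = 40000.0                  # a trajectory Ariane 4 code never anticipated
wrapped = to_int16_unchecked(velocity)
print(wrapped)                      # -25536: nonsense handed to guidance
try:
    to_int16_checked(velocity)
except OverflowError:
    print("overflow caught")        # the range check the reused code lacked
```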

&lt;p&gt;&lt;strong&gt;4) Vancouver Stock Exchange Index (1980s)&lt;/strong&gt;&lt;br&gt;
The index was updated after each trade, but the value was truncated to three decimals every time. Tiny losses stacked up, and over many updates the index drifted far below its true value. When they fixed the math and recalculated with more precision, the index jumped back to where it should have been.&lt;/p&gt;

&lt;p&gt;What engineers should remember:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Precision, rounding and overflow are design choices, not footnotes.&lt;/li&gt;
&lt;li&gt;Long-running systems amplify tiny numeric errors.&lt;/li&gt;
&lt;li&gt;Reused code must be re-validated for new data ranges.&lt;/li&gt;
&lt;li&gt;In safety-critical and financial systems, prove the math, don’t assume it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read the full story, including technical details and lessons, here:&lt;br&gt;
&lt;a href="https://medium.com/gitconnected/when-floating-point-failed-catastrophically-a630110ec94c" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;When Floating-Point Failed Catastrophically&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>software</category>
      <category>programming</category>
      <category>techhistory</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Oops I goto It Again</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sun, 12 Oct 2025 11:07:07 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/oops-i-goto-it-again-18ee</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/oops-i-goto-it-again-18ee</guid>
      <description>&lt;p&gt;Back in 2014, Apple’s SSL code had a tiny flaw: just one extra &lt;code&gt;goto fail;&lt;/code&gt; line, that broke secure connections. On a public Wi-Fi, your iPhone’s “lock” icon could lie.&lt;/p&gt;

&lt;p&gt;That duplicated &lt;code&gt;goto&lt;/code&gt; made a security check skip critical steps. Attackers could intercept, tamper with or inject data — a man-in-the-middle attack.&lt;/p&gt;
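
&lt;p&gt;A simplified analogue (Apple’s bug was in C, where the duplicated &lt;code&gt;goto fail;&lt;/code&gt; looked guarded by indentation but was not; the same “dead code after an early exit” shape can be sketched in any language):&lt;/p&gt;

```python
def verify(signature_ok):
    # err still holds 0 from the last check that actually ran
    err = 0
    if err != 0:
        return err
    return err               # copy-pasted early exit: success reported here...
    if not signature_ok:     # ...so the real signature check is dead code
        err = 1
    return err

print(verify(signature_ok=False))   # 0: "verified" even with a bad signature
```

The function reports success because the last executed check passed; the one check that would have failed is simply never reached.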

&lt;p&gt;Why was this bug so terrifying?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The bug was trivial to miss: very small, very innocent looking.&lt;/li&gt;
&lt;li&gt;It bypassed core SSL verification logic.&lt;/li&gt;
&lt;li&gt;Because Apple’s SSL was trusted deeply in iOS/macOS, the impact was broad.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What engineers can learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity can be dangerous. Tiny code mistakes may have massive security consequences.&lt;/li&gt;
&lt;li&gt;Don’t assume the safety of framework code. Even trusted libraries must be audited.&lt;/li&gt;
&lt;li&gt;Be paranoid about branches. Control flow errors are subtle but powerful.&lt;/li&gt;
&lt;li&gt;Trust but verify. Always build redundancy and extra validation layers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read the full story, including technical details and lessons, here:&lt;br&gt;
&lt;a href="https://medium.com/gitconnected/oops-i-goto-it-again-when-apples-secure-code-is-not-eb473656a154" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Oops I goto It Again&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>security</category>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>techhistory</category>
    </item>
    <item>
      <title>The Car That Would Not Stop</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sun, 12 Oct 2025 11:06:08 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/the-car-that-would-not-stop-1hem</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/the-car-that-would-not-stop-1hem</guid>
      <description>&lt;p&gt;In 2009, a Lexus suddenly started speeding on a California highway. The driver, off-duty police officer Mark Saylor, called 911.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;“We can’t stop… we’re going 120…”&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Moments later, the car crashed. Everyone inside was killed.&lt;/p&gt;

&lt;p&gt;At first, Toyota blamed floor mats and driver error. But more reports came in: cars that accelerated by themselves, brakes that didn’t respond, and no clear signs of failure in the data.&lt;/p&gt;

&lt;p&gt;The reality was more complicated. Inside modern cars, computers control almost everything: speed, brakes and sensors. A small software bug in one system can affect the others.&lt;/p&gt;

&lt;p&gt;Investigators found that even tiny faults in Toyota’s code could cause dangerous results. One mistake could lock up the CPU, disable safety checks, and make the car ignore brake commands.&lt;/p&gt;

&lt;p&gt;This wasn’t just a mechanical problem. It was a software failure, one that showed how invisible code can turn deadly when design, testing, and safety don’t align.&lt;/p&gt;

&lt;p&gt;What engineers can learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hardware isn’t perfect; software must be ready for it to fail.&lt;/li&gt;
&lt;li&gt;Safety systems need multiple layers, not just one check.&lt;/li&gt;
&lt;li&gt;Testing culture matters as much as the code itself.&lt;/li&gt;
&lt;li&gt;Small logic bugs can lead to big real-world disasters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read the full story, including technical details, investigations and lessons, here:&lt;br&gt;
&lt;a href="https://levelup.gitconnected.com/the-car-that-would-not-stop-f9eea9d35ae0" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;The Car That Would Not Stop&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>programming</category>
      <category>techhistory</category>
      <category>automotive</category>
      <category>software</category>
    </item>
    <item>
      <title>The Forbidden Intel Opcode That Melts CPUs</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Sun, 12 Oct 2025 11:04:21 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/the-forbidden-intel-opcode-that-melts-cpus-13ml</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/the-forbidden-intel-opcode-that-melts-cpus-13ml</guid>
      <description>&lt;p&gt;In the 1990s, a strange bug in Intel’s Pentium processors caused real panic. A few bytes of machine code, &lt;code&gt;F0 0F C7 C8&lt;/code&gt;, could completely freeze the CPU. No error message. No reboot. No recovery. The system just died.&lt;/p&gt;

&lt;p&gt;This flaw became known as the &lt;strong&gt;F00F bug&lt;/strong&gt;, named after its hexadecimal pattern. It wasn’t a normal software crash. The CPU itself stopped responding, leaving even the operating system helpless.&lt;/p&gt;

&lt;p&gt;What it revealed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hardware bugs can be just as dangerous as software ones.&lt;/li&gt;
&lt;li&gt;Intel and OS developers had to rethink how they handle “impossible” instructions.&lt;/li&gt;
&lt;li&gt;Even one line of code can bring everything down.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even today, we rely on billions of assumptions inside our chips. When one of them fails, the results can be catastrophic. The F00F bug shows that undefined behavior isn’t just a programming issue, it’s a hardware story too.&lt;/p&gt;

&lt;p&gt;You can read the full story, including technical details and how engineers found it and fixed it, here:&lt;br&gt;
&lt;a href="https://medium.com/@cfiraguston/the-forbidden-intel-opcode-that-melts-cpus-8d526e5853e1" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;The Forbidden Intel Opcode That Melts CPUs&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>programming</category>
      <category>software</category>
      <category>techhistory</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Statement Coverage Is Not Enough</title>
      <dc:creator>Cfir Aguston</dc:creator>
      <pubDate>Fri, 10 Oct 2025 16:04:15 +0000</pubDate>
      <link>https://dev.to/cfir_aguston_f751a11907c2/statement-coverage-is-not-enough-37e1</link>
      <guid>https://dev.to/cfir_aguston_f751a11907c2/statement-coverage-is-not-enough-37e1</guid>
      <description>&lt;p&gt;Every now and then, I see someone say they’ve stopped caring about coverage metrics or that “50% coverage is good enough” (or other arbitrary number).&lt;/p&gt;

&lt;p&gt;But I think a major point is being missed.&lt;/p&gt;

&lt;p&gt;Coverage isn’t just a number. It’s a way to understand how much of your software’s behavior you actually know.&lt;/p&gt;

&lt;p&gt;If your coverage is 50%, it doesn’t mean your tests are “half decent”. It means that half of your code has never been executed during testing: its logic and behavior are unknown. In safety-critical applications, that’s unacceptable.&lt;/p&gt;

&lt;p&gt;This is where Modified Condition / Decision Coverage (MC/DC) comes in. It ensures that every individual condition within a decision can independently affect the outcome. It’s not just about hitting lines or branches, it’s about proving that each logical factor has influence and is verified on its own. Whenever you stack conditions, combine flags or write complex &lt;code&gt;if&lt;/code&gt; statements, you’re dealing with logic that can hide high-impact bugs.&lt;/p&gt;

&lt;p&gt;Code coverage actually comprises several levels of criteria, with statement coverage at one end of the scale and MC/DC near the other (just short of brute-force exhaustive testing). In between, each level gives you a different depth of assurance:&lt;br&gt;
&lt;strong&gt;1. Statement Coverage:&lt;/strong&gt; ensures every line of code executes at least once. Good for catching dead code, but blind to logic paths.&lt;br&gt;
&lt;strong&gt;2. Decision (or Branch) Coverage:&lt;/strong&gt; checks that every branch (&lt;code&gt;if&lt;/code&gt;, &lt;code&gt;else&lt;/code&gt;, &lt;code&gt;switch&lt;/code&gt;) is taken both true and false. Reveals untested decision outcomes, but still misses compound logic.&lt;br&gt;
&lt;strong&gt;3. Condition Coverage:&lt;/strong&gt; verifies that each Boolean condition inside a decision is evaluated both true and false. Tests the individual pieces, but not their combinations.&lt;br&gt;
&lt;strong&gt;4. Decision/Condition Coverage:&lt;/strong&gt; merges both: every decision outcome and each individual condition must be tested true and false. Better visibility, but doesn’t guarantee that each condition’s influence is isolated.&lt;br&gt;
&lt;strong&gt;5. Modified Condition / Decision Coverage (MC/DC):&lt;/strong&gt; goes one step further by requiring proof that each condition can independently change the decision’s outcome. This is the level demanded, for example, in avionics and other critical systems because it provides meaningful assurance of logical completeness.&lt;/p&gt;
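
&lt;p&gt;A small illustration of what MC/DC demands (the decision here is made up for the example):&lt;/p&gt;

```python
def decide(a, b, c):
    # illustrative decision with three conditions
    return a and (b or c)

# MC/DC: each pair below holds the other conditions fixed and flips exactly
# one, proving that that condition alone can change the outcome.
assert decide(True,  True,  False) != decide(False, True,  False)  # a matters
assert decide(True,  True,  False) != decide(True,  False, False)  # b matters
assert decide(True,  False, True)  != decide(True,  False, False)  # c matters
print("MC/DC pairs hold")
```

Four distinct test vectors achieve MC/DC for this decision, while exhaustive testing of three conditions would need eight.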

&lt;p&gt;You can see a full article that walks through different coverage methods, practical examples, why MC/DC is needed and how it ties into requirements coverage here:&lt;br&gt;
&lt;a href="https://medium.com/gitconnected/mc-dc-and-the-logic-of-safer-software-1cec6a384296" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;MC/DC and the Logic of Safer Software&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>testing</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>software</category>
    </item>
  </channel>
</rss>
