Type this into your Python console right now:
print(0.1 + 0.2)
You expect to see 0.3. Instead, you get 0.30000000000000004.
Your calculator says 0.3. Excel says 0.3. Your brain says 0.3. But your code disagrees. And this isn't a Python bug—every programming language does this. JavaScript, Java, C++, Ruby, Go—they all betray basic arithmetic in exactly the same way.
This seemingly tiny quirk has caused multi-million dollar disasters. In 1991, a Patriot missile battery failed to intercept an Iraqi Scud because its internal clock multiplied elapsed time by a truncated binary approximation of 0.1; after roughly 100 hours of operation, the accumulated error shifted the tracking window enough to miss, and 28 soldiers died. Financial systems lose money daily to rounding errors in currency calculations. Scientific simulations produce garbage results. Medical dosing software miscalculates.
Yet most developers never learn why this happens or how to fix it. They just add random rounding functions until things look right. Let's end that today.
The Real Problem
Computers don't actually store decimal numbers the way you think. When you write 0.1 in your code, the computer converts it to binary—ones and zeros. But here's the kicker: most decimal fractions can't be represented exactly in binary.
Think about the fraction one-third. In decimal, you write 0.333... forever. No matter how many threes you write, you never capture the exact value. Binary has the same issue with tenths, which is why 0.1 becomes 0.0001100110011001100110011... repeating forever.
Your computer has finite memory, so it cuts off that infinite sequence. The truncated version is close to 0.1, but not exact. When you add two approximate numbers, their tiny errors combine and become visible.
This affects every calculation with decimals. Multiplication, division, subtraction—they all inherit this fundamental approximation. The error compounds with each operation. Run a million calculations in a loop, and your result can drift miles from the truth.
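You don't have to take this on faith. Here's a quick way to peek at what Python actually stores for 0.1, using only the standard library:

```python
from fractions import Fraction

# Ask for more digits than the default display shows:
print(format(0.1, ".20f"))  # 0.10000000000000000555

# Fraction recovers the exact binary rational your computer stores.
# The denominator is a power of two: binary can only build halves,
# quarters, eighths, and so on, never an exact tenth.
print(Fraction(0.1))  # 3602879701896397/36028797018963968
```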
Why Nobody Warned You
This problem lives in a blind spot. Beginners learn integers and strings first, so by the time they hit floating-point math, they assume it works like normal math. Computer science courses mention the IEEE 754 floating-point standard in passing but rarely explain the real-world consequences. You might graduate never knowing why financial software treats money as integers.
The symptoms also hide. Most languages display the shortest decimal string that still round-trips to the same float, which papers over the ugliness: print 0.1 and you see a clean 0.1, even though the stored value is really 0.1000000000000000055511151231257827... You only discover the truth when equality checks mysteriously fail or when precision matters.
Plus, for simple programs, you don't notice. Adding a few numbers works fine. The errors stay invisible until you work with massive datasets, financial calculations, or iterative algorithms where small errors explode.
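Here's a sketch of that explosion in action, nothing beyond the standard library:

```python
# Ten additions already miss the mark:
print(sum([0.1] * 10))  # 0.9999999999999999

# A million additions drift much further:
total = sum(0.1 for _ in range(1_000_000))
print(total == 100000.0)      # False
print(abs(total - 100000.0))  # error around a millionth: enormous next to a single rounding step
```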
The Fix That Actually Works
Stop comparing floats with exact equality. Seriously, never write if x == 0.3 when x comes from floating-point math. Instead, check if the values are close enough:
def float_equal(a, b, tolerance=1e-9):
    return abs(a - b) < tolerance

result = 0.1 + 0.2
if float_equal(result, 0.3):
    print("Close enough!")
That tolerance value (epsilon) defines your acceptable margin of error. For most applications, one billionth is plenty. Adjust based on your precision requirements.
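One caveat: a fixed absolute tolerance works for numbers near 1 but misbehaves for very large or very small values. Python's standard library already ships a battle-tested helper, math.isclose, which adds a relative tolerance that scales with the inputs. A quick sketch:

```python
import math

# rel_tol (default 1e-9) scales with the magnitude of the inputs.
print(math.isclose(0.1 + 0.2, 0.3))  # True

# Near zero, relative tolerance alone is useless: everything is "far"
# from 0.0 in relative terms, so pass abs_tol explicitly.
print(math.isclose(0.0, 1e-12))                # False
print(math.isclose(0.0, 1e-12, abs_tol=1e-9))  # True
```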
For money, never use floats. Ever. Store currency as the smallest unit in integers—cents instead of dollars, paise instead of rupees. So $19.99 becomes 1999 cents. Do all your math with integers, then convert back for display:
price_cents = 1999
tax_cents = round(price_cents * 7 / 100) # 7% tax; round, never truncate
total_cents = price_cents + tax_cents
print(f"Total: ${total_cents / 100:.2f}")
This keeps floating-point error out of your stored amounts: integer addition and subtraction are exact. The one remaining trap is percentage math, which still passes through floats, so always round the result back to a whole number of cents rather than truncating it.
When you need exact decimal arithmetic—accounting systems, scientific measurements, legal documents—use a decimal library. Python has one built-in:
from decimal import Decimal
a = Decimal('0.1')
b = Decimal('0.2')
print(a + b) # Exactly 0.3
Notice the strings. Don't write Decimal(0.1) because Python will convert 0.1 to a float first, inheriting the error. Use string literals to preserve exact values.
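The difference is easy to see in the interpreter:

```python
from decimal import Decimal

# The float 0.1 arrives already corrupted; Decimal faithfully
# records the corrupted value, digit for digit.
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625

# The string '0.1' never touches float, so Decimal stores exactly one tenth.
print(Decimal('0.1'))  # 0.1
```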
For algorithms that accumulate errors over many iterations, use higher precision. Python's float is 64-bit (double precision), giving you about 15-17 decimal digits of accuracy. If that's not enough, the decimal module lets you set arbitrary precision:
from decimal import Decimal, getcontext
getcontext().prec = 50 # 50 significant digits
result = Decimal(1) / Decimal(3)
print(result) # 0.33333333333333333333333333333333333333333333333333
This slows down calculations, but sometimes correctness matters more than speed.
When Precision Attacks
Financial systems are obvious victims, but the danger extends further. Scientific simulations that run millions of iterations can drift completely off course. Machine learning models training on floating-point data accumulate errors that degrade accuracy. Game physics engines that don't account for float precision create glitches where objects phase through walls.
GPS systems, weather forecasting, cryptocurrency exchanges, medical equipment—all vulnerable. A small error in a sensor reading becomes a catastrophic error in calculated velocity. Your self-driving car's position estimate drifts. Bad things follow.
The famous Vancouver Stock Exchange index provides a cautionary tale. In 1982, they launched an index starting at 1000.00 points. Over time, it steadily declined for no economic reason. After 22 months, it sat at 524.811, even though it should have been around 1100. The culprit? They truncated instead of rounding during thousands of daily calculations. Each truncation shaved off a tiny fraction. Eventually, half the index value evaporated into floating-point hell.
The Survival Strategy
First, know when precision matters. Blog post view counts? Floats are fine. Bank account balances? Absolutely not. If you can't tolerate any error—financial transactions, legal documents, life-critical systems—use exact arithmetic from the start.
Second, test your edge cases. Write unit tests that specifically check calculations with repeating decimals. Feed your functions values like 0.1, 0.01, 0.001. See what comes out. Many bugs hide until someone enters an amount like $10.10 instead of a round number.
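Here's a sketch of what such tests might look like. add_prices is a hypothetical stand-in for your own money functions; the point is the integer-cents inputs and the exact assertions:

```python
def add_prices(*amounts_cents):
    # Hypothetical helper: money stays in integer cents end to end.
    return sum(amounts_cents)

def test_awkward_amounts():
    # $10.10, $0.01, $0.10: amounts whose float forms repeat in binary.
    assert add_prices(1010, 1, 10) == 1021

def test_float_money_breaks_equality():
    # The failure mode the integer approach protects you from.
    assert 0.1 + 0.2 != 0.3

test_awkward_amounts()
test_float_money_breaks_equality()
print("all tests passed")
```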
Third, understand your tools. Read the documentation for whatever decimal or bignum library your language provides. Learn its quirks. Some libraries have precision limits. Some perform differently with different operations. Know before you commit.
Fourth, watch for accumulation. If you're summing thousands of values in a loop, small errors stack up. Consider using algorithms specifically designed for accurate summation, like Kahan summation, which tracks and compensates for lost precision.
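Here's a minimal sketch of Kahan summation. Python's standard library also ships a polished version of the same idea as math.fsum:

```python
import math

def kahan_sum(values):
    """Sum floats while compensating for lost low-order bits."""
    total = 0.0
    compensation = 0.0  # running record of the error rounded away so far
    for x in values:
        y = x - compensation            # fold the previous step's error back in
        t = total + y                   # big + small: y's low bits get rounded off...
        compensation = (t - total) - y  # ...but algebra recovers exactly what was lost
        total = t
    return total

values = [0.1] * 10
print(sum(values))        # 0.9999999999999999 -- the naive sum drifts
print(kahan_sum(values))  # 1.0
print(math.fsum(values))  # 1.0 -- stdlib compensated summation
```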
Last, document your decisions. When you choose integers for money or decimals for precision, leave a comment explaining why. Future you—or another developer—will appreciate understanding the reasoning when they're tempted to "simplify" by switching to floats.
The Takeaway
Computers lie about decimals. They always have, they always will. This isn't a bug—it's how binary arithmetic works at the hardware level. Once you accept that floats are approximations, not exact values, you'll write more resilient code.
The next time you see a weird calculation result, before you assume your code is broken, check if floating-point precision is the real culprit. Then reach for the appropriate tool: epsilon comparisons, integer arithmetic, or decimal libraries.
Your financial software will balance. Your scientific simulations will converge. And you'll never again wonder why 0.1 plus 0.2 betrays everything you learned in elementary school.
Top comments (2)
This is such a clear explanation of something most of us run into but rarely fully understand.
I still remember the first time 0.1 + 0.2 !== 0.3 broke one of my conditions — I thought I had messed up basic math 😅 The reminder to never use floats for money and to avoid direct equality checks is gold. This is one of those “small” topics that actually separates beginner bugs from production-safe code. Really well explained and practical 🙌
Thanks a lot, Bhavin! 🙌
That “did I just break math?” moment is almost a rite of passage for developers 😅
Glad the explanation and practical tips resonated with you — especially around avoiding floats for money and equality checks. Those tiny details really do make a big difference once code hits production. Appreciate you taking the time to share your experience!