DEV Community

TildAlice

Posted on • Originally published at tildalice.io

pytest vs unittest vs hypothesis: Coverage Blind Spots

Why Your 100% Coverage Still Ships Bugs

You hit 100% line coverage, ship to production, and a user finds a bug in a function your tests supposedly covered. The issue? Your test framework only measured which lines ran, not which inputs were tested. I've seen this pattern repeat across teams: unittest shows green checkmarks, pytest reports perfect coverage, but edge cases slip through because traditional coverage tools count execution, not exploration.

Here's what actually happens when you run the same buggy function through all three frameworks.

Photo by Adrien Olichon on Pexels

The Bug That 100% Coverage Missed

Consider this function from a price calculator service:


```python
def calculate_discount(price: float, discount_percent: float) -> float:
    """Apply discount and return final price."""
    if discount_percent < 0:
        raise ValueError("Discount cannot be negative")
    if discount_percent > 100:
        raise ValueError("Discount cannot exceed 100%")
    return price * (1 - discount_percent / 100)
```


---

*Continue reading the full article on [TildAlice](https://tildalice.io/pytest-vs-unittest-vs-hypothesis-coverage-detection/)*
