Gil Goldzweig Goldbaum
Part 1: The Philosophy of Testable Code

Great software isn't just written; it's engineered. Like any complex structure, it relies on the integrity of its smallest components. If a single brick is flawed, the entire wall is at risk. This handbook is about learning to forge perfect bricks, and was born from an internal need to have a one-stop shop for everything unit testing—the discipline of verifying each component in isolation so that we can confidently build large, reliable systems.

This is more than just a style guide—it's a practical framework for building quality and speed into our development process. I aimed to make this a comprehensive resource that benefits everyone, from developers writing their first unit tests to senior engineers looking to align on best practices. This handbook helps us create the safety net we need to refactor and innovate by establishing a common vocabulary, providing a clear path for new features, and serving as a consistent standard for code reviews.

A few notes before we get started!

Perspective:
Like most things in software, this write-up is opinionated. I’ve done my best to gather feedback from various engineers before publishing to include multiple perspectives. However, it’s still filtered through my own experience and preferences.
Treat it as a guide, not gospel.

Platform Agnosticism:
While the code examples in this handbook are written in Kotlin and use common Android libraries, the underlying principles of isolation, dependency injection, and contract-based design are universal. Developers on any platform—Backend, iOS, or Web—can apply these philosophies to build more testable and maintainable software. An appendix is included at the end of Part 3 to help translate platform-specific terms.

In this handbook, you will learn:

  • The Principle of Isolation and why it matters.
  • How to use interfaces for testable code.
  • The difference between Mocks, Stubs, and Fakes.
  • Best practices for organizing your tests for maintainability and speed.

Part 1 (This Page): The Philosophy. We will explore our foundational principle—the Principle of Isolation—and the architectural mindset required to support it.
Part 2: From Theory to Practice (Show me the code!). We will cover the specific architectural patterns and testing techniques we use to build and verify isolated components.
Part 3: Standards and Troubleshooting. We will detail our concrete standards for naming, organization, and style, and provide guidance for common issues.

The Core Philosophy of Unit Testing: The Principle of Isolation (The “Why”)

Before we write a single line of test code, we must understand our guiding philosophy: The Principle of Isolation.

This principle states that a unit test must verify one "unit" of code—typically a single class—in complete isolation from its dependencies. Think of it as placing your code in a sterile laboratory environment. You control all the inputs and observe the outputs, ensuring that the test result is influenced only by the logic of the unit under test.

Why is this non-negotiable?

  • Pinpoint Bugs with Precision: When an isolated unit test fails, you know exactly where the bug is: in the logic of the component you are testing. This is its primary strength. Let's contrast this with an integration test, which serves a different, equally important purpose. An integration test might fire up a ViewModel, which calls a real Repository, which makes a real network call. If that test fails, the scope of the problem is broader: Is the ViewModel's logic wrong? Did the Repository fail to parse the JSON? Did the network request time out? Is the backend server down? You've started a broader debugging process to pinpoint the issue across several components. An isolated unit test eliminates this ambiguity for a single unit and dramatically reduces debugging time for that unit's specific logic.

  • Create a Fast and Reliable Test Suite: Real dependencies, like network clients or databases, are slow and can be unreliable. A network call involves latency and can fail for reasons outside our code's control. Mocking these dependencies makes our tests lightning-fast (running in milliseconds) and, crucially, deterministic—they produce the same result every single time. This is essential for a fast and stable CI/CD pipeline and a quick feedback loop for developers.

  • Enable Fearless and Safe Refactoring: When you have a comprehensive suite of isolated unit tests, you gain the confidence to make changes. You can refactor a component's internal logic, and as long as your tests still pass, you can be confident you haven't broken its behavior from the outside world's perspective.
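To make these three benefits concrete, here is a minimal, self-contained sketch of an isolated unit test. All names (`Clock`, `GreetingFormatter`, `FixedClock`) are hypothetical; the point is that the unit's only dependency is injected, so the test controls every input and the result is deterministic on every run.

```kotlin
// The dependency is expressed as an interface so the test can substitute it.
interface Clock {
    fun hourOfDay(): Int
}

// The unit under test: its output depends only on its own logic plus the
// injected Clock—no real time source, no hidden global state.
class GreetingFormatter(private val clock: Clock) {
    fun greeting(): String =
        if (clock.hourOfDay() < 12) "Good morning" else "Good afternoon"
}

// A fake clock pinned to a fixed hour: the "sterile laboratory" input.
class FixedClock(private val hour: Int) : Clock {
    override fun hourOfDay(): Int = hour
}

fun main() {
    val morning = GreetingFormatter(FixedClock(hour = 9)).greeting()
    check(morning == "Good morning") { "expected morning greeting, got $morning" }

    val afternoon = GreetingFormatter(FixedClock(hour = 15)).greeting()
    check(afternoon == "Good afternoon") { "expected afternoon greeting, got $afternoon" }
}
```

If this test ever fails, the bug can only be in `GreetingFormatter`'s logic—there is nothing else in the test's universe—and it runs in microseconds with the same result every time.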

To be clear, this is not a case against integration tests, but rather a case for what a unit test should be and do. Integration tests are an essential part of a comprehensive testing strategy and are not a replacement for unit tests, or vice-versa. They are invaluable for catching problems that unit tests, by design, cannot: deserialization failures, API contract mismatches, and end-to-end logic errors. For example, without integration tests leveraging in-memory databases, it's difficult to verify persistence logic or query correctness prior to deployment. The goal of this handbook is to define the specific, distinct role of a unit test: to verify a single component’s logic with surgical precision. This, in turn, makes the feedback from our integration tests even more valuable.

In short, we isolate our units during testing to ensure our tests are precise, fast, and maintainable. This approach builds a foundation of trust in our code, allowing us to develop features more quickly and refactor with confidence.

The Contract of Trust

To achieve isolation, we operate on a "contract of trust." When we test a class, we trust that its dependencies will work as advertised because their correctness should also be validated in their own separate unit tests. This leads to a clear separation of concerns in our testing:

  • The ViewModel test asks: "When the UI requests data, does the ViewModel correctly call the repository and properly handle the success or error result that the repository says it will return?"

  • The Repository test asks: "When asked for data, does the repository correctly use its dependencies (like a Retrofit service) to fetch it from the network/database?"
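The ViewModel side of that contract can be sketched as follows. This is a hypothetical, framework-free illustration—`UserRepository`, `UserViewModel`, and `FakeUserRepository` are illustrative names, and the `LoadResult` type stands in for whatever result type the real repository's contract advertises.

```kotlin
// The result type the repository's contract promises to return.
sealed interface LoadResult {
    data class Success(val name: String) : LoadResult
    data class Error(val message: String) : LoadResult
}

interface UserRepository {
    fun loadUser(): LoadResult
}

// The unit under test: its only job is translating the repository's
// advertised result into UI state. How the data is fetched is not its concern.
class UserViewModel(private val repository: UserRepository) {
    var uiState: String = "idle"
        private set

    fun onUiRequestedUser() {
        uiState = when (val result = repository.loadUser()) {
            is LoadResult.Success -> "Hello, ${result.name}"
            is LoadResult.Error -> "Oops: ${result.message}"
        }
    }
}

// The fake honors the contract: it returns exactly what a real repository
// says it can return, with no network or database involved.
class FakeUserRepository(private val result: LoadResult) : UserRepository {
    override fun loadUser(): LoadResult = result
}

fun main() {
    val successVm = UserViewModel(FakeUserRepository(LoadResult.Success("Ada")))
    successVm.onUiRequestedUser()
    check(successVm.uiState == "Hello, Ada")

    val errorVm = UserViewModel(FakeUserRepository(LoadResult.Error("timeout")))
    errorVm.onUiRequestedUser()
    check(errorVm.uiState == "Oops: timeout")
}
```

Whether the real repository actually fetches correctly is the repository test's responsibility; this test trusts the contract and verifies only the ViewModel's handling of it.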

Extending the Contract: Trusting Third-Party Code

This contract of trust extends beyond our own codebase. We also inherently trust the frameworks, libraries, and even the language's standard library that we use.

It is not our job to verify that third-party code works. For example:

  • We don't need to write a test to confirm that Retrofit successfully makes an HTTP request when we call apiService.someNetworkRequest(). We trust the Retrofit team has already tested this thoroughly. Our test should simply verify that our code calls the apiService.someNetworkRequest() method.

  • We don't need to write a test that String.trim() actually removes whitespace. We trust that the language developers have validated this basic functionality.
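The first bullet can be sketched with a hand-rolled recording fake. `ApiService` and `UserFetcher` are hypothetical names; the point is that the assertion targets our code's behavior (the call happened), not the HTTP machinery underneath.

```kotlin
// The third-party boundary, expressed as an interface we own.
interface ApiService {
    fun someNetworkRequest(): String
}

// Our code: the only logic we are responsible for testing here.
class UserFetcher(private val apiService: ApiService) {
    fun fetch(): String = apiService.someNetworkRequest()
}

// A recording fake: it counts the call and returns canned data, so the
// test never exercises (or depends on) a real HTTP client.
class RecordingApiService : ApiService {
    var callCount = 0
        private set

    override fun someNetworkRequest(): String {
        callCount++
        return "canned-response"
    }
}

fun main() {
    val api = RecordingApiService()
    val result = UserFetcher(api).fetch()

    // We verify that *our* code made the call and handled the result—
    // not that the network stack delivered bytes correctly.
    check(api.callCount == 1)
    check(result == "canned-response")
}
```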

Trusting the API Contract

Our most important external dependency is our own backend. Our unit tests operate under a strict assumption: the backend will adhere to its established API contract (e.g., as defined in OpenAPI/Swagger/your Slack thread…).

  • What We Test: Our responsibility is to verify that the client correctly handles all documented success and error states from the API, including edge cases such as parsing an unknown enum value into a safe default.

  • What We Don't Test: We do not write tests for hypothetical scenarios where the backend violates the contract (e.g., sending an Int for a field that should be a String).
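The unknown-enum edge case from "What We Test" might look like this. `AccountStatus` and its values are hypothetical; the assumed contract is that the API may introduce new status strings over time, so the client maps anything unrecognized to a documented fallback instead of crashing.

```kotlin
enum class AccountStatus {
    ACTIVE, SUSPENDED, UNKNOWN;

    companion object {
        // Defensive parsing is *our* logic, so it is ours to unit test.
        // Unrecognized values fall back to UNKNOWN rather than throwing.
        fun fromApi(raw: String): AccountStatus =
            values().firstOrNull { it.name == raw.uppercase() } ?: UNKNOWN
    }
}

fun main() {
    // Documented values parse as expected, regardless of case.
    check(AccountStatus.fromApi("active") == AccountStatus.ACTIVE)
    check(AccountStatus.fromApi("SUSPENDED") == AccountStatus.SUSPENDED)

    // A value the client has never seen maps to the documented default.
    check(AccountStatus.fromApi("frozen") == AccountStatus.UNKNOWN)
}
```

Note what is absent: no test feeds the parser an `Int` where the contract promises a `String`. That would be testing a contract violation, which is the backend's failure to catch, not ours.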

This approach has two powerful benefits:

  1. Faster Debugging: If a production issue occurs and all our client-side tests are passing, it gives us high confidence that the root cause is not in the client application. This allows us to more quickly identify the problem as a likely backend or environmental issue. On the flip side, if the issue turns out to be on our end and our test suite failed to catch the issue, we have a chance to improve our own testing suite. Perfection doesn’t exist; it’s about continuous improvement.

  2. Enabling API Evolution: Our test suite acts as a "consumer contract" for the API. If the backend team needs to make a change, we can simulate the change on the client. If all tests pass, it provides high confidence that the change is backward-compatible and won't break existing clients in production. If something does fail, we probably need to add some versioning or create a new endpoint.

Our responsibility is to test the logic we write—the code that glues these trusted components together. We test our business logic, our state transformations, and our interactions with dependencies, but not the dependencies themselves. This focus keeps our test suite lean, relevant, and centred on the value we are adding.

When each component is rigorously tested in isolation, it becomes a trusted, reliable building block—like a Lego brick. We can then combine these "bricks" to construct complex features, confident that each individual piece will behave exactly as expected. This compounding trust is what allows us to build large, stable systems with confidence.

We’ve laid the philosophical groundwork for our testing strategy and know why we forge each "Lego brick" in isolation. Now, it's time to see how those bricks are made. The next section will move from theory to practice, detailing the architectural patterns and tools we use to build testable, trustworthy components.
On to Part 2 ->
