Welcome to the final part of our testing handbook. Having covered the "why" (our philosophy) and the "how" (our techniques), this section serves as a practical style guide. It outlines the specific standards we adhere to for naming, organization, and writing valuable tests, as well as how to solve common problems.
5 Qualities of a High-Value Unit Test
A test is only valuable if it increases our confidence in the code's correctness. Before diving into specific rules, remember that a good unit test has the following characteristics:
- It Tests Behavior, Not Implementation: We test the what, not the how. A test should not care if you used a `for` loop or a `forEach` to iterate a list. It should only care that the final, observable outcome is correct. This makes tests resilient to refactoring.
- It Has Clear Inputs and Outputs: A test provides a known set of inputs (the "Given" state, including mock responses) and asserts a known, predictable output (the "Then" state).
- It Tests One Thing: Each test method should focus on a single scenario or logical path through the code. One test for the success case, another for the network error case, another for the server error case, etc. This makes it immediately obvious what broke when a test fails.
- It is Fast and Deterministic: It must run quickly and produce the same result every single time. This is a direct result of following the Principle of Isolation.
- It Values Quality Over Quantity: Coverage isn’t everything. While having many tests is good, they are only helpful if they can be trusted. Flaky or non-deterministic tests may increase coverage metrics, but they erode our confidence in the test suite and should be avoided or fixed immediately. A suite of 50 trusted tests is infinitely more valuable than 100 flaky ones.
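To make the determinism point concrete: the current time is a common hidden input that silently breaks repeatability. A minimal sketch, using a hypothetical `GreetingProvider` class (illustrative, not from our codebase), shows how injecting a `java.time.Clock` instead of calling `Instant.now()` directly keeps a time-dependent test deterministic:

```kotlin
import java.time.Clock
import java.time.Instant
import java.time.ZoneOffset
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical production class: the clock is injected, never grabbed globally
class GreetingProvider(private val clock: Clock) {
    fun greeting(): String =
        if (Instant.now(clock).atZone(ZoneOffset.UTC).hour < 12) "Good morning"
        else "Good afternoon"
}

class GreetingProviderTest {
    @Test
    fun `greeting should say Good morning when the clock reads before noon`() {
        // A fixed clock produces the same result on every run, at any time of day
        val fixedMorning = Clock.fixed(Instant.parse("2024-01-01T09:00:00Z"), ZoneOffset.UTC)
        assertEquals("Good morning", GreetingProvider(fixedMorning).greeting())
    }
}
```

The same pattern applies to any hidden input: random number generators, locale, and environment state can all be injected and pinned in tests.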
The Style Guide: Writing Readable and Consistent Tests
To ensure our test suite is easy to navigate and understand, we adhere to the following style conventions.
Naming Conventions
The name of a test should clearly and concisely describe what it's testing. We follow a `methodName_should_doSomething_when_conditionIsMet` structure. Using backticks (`` ` ``) in Kotlin allows us to write these descriptive, sentence-like function names.
Good Examples:
```kotlin
@Test
fun `fetchProfile should emit Success when repository returns success`() { ... }

@Test
fun `login should make one attempt and return AuthenticationError when credentials are invalid`() { ... }
```
Bad Examples:
```kotlin
@Test
fun testProfileLoading() { ... } // Too vague. What about it? Success? Failure?

@Test
fun profileSuccess() { ... } // Not a sentence. What action is being tested? What is the condition?
```
Test File Organization
To keep our project navigable, the location of test files must be consistent and predictable. The standard is to mirror the production code's package structure within the test source set.
If your production code is located at:
`src/main/java/ca/skipthedishes/customer/profile/ProfileViewModelImpl.kt`
The corresponding test class should be located at:
`src/test/java/ca/skipthedishes/customer/profile/ProfileViewModelImplTest.kt`
This simple rule makes it trivial to locate the tests for any given class and to see which classes might be missing tests.
Structuring Tests with JUnit: Best Practices for Organization
We use a standard set of tools to structure, organize, and execute our tests.
Structuring a Single Test Class
Within a single test class, we use annotations to reduce boilerplate and improve readability.
- `@Before`: Marks a function that will run before each `@Test` method in the class. This is perfect for setting up common objects that every test needs, preventing code duplication.
- `@After`: Marks a function that will run after each `@Test` method. This is useful for cleanup tasks, such as clearing mocks or closing resources, ensuring no state leaks between tests.
- `@Rule`: A more powerful way to add reusable behaviour to every test, such as the `InstantTaskExecutorRule` for `LiveData`.
Example: Refactoring `ProfileViewModelImplTest` with `@Before`
```kotlin
class ProfileViewModelImplTest {

    private lateinit var viewModel: ProfileViewModelImpl
    private val mockUserRepository: IUserRepository = mockk()

    @Before
    fun setUp() {
        // This code runs before each test, providing a fresh instance
        viewModel = ProfileViewModelImpl(mockUserRepository)
    }

    @After
    fun tearDown() {
        // This runs after each test. Good for cleanup.
        unmockkAll() // Clears all mock states and recorded calls.
    }

    @Test
    fun `loadProfile should emit Success...`() = runTest {
        // Given
        coEvery { mockUserRepository.fetchUserProfile() } returns Result.success(...)

        // When - viewModel is already initialized!
        viewModel.loadProfile()

        // Then...
    }
}
```
This structure is cleaner, less repetitive, and clearly separates the setup (`@Before`), execution (`@Test`), and teardown (`@After`) phases of our tests.
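The `@Rule` mentioned above deserves its own sketch. `InstantTaskExecutorRule` (from the `androidx.arch.core:core-testing` artifact) swaps the Architecture Components background executor for a synchronous one; the test class below is an illustrative example, not taken from our codebase:

```kotlin
import androidx.arch.core.executor.testing.InstantTaskExecutorRule
import org.junit.Rule
import org.junit.Test

class ProfileLiveDataTest {

    // Runs around every @Test in this class. LiveData updates execute
    // immediately and synchronously on the calling thread, so assertions
    // can observe them without waiting for a background executor.
    @get:Rule
    val instantTaskExecutorRule = InstantTaskExecutorRule()

    @Test
    fun `LiveData updates should be observable immediately`() {
        // ... any setValue/postValue in here takes effect synchronously
    }
}
```

Note the `@get:Rule` site target: JUnit requires the rule to be exposed as a public field or getter, and this is the idiomatic way to do that in Kotlin.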
Organizing Multiple Test Classes and Types
As our project grows, we need ways to group related tests.
`@Suite` for Grouping by Feature: To run all tests related to a single feature (e.g., "Authentication"), we can group them into a test suite. This is the preferred way to create logical groups over complex inheritance structures. You create an empty placeholder class and annotate it.
Example: Creating a Feature Test Suite
```kotlin
import org.junit.runner.RunWith
import org.junit.runners.Suite

@RunWith(Suite::class) // 1. Specify the Suite runner
@Suite.SuiteClasses(   // 2. List all the test classes to include in this suite
    LoginViewModelTest::class,
    LogoutUseCaseTest::class,
    PasswordValidatorTest::class,
    TokenRepositoryTest::class
)
class AuthenticationFeatureTestSuite
```
Now, running `AuthenticationFeatureTestSuite` will execute all the tests from the listed classes.
`@Category` for Grouping by Type: The `@Category` annotation allows us to tag tests, which is extremely useful for separating fast unit tests from slow integration tests. This allows our CI pipeline to run them at different stages.
First, define marker interfaces for your categories:
```kotlin
interface FastTest
interface SlowTest
```
Then, apply these categories to your test classes or individual methods:
```kotlin
import org.junit.experimental.categories.Category

@Category(SlowTest::class)
class UserDatabaseTest {
    @Test
    fun `test something that hits a real database`() { ... }
}

class ProfileViewModelImplTest {
    @Test
    @Category(FastTest::class)
    fun `loadProfile should emit Success...`() { ... }
}
```
With this setup, we can configure our build system (e.g., Gradle) to run only tests marked with `@Category(FastTest::class)` on every pull request, and run the `SlowTest` suite nightly.
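As a sketch of what that wiring might look like in a Gradle Kotlin DSL build script (the package name `com.example.testing` for the marker interfaces is illustrative, not our actual package), JUnit 4 category filtering is exposed through `useJUnit { includeCategories(...) / excludeCategories(...) }`:

```kotlin
// build.gradle.kts — an illustrative sketch, assuming JUnit 4 and the marker
// interfaces above living in a hypothetical com.example.testing package
tasks.test {
    useJUnit {
        // The default `test` task (run on every pull request) only picks up
        // tests tagged as fast; slow tests are explicitly excluded here and
        // run by a separate nightly job.
        includeCategories("com.example.testing.FastTest")
        excludeCategories("com.example.testing.SlowTest")
    }
}
```

Categories must be referenced by their fully qualified class names, so moving a marker interface to a different package means updating the build script as well.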
Advanced Topics and Troubleshooting
Ensuring Parallel Execution
For our CI/CD pipeline to be fast and efficient, tests must be able to run in parallel without interfering with each other. This requires that tests be completely independent and stateless. The most common source of flaky, non-parallelizable tests is shared mutable state, often found in `companion object` properties.
Example: A Test that CANNOT Run in Parallel
```kotlin
// The problematic class with shared state
class UnstableAnalyticsTracker {
    companion object {
        var eventCount = 0
    }

    fun trackEvent() {
        eventCount++
    }
}

// The flaky tests
class UnstableAnalyticsTrackerTest {

    @Test
    fun `tracking one event should increment count to 1`() {
        val tracker = UnstableAnalyticsTracker()
        UnstableAnalyticsTracker.eventCount = 0 // Resetting state
        tracker.trackEvent()
        assertEquals(1, UnstableAnalyticsTracker.eventCount)
    }

    @Test
    fun `tracking two events should increment count to 2`() {
        val tracker = UnstableAnalyticsTracker()
        UnstableAnalyticsTracker.eventCount = 0 // Resetting state
        tracker.trackEvent()
        tracker.trackEvent()
        assertEquals(2, UnstableAnalyticsTracker.eventCount)
    }
}
```
If these two tests run in parallel, they will create a "race condition." Both tests try to modify `eventCount` at the same time, and the final result will be unpredictable. One test will interfere with the other.
Example: A Test that CAN Run in Parallel
The solution is to remove the shared state and use instances and dependency injection.
```kotlin
// The fixed, stable class
class StableAnalyticsTracker {
    var eventCount = 0 // State is now part of the instance

    fun trackEvent() {
        eventCount++
    }
}

// The robust tests
class StableAnalyticsTrackerTest {

    @Test
    fun `tracking one event should increment count to 1`() {
        val tracker = StableAnalyticsTracker() // A fresh instance for this test
        tracker.trackEvent()
        assertEquals(1, tracker.eventCount)
    }

    @Test
    fun `tracking two events should increment count to 2`() {
        val tracker = StableAnalyticsTracker() // A different fresh instance for this test
        tracker.trackEvent()
        tracker.trackEvent()
        assertEquals(2, tracker.eventCount)
    }
}
```
Because each test creates its own `StableAnalyticsTracker` instance, they are completely isolated and can run in parallel without issue. Our standard architecture of injecting dependencies achieves this goal by default.
Debugging Unit Tests: A Q&A
Here are solutions to some of the most common problems encountered when writing unit tests.
Q: Why is my test flaky (sometimes passes, sometimes fails)?
A: This is likely due to a race condition or unhandled asynchronicity. Ensure you are using `runTest` for any test involving coroutines. Check for any shared mutable state (`companion object` properties or module-level variables) that could be modified by multiple tests running in parallel.
Q: Why is my test setup so complicated?
A: A complex setup is often a "code smell" indicating that the class under test has too many responsibilities (violating the Single Responsibility Principle). Consider if the class can be refactored into smaller, more focused units, each with its own simple test. Review the "Case Study: Testing Complex Methods" section in Part 2 for an example of how to do this.
Q: Why is my test so slow?
A: You might be accidentally using a real dependency (like a real database or network call) instead of a mock or fake. Using `Thread.sleep()` is also an anti-pattern that causes slowness and flakiness. Double-check that all external dependencies are replaced with test doubles, and use tools like `runTest`, which handles delays in virtual time. If the test must be slow (e.g., an integration test), categorize it with `@Category(SlowTest::class)` so it can be run separately from your fast unit tests.
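The virtual-time point is worth seeing in action. With `runTest` (from the `kotlinx-coroutines-test` library), `delay` calls advance a virtual clock instead of blocking a real thread, so even code that "waits" for 30 seconds completes almost instantly. The `fetchWithRetryDelay` function below is a hypothetical example written for illustration:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.test.runTest
import org.junit.Assert.assertEquals
import org.junit.Test

class VirtualTimeTest {

    // Hypothetical suspend function with a built-in retry delay
    private suspend fun fetchWithRetryDelay(): String {
        delay(30_000) // 30 seconds of "waiting"
        return "done"
    }

    @Test
    fun `delays are skipped in virtual time`() = runTest {
        // Completes in milliseconds: runTest fast-forwards its virtual clock
        // past the delay instead of blocking the thread like Thread.sleep would.
        assertEquals("done", fetchWithRetryDelay())
    }
}
```

This is exactly why `Thread.sleep()` has no place in coroutine tests: it blocks real wall-clock time that the test scheduler cannot skip.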
Summary: Do's and Don'ts with Examples
To summarize the principles from this handbook, here is a quick reference to best practices and common anti-patterns to avoid when writing your unit tests.
✅ Do: Test public API behaviour.
```kotlin
@Test
fun `loadProfile should emit Success state with parsed User`() {
    // This is robust. It only cares about the final, observable state.
    // We can refactor the internals of ProfileViewModelImpl freely.
    coEvery { mockRepo.fetchUserProfile() } returns Result.success(...)
    viewModel.loadProfile()
    viewModel.uiState.test {
        // ... assert the final state is State.Success with the correct user data.
    }
}
```
❌ Don't: Test private implementation details. Testing private implementation is brittle; if the implementation changes, the test breaks even if the behaviour is correct.
```kotlin
@Test
fun `loadProfile should call private method 'parseUser'`() {
    // This is brittle. If we rename or remove parseUser, the test breaks
    // even if the final UI state is still correct.
    val viewModel = ...
    val spiedViewModel = spyk(viewModel)
    spiedViewModel.loadProfile()
    verify { spiedViewModel["parseUser"](any()) }
}
```
✅ Do: Mock all external dependencies.
```kotlin
@Test
fun `test with mocked repository`() {
    // This is fast, deterministic, and isolated.
    val mockRepository: IUserRepository = mockk()
    val viewModel = ProfileViewModelImpl(mockRepository)
    // ...
}
```
❌ Don't: Instantiate real dependencies in a unit test. Instantiating real dependencies makes tests slow, flaky, and not true unit tests.
```kotlin
@Test
fun `test with real repository`() {
    // This test exercises the real UserRepositoryImpl's logic, so a failure
    // could come from the repository rather than the ViewModel under test.
    // Swap the fake for a real ApiService and it also becomes slow and
    // flaky (network might fail). Either way, it is not a true unit test.
    val realRepository = UserRepositoryImpl(FakeApiService())
    val viewModel = ProfileViewModelImpl(realRepository)
    // ...
}
```
✅ Do: Use deterministic error types.
```kotlin
@Test
fun `should fail with specific error type`() {
    val specificError = RepositoryError.NetworkError(IOException())
    coEvery { mockRepo.fetchUserProfile() } returns Result.failure(specificError)
    // ...
    val errorState = awaitItem() as State.Error
    assertEquals(specificError, errorState.error) // ROBUST!
}
```
❌ Don't: Rely on `Throwable.message` strings. Checking for string literals is extremely brittle and will fail if the message is changed.
```kotlin
@Test
fun `should fail with specific message`() {
    coEvery { mockRepo.fetchUserProfile() } returns Result.failure(Exception("Network connection error"))
    // ...
    val errorState = awaitItem() as State.Error
    assertEquals("Network connection error", errorState.error.message) // BRITTLE!
}
```
Appendix: Platform-Specific Terminology
To help developers from other platforms, here’s a quick guide to some of the Android-specific terms and libraries used in this handbook and their common equivalents.
- Coroutines (Kotlin)
  - What it is: A language feature in Kotlin for managing long-running tasks concurrently in a non-blocking way.
  - Platform Equivalents: `async`/`await` in C# and JavaScript, Promises in JavaScript, Futures in Scala, or Grand Central Dispatch in Swift.
- ViewModel (Android Jetpack)
  - What it is: A class designed to store and manage UI-related data in a lifecycle-conscious way, surviving configuration changes like screen rotations.
  - Platform Equivalents: The concept is similar to a Presenter in MVP, a Controller in MVC, or a state management object in declarative UI frameworks like React or SwiftUI.
- Compose (Android Jetpack)
  - What it is: Android's modern, declarative UI toolkit for building native user interfaces.
  - Platform Equivalents: SwiftUI (iOS), React/Vue/Angular (Web), Flutter.
- Retrofit
  - What it is: A type-safe HTTP client for Android and Java, used to make network requests.
  - Platform Equivalents: Alamofire (iOS), Axios (Web/JS), `HttpClient` in .NET, or other standard HTTP clients in backend frameworks.
- MockK
  - What it is: A mocking library for Kotlin, used to create test doubles (mocks, fakes, spies).
  - Platform Equivalents: Mockito (Java/Kotlin), XCTest mocking features (iOS), Jest/Sinon.JS (Web/JS), Moq (.NET).
Conclusion
Writing good unit tests is an investment in quality and confidence. It's a discipline that pays for itself many times over in reduced bugs, easier maintenance, and the ability to evolve our application fearlessly. By embracing the Principle of Isolation as our core philosophy and adhering to these standards, we empower ourselves to build better software.
Key Takeaways:
- Test in Isolation: This is our guiding star. It leads to fast, reliable tests that pinpoint bugs with precision.
- Define Contracts: Program to interfaces. This is the architectural key that unlocks isolation and testability.
- Create Deterministic Errors: Use sealed classes for your `Throwable` types to make error-state testing robust and specific.
- Use Dependency Injection: Use DI to connect your components in the app and substitute mocks in your tests.
- Be Structured: Use "Given, When, Then" and JUnit annotations like `@Before`, `@Suite`, and `@Category` to organize your tests effectively.
- Know Your Toolkit: Master MockK for creating test doubles and `runTest`/Turbine for handling asynchronous code.
Happy testing!