OK Smart Guy, Why Should I Read This?
I'm going to show you a few techniques that will reduce the number of unit tests you have to write, leverage your compiler to "prove" aspects of your domain rather than wait for runtime validation, make your code easier to read and reason about, and increase refactoring safety and speed as requirements change. This is a practical guide, so we'll do our best to avoid diving into theory.
What Do We Mean By Unrepresentable?
By “unrepresentable” we mean that the type system literally makes it impossible to express an invalid state in code — the compiler won’t let you construct or pass around values that violate your domain rules, so errors that would otherwise show up at runtime can’t even be constructed.
Very Brief Background
ML-style languages such as F# and Haskell (more commonly known as "functional programming" languages) have for many years enjoyed a set of features that makes them particularly useful for tasks such as compiler writing, automated theorem proving, and formal verification. The same qualities that make ML languages good at modeling abstract formal systems (logic, grammars, proofs) also make them good at modeling messy real-world systems (business rules, workflows, contracts). Both of these problem domains benefit from precision, explicitness, and guaranteed invariants.
Many of the features that define ML languages are making their way into more traditional object-oriented languages, rendering those languages true OO/FP language hybrids.
How Does Kotlin Stack Up?
While Kotlin doesn't have all the features of ML languages, it has enough of them to get us about 90% of the way toward our goal. If you adopt a small set of Kotlin FP conventions, you'll capture the same essence that makes ML languages great for compilers and theorem provers: precise representations and trusted invariants. You won't get all the static guarantees an ML or dependently typed language can offer, but for business domains, Kotlin + Arrow (the definitive FP library for Kotlin) delivers a sweet spot of rigor, ergonomics, and platform reach.
Kotlin 2.4 will bring us even closer
https://github.com/Kotlin/KEEP/blob/main/proposals/KEEP-0441-rich-errors-motivation.md
Rather than list all the Kotlin features we'll leverage, I created a cheatsheet of features to reach for when following the recommendations outlined here: https://github.com/edreyer/safe-domain/blob/main/docs/Kotlin-ML-style-cheatsheet.pdf
Let's Dive In
To illustrate, we'll use a familiar (hopefully) but simplified payment domain. Here's how we might model it using traditional techniques:
data class Payment(
val amount: BigDecimal,
val method: PaymentMethod,
val status: PaymentStatus,
val paidDate: Date?,
val voidDate: Date?,
val refundDate: Date?,
)
enum class PaymentStatus {
PENDING,
PAID,
VOIDED,
REFUNDED
}
open class PaymentMethod
object Cash : PaymentMethod()
data class CreditCard(
val number: String,
val expiryMonth: Int,
val expiryYear: Int,
val cvv: String
) : PaymentMethod()
data class Check(
val routingNumber: String,
val accountNumber: String
) : PaymentMethod()
Here's a list of potential downsides to using "dumb" objects like this to model your domain:
- Invalid business states are representable (e.g., paid without paidDate, voided with a paidDate, negative amounts), shifting bugs to runtime
- Heavy reliance on runtime validation and defensive checks scattered across the codebase, increasing complexity and duplication
- State transitions require exhaustive runtime guards; a missing case compiles fine but fails at runtime
- Mutable, nullable fields create temporal coupling and concurrency pitfalls (e.g., races between setting status and dates)
- The testing burden explodes to cover impossible states, boundary cases, and exception scenarios the type system could have prevented
- Refactoring is risky and the compiler offers little guidance (e.g., adding a new subtype to an open class can silently miss branches); missing cases and invalid transitions are discovered only by tests or in production
- Validation rules are hard to centralize; changes require hunting through many call sites and conditionals
- Error accumulation and messaging are poor compared to typed, composable errors, making actionable feedback harder to provide
- Performance suffers from continual runtime checks, defensive copies, exception creation, and branch-heavy code paths
- Data consistency is fragile; it's easy to construct objects that violate domain invariants and persist or report on bad data
- Reasoning and code review are harder: readers must mentally reconstruct invariants, the valid lifecycle, and the required ordering of calls from imperative checks
- Feature evolution costs more (new states and fields require widespread updates and re-validation logic) and is riskier
- Operational debugging is slower: stack traces point to symptoms, not the invariant that should have been impossible
- Composing logic safely is hard; combining validations and workflows tends to produce nested conditionals rather than declarative pipelines
Wow, that's quite a list. Here are a few code examples to illustrate some of these.
These compile fine but violate our business invariants; each would cause problems at runtime:
// All of these are valid at compile time but logically impossible
Payment(amount = BigDecimal("-100"), method = card, status = PAID, paidDate = null, voidDate = null, refundDate = null) // Negative payment
Payment(amount = BigDecimal("100"), method = card, status = PAID, paidDate = null, voidDate = null, refundDate = null) // Paid but no date
Payment(amount = BigDecimal("100"), method = card, status = VOIDED, paidDate = Date(), voidDate = null, refundDate = null) // Voided but has paid date
Payment(amount = BigDecimal("100"), method = card, status = PENDING, paidDate = null, voidDate = Date(), refundDate = null) // Pending but voided
Defensive validation galore
How many of your business methods contain input validation checks just to make sure the data you were given is valid?
fun processPayment(cardNumber: String, month: Int, year: Int, cvv: String): PaymentMethod {
// Extensive runtime validation required
if (cardNumber.isBlank()) throw ValidationException("Card number required")
if (month < 1 || month > 12) throw ValidationException("Invalid month")
if (year < LocalDate.now().year) throw ValidationException("Invalid year")
if (cvv.length < 3 || cvv.length > 4) throw ValidationException("Invalid CVV")
if (!cardNumber.all { it.isDigit() }) throw ValidationException("Card number must be digits")
if (!isValidLuhn(cardNumber)) throw ValidationException("Invalid card number")
// ... 50 more lines of validation
return CreditCard(cardNumber, month, year, cvv)
}
Testing burden is very high and tends to be defensive
Because validation of inputs tends to be distributed across any number of business logic methods, the testing burden is very high. At no point can you trust that a business object is in an internally consistent state because the types themselves offer no guarantees.
class TraditionalPaymentServiceTest {
// 15+ test cases needed to cover all edge cases and error conditions
@Test fun `handles empty card number`()
@Test fun `handles invalid card number format`()
@Test fun `handles negative amounts`()
@Test fun `handles zero amounts`()
@Test fun `validates month boundaries`()
@Test fun `prevents invalid state transitions`()
@Test fun `accumulates multiple validation errors`()
@Test fun `handles concurrent modifications`()
// ... and many more defensive tests
}
So how do we address these shortcomings? Here's the list of techniques we use in Kotlin, with examples to follow:
- Sealed interfaces and algebraic data types (ADTs) to model payment states, making impossible states unrepresentable and enabling exhaustive `when` checks
- Smart, private constructors and companion factory methods to enforce invariants at creation time (no "invalid" instances)
- Inline value classes (`@JvmInline`) for refined domain primitives (for example, positive amounts, Luhn-validated card numbers) with zero runtime allocation overhead
- Type-directed state transitions: function signatures that accept only valid source states and return valid target states (e.g., `Pending → Paid`), eliminating runtime guards
- Immutable data structures (`val` properties, no mutable state) to remove temporal coupling and mutation-related bugs
- Functional error handling with Arrow's `Either` to model success/failure explicitly instead of throwing exceptions
- Error accumulation via Arrow combinators (e.g., `zipOrAccumulate`) so all validation errors are collected and reported together
- Context parameters (`context(_: Raise<…>)`) to thread validation/error-raising capabilities without plumbing parameters through every call
- Domain-specific refined types (e.g., `PositiveBigDecimal`, `MOD10String`) to encode business rules in types rather than ad-hoc conditionals
- Non-nullability by design: avoiding nullable fields and encoding optionality/variants in ADTs instead of null checks
- Extension functions for declarative, reusable validation and construction logic close to the types they refine
- Exhaustive `when` expressions over sealed hierarchies to force handling of all cases at compile time during refactors
- Pure, side-effect-free domain functions that are easy to compose, test, and reason about, letting you focus on business logic without the boilerplate validation
- Clear separation of construction/validation from usage, centralizing invariants and reducing scattered defensive checks
- Compiler-guided refactoring: adding a new state or rule surfaces compile-time errors at all affected call sites
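To make the sealed-hierarchy point concrete before we dive into the payment domain, here is a minimal, hypothetical sketch (the names are illustrative, not from the article's repo) showing how an exhaustive `when` turns a missing case into a compile error:

```kotlin
// Hypothetical sealed hierarchy of payment states (names are illustrative)
sealed interface PaymentState
data object Pending : PaymentState
data object Paid : PaymentState
data object Voided : PaymentState

// Because PaymentState is sealed, this `when` must cover every subtype.
// Add a new state (e.g., Refunded) and this fails to compile until the new
// case is handled, at every affected call site. No `else`, no forgotten cases.
fun describe(state: PaymentState): String = when (state) {
    Pending -> "awaiting payment"
    Paid -> "completed"
    Voided -> "cancelled"
}
```

The absence of an `else` branch is the point: the compiler, not a test suite, finds every spot that needs updating when the hierarchy grows.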
A Modelling Guide
The goal is more type safety. Move as many of your business invariants into the type system as possible; this gives you instant compile-time feedback when you break a business rule.
Thus, when you have an instance of a type (e.g., `Payment`), it is guaranteed to be valid. This moves all possible data errors upstream to the point of construction, so that values are always valid as they are passed around in the business logic.
Start by creating a library of basic simple types based on `value class`. Here's an example:
// Base type for all domain errors
sealed interface ValidationError {
val message: String
}
// a non-empty list of errors. Nel from Arrow enforces at least one error
typealias ValidationErrors = Nel<ValidationError>
@JvmInline
// Represents a String comprised ONLY of numerical digits
value class DigitString private constructor(val value: String) {
sealed interface DigitStringError : ValidationError
data class NonDigitError(val fieldName: String) : DigitStringError {
override val message: String = "$fieldName must contain only digits (spaces allowed)"
}
data class ExceedsMaxLength(val fieldName: String, val maxLength: Int) : DigitStringError {
override val message: String = "$fieldName must be at most $maxLength digits"
}
data class BelowMinLength(val fieldName: String, val minLength: Int) : DigitStringError {
override val message: String = "$fieldName must be at least $minLength digits"
}
companion object {
context(raise: Raise<ValidationErrors>)
fun String.toDigitString(
fieldName: String,
maxLength: Int? = null,
minLength: Int? = null
): DigitString {
// adjust "normalization" to suit your domain
val trimmed = this.trim()
val normalized = trimmed.replace(" ", "")
val allDigits = normalized.all { it.isDigit() }
return raise.zipOrAccumulate(
{ ensure(allDigits) { NonDigitError(fieldName) } },
{ ensure(normalized.isNotEmpty()) { NonDigitError(fieldName) } },
{ if (maxLength != null) ensure(normalized.length <= maxLength) { ExceedsMaxLength(fieldName, maxLength) } },
{ if (minLength != null) ensure(normalized.length >= minLength) { BelowMinLength(fieldName, minLength) } },
) { _, _, _, _ -> DigitString(normalized) }
}
}
}
A few key points about this implementation:
- `context(raise: Raise<ValidationErrors>)` combines Kotlin's new context parameters feature with Arrow's `Raise` concept for type-safe error handling. This keeps the method signature focused on the happy path; functional error handling is still there, just not "in the way" with boilerplate that increases cognitive load. Note: context parameters are an experimental feature. See https://github.com/Kotlin/KEEP/blob/context-parameters/proposals/context-parameters.md
- `raise.zipOrAccumulate` is a DSL that evaluates multiple validation rules, accumulating failures as they are evaluated. If all pass, a final lambda is evaluated, which in this case produces our happy-path `DigitString` instance. If one or more errors occur, the accumulated list is surfaced through the `Raise` context (for example, as an `Either.Left` when the caller uses Arrow's `either { }` builder) and handled at a higher level in the stack. Note that this is similar to exception handling, but rather than blowing the stack the way exceptions do, errors are returned up the call chain to the point where they are handled. This is far safer, particularly when refactoring relocates methods.
- `fun String.toDigitString(...)`: our companion "factory" function is conveniently expressed as an extension function.
- The errors that can occur during validation are specific to this type, but all extend `ValidationError`, making accumulation easier, particularly when composing simple types like this into higher-order types.
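If you haven't used Arrow before, the accumulation mechanics can be demystified with a tiny dependency-free sketch. This is a hypothetical, simplified two-argument combinator, not Arrow's actual API (Arrow's real version integrates with `Raise` and supports many arities):

```kotlin
// Minimal stand-in for Either/Validated, for illustration only
sealed interface Validated<out E, out A>
data class Invalid<E>(val errors: List<E>) : Validated<E, Nothing>
data class Valid<A>(val value: A) : Validated<Nothing, A>

// Accumulates errors from BOTH arguments instead of failing fast on the first
fun <E, A, B, C> zipOrAccumulate(
    a: Validated<E, A>,
    b: Validated<E, B>,
    combine: (A, B) -> C
): Validated<E, C> {
    val errors = buildList<E> {
        if (a is Invalid) addAll(a.errors)
        if (b is Invalid) addAll(b.errors)
    }
    return if (a is Valid && b is Valid) Valid(combine(a.value, b.value))
    else Invalid(errors)
}
```

The key contrast with exceptions (and with fail-fast `Either` chains) is that every rule runs, so the caller gets the complete list of problems in one pass.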
You will probably create a handful of types like this for your own domain: `PositiveInt`, `NonEmptyString`, `FutureDate`, etc. Once you've done that, you can define types with much stronger guarantees around the validity of their internal state.
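To give a flavor of such a library type, here is a minimal `PositiveInt` sketch. To keep it dependency-free, it returns a nullable instead of using `Raise`; that's a deliberate simplification of the pattern shown above, not the article's recommended error handling:

```kotlin
@JvmInline
value class PositiveInt private constructor(val value: Int) {
    companion object {
        // Smart constructor: the ONLY way to obtain an instance, so every
        // PositiveInt in the system is guaranteed > 0. Returns null on
        // invalid input (the full pattern would raise a typed error instead).
        fun of(value: Int): PositiveInt? =
            if (value > 0) PositiveInt(value) else null
    }
}
```

Because the constructor is private, no code path can produce a `PositiveInt` holding zero or a negative number, so downstream functions never need to re-check.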
A Higher Order Payment
sealed interface PaymentMethod
object Cash : PaymentMethod
data class CreditCard(
val number: MOD10String,
val expiryDate: ExpiryDate,
val cvv: DigitString
) : PaymentMethod {
companion object {
// Context-based factory (raises errors immediately)
context(raise: Raise<ValidationErrors>)
fun create(
number: String,
expiryMonth: Int,
expiryYear: Int,
cvv: String
): CreditCard {
return raise.zipOrAccumulate(
{ number.toMOD10String("card number", 19) },
{ ExpiryDate.toExpiryDate("expiry date", expiryMonth, expiryYear) },
{ cvv.toDigitString("CVV", maxLength = 4, minLength = 3) }
) { cardNumber, expiry, cvvCode ->
CreditCard(cardNumber, expiry, cvvCode)
}
}
}
}
Read this class carefully. It looks very similar to the more basic types it's composed of. Look at the `create()` method: it takes basic types (`Int`, `String`) as inputs and returns a valid `CreditCard` instance if they are all valid. The basic types are all converted into our safe types, which have much stronger guarantees:
data class CreditCard(
val number: MOD10String,
val expiryDate: ExpiryDate,
val cvv: DigitString
) : PaymentMethod { ... }
This class definition encodes as many data validation rules into the type system as possible. If you have an instance of this type, it is all but guaranteed to be valid. This vastly reduces the runtime checks you must perform (and then write tests for), allowing you to focus on the business logic at hand.
Take this method for example:
// Only accepts PendingPayment, returns PaidPayment - impossible to pass wrong type
fun transitionToPaid(pending: PendingPayment): PaidPayment {
// <business logic to process the payment goes here>
return PaidPayment(pending.amount, pending.method, Date())
// No runtime checks needed - compiler guarantees correctness
}
It is impossible to pass in a Payment with the wrong status.
Compare that to this signature:
fun transitionToPaid(payment: Payment): Payment {
    // validate the input is in the correct state;
    // validation failures result in a thrown exception
    require(payment.status == PaymentStatus.PENDING) { "Payment must be pending" }
    // <business logic to process the payment goes here>
    return payment.copy(
        status = PaymentStatus.PAID,
        paidDate = Date() // hopefully you didn't forget this
    )
}
Now go write a bunch of unit and integration tests to make sure you never pass in data in the wrong state, and that you correctly set all the output data. Hopefully no one ever breaks that logic in the future. Hopefully you covered all the cases with tests.
With our safe example above, it's impossible to pass in a payment in an incorrect state, and impossible to construct a `PaidPayment` without all the required data. The compiler enforces both. No tests are required.
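For reference, here is a minimal self-contained sketch of the state types the safe signature assumes. The field types are simplified (plain `BigDecimal` and `Date` rather than the refined types built earlier), so treat it as an illustration of the shape, not the repo's actual model:

```kotlin
import java.math.BigDecimal
import java.util.Date

sealed interface PaymentMethod
data object Cash : PaymentMethod

// One type per lifecycle state; shared fields live on the sealed interface
sealed interface Payment {
    val amount: BigDecimal
    val method: PaymentMethod
}

data class PendingPayment(
    override val amount: BigDecimal,
    override val method: PaymentMethod
) : Payment

data class PaidPayment(
    override val amount: BigDecimal,
    override val method: PaymentMethod,
    val paidDate: Date // required: a "paid without paidDate" state cannot exist
) : Payment

// Only accepts PendingPayment; the compiler rejects every other state
fun transitionToPaid(pending: PendingPayment): PaidPayment =
    PaidPayment(pending.amount, pending.method, Date())
```

Note there is no nullable `paidDate` anywhere: the date exists exactly when the state says it should, by construction.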
If this hasn't happened already, are you starting to see the benefits of this approach?
Refactoring Confidence at Scale
| Refactoring Task | Traditional | Safe | Confidence Level |
|---|---|---|---|
| Add new payment state | Runtime discovery of issues | Compile-time enforcement | 95% → 100% |
| Change field types | grep + hope + pray | Type system guides changes | 60% → 100% |
| Rename domain concepts | Text search/replace | IDE refactoring works perfectly | 70% → 100% |
| Extract validation logic | Manual verification needed | Types guarantee correctness | 50% → 100% |
Refactoring at scale feels safe because the type system aggressively encodes domain invariants and state machines, so invalid states and missing cases become compile-time errors instead of latent runtime bugs. Sealed hierarchies with exhaustive when checks, refined value types, and smart constructors turn “did we remember every branch?” into “the compiler won’t let us forget.” Type-directed APIs for state transitions constrain call sites to valid flows, reducing the blast radius of changes, while IDE-assisted refactors (renames, moves, signature changes) are reliable because usages are strongly typed and discoverable. This shifts confidence from test-only assurance to structural guarantees, letting tests focus on business behavior rather than defensive edges. In practice, you combine these compile-time guarantees with a slim, high-signal test suite and gradual rollout strategies, yielding both speed and safety as the codebase evolves. The result is predictable change: fewer surprises, faster iteration, and measurable reductions in production regressions.
In Conclusion
The safe approach moves business invariants from scattered runtime checks into the type system, making impossible states unrepresentable and state transitions explicit and compiler-verified. With refined domain types and smart constructors, validation is centralized and composable; functional error handling cleanly accumulates and reports issues without exceptions. The payoff is substantial: fewer tests focused on defensive edges, simpler and more readable code, better performance from fewer branches and allocations, and dramatically higher confidence when refactoring—exhaustive checks and type-directed APIs surface every required change at compile time. Teams ship faster with fewer regressions, onboard more easily, and spend their time on business logic rather than guarding against invalid inputs and states, all while integrating smoothly with existing frameworks and tooling.
Disclaimer: I did use AI to help me focus the examples outlined here, and even to write some of the example code. My team and I have used these techniques in production code; we have enjoyed the benefits of these approaches and can vouch for their efficacy and the dividends they pay. Do your future self a favor and invest the time it takes to learn this.
Here's the repo where I have the sample code and tests. The README is far more comprehensive and expands on what I've written here.
https://github.com/edreyer/safe-domain