Most Java developers have written something like this. Maybe recently.
@Test
void should_categorize_new_transactions() {
    final var mockRepository = Mockito.mock(CategorizedTransactionRepository.class);
    final var mockMerchantDirectoryService = Mockito.mock(MerchantDirectoryService.class);
    final var categorizer = new TransactionCategorizer(mockRepository, mockMerchantDirectoryService);
    final var message = createSampleTransactionMessage();

    Mockito.when(mockMerchantDirectoryService.getCategoryForMerchant(message.mcc()))
        .thenReturn(new MerchantInfo(message.mcc(), "Transportation"));
    Mockito.when(mockRepository.findByTransactionId(message.transactionId()))
        .thenReturn(Optional.empty());
    Mockito.when(mockRepository.save(Mockito.any()))
        .thenAnswer(invocation -> {
            var ct = (CategorizedTransaction) invocation.getArgument(0);
            ct.setId(1L);
            return ct;
        });

    final var result = categorizer.categorize(message);

    Assertions.assertEquals(1L, result.getId());
    Assertions.assertEquals(message.transactionId(), result.getTransactionId());
}
A unit test for a small piece of business logic, with mocks for the things it talks to. Mockito setup, a few when().thenReturn() calls to stub behavior, an assertion at the end. It works, and it's been the default in Java testing for almost two decades. Most teams have hundreds of these tests. Some have thousands.
Here's the same test, written differently:
@Test
void should_categorize_new_transactions() {
    final var transaction = createSampleTransaction();
    final var categorizedTransactionId = new CategorizedTransactionId(1L);
    final var categorizer = TransactionCategorizer.create(
        (categorizedTransaction) -> categorizedTransaction.withId(categorizedTransactionId),
        (transactionId) -> Optional.empty(),
        (mcc) -> new ExpenseCategory("Transportation")
    );

    final var result = categorizer.categorize(transaction);

    Assertions.assertEquals(categorizedTransactionId, result.id());
    Assertions.assertEquals(transaction.transactionId(), result.transactionId());
    Assertions.assertEquals("Transportation", result.expenseCategory().value());
}
Same assertions, same correctness, no Mockito. Three lambdas where the original had three when() stubbing blocks. No @Mock, no verify(), no ArgumentCaptor. The actual behavior under test isn't buried under setup anymore; it sits in the middle of the test where it belongs.
The change isn't to the test. It's to the interface the test had to mock. That's what the rest of this article is about: where the Mockito ceremonies came from in the first place, and what kind of interface lets you delete them.
The interface that caused this
Here's the repository being mocked in the first test:
@Repository
public interface CategorizedTransactionRepository
        extends JpaRepository<CategorizedTransactionEntity, Long> {

    Optional<CategorizedTransactionEntity> findByTransactionId(String transactionId);

    List<CategorizedTransactionEntity> findByClientIdAndExpenseCategory(
        String clientId, String expenseCategory);

    @Query("""
        SELECT new fvf4j.demo.domain.CategoryBudget(t.expenseCategory, SUM(t.amount))
        FROM CategorizedTransactionEntity t
        WHERE t.clientId = :clientId
        GROUP BY t.expenseCategory
        """)
    List<CategoryBudget> findBudgetsByCategory(@Param("clientId") String clientId);

    @Query("""
        SELECT DISTINCT t.expenseCategory
        FROM CategorizedTransactionEntity t
        WHERE t.clientId = :clientId
        """)
    List<String> findDistinctExpenseCategoriesByClientId(@Param("clientId") String clientId);
}
Nothing unusual about it. Four query methods plus everything Spring Data adds through JpaRepository: save, findById, findAll, delete, count, existsById, the paging variants, the example-matching variants, the batch operations. A few dozen methods in total, most of which the application doesn't use, but they come along for the ride because they're part of the contract.
This is the interface every Java codebase has some version of. It's reasonable. It's idiomatic. Spring's documentation recommends it, the Spring Boot tutorial uses it, every JPA tutorial since 2014 has shown it. Why settle for one responsibility when you can have them all in one interface?
The problem is what consumers of this interface end up depending on. TransactionCategorizer uses exactly two methods from it: findByTransactionId and save. To compile, it depends on the entire repository interface. To test, it leans on a framework that can stand in for that whole surface, which is what Mockito is doing in the first test. The two methods the consumer actually uses are the two it stubs. The dozens it doesn't use don't show up in the test setup, but they're still there at compile time. Every method on that interface is part of the consumer's surface area, even when it's silent.
This is the Interface Segregation Principle in its most concrete form. Consumers shouldn't depend on methods they don't use. When they do, the test setup pays the cost. And the test setup is only the first place it shows.
The interface ties the consumer to JPA. It dictates the contract from the persistence side rather than the domain. And the cost I see most often in production isn't even the testing pain. It's that dependencies like this make it easy for developers to add yet another responsibility to the same service, because the wiring is already there. I've watched a four-method service become a sixteen-method one over a year because nobody had to add a new dependency to add a new feature.
The fat interface invites the fat service.
Splitting the boundary
The repository in the previous section answers a question the persistence layer is asking: "what queries can you run against this entity?" The answer is shaped by JPA's mechanics: methods named after the database access pattern, return types tied to JPA entities, query annotations embedded in the interface.
A consumer like TransactionCategorizer is asking different questions: "given a transaction, can someone tell me whether I've already categorized it, and can someone save the result?" Two questions. Two methods. Nothing about JPA, nothing about entities, nothing about the dozens of other things the repository can do.
When the consumer's question and the persistence layer's answer are mediated by the same interface, the consumer ends up depending on the answer's full surface. The fix is to give the consumer its own interface, one shaped to its question rather than to the implementation that happens to satisfy it.
Here's one of those interfaces:
@FunctionalInterface
public interface FindByTransactionId {
    Optional<CategorizedTransaction> findBy(TransactionId transactionId);
}
A single method, named after what the consumer wants to do. No JpaRepository parent, no JPA entities in the signature. The return type is the domain's CategorizedTransaction, not the persistence layer's CategorizedTransactionEntity. The interface tells you exactly what it offers and nothing else.
A few things about the naming worth pausing on. The interface is FindByTransactionId, not TransactionFinder or TransactionQueryService. It's a verb, not a noun. That's deliberate.
Noun-named interfaces describe what the type is. Verb-named interfaces describe what the type does. For functional interfaces, "what it does" is the more honest description, because there's only one thing it does. Calling it a "Service" or "Repository" or "Finder" borrows weight from those patterns to dress up what is essentially a function.
Verb names also force the question of consumer perspective. FindByTransactionId names the action the consumer wants to take. TransactionRepository names the implementation that happens to be available. The first puts the domain in charge of the contract; the second lets the persistence layer dictate it. That's the dependency-direction shift that ports-and-adapters architectures are built around, and it starts at the name.
The parameter and return types are also worth a glance. TransactionId instead of String, ExpenseCategory instead of String, CategorizedTransaction instead of an opaque entity. That's a separate architectural discipline. Making domain types unambiguous at the boundary deserves its own treatment in another post. For now, the segregation argument and the value-object argument pull in the same direction: both are about making the boundary speak the domain's language rather than the implementation's.
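To make the value-object point concrete, here's what a type like TransactionId can look like. This is a hypothetical sketch, not the class from the article's repo: a record that wraps the raw String and validates it once, at construction.

```java
// Hypothetical sketch of a domain identifier as a value object.
// Wrapping the raw String means "a transaction id" and "a client id"
// can no longer be swapped at a boundary without a compile error.
public class TransactionIdDemo {

    public record TransactionId(String value) {
        public TransactionId { // compact constructor: validate once, here
            if (value == null || value.isBlank()) {
                throw new IllegalArgumentException("transactionId must not be blank");
            }
        }
    }

    public static void main(String[] args) {
        var a = new TransactionId("tx-42");
        var b = new TransactionId("tx-42");
        System.out.println(a.equals(b)); // true: records give value equality for free
    }
}
```

Because it's a record, equality, hashCode, and toString come for free, so the wrapper costs almost nothing at the call site.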
The rest of the segregated interfaces follow the same pattern:
public interface CategorizedTransactionPorts {

    @FunctionalInterface
    interface SaveCategorizedTransaction {
        CategorizedTransaction save(CategorizedTransaction categorizedTransaction);
    }

    @FunctionalInterface
    interface FindByTransactionId {
        Optional<CategorizedTransaction> findBy(TransactionId transactionId);
    }

    @FunctionalInterface
    interface FindByClientIdAndExpenseCategory {
        List<CategorizedTransaction> findBy(ClientId clientId, ExpenseCategory expenseCategory);
    }

    @FunctionalInterface
    interface FindBudgetsByCategory {
        List<CategoryBudget> findBudgetsByCategory(ClientId clientId);
    }

    @FunctionalInterface
    interface FindExpenseCategoriesByClient {
        List<ExpenseCategory> findExpenseCategoriesBy(ClientId clientId);
    }
}
Five interfaces where the original repository had one. Each is a single method. Each is named after the action the consumer takes, not the persistence pattern that satisfies it. None of them inherit from anything Spring-related; the dependency on JPA stays on the implementation side, which is where it belongs.
Worth noting: CategorizedTransactionPorts itself isn't a port. It's an outer interface used as a namespace, grouping ports that belong to the same boundary. Java doesn't allow multiple public top-level types in one file, and a class with a private constructor would feel heavier than this needs to be. The outer interface is just a logical container. CategorizedTransactionPorts.FindByTransactionId reads as "the find-by-transaction-id port that belongs to the categorized-transaction boundary," which is the actual relationship.
The TransactionCategorizer now declares dependencies only on the ports it actually uses: SaveCategorizedTransaction and FindByTransactionId. The other three exist for other consumers. None of them know or care about each other.
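For readers who want the consumer's shape spelled out, here's a self-contained sketch of a categorizer built from ports. Every type here is a simplified stand-in and the categorize logic is reduced to the minimum needed to show the ports in play; the real TransactionCategorizer in the repo differs in detail.

```java
import java.util.Optional;

// Simplified, compilable sketch of the consumer side. All names beyond
// the article's ports (ResolveExpenseCategory, the record shapes) are
// assumptions for illustration, not the repo's actual code.
public class CategorizerSketch {

    record TransactionId(String value) {}
    record ExpenseCategory(String value) {}
    record CategorizedTransactionId(long value) {}
    record Transaction(TransactionId transactionId, String mcc) {}
    record CategorizedTransaction(CategorizedTransactionId id,
                                  TransactionId transactionId,
                                  ExpenseCategory expenseCategory) {
        CategorizedTransaction withId(CategorizedTransactionId newId) {
            return new CategorizedTransaction(newId, transactionId, expenseCategory);
        }
    }

    @FunctionalInterface
    interface SaveCategorizedTransaction {
        CategorizedTransaction save(CategorizedTransaction ct);
    }

    @FunctionalInterface
    interface FindByTransactionId {
        Optional<CategorizedTransaction> findBy(TransactionId id);
    }

    @FunctionalInterface
    interface ResolveExpenseCategory { // name assumed for the merchant-category port
        ExpenseCategory resolve(String mcc);
    }

    // The consumer names exactly the ports it uses: nothing more.
    static CategorizedTransaction categorize(Transaction t,
                                             SaveCategorizedTransaction save,
                                             FindByTransactionId find,
                                             ResolveExpenseCategory resolve) {
        return find.findBy(t.transactionId())
            .orElseGet(() -> save.save(new CategorizedTransaction(
                null, t.transactionId(), resolve.resolve(t.mcc()))));
    }

    public static void main(String[] args) {
        var t = new Transaction(new TransactionId("tx-42"), "4111");
        var result = categorize(t,
            ct -> ct.withId(new CategorizedTransactionId(1L)), // stub save
            id -> Optional.empty(),                            // nothing categorized yet
            mcc -> new ExpenseCategory("Transportation"));     // stub lookup
        System.out.println(result.id().value());              // 1
        System.out.println(result.expenseCategory().value()); // Transportation
    }
}
```

The main method is already the second test from the top of the article in miniature: three lambdas, three ports, no framework.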
Where the implementations live
A common concern when splitting fat interfaces into many functional interfaces is that the number of implementation classes will explode. It doesn't. A single adapter can implement multiple ports, and usually does:
public class CategorizedTransactionRepositoryAdapter implements
        SaveCategorizedTransaction,
        FindByClientIdAndExpenseCategory,
        FindByTransactionId,
        FindExpenseCategoriesByClient,
        FindBudgetsByCategory {

    private final CategorizedTransactionJpaRepository jpaRepository;

    public CategorizedTransactionRepositoryAdapter(CategorizedTransactionJpaRepository jpaRepository) {
        this.jpaRepository = jpaRepository;
    }

    @Override
    public CategorizedTransaction save(CategorizedTransaction categorizedTransaction) {
        return jpaRepository
            .save(CategorizedTransactionEntity.valueOf(categorizedTransaction))
            .toDomain();
    }

    @Override
    public List<CategorizedTransaction> findBy(ClientId clientId, ExpenseCategory expenseCategory) { /* ... */ }

    @Override
    public Optional<CategorizedTransaction> findBy(TransactionId transactionId) { /* ... */ }

    @Override
    public List<ExpenseCategory> findExpenseCategoriesBy(ClientId clientId) { /* ... */ }

    @Override
    public List<CategoryBudget> findBudgetsByCategory(ClientId clientId) { /* ... */ }
}
One adapter, five ports. The class delegates each method to the original Spring Data repository, now renamed CategorizedTransactionJpaRepository to make its role explicit. The fat interface didn't disappear in the refactor; it got demoted to an implementation detail. The adapter also handles translation between persistence-side and domain-side types. CategorizedTransactionEntity from the JPA layer becomes CategorizedTransaction in the domain, and the mapping is the adapter's responsibility, which keeps it out of the consumer's signatures.
Spring's wiring is straightforward. The adapter is registered as a bean that implements every port interface. Anywhere a domain consumer asks for a FindByTransactionId or a SaveCategorizedTransaction, Spring resolves it to this adapter because the adapter implements the port. The consumer doesn't know whether it's getting a Spring bean, a test lambda, or something else, and that's the point.
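The mechanism doesn't need Spring to demonstrate. Here's a hedged sketch with stand-in types: the same adapter instance is assignable to each port type it implements, and that assignability is all the container's resolution relies on.

```java
import java.util.Optional;

// Stand-in types only; this shows the language mechanism behind the
// Spring wiring, not the repo's actual adapter.
public class AdapterDemo {

    record TransactionId(String value) {}
    record CategorizedTransaction(TransactionId transactionId) {}

    @FunctionalInterface
    interface SaveCategorizedTransaction {
        CategorizedTransaction save(CategorizedTransaction ct);
    }

    @FunctionalInterface
    interface FindByTransactionId {
        Optional<CategorizedTransaction> findBy(TransactionId id);
    }

    // One class, two ports: the same shape as the article's adapter.
    static class Adapter implements SaveCategorizedTransaction, FindByTransactionId {
        public CategorizedTransaction save(CategorizedTransaction ct) { return ct; }
        public Optional<CategorizedTransaction> findBy(TransactionId id) { return Optional.empty(); }
    }

    public static void main(String[] args) {
        var adapter = new Adapter();
        // Each consumer sees only the port type it asked for.
        SaveCategorizedTransaction save = adapter;
        FindByTransactionId find = adapter;
        System.out.println(save == find); // true: one instance behind both ports
    }
}
```

A Spring consumer that declares a constructor parameter of type FindByTransactionId gets this same assignment done for it by the container.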
The tradeoff is real, even if the class explosion isn't. More small contracts means more names to know, more imports, more places a method might live. Whether that's a net win depends on what your team struggles with more: oversized dependencies that drag too much into every test, or navigating a wider surface of small types. For long-lived domains with multiple consumers, the segregation usually pays off. For a three-controller CRUD app, it probably doesn't.
Why the second test works
The segregated ports are all single-method interfaces. That means anywhere the consumer expects a port, you can pass a lambda. The consumer doesn't know the difference. As far as TransactionCategorizer is concerned, it received an implementation of SaveCategorizedTransaction and is calling it. Whether that implementation is a Spring bean wired up at runtime or a one-line lambda passed by a test, the call site looks the same.
That's the property the test was using:
final var categorizer = TransactionCategorizer.create(
    (categorizedTransaction) -> categorizedTransaction.withId(categorizedTransactionId),
    (transactionId) -> Optional.empty(),
    (mcc) -> new ExpenseCategory("Transportation")
);
Three lambdas, three ports. Each lambda satisfies the contract of one port. Their order matches the order of the constructor parameters, which means a reader doesn't even need names to follow what's happening. The first lambda is the save behavior. The second is the find. The third is the merchant category lookup.
Compare what the Mockito version had to do. The most awkward part was simulating the database's "assign an id on save" behavior:
Mockito.when(mockRepository.save(Mockito.any()))
    .thenAnswer(invocation -> {
        var ct = (CategorizedTransaction) invocation.getArgument(0);
        ct.setId(1L);
        return ct;
    });
Capture the argument, cast it, mutate it, return it. Five lines of dance to express what the lambda version expressed in one:
(categorizedTransaction) -> categorizedTransaction.withId(categorizedTransactionId)
Take the input, return a copy with an id assigned. No casting, no capture, no framework.
Worth being precise about what these lambdas are. They're test doubles — specifically, stubs. They return canned values for the test's purposes, nothing more. The article's argument is narrower than "stop using Mockito": it's that when ports are narrow, lambdas are a better tool than a framework for stubbing. Verification is a different question. If a test genuinely needs to assert that a method was called, with which arguments, in which order, then a mocking framework's verify(...) is the right tool — better than hand-rolling countdown latches and counters. Most tests don't need that. The ones that do still benefit from segregated ports, because the verification is scoped to a single port instead of a fat repository.
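That said, for light, single-threaded cases, a narrow port also makes a hand-rolled recording stub nearly a one-liner if you'd rather stay framework-free. A sketch with simplified stand-in types:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types; this illustrates lightweight call recording against a
// narrow port, not the repo's actual test code.
public class RecordingStubDemo {

    record CategorizedTransaction(String transactionId) {}

    @FunctionalInterface
    interface SaveCategorizedTransaction {
        CategorizedTransaction save(CategorizedTransaction ct);
    }

    public static void main(String[] args) {
        List<CategorizedTransaction> savedCalls = new ArrayList<>();
        SaveCategorizedTransaction save = ct -> {
            savedCalls.add(ct); // record the call for later assertions
            return ct;
        };

        save.save(new CategorizedTransaction("tx-42"));

        System.out.println(savedCalls.size());                 // 1
        System.out.println(savedCalls.get(0).transactionId()); // tx-42
    }
}
```

Because the port has exactly one method, recording its calls means wrapping exactly one lambda; with a fat repository you'd be recording dozens of methods or reaching for the framework anyway.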
The lambda version uses result.id() and withId(...) instead of result.getId() and setId(...). CategorizedTransaction is no longer a mutable JPA entity; it's an immutable record. That's a real change in the consumer code, and it's a consequence of the boundary fix. When persistence-side types stopped leaking into the consumer's signatures, the consumer was free to use a domain type that didn't need to be a JPA entity.
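The copy semantics are worth seeing in isolation. A sketch with simplified stand-in types (the repo's actual record has more fields): withId returns a new instance, and the original is untouched.

```java
// Stand-in record; the point is the immutable "copy with an id" shape.
public class WithIdDemo {

    record CategorizedTransactionId(long value) {}

    record CategorizedTransaction(CategorizedTransactionId id, String transactionId) {
        CategorizedTransaction withId(CategorizedTransactionId newId) {
            return new CategorizedTransaction(newId, transactionId);
        }
    }

    public static void main(String[] args) {
        var original = new CategorizedTransaction(null, "tx-42");
        var saved = original.withId(new CategorizedTransactionId(1L));
        System.out.println(saved.id().value()); // 1
        System.out.println(original.id());      // null: original unchanged
    }
}
```

Mockito's thenAnswer had to mutate the argument because the entity was mutable; with the record there's nothing to mutate, so the stub is a pure function.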
The test gets shorter and the consumer code gets cleaner, but neither was the goal. Both came out of giving the boundary the right shape.
And in Kotlin
If your codebase is Kotlin, the same pattern gets a small set of upgrades worth knowing about.
fun interface replaces @FunctionalInterface and reads as a more honest declaration. Kotlin calls these SAM types, for "single abstract method." The compiler enforces the same constraint either way, but the keyword tells the reader at a glance that this type exists to be passed as a function.
fun interface SaveCategorizedTransaction {
    operator fun invoke(transaction: CategorizedTransaction): CategorizedTransaction
}
Two things changed. The interface is fun interface, and the method is named invoke and marked operator. The first is just nicer syntax. The second is the more interesting choice: any object with an operator fun invoke can be called like a function. So at the consumer's call site, the port reads as a plain function call:
val saved = saveCategorizedTransaction(transaction)
Not saveCategorizedTransaction.save(transaction), not saveCategorizedTransaction.execute(transaction). Just the call. Whether the implementation is a class, a lambda, or a method reference, the call site looks identical. The fact that there's an interface in the middle becomes invisible.
There's also a small organizational difference. The Java version grouped its ports inside an outer CategorizedTransactionPorts interface as a namespace. That was a workaround for a real Java constraint: one public top-level type per file. To keep five related ports visible together, the Java version needs either a containing type or five separate files sitting next to each other in a package.
Kotlin doesn't have that constraint. A single file can declare multiple top-level public types, so all five ports live together in one file with no scaffolding:
package fvf4k.demo.domain.spi
fun interface SaveCategorizedTransaction { /* ... */ }
fun interface FindByTransactionId { /* ... */ }
fun interface FindByClientIdAndExpenseCategory { /* ... */ }
fun interface FindBudgetsByCategory { /* ... */ }
fun interface FindExpenseCategoriesByClient { /* ... */ }
In tests this means even less ceremony. Trailing lambda syntax shrinks the construction further:
val categorizer = TransactionCategorizerService(
    findByTransactionId = { Optional.empty() },
    saveTransaction = { it.copy(id = testId) },
    resolveExpenseCategory = { ExpenseCategory("Transportation") }
)
Named parameters, lambdas with implicit it, no (parameter) -> boilerplate. The parameter names match the port names: findByTransactionId, saveTransaction, resolveExpenseCategory. The verb-naming discipline from earlier carries through to the construction site. The same test in Kotlin is a few characters shorter than the Java version, but the deeper win is conceptual: at every call site and every test site, the port is indistinguishable from a function, because at that point it is one.
Wrapping up
The article's hook was Mockito ceremony, and the immediate payoff is real: a test that doesn't need a mocking framework, a consumer that depends only on what it actually uses, a boundary that speaks the domain's language. Those are the practical wins. They're worth having on their own.
The architectural win sitting underneath is the more durable one. Naming ports after the action a consumer wants to take, instead of after the type that happens to satisfy them, flips the dependency direction. The persistence layer stops dictating the consumer's contract; the consumer dictates its own. Functional interfaces and lambdas are the small mechanism that makes this practical in a language with classes, but the choice that matters is the one made at the name. FindByTransactionId and TransactionRepository declare different things about who's in charge of the boundary, and by extension, of the design.
Frameworks are tools. They shouldn't be the ones writing your architecture.
None of this is new. Interface Segregation has been the I in SOLID for a long time. What's worth seeing here is the principle at its limit: not just smaller interfaces, but segregation taken seriously enough to shape the boundary itself.
There's a fuller version of this argument in the project the article's examples come from. It's a refactor of the same domain done twice, in Java and Kotlin, with hexagonal architecture, ArchUnit-enforced layer rules, value objects, and explicit failure handling. The Kotlin version uses Arrow's Raise for typed errors, which is its own architectural beat and a topic I want to come back to.
The code comes from my tibtof/fun-vs-framework repository on GitHub, a demo project for the talk Own Your Design: Functional Principles vs the Framework. It refactors a Spring Boot application from a layered architecture to a hexagonal one: the main branch contains the refactored solution, and there are separate branches for each architectural approach, layered and hexagonal, with a PR that highlights the changes.
The talk this code supports, Own Your Design: Functional Principles vs the Framework, walks through the full refactor with slides and animated examples. The recording and slides are at https://tibtof.dev/own-your-design/.
If you've worked on a Java or Kotlin codebase where a small number of methods on a fat repository ended up costing you a disproportionate amount of test setup, I'd be curious to hear about it in the comments. Most of us have the same scars. What worked for your team?