Picture this. It is late on a Friday afternoon. You are integrating an external API — a third-party service your application depends on. You know the call might fail, but you are not sure exactly what exceptions the client library throws, and digging through their documentation is a rabbit hole you don't have time for right now.
So you take the shortcut that lives in every codebase. You wrap the call in a try block, write catch (Exception e), log the error, return false, and move on. The PR is approved. The app doesn't crash. Everyone goes home.
What nobody realises — not you, not the reviewer, not the team — is that you just introduced a silent killer into the codebase.
Not because the pattern is lazy. But because it is plausible. It looks like defensive programming. It feels like resilience. It will pass code review, pass QA, and pass every test you throw at it — right up until the moment a half-executed database transaction quietly corrupts your data, and nobody can figure out why because the logs just say "Something went wrong".
This is the Pokémon Pattern. catch (Exception e). Gotta catch 'em all.
The Illusion of Resilience
The core mistake of the Pokémon Pattern is that it treats two fundamentally different categories of problems as if they were the same thing.
The first category is expected business failures. The user typed the wrong password. The account has insufficient funds. The item is out of stock. These are not bugs. They are normal, anticipated branches of your application's logic — outcomes the business has already thought about and has rules for handling.
The second category is system panics. The database connection died mid-transaction. A third-party API returned malformed JSON. A NullPointerException was thrown halfway through processing an order. These are not business outcomes. They are the application telling you something has gone structurally wrong.
When we write catch (Exception e), we throw a blanket over both. Here is what that blanket looks like in practice:
```java
try {
    checkoutService.processOrder(order);
    return true;
} catch (Exception e) {
    logger.error("Something went wrong", e);
    return false;
}
```
This looks safe. But consider what actually happens when processOrder throws a NullPointerException halfway through — say, after deducting inventory but before recording the transaction.
If this method is itself annotated with @Transactional (so the catch block sits inside the transaction boundary), the behaviour is particularly insidious. By swallowing the exception, we signal to the framework that the method completed successfully. No exception ever crosses the transactional proxy, so Spring commits. The partial state — inventory reduced, transaction unrecorded — is now permanently written to the database. There is no rollback. There is no error. There is just quietly corrupted data, and a log file that says "Something went wrong".
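Stripped of Spring, the failure mode can be reproduced in a few lines of plain Java. Store, its fields, and processOrder are hypothetical stand-ins for the real service; the point is that the first write survives the swallowed panic:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory store simulating the two writes inside processOrder.
class Store {
    int inventory = 10;
    List<String> transactions = new ArrayList<>();

    boolean processOrder(String orderId) {
        try {
            inventory--;                          // first write succeeds
            if (orderId == null) {                // simulated mid-method panic
                throw new NullPointerException("orderId");
            }
            transactions.add(orderId);            // never reached
            return true;
        } catch (Exception e) {
            // The Pokémon Pattern: the panic is swallowed,
            // and the partial write to `inventory` survives.
            System.err.println("Something went wrong: " + e);
            return false;
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        Store store = new Store();
        boolean ok = store.processOrder(null);
        // Inventory was deducted, but no transaction was recorded.
        System.out.println(ok + " " + store.inventory + " " + store.transactions.size());
        // prints: false 9 0
    }
}
```

Run it and the order "fails" politely (ok is false) while inventory has already dropped from 10 to 9, with no matching transaction recorded.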
We didn't engineer resilience. We engineered a zombie application. It is dead, but it is still walking around.
Better Logs Don't Fix Bad Architecture
When developers realise their logs are full of useless generic errors, the instinct is usually to write better messages inside the catch block. Add more context. Log the order ID. Log the user. Make the string more descriptive.
But a better string does not stop data corruption. It just makes the corruption easier to read about after the fact.
The real problem is not the log message. It is that the catch block is in the wrong place, doing the wrong job, for the wrong reason. No amount of string interpolation fixes that.
There is a better architecture — and it starts with drawing a hard line between a failure and a panic.
Handle Failures, Let Panics Crash
1. Return Failures — Don't Throw Them
An out-of-stock item is not an exceptional circumstance. It is a standard, predictable branch of business logic. Using the throw keyword to handle it is reaching for the wrong tool.
When we throw a business exception, we create an invisible GOTO statement inside our own codebase. The method signature promises nothing about what might happen. Callers have to guess — or read the implementation — or hope the documentation is accurate.
The fix is to make the failure explicit in the method signature using a Result type. Java doesn't have one natively, but a custom wrapper or sealed interfaces achieve the same effect — the compiler forces the caller to handle the failure rather than ignore it:
```java
// Before: the signature lies — it secretly throws ItemUnavailableException
public OrderConfirmation submitOrder(User user, Cart cart) { ... }

// After: the signature is honest — the compiler forces the caller to handle it
public Result<OrderConfirmation, OrderError> submitOrder(User user, Cart cart) { ... }
```
When failures are returned as values rather than thrown as exceptions, they become part of the contract. The caller cannot ignore them. The try/catch block disappears from the domain logic entirely — not because we removed it, but because there is nothing left to catch.
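Java has no built-in Result type, but a sealed interface plus records gets the compiler on your side. This is a minimal sketch (Java 21+ for pattern matching in switch); Ok, Err, and OrderError are illustrative names, not a library API:

```java
// A minimal Result type: a sealed interface with exactly two record cases.
sealed interface Result<T, E> permits Ok, Err {}
record Ok<T, E>(T value) implements Result<T, E> {}
record Err<T, E>(E error) implements Result<T, E> {}

// Hypothetical error cases the business has already thought about.
enum OrderError { ITEM_UNAVAILABLE, INSUFFICIENT_FUNDS }

class CheckoutService {
    // The failure is part of the signature, not a hidden throw.
    Result<String, OrderError> submitOrder(String item, int stock) {
        if (stock <= 0) {
            return new Err<>(OrderError.ITEM_UNAVAILABLE);
        }
        return new Ok<>("confirmation-" + item);
    }
}

public class Demo {
    public static void main(String[] args) {
        var result = new CheckoutService().submitOrder("book", 0);
        // Exhaustive switch over the sealed hierarchy:
        // the compiler rejects the code if a case is missing.
        String message = switch (result) {
            case Ok<String, OrderError> ok -> "Confirmed: " + ok.value();
            case Err<String, OrderError> err -> "Failed: " + err.error();
        };
        System.out.println(message); // prints: Failed: ITEM_UNAVAILABLE
    }
}
```

The switch is exhaustive over the sealed hierarchy: delete the Err branch and the code stops compiling, which is exactly the pressure a thrown exception never applies to its callers.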
2. Let Panics Crash
If the database goes offline, or a variable is unexpectedly null, checkoutService has no idea how to recover from that — and it should not try. Attempting to catch and absorb a panic does not resolve it. It just allows the application to execute more code on top of a broken foundation.
Let the thread crash. Let the panic bubble up immediately, before it has a chance to touch another line of business logic. A fast, loud, localised failure is always preferable to a slow, silent, system-wide one.
3. Catch Panics at the Boundary — Nowhere Else
Letting panics bubble up does not mean users see raw stack traces. It means we catch them in exactly one place: the outer edge of the application.
```java
// The only place catch (Exception e) belongs: the absolute boundary
@ExceptionHandler(Exception.class)
public ResponseEntity<ErrorResponse> handleGlobalPanic(Exception e) {
    pagerDutyService.triggerAlarm(e);
    return new ResponseEntity<>(
            new ErrorResponse("An unexpected error occurred"),
            HttpStatus.INTERNAL_SERVER_ERROR);
}
```
This boundary catcher does three things well. It catches every unhandled panic in one predictable location. It alerts the on-call engineer immediately, with the full stack trace intact. And it returns a 500 to the caller — which is the honest and correct response. Something did go wrong on the server, and the caller deserves to know that. What we avoid is returning a cheerful 200 with a hidden error payload, which would be the HTTP equivalent of the Pokémon Pattern itself.
The catch (Exception e) pattern is not banned. It is relocated — from scattered throughout the domain logic to a single, honest, explicitly-purposed boundary.
A try/catch block is not a band-aid. It is a highly specific control flow tool — and like any tool, its value depends entirely on using it in the right place, for the right job.
When we catch generic exceptions to keep the application alive, we are making a trade we rarely intend: the illusion of uptime in exchange for the integrity of our data. We are hiding the exact stack traces we will desperately need when something goes wrong. We are teaching the system to lie about its own health.
Exceptions are for exceptional circumstances. Business rules are for business logic. Drawing a hard line between the two is not a theoretical nicety — it is what makes the difference between a system that fails loudly and honestly, and one that silently corrupts your database at 3am on a Saturday.