Most Java developers today can explain encapsulation. They will tell you it means making fields private and adding getters and setters. They can recite SOLID principles on demand. They know the vocabulary.
What most of them have never experienced is what genuine object-oriented design actually feels like in practice — and that is the real problem.
Object-oriented principles did not disappear because of technology hype or the pace of change. They were never properly learned. A generation of developers was trained on frameworks, not on design. They learned Spring before they understood objects. They learned dependency injection before they understood responsibility. They learned how to make things work before they understood how to structure things well.
The result is an industry where object-oriented vocabulary is used to justify procedural habits. The Interface Segregation Principle — which is fundamentally about clients not being forced to depend on methods they do not use, so that responsibilities stay separate and coherent — gets applied as a mechanical rule for how to slice Spring interfaces. Encapsulation becomes a checkbox: private fields, public getters, done. The deeper meaning, and the profound practical value behind it, is lost entirely.
What dominates instead is procedural programming in disguise. Fat service classes orchestrate anemic data bags. Logic is scattered across layers. Objects exist to hold data, not to own behavior. The goal is implementation — make it work, ship it — not design. Not structure. Not a system that remains small, simple, robust, and maintainable as it grows.
This article is about what encapsulation actually means, what it actually does, and why practicing it properly changes both the software you build and the way you think about building it.
What Encapsulation Really Means
Encapsulation means that the "how" stays completely inside the object. Clients see only the "what" — the responsibilities the object fulfills. Nothing about implementation, nothing about mechanism, nothing about technology ever surfaces in the public interface.
Private fields are the minimum. The real discipline is in the public surface of the object. If a method exposes internal data, leaks a storage detail, or forces the caller to know anything about how the object works internally, encapsulation has already failed — regardless of whether the fields are private.
This extends to the constructor. A constructor that accepts implementation details — a storage mechanism, an external resource, a configurable strategy — is already exposing the "how." The object must own its implementation completely, from the moment it comes into existence.
A helpful guiding principle is "Tell, Don't Ask": tell the object what to do. Do not ask it for its data so that you can make decisions with it elsewhere. When you find yourself pulling data out of an object to decide what to do next, that decision almost certainly belongs inside the object itself.
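A minimal sketch of the contrast, using a hypothetical Account class (the names and the withdrawal rule are illustrative, not from the article):

```java
// "Ask" style pulls data out and decides elsewhere:
//   if (account.getBalance() >= amount) { account.setBalance(account.getBalance() - amount); }
// "Tell" style moves the decision inside the object:
public class Account {
    private long balanceInCents;

    public Account(long openingBalanceInCents) {
        this.balanceInCents = openingBalanceInCents;
    }

    // The caller tells the account to withdraw; the account decides whether it can.
    public boolean withdraw(long amountInCents) {
        if (amountInCents <= 0 || amountInCents > balanceInCents) {
            return false; // the rule lives here, not in a service class
        }
        balanceInCents -= amountInCents;
        return true;
    }
}
```

The balance never leaves the object, so the invariant "never overdraw" has exactly one home.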
The Cognitive Shift: From Technology to Responsibilities
Encapsulation is more than a coding rule. Practiced properly, it becomes a thinking tool that changes how you model systems from the ground up.
When you commit to hiding the "how," you are forced to think clearly about the "what." Technical questions — how do I store this, which framework handles this, which layer does this belong to — become the wrong questions. They are about implementation, and implementation is not your concern at this level. The right questions are: what is this object responsible for? What should it be able to do? Which other objects would it naturally talk to?
In a typical Spring application this shift never happens. Developers think in layers — controller, service, repository — and the central question is always "where does this code go?" That question produces a filing system for procedural code. It does not produce a domain model. The objects that emerge from it are empty by design, because the template has already decided that behavior lives in services, not in objects.
Asking "whose responsibility is this?" produces something entirely different: a coherent network of objects that each own their behavior completely, and that together tell the story of the domain.
Example: A Well-Encapsulated Document
Consider a compliance-heavy application where documents — PDFs, scanned forms, certificates — play a central role. They get created, stored, retrieved, and checked for compliance throughout the system.
The typical Spring-influenced approach treats Document as a data bag:
```java
public class Document {
    private UUID id;
    private String filePath;  // leaks storage details
    private String mimeType;
    private byte[] content;   // exposes raw data

    public String getFilePath() { return filePath; }
    public void setFilePath(String path) { this.filePath = path; }
    public byte[] getContent() { return content; }
    // ... more getters and setters
}
```
This is not an object. It is a struct with ceremony. Every implementation detail is visible and reachable. Logic that belongs to the Document — storage, compliance checking, content retrieval — lives somewhere else, in a service class, spread across layers, written procedurally. Changing the storage mechanism means hunting through the entire codebase because the entire codebase is coupled to the implementation.
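A hypothetical sketch of what that scattering looks like in practice (the class names and the compliance rule are illustrative):

```java
// A procedural function with a class wrapper: the service pulls data out of
// the bag, makes the decision, and pushes the result back in.
class ComplianceRecord {                       // minimal data bag, as above
    private byte[] content;
    private boolean compliant;
    public byte[] getContent() { return content; }
    public void setContent(byte[] c) { this.content = c; }
    public boolean isCompliant() { return compliant; }
    public void setCompliant(boolean c) { this.compliant = c; }
}

class ComplianceService {
    // The decision is made outside the object the data belongs to.
    public void check(ComplianceRecord record) {
        byte[] raw = record.getContent();            // ask for the data
        boolean ok = raw != null && raw.length > 0;  // decide elsewhere (hypothetical rule)
        record.setCompliant(ok);                     // push the result back in
    }
}
```

Every such service is one more place the record's internals are reachable, and one more place a storage or rule change has to be hunted down.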
Now consider a Document that actually owns its responsibilities:
```java
public class Document {
    private final UUID id;
    private final String name;
    private final String mimeType;

    public Document(String name, String mimeType, InputStream content) {
        this.id = UUID.randomUUID();
        this.name = name;
        this.mimeType = mimeType;
        // becoming a Document includes taking care of its own storage
        // the how is nobody else's business
        storeInternally(content);
    }

    public void writeToStream(OutputStream outputStream) {
        // retrieves and writes content — fully internal
        // the caller gets their bytes, nothing more
    }

    public boolean isCompliant() {
        // compliance logic lives here, where it belongs
        return true; // placeholder — the real rules are internal
    }

    public String getName() { return name; }
    public String getMimeType() { return mimeType; }

    private void storeInternally(InputStream content) {
        // storage mechanism chosen and owned by the Document itself
    }
}
```
The Document figures out its own storage as part of coming into existence. It knows how to give its content back via writeToStream. It knows whether it is compliant. No file path is exposed. No byte array leaks out. No storage mechanism is visible to anything outside.
Usage across the system stays clean and expressive:
```java
Document invoice = new Document("invoice.pdf", "application/pdf", contentStream);
transaction.attach(invoice);
invoice.writeToStream(responseStream);
```
Transaction knows it can attach a Document. It does not know — and has no reason to know — how the document stores itself, where it lives, or how it retrieves its content. The Document figures it out. That is the point.
Why This Matters at Scale
The benefits of this discipline are not always obvious on a small codebase. They become impossible to ignore as the system grows — and they show up most clearly when things need to change.
The core logic tells the story of the domain. When objects are modeled around responsibilities rather than technical concerns, reading the code means reading the domain. A Transaction attaches a Document. A Document knows whether it is compliant. The objects speak in business terms because they were designed in business terms. There is no framework noise, no layer indirection, no infrastructure vocabulary polluting the domain model. A new developer — or a returning one after six months — can understand what the system does by reading the objects, not by reverse-engineering a tangle of service classes and annotations.
Framework upgrades become a bounded problem. The dominant template in Java development today is well known: logic goes into services, data gets carried by DTOs, persistence is managed by repositories, and domain objects exist mainly to map to database tables. This pattern is taught as architecture. It is actually a prescription for hollowing out the domain. The objects end up empty. The behavior ends up scattered across service classes that have no natural boundary, no clear responsibility, and no reason to stay coherent as the system grows.
The consequence is that the framework and the domain become inseparable — not because of annotations on classes, but because the logic itself has been relocated into framework-managed components. Services are Spring beans. Transaction boundaries are framework concerns. The business reasoning is hosted inside the framework rather than sitting independently of it. When the framework changes, the logic has to move with it, because the logic lives inside it.
When domain objects genuinely own their responsibilities, this changes entirely. The core domain is a network of objects talking to each other in business terms, with no knowledge of the framework hosting them. The framework sits at the edges — handling HTTP, managing sessions, coordinating persistence — but it does not host the logic. Upgrading it, replacing it, or restructuring it becomes a bounded problem. The domain does not change because it was never coupled to the framework in the first place.
The five-to-seven-year rebuild cycle is not inevitable. Most software organisations accept the full rewrite as a fact of life. After a few years, the codebase has become so entangled with its own technology choices that evolution is no longer possible — the only way forward is to start again. This cycle is expensive, disruptive, and demoralising. It is also, in large part, a consequence of building systems where business logic is hosted inside framework components rather than inside the domain itself.
When the core logic is a network of objects talking to each other in terms of responsibilities, it does not age the same way. The business rules, the domain relationships, the behavioural contracts between objects — these survive. Technology changes around them. The core endures.
Architectural evolution becomes manageable. Moving from a monolith to a distributed architecture, extracting a bounded context, splitting a service — these are genuinely difficult problems when business logic is woven through framework plumbing. When domain objects carry no framework baggage and communicate purely through their responsibilities, the same logic can move between architectural boundaries without fundamental redesign. The objects do not care whether they run in one process or ten. Their responsibilities do not change. Their interfaces do not change. The architecture is a deployment concern, not a domain concern.
The less the core depends on frameworks, the longer it survives. This is the underlying premise. Frameworks evolve, get replaced, fall out of favour, and eventually die. Business logic, when it is well modelled, does not have the same lifecycle. Keeping them genuinely separate — not just in theory, but in practice, through strict encapsulation — means the thing that actually matters, the domain model, accumulates value over time rather than accumulating debt.
Common Pitfalls
The data bag. A class whose primary purpose is to hold data with getters and setters is not an object in any meaningful sense. It is a data structure. Logic that should belong to it lives elsewhere, and that scattered logic is the source of most maintenance pain in large Java codebases.
The leaking constructor. A constructor that accepts implementation details — storage strategies, injected resources, configurable mechanisms — is already exposing the "how." This is dependency injection, and despite its near-universal adoption in Java development, it is a direct violation of encapsulation. The object should own its implementation fully. If it needs to talk to an external resource, it does so internally. That is not a variable, not a configuration point, not something the outside world participates in. It is simply what the object does. The widespread embrace of DI as a default pattern reflects an aversion to singletons and a desire for testability — both legitimate concerns — but it solves them at the cost of encapsulation, and that cost is rarely acknowledged.
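A small hypothetical contrast (the AuditTrail class and its internal mechanism are illustrative):

```java
// The leaking variant lets the caller pick the mechanism:
//   public AuditTrail(StorageStrategy storage) { ... }   // the "how" is now public
// The owning variant keeps the mechanism entirely internal:
public class AuditTrail {
    // Internal choice, invisible from outside; swapping it touches no caller.
    private final StringBuilder entries = new StringBuilder();

    // Callers pass domain facts, never machinery.
    public void record(String event) {
        entries.append(event).append('\n');
    }

    public boolean hasRecorded(String event) {
        return entries.indexOf(event) >= 0;
    }
}
```

Nothing in the public surface hints at how entries are kept; replacing the StringBuilder with a file or a remote store would be a change to one class.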
Procedural code in disguise. A service class that takes data out of one object, makes decisions about it, and puts results into another object is a procedural function with a class wrapper. The behavior belongs in the objects themselves. The service class is a symptom of objects that do not own their responsibilities.
SOLID as a technical checklist. When principles like Interface Segregation or Single Responsibility are applied to framework configuration and layer boundaries rather than to object design, they produce architectural cargo cult — the appearance of structure without the substance. These principles are about responsibilities and design, not about how to wire up a Spring context.
A Note on Pragmatism
No codebase exists in a vacuum. Frameworks, ORMs, and serialization libraries are part of real-world development, and they sometimes need to know things about your domain objects. This is accidental complexity — the overhead introduced by the tools and environment you work in, as opposed to the essential complexity of the domain itself.
The key distinction is whether the accidental complexity adapts to the essential, or corrupts it.
JPA annotations on a domain object are a good example of acceptable accidental complexity. They decorate the object — they tell the framework how to map it — but they do not change what the object does, how it reasons, or how it protects its own state. The domain logic is untouched. If you removed JPA tomorrow, the object would still make complete sense. The essential complexity is intact. Accidental complexity that adapts to essential complexity without reshaping it is always acceptable — and recognising that distinction is itself a design skill.
The line is crossed when the framework starts dictating structure. A no-argument constructor that leaves the object in an invalid state. A setter that exists purely because the ORM needs to hydrate a field. A transaction boundary that forces business logic to be organised around framework sessions rather than domain responsibilities. At that point the accidental complexity is no longer adapting to the essential — it is reshaping it. The tool is now designing the domain, and the domain is losing its integrity.
The test is simple: if the accidental complexity were removed, would the core object still be coherent, valid, and complete on its own terms? If yes, the compromise is acceptable. If no, the framework has gone too far and the design needs to push back.
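That test can be made concrete with a hypothetical Certificate object. The JPA annotations are shown as comments so the sketch compiles without the jakarta.persistence dependency; the one-year validity rule is illustrative:

```java
import java.time.LocalDate;
import java.util.UUID;

// @Entity            // mapping decoration would sit here...
public class Certificate {
    // @Id            // ...and here, without reshaping the object
    private final UUID id;
    private final LocalDate issuedOn;

    public Certificate(LocalDate issuedOn) {
        this.id = UUID.randomUUID();
        this.issuedOn = issuedOn;
    }

    // Domain behavior is untouched by the mapping concern.
    public boolean isExpiredAsOf(LocalDate date) {
        return issuedOn.plusYears(1).isBefore(date); // hypothetical one-year validity
    }
}
```

Strip the annotations and the object is still coherent, valid, and complete — the compromise passes the test. A framework that instead demanded a no-argument constructor and a setter for every field would fail it.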
Conclusion: Encapsulation as a Force Multiplier
True encapsulation is strict. The object alone owns and hides everything about how it works. Clients see only responsibilities. The "how" is nobody else's business.
Practiced properly, it changes more than the code. It changes how you think about systems. You stop modeling data flow and start modeling behavior. You stop thinking in layers and start thinking in responsibilities. You stop asking "where does this code go?" and start asking "whose job is this?" The software becomes a network of objects that each know their job and do it — completely, independently, and without leaking their secrets.
That network survives in a way that layered, framework-dependent systems do not. It survives framework upgrades because the framework was never inside it. It survives architectural shifts because the objects carry no architectural assumptions. It survives time because it is organised around the domain — around what the software actually is — rather than around the technology that happens to be running it today.
Most Java developers today have never worked in a codebase built this way. That is not an accusation — it is a consequence of an industry that taught frameworks before it taught design. But it means that for many, genuinely object-oriented development would feel like a different discipline entirely.
It is. And it is worth learning.
Top comments (5)
This is a strong take. The point about systems turning into procedural code in disguise is something I keep seeing as well.
I’ve been testing system behavior under load recently, and one interesting pattern is how design issues (like scattered responsibilities) don’t show up early, but become very obvious once the system is under pressure.
Feels like encapsulation isn’t just design purity, it directly affects how systems behave when things start breaking. Curious if you’ve seen design decisions show up as failure patterns in real systems?
Many, many times.
What you are describing — design issues that stay invisible until the system is under pressure — is exactly right, and it points to something deeper than performance. Scattered responsibilities mean scattered failure. When something breaks in a fat-service system, the blast radius is unpredictable because nothing has a clear boundary. You cannot isolate the problem because the problem is everywhere.
The root cause, in my experience, is that implementing software has devolved into documenting requirements in code. Requirements come in, developers translate them into service methods and database columns as literally as possible, and the job is considered done. The design step — actually modelling the domain, discovering what the application should do and why, thinking about which concepts own which responsibilities — is skipped entirely. There is no model. There is just a requirements document expressed in Java.
Reviving those systems follows a recognisable pattern. It almost always begins by cutting away the fat services and pulling the logic back into a rich domain model. That sounds like a refactoring exercise. What it actually produces is clarity — about what the system does, about where failures originate, about what can change independently of what. Stability follows, and then productivity increases significantly, because developers stop navigating a maze and start working with something that makes sense.
The thing I love most about domain-centric systems — when they are working well — is what happens to the conversations. Developers stop talking about Spring beans and microservice boundaries and boilerplate reduction. They start talking about the domain. They ask what a user actually means when they say "I want the system to do X." They argue about responsibilities and concepts and behaviour. The technology becomes what it should always have been: a detail. The domain becomes the thing everyone is actually thinking about.
That shift in conversation is, to me, one of the clearest signals that the design is working.
This makes a lot of sense, especially the “scattered responsibilities = scattered failure” part.
I’m seeing something very similar while running crash tests. The system doesn’t fail in one clear place, it starts degrading in specific paths (like write-heavy flows), and tracing the root cause becomes messy because responsibilities aren’t clearly bounded.
What stood out to me is that the system still looks “healthy” from a high level, but internally it’s already under stress in specific areas.
That connection between design clarity and failure isolation is something I hadn’t fully appreciated before.
What people keep forgetting is that getting something working is the easy part. The hard part is keeping it working — and keeping it understandable — over time.
For small CRUD applications the procedural approach is fine. The surface area is small enough that one developer can hold the whole thing in their head. But as the system grows, that contextual overview starts to fragment. Nobody knows the full picture anymore. Changes become guesswork. You fix one path and break another, because the responsibilities that should have been separate never were.
What you are seeing in your crash tests is exactly that fragmentation made visible under pressure. The system looks healthy from the outside because it is still responding. But internally, the boundaries that should contain a problem and make it traceable simply do not exist. The failure spreads along the same paths that the logic spread along — everywhere.
A rich domain model solves this in a way that no amount of monitoring or documentation can. It serves as a contextual center. When something breaks, you trace it to the object responsible. Contradictory business rules surface quickly because the concepts that own them are explicit and visible. The model does not just make the system easier to change — it makes the system easier to understand at the moment you most need to understand it, which is when something is going wrong.
The fact that something is working does not mean it is built correctly. It just means the consequences have not fully arrived yet.
Yeah, that “working ≠ built correctly” line sums it up well.
The crash tests are basically exposing that gap in real time.