Java Should Stop Trying To Be Like Everybody Else
The Wrong Competition
Java is losing the container wars. A typical Spring Boot microservice ships as a 70-200 MB fat JAR, stuffed into a container image that easily crosses 300 MB. Go produces a 10-15 MB static binary. Rust often beats that. The industry response has been predictable: GraalVM native images, Alpine base images, multi-stage Docker builds, endless optimization of the wrong thing.
Here's what nobody talks about: the actual business logic in that 200 MB artifact - the code your developers write - typically weighs 2 to 500 kilobytes. The rest is framework code, an embedded web server, a JSON library, logging infrastructure, and a bundled JVM. All of it duplicated across every single service in your fleet.
One team using WildFly Swarm's skinny packaging took their application from 45 megabytes down to 2,243 bytes. That's not a typo. Two kilobytes. A 20,000x reduction by removing everything that wasn't their code.
The question isn't how to make Java containers smaller. The question is: why are we packaging this way at all?
Java's Forgotten DNA
Java was never designed to produce self-contained binaries. This is not a weakness - it's a fundamental design decision that the industry has been fighting against for a decade.
The original target platform wasn't servers. It was set-top boxes and web consoles - managed environments where a runtime handled lifecycle, security, and resource allocation. This heritage shows everywhere in the language:
- Dynamic class loading - classes can be loaded, unloaded, and replaced at runtime
- The module system (JPMS) - explicit dependency boundaries and encapsulation
- Hot-swapping - update code without restarting the process
- OSGi - an entire ecosystem built around dynamic module deployment
These aren't accidents or legacy baggage. They're capabilities that most languages would kill for. And we've abandoned all of them to stuff JVMs into containers like it's 2014 and we just discovered Docker.
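If you've never touched this machinery directly, here's a minimal sketch of what the platform gives you out of the box - loading a class from a JAR that wasn't on the classpath at startup, using plain `URLClassLoader`. The JAR path and class name are made up for illustration:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

public class DynamicLoadDemo {
    public static void main(String[] args) throws Exception {
        // A JAR that wasn't on the classpath when the JVM started (hypothetical path).
        URL jar = Path.of("/deploy/orders-module.jar").toUri().toURL();

        try (URLClassLoader loader =
                     new URLClassLoader(new URL[]{jar}, DynamicLoadDemo.class.getClassLoader())) {
            // Load and instantiate a class by name (hypothetical class).
            Class<?> clazz = Class.forName("com.example.OrderHandler", true, loader);
            Runnable handler = (Runnable) clazz.getDeclaredConstructor().newInstance();
            handler.run();
        }
        // Once the loader is closed and references are dropped, its classes become
        // eligible for unloading - code replacement without restarting the process.
    }
}
```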
The current deployment model actively fights Java's strengths. We take a language optimized for dynamic, managed execution and freeze it into static artifacts. We take a runtime designed to host multiple applications efficiently and spin up one instance per service. We pay the JVM's startup cost - optimized for long-running processes - on every container spawn.
Turn the Model Inside-Out
What if we stopped packaging the JVM with every service? What if we deployed just the business logic and let the environment handle everything else?
This isn't a hypothetical. It's how Java EE worked for two decades. The application server provided HTTP handling, connection pooling, transaction management, security. Your WAR file contained your code and nothing else. We called it "heavyweight" and ran away to microservices - but we replaced one 200 MB application server with forty 200 MB fat JARs, and called it progress.
The insight isn't "go back to Java EE." It's to recognize what Java EE got right: separation of application logic from infrastructure concerns.
A modern realization of this principle:
What moves to the environment:
- HTTP request handling and routing
- Authentication and authorization
- Logging, metrics, and tracing
- Service discovery and load balancing
- Connection management for databases and external APIs
- Lifecycle management and health checking
What stays in the artifact:
- Business logic
- Domain models
- That's it
Your deployment artifact becomes a thin module - kilobytes, not megabytes. The runtime environment provides everything else, configured once, updated independently, consistent across all services.
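To make that concrete, here's a sketch of what a complete artifact could look like: a JPMS descriptor plus pure domain code. The module name and pricing logic are invented for illustration - the point is what's absent:

```java
// module-info.java - the artifact's entire infrastructure footprint
module com.example.orders {
    exports com.example.orders;  // the only surface the environment may call
    // conspicuously absent: no requires for an HTTP server, JSON, or logging
}

// com/example/orders/OrderPricing.java - plain domain logic, nothing else
package com.example.orders;

public final class OrderPricing {
    /** Order total in cents, with a 10% discount on bulk orders. */
    public long totalCents(long unitPriceCents, int quantity) {
        long gross = unitPriceCents * quantity;
        return quantity >= 100 ? Math.round(gross * 0.90) : gross;
    }
}
```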
The Payoff: Instant Scaling
When your deployment unit drops from 200 MB to 200 KB, everything changes.
Sub-millisecond startup. A thin module loads in microseconds. No framework initialization, no classpath scanning, no connection pool warmup - the environment already has all that running. With some pre-warming tricks, startup becomes virtually instant.
Effortless horizontal scaling. The environment can spin up new instances as fast as it can allocate memory. Scaling from 1 to 100 instances isn't a capacity planning exercise - it's a configuration value. Scale-to-zero becomes practical because cold start doesn't exist.
Resource efficiency at fleet scale. Instead of running 40 JVMs (one per service) each consuming 200-500 MB of heap - 8 to 20 GB fleet-wide before a single request is served - you run a handful of environment instances that host all your modules. The shared runtime, shared libraries, and shared connection pools dramatically reduce total memory footprint.
Simplified operations. No more "works on my machine." The environment is the same everywhere - dev, staging, production. Infrastructure upgrades (JVM version, security patches, library updates) happen once, at the environment level, not per-service.
Transparent Distribution (Almost)
Here's where it gets interesting. When the environment controls service lifecycle and inter-service communication, it can provide guarantees that are impossible in conventional deployments.
Calls between modules become environment-managed. If your module instance is alive, calls to other modules succeed - the environment handles routing, retries, and failover transparently. No more debugging whether the circuit breaker configuration matches the load balancer timeout matches the container health check interval.
Location transparency. Modules don't know or care where other modules run. The environment distributes them across available resources and handles all the networking. Moving a module to a different node requires zero code changes.
Lifecycle notifications. When the environment needs to drain a node or rebalance load, it notifies affected modules. Graceful shutdown becomes a first-class concept, not something you bolt on with preStop hooks and SIGTERM handlers.
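No such contract is standardized today, but a sketch of what it could look like - every name here is invented for illustration:

```java
import java.time.Duration;

/**
 * Hypothetical lifecycle contract between a module and its environment.
 * None of these names exist in any real runtime; this is what "graceful
 * shutdown as a first-class concept" could look like in code.
 */
public interface ModuleLifecycle {

    /** Handle the environment passes to a module it intends to drain. */
    interface DrainContext {
        Duration deadline();       // how long the module has before forced removal
        void completeWhenIdle();   // stop accepting new work, finish in-flight calls
    }

    /** Called before the environment drains this node or rebalances load. */
    default void onDrainRequested(DrainContext ctx) {
        ctx.completeWhenIdle();
    }
}
```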
There's one constraint: externally visible entry points must be idempotent. Without this property, the environment can't safely retry operations during network hiccups or instance migration. But this isn't really a new constraint - it's already a best practice for any distributed system. You're just making it explicit.
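In code, the requirement is modest. A minimal sketch, using an in-memory map where a real environment would supply a durable store, and a made-up `Receipt` type:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class PaymentHandler {

    public record Receipt(String requestId, long amountCents) {}

    // Stand-in for a durable, environment-provided store of completed requests.
    private final Map<String, Receipt> processed = new ConcurrentHashMap<>();

    /** Idempotent: retrying the same requestId returns the original receipt. */
    public Receipt charge(String requestId, long amountCents) {
        return processed.computeIfAbsent(requestId,
                id -> new Receipt(id, amountCents)); // the charge happens at most once per id
    }
}
```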
Native CI/CD
The deployment model collapses to something surprisingly simple.
You publish. Your artifact goes to a Maven repository - local, organization-wide, or public. This is already your artifact format. No container registry, no Dockerfile, no image layers to optimize.
Environment deploys. It watches the repository (or receives a webhook), pulls the new artifact, and rolls it out according to your configured strategy - canary, blue-green, rolling, whatever you need. The deployment strategy is configuration, not pipeline code.
That's it. No docker build. No image push. No kubectl apply. No Helm chart templating. No ArgoCD sync. Just publish to Maven and watch it deploy.
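To show how little machinery the "environment watches the repository" step actually needs, here's a rough sketch that polls a repository's `maven-metadata.xml` for a new version. The URL and coordinates are invented, and a real implementation would parse the XML properly and prefer webhooks where available:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class RepoWatcher {
    // Hypothetical repository URL and artifact coordinates.
    private static final String METADATA =
            "https://repo.example.com/releases/com/example/orders/maven-metadata.xml";
    private static final Pattern LATEST = Pattern.compile("<latest>(.+?)</latest>");

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String lastSeen = "";
        while (true) {
            String xml = http.send(
                    HttpRequest.newBuilder(URI.create(METADATA)).GET().build(),
                    HttpResponse.BodyHandlers.ofString()).body();
            Matcher m = LATEST.matcher(xml);
            if (m.find() && !m.group(1).equals(lastSeen)) {
                lastSeen = m.group(1);
                System.out.println("New version " + lastSeen + " -> start configured rollout");
            }
            Thread.sleep(30_000);  // naive polling; a webhook would make this push-based
        }
    }
}
```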
The environment itself can run containerized if you need to deploy to existing Kubernetes infrastructure or cloud container services. But your applications don't know or care. They're just modules in a repository.
I've been exploring what this would look like in practice. The results are promising enough to be worth sharing - but that's a topic for another post.
The Strategic Question
Java's current trajectory is clear: keep chasing Go and Rust on their terms. Smaller native images. Faster startup through ahead-of-time compilation. More aggressive dead code elimination. Each step is a partial solution that fights the language's design.
There's another path: stop competing on binary size and cold start, and instead leverage what Java actually does well. Dynamic runtime. Hot deployment. Mature module systems. The most sophisticated JIT compiler in existence, which needs long-running processes to reach peak performance.
This isn't about nostalgia for application servers. It's about recognizing that the current model - every service is an isolated container with its own JVM - creates massive operational overhead while abandoning Java's genuine advantages.
The infrastructure to do this properly is starting to emerge. The pieces exist: Java's module system is mature, the JVM's dynamic capabilities never went away, and the operational pain of container-per-service is now obvious enough that people are looking for alternatives.
Whether this comes from a new runtime, a framework evolution, or a rethinking of how we use existing tools - the direction is clear. Java shouldn't try to be a better Go. It should be a better Java.
Top comments (2)
How exactly could you do that? You need a JVM for each instance anyway. Will you start a new one, or reuse the same one like in the old J2EE days? And why do you need any VM at all? "Write once, run anywhere" is no longer relevant. Today we have just two CPU architectures - x86 and ARM - and one server OS - Linux. In the 90s there were many more CPU and OS platforms, most of which are now dead. Today, writing and compiling code to native machine code is much more efficient than dealing with VMs and JITs. Java is going to be the next COBOL: legacy, old tech, old concepts, completely uninteresting to the next generation of software engineers. I myself have already left Java and moved to Go.
Take a look at OSGi; it has done exactly that since the late 90s.
Without a VM, dynamic loading/unloading and binary compatibility are much harder to achieve. It's not about "run anywhere".
Compiling to native code is no more of an efficiency win today than it was in the 90s, and a JVM with a JIT outperforms Go. Not to mention that "legacy" Java is far more advanced than "modern" Go.