Hector Ventura

Posted on • Originally published at dev.to

Why Floci is built in Java (and why that's the right call in 2026)

I get this question every week:

"Why did you build Floci in Java? Why not Go, Rust, or Python like LocalStack?"

It's a fair question. And the short answer is: the constraints picked the language, not the other way around.


The constraints came first

Before I wrote a line of code, I knew what Floci had to be:

  • 24ms cold start, small memory at idle. Otherwise CI pipelines skip it.
  • One statically-linked binary in a ~90 MB image. No bundled interpreter, no runtime to install.
  • Real concurrency. Floci has to handle SQS long polling, Lambda containers, RDS proxying, DynamoDB Streams — all at the same time.
  • Maintainable for years, by contributors who aren't me.

Those constraints knock most languages out before personal preference enters the picture.


Java in 2026 isn't the Java you remember

If your last serious Java was Spring Boot in 2015, the language has moved.

From Java 8 to Java 25 (the current LTS), the highlights that actually matter:

  • Virtual threads (Java 21, mature in 25) — millions of lightweight threads, no more thread pool tuning. For an emulator handling concurrent SQS pollers, Lambda invocations, and DB proxies, this is transformative.
  • Records, sealed classes, pattern matching — what used to take 30 lines of boilerplate is now 1. Modeling AWS request/response types becomes a pleasure instead of a chore.
  • Text blocks, switch expressions, var — the language reads like a modern language now, not an artifact of 1995.
  • Compact object headers (Java 25) — 8–12% memory savings across the board, for free.
  • Faster startup, smaller footprints every release — class data sharing, ahead-of-time profiling, generational ZGC. The JVM in 2026 is a different beast.

Java didn't stand still; the ecosystem just stopped talking about it.

Then there's the runtime story, Quarkus + GraalVM:

Quarkus changed the runtime model entirely. It does at build time what classic Java does at startup. Reflection, classpath scanning, dependency injection: all resolved when you compile, not when the container boots.

GraalVM then compiles that build-time-prepared application into a native binary. The benefits, concretely:

  • No JVM at runtime. No class loader, no warmup, no JIT pause on first request. The binary is the runtime.
  • 24ms cold start vs 1–3 seconds for a typical JVM app. ~100× faster.
  • ~13 MiB idle memory vs hundreds of MB for a JVM. ~10–30× smaller.
  • ~90 MB Docker image vs ~250 MB for a JRE-based image, often more once you add libraries.
  • No CVE surface from a runtime you didn't write. No bundled OpenSSL, no Python interpreter, no Node... just the binary.
  • Statically linked. Runs on scratch or distroless base images if you want.
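The whole pipeline fits in a two-stage Dockerfile. This is a generic Quarkus native-build sketch, not Floci's actual build file; the builder image tags and paths are illustrative:

```dockerfile
# Build stage: compile the Quarkus app to a native binary with GraalVM/Mandrel.
FROM quay.io/quarkus/ubi-quarkus-mandrel-builder-image:jdk-21 AS build
COPY --chown=quarkus . /code
WORKDIR /code
RUN ./mvnw package -Dnative -DskipTests

# Runtime stage: nothing but the binary on a minimal base image.
FROM quay.io/quarkus/quarkus-micro-image:2.0
COPY --from=build /code/target/*-runner /app
ENTRYPOINT ["/app"]
```

The runtime stage has no JVM, no shell tooling, no interpreter: the attack surface and the image size are both just the binary plus a minimal base.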

That's the part most people miss when they hear "Java." Floci doesn't carry a JVM. The image contains nothing but the binary.

Underneath Quarkus sits a stack that's been battle-tested for over a decade: Netty for non-blocking I/O (the same library powering gRPC, Cassandra, and most of the Java networking world), Vert.x for reactive routing, and Jakarta EE standards (JAX-RS, CDI) — boring in the best sense, with tooling and patterns everyone already knows.

This isn't a bet on a trendy stack. It's a bet on infrastructure that AWS, Netflix, and LinkedIn have been running in production for years.


Why not Python (LocalStack's choice)?

Python is great. I use it for scripts, tooling, and ML work all the time. But for an emulator that needs to be small, fast, and concurrent, the language's strengths and Floci's needs don't quite line up.

The numbers tell the story:

                Floci      LocalStack
  Image size    ~90 MB     ~1 GB
  Idle memory   ~13 MiB    ~500 MB
  Cold start    ~24 ms     15–30 s

That's roughly 10× smaller, 40× less RAM, and 1000× faster startup. Even a thoughtfully built Python emulator pays for the interpreter, the standard library, and the dependency tree — on disk and at startup. A GraalVM native binary doesn't carry any of that. The runtime is the binary.

There's also the concurrency story. Floci has to handle SQS long polling, Lambda containers, RDS proxying, and DynamoDB Streams simultaneously. In Java that's the default model. In Python it means navigating async/multiprocess gymnastics around the GIL — workable, but more friction.
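The "default model" claim is easy to demonstrate. A minimal sketch with virtual threads (the poller tasks here are simulated stand-ins, not Floci's actual SQS code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadSketch {

    // Launch n simulated long-poll waits, one virtual thread each.
    // No pool sizing, no tuning: the JDK scheduler multiplexes all of
    // them onto a handful of carrier threads.
    static int runPollers(int n) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> polls = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                int id = i;
                polls.add(executor.submit(() -> {
                    Thread.sleep(10); // stand-in for a blocking long-poll wait
                    return "poller-" + id;
                }));
            }
            int done = 0;
            for (Future<String> f : polls) {
                f.get(); // plain blocking code; the virtual thread parks cheaply
                done++;
            }
            return done;
        }
    }

    public static void main(String[] args) throws Exception {
        // Ten thousand concurrent blocking waits, written as ordinary
        // sequential code. prints: 10000 concurrent polls finished
        System.out.println(runPollers(10_000) + " concurrent polls finished");
    }
}
```

Ten thousand platform threads would exhaust memory; ten thousand virtual threads are routine. That's the concurrency model an emulator juggling pollers, invocations, and proxies gets for free.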

LocalStack made a reasonable choice for what Python is good at: fast iteration on AWS API surface coverage. The cost shows up downstream, in the token-gated "Pro" tier that funds the engineering needed to keep a Python runtime fast and stable at that scope.

Floci goes the other way. The runtime is small because the language let me make it small.


And yes, I know Java well

Last point, because it's honest: I've been writing Java for 18+ years. When I'm debugging a wire-protocol mismatch at 11pm on a Sunday, I'm not fighting the language. I'm reading the AWS SDK source and writing the deserializer. The language is invisible.

A maintainer who fights their own tools doesn't ship. Floci ships because the stack and I have been working together for nearly two decades.


TL;DR

  • Java + Quarkus + GraalVM → 24ms startup, 13 MiB idle, single 90 MB binary
  • Built on Netty, Vert.x, and Jakarta EE — boring, proven infrastructure
  • Python's strengths (rapid iteration, scripting) aren't Floci's needs
  • 18 years of Java experience means I ship features instead of debugging language weirdness

The constraints picked the stack. The stack has held up.


🔗 github.com/floci-io/floci
📚 floci.io/floci
💬 floci.slack.com
