I've just kicked off a spike within my team, applying a bit of science to decide which framework we should use going forward for our microservices. But, as I'm known to have somewhat strong opinions on these things, I'll be publishing this post after we, as a team, have picked a way forward, so as not to skew the results (too much!). Even so, I'll try to be objective. More on that later; first, the problem statement...
We no longer want to manage our own infrastructure, so we're packing our bags and heading to the cloud. We currently use Spring Boot, but is there something more cloud native? We're predominantly a Java team, and moving to the cloud is enough upskilling without throwing a new language into the mix.
So, in a nutshell, our requirements are:
- Java,
- Container friendly (lean, starts/stops quickly),
- Easy to pick up,
- ...and unit test,
- Promotes integration with cloud native services (observability, databases, networking, etc.).
The Contenders
There are probably many, many more, but I think I've captured the front runners here (including their marketing tag lines):
- Dropwizard - "Java framework for developing ops-friendly, high-performance, RESTful web services."
- Helidon - "Lightweight. Fast. Crafted for Microservices." This comes in two flavours:
  - SE - A lean, reactive-first framework,
  - MicroProfile (MP) - Implements the MicroProfile specification for standards-based APIs,
- Micronaut - "A modern, JVM-based, full-stack framework for building modular, easily testable Microservice and Serverless applications."
- Quarkus - "Supersonic Subatomic Java"
- Spring Boot - "Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run"." Two flavours will be trialled here:
  - WebMVC - A more imperative style leveraging the Servlet API under the hood,
  - WebFlux - A reactive ReST API leveraging Netty,
- Vert.x - "Reactive applications on the JVM."
Frankly, all of these tick the majority of the requirements listed above, and with some tweaks could tick all of them without too much pain. So, we must pick another way of separating the wheat from the chaff:
- Relative tests:
- Build speed - You might be doing this a lot, so it can be a real time drain,
- Start-up speed - Important for scalability and recovery,
- Image size - Small images generally have a lower attack surface and move around the K8s estate faster,
- Request Round Trip Time (RTT) - It's a ReST service so we need prompt responses,
- Subjective test: How easy is it to develop with?
To aid this, for each framework I'll develop a simple ReST service with two endpoints and a matching unit test (a rough sketch follows the list):
- `/hello/{name}` - simply returns `Hello {name}!`,
- `/hello/error` - throws a custom `RuntimeException`, which must be handled as a 400 Bad Request and return `Oh no!`.
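To make this concrete, here's roughly what the MicroProfile/JAX-RS flavoured versions look like. This is a hedged sketch rather than the exact code in the repo; the `HelloResource` and `OhNoException` names, and the nested exception mapper, are my own illustration.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Illustrative JAX-RS resource; names are hypothetical, not the repo's actual code.
@Path("/hello")
public class HelloResource {

    @GET
    @Path("/{name}")
    public String hello(@PathParam("name") String name) {
        return "Hello " + name + "!";
    }

    @GET
    @Path("/error")
    public String error() {
        // Custom unchecked exception that must surface as a 400 Bad Request.
        throw new OhNoException();
    }

    public static class OhNoException extends RuntimeException { }

    // Maps the custom exception to the required 400 / "Oh no!" response.
    @Provider
    public static class OhNoMapper implements ExceptionMapper<OhNoException> {
        @Override
        public Response toResponse(OhNoException e) {
            return Response.status(Response.Status.BAD_REQUEST).entity("Oh no!").build();
        }
    }
}
```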
Each service will be built into an image using Jib, with the same base image: `adoptopenjdk/openjdk16:alpine`. Where possible, each project uses the framework's recommended unit-testing approach to verify the endpoints (a framework-agnostic sketch of what those tests assert follows).
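The recommended harness differs per framework (more on that later), but every test boils down to assertions along these lines. This sketch deliberately uses the JDK's built-in `HttpClient` and JUnit 5 rather than any framework's embedded test support, and it assumes the service is already running on `localhost:8080`.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Framework-agnostic sketch of what each framework's recommended test verifies;
// assumes the service is already listening on localhost:8080.
class HelloEndpointTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void helloReturnsGreeting() throws Exception {
        HttpResponse<String> response = get("/hello/bob");
        assertEquals(200, response.statusCode());
        assertEquals("Hello bob!", response.body());
    }

    @Test
    void errorIsMappedToBadRequest() throws Exception {
        HttpResponse<String> response = get("/hello/error");
        assertEquals(400, response.statusCode());
        assertEquals("Oh no!", response.body());
    }

    private HttpResponse<String> get(String path) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080" + path))
                .GET()
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```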
Relatively Scientific
My methodology:
- Build speed: run Maven, including `jib:dockerBuild`, 10 times and take the mean. Each build includes a test for each endpoint, using the approach recommended in the framework's documentation,
- Image size: the size of the image produced by the above build, as reported by `docker images`,
- Start-up speed: start the image 10 times and take the mean,
- RTT: call the `/hello/bob` endpoint using JMH in average-time mode (sketched below).
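For the RTT measurement, the JMH harness is roughly the following shape: average-time mode, reported in µs/op to match the table below. The class name, target URL and port are my own assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// Rough shape of the round-trip benchmark; the target URL/port are assumptions.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class HelloRttBenchmark {

    private HttpClient client;
    private HttpRequest request;

    @Setup
    public void setUp() {
        client = HttpClient.newHttpClient();
        request = HttpRequest.newBuilder(URI.create("http://localhost:8080/hello/bob"))
                .GET()
                .build();
    }

    @Benchmark
    public String roundTrip() throws Exception {
        // One full HTTP round trip against the running container.
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```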
⚠️ As ever, take microbenchmarks with a pinch of salt. There are so many factors affecting performance that, although these results may be indicative, they may not hold true for the types of workload you have. Therefore, your mileage may vary.
Results:
Test | Dropwizard | Helidon MP | Helidon SE | Micronaut | Quarkus | Spring WebFlux | Spring WebMVC | Vert.x |
---|---|---|---|---|---|---|---|---|
Build Speed (s) | 11.954 | 9.574 | 11.790 | 11.322 | 22.605 | 14.364 | 11.840 | 9.345 |
Image Size (MB) | 382 | 376 | 368 | 376 | 375 | 381 | 383 | 370 |
Start Speed (ms) | 1,665 | 1,573.4 | 424.9 | 753.5 | 665.8 | 1,557.8 | 1,774.5 | 374.6 |
RTT (µs/op) | 1856.903 ± 47.301 | 1986.689 ± 34.877 | 1690.168 ± 80.618 | 1730.723 ± 64.688 | 1821.905 ± 124.261 | 1857.136 ± 82.583 | 2045.205 ± 78.650 | 1513.232 ± 74.767 |
Let's start with build times: Quarkus is just slow. I suspected it was the unit test (their recommended approach) or offline mode, but even without either of them it was consistently slower than all the rest. It might also be their Jib wrapper, but as you're forced to use it, not much can be done. The rest are all within a few seconds of each other, so not much to be garnered from that.
ℹ️ Quarkus was the only framework that mandated using its own wrapper around Jib. I tried using Jib directly, but it kept complaining about a missing `pom.xml`. This also meant I had issues configuring offline mode, causing build failures.
Next, image sizes: these all used the same base image and tooling to try and get comparable results. The winner was Helidon SE, which surprised me as I was expecting Vert.x, which came in as runner-up. In the middle, with very similar results, were Helidon MP, Quarkus and Micronaut, with both Spring builds and Dropwizard at the tail. Frankly, they're all within a few MB, so no clear winner. Looking at other tooling such as JLink or GraalVM might be beneficial to really slim them down, but not all of the frameworks support this.
Now to the start-up times: again, the lean reactive frameworks were expected to be fastest here as they avoid bean lifecycle management... and they were: Vert.x and Helidon SE take the top two spots. For a framework that does manage bean lifecycle, Quarkus does a great job of starting quickly and picks up third place, which might be down to its own CDI implementation, ArC. It is closely followed by Micronaut. At the tail are Spring WebFlux, Helidon MP and Dropwizard, with Spring WebMVC at the rear, but there's only ~200ms between the also-rans. Still, with none of them over 2 seconds, we've come a long way [Toto] from full-fat application servers!
Finally, let's review the RTT times; they are somewhat inconclusive. As anticipated, the smaller, more functional, reactive-based APIs (Vert.x & Helidon SE) tend to be quicker than the more imperative approaches. The only outlier seems to be Spring Boot WebFlux, but I suspect that is due to the `ApplicationContext` managing the bean lifecycle. Either way, the difference between the best and worst was just 531 microseconds (0.5 milliseconds). For ReST traffic, you're unlikely to notice any difference unless you're serving HUUUUUGE amounts of traffic.
Subjectively Anecdotal
First and foremost: Quarkus is just a pain to work with compared to the others. The code itself is pretty clean and simple, but at every turn you need to take special steps to get it to build or bundle into an image. A few of the issues:
- I hit the dreaded Docker rate limit during this; with Jib it was easy to work around (i.e. `--offline` or using mirrors), but with Quarkus's own Jib wrapper that's not possible,
- It behaves very differently between the IDE and a container, which makes debugging inconsistent. Their debug mode is their way of addressing this, but it seems like an over-engineered 'fix' to a problem that shouldn't exist,
- The test harness is just plain slow compared to the rest,
- Why do I need to use the Maven plugin to add features/extensions? And it reformatted the `pom.xml` to use spaces (yes, I'm that Maverick Renegade that uses tabs!),
- Why do I need custom tooling to create projects? Why aren't there Maven archetypes for this, like everyone else?
The next most difficult was Vert.x, largely due to the test harness. It does concern me that you have to reach for a 3rd-party library that works around the problems created by your own test fixtures. Once up and running, it was fairly straightforward to use, but it's certainly not developer friendly at the start.
Functional/reactive patterns have a somewhat steep learning curve, so a graceful, easy-to-follow API massively helps adoption. I couldn't call Spring Boot WebFlux graceful... in fact, quite the contrary: it's a total dog's dinner! Really verbose, confusing naming, difficult error handling, and it just looks horrible at the end. It took a fair while to get up and running, and I don't envy the person who has to maintain it.
The rest of them were all pretty straightforward and I couldn't materially differentiate between them.
Conclusion
ℹ️ Code for this is available here. If I've made some glaring omissions, please do point them out.
Let's start at the weaker end: the two that get the most column inches in blogs and news sites are the two I'd avoid:
- Quarkus - It's just maddening, and therefore frustrating, to use. I don't understand why they need to engineer solutions to problems of their own making!? If I can't trust that the behaviour locally will be the same as in test or in a container, I can't trust it in production. About its only benefit is that it starts quicker than the other frameworks that implement MicroProfile,
- Spring Boot - They need to just put it in the bin and start again; WebFlux is difficult to use, WebMVC is a poor version of JAX-RS, XML config should have been dropped years ago, and Spring Boot AutoConfiguration magic is just nonsensical for anything other than a very basic application. I have had issues where totally unrelated things were initialising because it found a library on the classpath... this is too easy to do and a huge security concern.
I know many will disagree with me on this, especially if they've already bought into either of these, but I see no benefit in using them over the other options, which are far more straightforward to use. It would probably be a different story if they offered some significant benefit (e.g. being simpler to use or significantly faster), but they don't.
I'm struggling to pick an outright winner here. At the upper end we have:
- Helidon - Near the top in every test, and its flexible SE and MP modes (which you can mix and match, too) are a great feature,
- Micronaut - Just really well thought out and simple to use,
- Dropwizard - Biggest surprise here. Few bells and whistles, but gets the job done with little fuss,
- Vert.x - Although somewhat difficult to test, Vert.x gets credit for being really lean and just plain damn fast!
With all of these, you'll be up and running in very little time, and none of them require special fixtures to build into container images. But if I had to pick one, it would be Helidon. I have been a fan for a while (and, full disclosure, have also contributed code), and doing these tests has shown that it's certainly up there with the best. Project Loom and io_uring support are exciting and should bring some serious performance improvements. In the meantime, though, I'm going to look much closer at Micronaut and Dropwizard, unless I need raw performance, in which case Vert.x would be high on my list.
As for my team, what did they pick? Micronaut.
Top comments (5)
A little background on why/how I got here:
I'm trying to find a framework for a hobby project. At work I use Spring Boot plus some old JEE applications I inherited, and we are thinking of going MicroProfile to leverage the existing JEE know-how; all of this runs in containers on Kubernetes.
I hit this post because Quarkus and Micronaut are on my list.
What I don't get is: your benchmarks are size, performance and startup time, and your goal is to be "Container friendly (lean, starts/stops quickly)", but then you don't look at native images... why? How does this fit together?
Also for Quarkus: "It behaves very differently between IDE and container, which makes debugging inconsistent" - Can you give any examples?
With the (in my eyes way too simple) example I'd be really surprised if you'd hit any walls.
I'd be really interested in knowing what walls you hit with Quarkus, because currently it is at the top of my list and I've just started with my first examples...
Sorry for the slow reply. Regarding native images: frankly, anything I've tried that is even slightly complex results in Graal blowing up, and it wastes so much time getting it working. I appreciate that measuring by those metrics Native Image would be a good fit, but life is too short!
As for Quarkus, really can't remember it now. But I remember the pain! Shudders
I'm interested in hearing your experience with Quarkus these few days, @raudi
Nice article!
At our company, we'll be writing a new backend from scratch and we'll have to take this same decision.
A couple of additional factors I'd like to evaluate are the ease and performance of interacting with DBs (especially using reactive stacks) and the GraalVM native image alternatives.
I'm curious how your experience with Micronaut has been over these months.
I wouldn't use it on another project:
As for reactive... ignore it and wait for Java 19 with virtual threads. That all but eliminates the scaling benefit of reactive, and it's much easier to diagnose errors.
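For illustration, the sort of thing I mean is below: a minimal sketch assuming Java 21 (or Java 19/20 with preview features enabled) and a hypothetical service on `localhost:8080`. Plain blocking code on cheap virtual threads, with no reactive operators to unpick when something goes wrong.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Plain blocking code, one virtual thread per task; requires Java 21
// (or Java 19/20 with preview features enabled).
public class VirtualThreadsSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    HttpRequest request = HttpRequest.newBuilder(
                            URI.create("http://localhost:8080/hello/bob")).GET().build();
                    // Blocking call, but it only parks a cheap virtual thread.
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    return response.body();
                });
            }
        } // try-with-resources waits for submitted tasks to finish before closing
    }
}
```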