Artem Peshkov

Micronaut Native vs JAR: How Going Native Saves Costs and Improves Scalability on AWS

Introduction

When running applications in the cloud, performance is not just a developer concern: it directly impacts cost, scalability, and customer experience.

Traditional JVM-based applications (running as JAR files) rely on Just-In-Time (JIT) compilation, which often results in higher memory and CPU consumption, longer startup times, and increased infrastructure costs.

During our migration to the cloud, our team proposed building a native image using Ahead-Of-Time (AOT) compilation. In this article, I’ll show the advantages and disadvantages of both approaches from a cloud environment perspective.

Although the most popular Java framework — Spring — already provides excellent cloud capabilities, our team decided to switch to the newer and rapidly evolving Micronaut framework, which was designed from the ground up for cloud-native applications.

This article summarizes my real-world findings backed by metrics, monitoring data, and Grafana visualizations.


The Context

  • Framework: Micronaut 4.8.0
  • Cloud target: AWS (ECS, Fargate, or EKS)
  • Monitoring stack: Prometheus + Grafana
  • Goal: Compare the same application built as a JAR versus a native executable.

The key metrics observed:

  • Startup Time
  • Memory Usage (Heap + Non-Heap)
  • CPU Usage

Note: The comparison was performed using a real, production-ready application under equivalent load conditions, running locally on an Apple M1 Pro (16 GB RAM).


Results

🚀 Startup Time

"Hello world" project:

JAR build: ~500–600 milliseconds

Native build: ~100 milliseconds

In real-world projects, the difference in startup time is smaller, but it can still reach 2–3 times. For instance, in the project I’m currently working on, the JAR version starts in about 20–25 seconds, while the native image starts in under 10 seconds.

This difference is critical in cloud environments: faster startup means faster scale-out during traffic spikes, and better cold-start behavior for serverless scenarios (Lambda, Fargate).

Note: Measurement was taken multiple times, with caches cleared before each run to simulate a cold start scenario.
The difference in startup times strongly depends on the complexity of the application. For smaller and simpler services — such as microservices — the startup time improvement can reach up to 10×.
These results highlight another important advantage of the Micronaut framework and native builds — they are particularly well-suited for lightweight, cloud-native microservice architectures.
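One rough way to reproduce this kind of measurement yourself is to time how long a process takes to emit its first line of output. The harness below is a minimal sketch, not the exact setup behind the numbers above; the commands in the comments are illustrative placeholders for the actual binary and JAR names.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ColdStartTimer {
    // Starts the given command and returns milliseconds until it emits
    // its first line of output -- a rough proxy for "time to ready".
    public static long millisToFirstOutput(String... command) throws Exception {
        long start = System.nanoTime();
        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            reader.readLine(); // blocks until the first line appears
        }
        process.waitFor();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage against your own builds, e.g.:
        //   millisToFirstOutput("./target/app")
        //   millisToFirstOutput("java", "-jar", "target/app.jar")
        System.out.println(millisToFirstOutput("echo", "ready") + " ms");
    }
}
```

Run each variant several times with caches cleared, as described above, and compare the medians rather than single runs.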

💾 Memory Usage

JAR build: ~192 MiB

Native build: ~32–64 MiB

[Figure: memory usage of the JAR and native builds]

Native executables drastically reduce the baseline memory footprint.
In AWS ECS/Fargate, this means you can run more containers per EC2 instance or use smaller task sizes, directly reducing infrastructure costs.
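To make the cost impact concrete, here is a back-of-the-envelope sketch of how many tasks fit into a fixed memory budget. The 4 GiB node size is an illustrative assumption, and real schedulers reserve additional memory for the OS and agents, so treat the numbers as an upper bound.

```java
public class TaskPacking {
    // How many tasks of a given memory footprint fit in a node's
    // memory budget (a simplification that ignores OS/agent overhead).
    public static int tasksPerNode(int nodeMiB, int taskMiB) {
        return nodeMiB / taskMiB;
    }

    public static void main(String[] args) {
        // With 4 GiB of usable memory per node (illustrative figure):
        System.out.println("JAR (~192 MiB):   " + tasksPerNode(4096, 192)); // 21
        System.out.println("Native (~64 MiB): " + tasksPerNode(4096, 64));  // 64
    }
}
```

Roughly three times the packing density at the same node size, which is where the per-container cost savings come from.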

⚡ CPU Usage

JAR build: peaks up to 2%

Native build: stays below 0.5%

[Figure: CPU utilization of the JAR and native builds]

During the benchmark, both builds processed the same workload under identical conditions.

The JAR build exhibited short but noticeable CPU spikes during initialization and request bursts.

The Native build ran much smoother, without significant peaks or instability.
This stability matters in cloud environments where auto-scaling rules are often tied to CPU utilization.
With fewer spikes and more predictable performance, the native build handles load gracefully, avoiding unnecessary scale-out events and achieving more efficient resource usage.

Why This Matters for AWS

Cost Optimization

Lower baseline memory → fewer resources needed per container.
Lower CPU usage → reduced vCPU billing in Fargate or EC2.

Scalability

Native builds start almost instantly → AWS Auto Scaling reacts faster to load.
Perfect fit for burst workloads and microservices.

Sustainability

Fewer resources = lower energy consumption.
Aligns with AWS’s sustainability goals and green IT initiatives.

⚠️ Disadvantages

Every approach has its trade-offs, and the native build is no exception.
Despite the excellent runtime performance, native images require additional configuration to handle features like reflection, dynamic class loading, or serialization — which are handled automatically in traditional JVM builds.
Without proper reflection configuration, the build process may fail or certain parts of the application may not work as expected at runtime.
This configuration typically involves creating a reflection configuration file or using Micronaut’s reflection metadata generation during compilation.
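To illustrate the kind of code that triggers this requirement: the snippet below invokes a method purely through reflection. It runs fine on the JVM, but a GraalVM native image will only allow it if the class and method are registered in the reachability metadata (or, in Micronaut, marked with @ReflectiveAccess).

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    // Reflectively invoke String.toUpperCase() -- trivial on the JVM,
    // but a native image must be told to keep this metadata, e.g. via
    // reachability-metadata.json or Micronaut's @ReflectiveAccess.
    public static String reflectiveUpper(String input) throws Exception {
        Class<?> clazz = Class.forName("java.lang.String");
        Method method = clazz.getMethod("toUpperCase");
        return (String) method.invoke(input);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reflectiveUpper("micronaut")); // MICRONAUT
    }
}
```

Without the metadata, Class.forName or getMethod fails at runtime in the native binary even though the same code works under the JVM.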

Another important limitation is that not all third-party libraries fully support native image compilation. Libraries that rely heavily on reflection, bytecode generation, or dynamic classpath scanning may require custom configuration or may not work at all without adaptation.

Finally, native image compilation takes significantly longer than packaging a standard JAR. The GraalVM AOT compiler performs deep static analysis and optimization, which increases build time and memory usage during compilation. This can slow down CI/CD pipelines or local iteration cycles — although build caching and incremental compilation can help mitigate this issue.

While these challenges introduce some complexity, they are usually one-time setup tasks. Once configured correctly, native builds provide long-term benefits in startup time, memory efficiency, and deployment cost.

Reproducibility

Metrics were visualized in Grafana using the JVM Micrometer dashboard, with a few custom panels for memory and CPU comparison.

The following steps allow you to reproduce the experiment locally and compare the metrics between the JVM and the native image versions.

1) Clone the project from the repository.

2) Build the JAR and run it (by default on port 8081):

```shell
mvn clean package
java -jar ./target/*.jar
```

3) Build the native executable with GraalVM and run it (by default on port 8082).
(Make sure GraalVM is installed and that your JAVA_HOME and PATH environment variables point to it beforehand: https://www.graalvm.org/downloads/)

```shell
mvn clean package -Dpackaging=native-image
./target/* -Dmicronaut.config.files=src/main/resources/application_native.properties
```

To ensure that metrics-related classes and methods remain accessible at runtime, it’s recommended to include them in a reachability-metadata.json file under src/main/resources/META-INF/native-image/. If a third-party library doesn’t provide its own reachability metadata, this configuration can help prevent runtime errors caused by missing reflective access or class loading. Before adding a new dependency, it’s also worth checking whether it already includes native image metadata on the GraalVM Reachability Metadata Repository.

reachability-metadata.json:

```json
{
  "reflection": [
    {
      "name": "com.sun.management.OperatingSystemMXBean",
      "methods": [
        { "name": "getProcessCpuTime" },
        { "name": "getSystemCpuLoad" },
        { "name": "getProcessCpuLoad" },
        { "name": "getCpuLoad" },
        { "name": "getCommittedVirtualMemorySize" },
        { "name": "getFreePhysicalMemorySize" },
        { "name": "getTotalPhysicalMemorySize" },
        { "name": "getFreeSwapSpaceSize" },
        { "name": "getTotalSwapSpaceSize" }
      ]
    },
    {
      "name": "java.lang.management.MemoryMXBean",
      "methods": [
        { "name": "getHeapMemoryUsage" },
        { "name": "getNonHeapMemoryUsage" }
      ]
    },
    {
      "name": "java.lang.management.ThreadMXBean",
      "methods": [
        { "name": "getThreadCount" },
        { "name": "getPeakThreadCount" },
        { "name": "getDaemonThreadCount" }
      ]
    },
    {
      "name": "java.lang.management.GarbageCollectorMXBean",
      "methods": [
        { "name": "getCollectionCount" },
        { "name": "getCollectionTime" }
      ]
    }
  ]
}
```

4) Configure Prometheus scrape targets (prometheus.yml):

```yaml
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: 'micronaut-jar'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['host.docker.internal:8081']

  - job_name: 'micronaut-native'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['host.docker.internal:8082']
```

5) Run Prometheus and Grafana as containers.

docker-compose.yaml:

```yaml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge
```

```shell
docker-compose up
```

6) Open Grafana at http://localhost:3000 (credentials are specified in docker-compose.yaml) and add Prometheus (http://prometheus:9090) as a data source. Copy the UID of the added data source.

7) Replace all occurrences of "PASTE_UID_HERE" in dashboard.json (available in the project) with your Prometheus data source UID, then import the dashboard via the Grafana UI.

8) Observe and compare the metrics.

Note: Application startup time is not included in the default set of Micrometer metrics.
It can be exposed as a custom metric by using the gauge() method of
io.micrometer.core.instrument.MeterRegistry during the application startup phase (see the main() method of the Application class).
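A minimal sketch of how such a gauge value could be computed with plain JDK APIs is shown below; the actual registration via MeterRegistry.gauge() lives in the demo project's Application class, and the class name here is illustrative.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

public class StartupTime {
    // Milliseconds elapsed since the JVM process started. Calling this
    // once the application is ready gives the value to register as a
    // Micrometer gauge, e.g. registry.gauge("app.startup.ms", value).
    public static long millisSinceJvmStart() {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        return System.currentTimeMillis() - runtime.getStartTime();
    }

    public static void main(String[] args) {
        System.out.println("Startup gauge value (ms): " + millisSinceJvmStart());
    }
}
```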

Conclusion

Migrating from a traditional JAR to a Micronaut Native build with GraalVM can:

  • Reduce startup time by 2–10×, depending on application complexity,
  • Cut memory usage by 3–6×,
  • Lower CPU usage significantly.

For applications running on AWS ECS, EKS, or Fargate, these improvements translate into cost savings, faster scalability, and better resilience.

Native images are not just a developer curiosity — they’re a strategic optimization choice for modern cloud-native Java applications.
