DEV Community

JNBridge

Posted on • Originally published at jnbridge.com

Java + .NET in Docker & Kubernetes: 3 Architecture Patterns That Actually Work

If you've ever tried to deploy a system where Java and .NET need to talk to each other inside containers, you know the pain. Do you cram both runtimes into one image? Split them into sidecars? Go full microservices with gRPC?

I've been working through each of these approaches for a polyglot trading platform, and there are real trade-offs that most "just use Kubernetes" advice glosses over. Here's what I've learned — with actual Dockerfiles and K8s manifests you can steal.


Why Containerize Java/.NET Integration at All?

Before the how — the why:

  • Environment parity: Java and .NET versions, runtime configs, and native dependencies are locked into the image. No more "works on my machine" across the JVM and CLR.
  • Independent scaling: Java and .NET components scale independently — critical when one side is compute-heavy and the other is I/O-bound.
  • Resource isolation: Kubernetes resource limits prevent a misbehaving JVM from starving the .NET runtime (or vice versa).
  • Cloud portability: Same images run on EKS, AKS, GKE, or on-prem.

The 3 Architecture Patterns

Pattern 1: Single Container with In-Process Bridge

Both the JVM and .NET runtime run inside a single container, with a bridging technology like JNBridgePro enabling direct in-process method calls.

# Multi-runtime single container
# Build stage: publish the .NET app
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS publish
WORKDIR /src
COPY src/DotNetApp/ .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS base

# Install JDK alongside .NET runtime
RUN apt-get update && apt-get install -y \
    openjdk-21-jre-headless \
    && rm -rf /var/lib/apt/lists/*

ENV JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
ENV PATH="$JAVA_HOME/bin:$PATH"

WORKDIR /app
COPY --from=publish /app/publish .
COPY java-libs/ ./java-libs/

ENTRYPOINT ["dotnet", "MyIntegrationApp.dll"]

Best for: Tight coupling, sub-millisecond method calls, shared state scenarios.

Trade-offs: Larger image (~400-600MB), both runtimes compete for same resource limits, no independent scaling.

Pattern 2: Sidecar Container

Java and .NET in separate containers within the same Kubernetes pod. They share the pod's network namespace (localhost) and can share volumes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: integration-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: integration-service
  template:
    metadata:
      labels:
        app: integration-service
    spec:
      containers:
      - name: dotnet-app
        image: myregistry/dotnet-app:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
      - name: java-service
        image: myregistry/java-service:latest
        ports:
        - containerPort: 9090
        resources:
          requests:
            memory: "768Mi"
            cpu: "500m"
          limits:
            memory: "1.5Gi"
            cpu: "1000m"
        env:
        - name: JAVA_OPTS
          value: "-XX:MaxRAMPercentage=75.0 -XX:+UseG1GC"

Best for: Independent container images/build pipelines with low-latency localhost communication (0.1-0.5ms per call).

Trade-offs: Pod scheduling treats both containers as a unit. Startup ordering requires init containers or retry logic.
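On Kubernetes 1.28+, the startup-ordering problem has a first-class answer: declare the Java container as a native sidecar, i.e. an init container with restartPolicy: Always, which must pass its startup probe before the main container launches. A minimal sketch of how the pod spec above could change (probe settings are illustrative):

```yaml
# Pod spec fragment: run java-service as a native sidecar (K8s 1.28+)
initContainers:
- name: java-service
  image: myregistry/java-service:latest
  restartPolicy: Always        # marks this init container as a sidecar
  ports:
  - containerPort: 9090
  startupProbe:                # dotnet-app starts only after this passes
    tcpSocket:
      port: 9090
    periodSeconds: 2
    failureThreshold: 30
containers:
- name: dotnet-app
  image: myregistry/dotnet-app:latest
```

On older clusters, retry logic in the .NET client remains the pragmatic fallback, since plain init containers run to completion before any sibling container starts and so can't wait on one.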

Pattern 3: Separate Services

Completely separate K8s deployments, communicating via REST, gRPC, or message queues.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-frontend
spec:
  replicas: 5
  selector:
    matchLabels:
      app: dotnet-frontend
  template:
    metadata:
      labels:
        app: dotnet-frontend
    spec:
      containers:
      - name: dotnet-app
        image: myregistry/dotnet-frontend:latest
        env:
        - name: JAVA_SERVICE_URL
          value: "http://java-backend:9090"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: java-backend
  template:
    metadata:
      labels:
        app: java-backend
    spec:
      containers:
      - name: java-app
        image: myregistry/java-backend:latest

Best for: Loosely coupled, independently scaling systems where call frequency is low.

Trade-offs: Higher latency (1-10ms with serialization), explicit API contracts needed.
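The JAVA_SERVICE_URL above relies on a Service named java-backend that the manifests don't show. A minimal sketch, assuming the Java Deployment's pod template carries an app: java-backend label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: java-backend
spec:
  selector:
    app: java-backend    # must match the pod template's labels
  ports:
  - port: 9090
    targetPort: 9090
```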

How to Choose

Factor                   | Single Container      | Sidecar               | Separate Services
-------------------------|-----------------------|-----------------------|-------------------
Call latency             | <0.01ms (in-process)  | 0.1-0.5ms (localhost) | 1-10ms (network)
Image size               | Large (both runtimes) | Smaller (separate)    | Smallest
Independent scaling      | No                    | No (same pod)         | Yes
Object sharing           | Direct references     | Via bridge protocol   | Serialization only
Call volume sweet spot   | 10,000+ calls/sec     | 1,000-10,000/sec      | <1,000/sec

Rule of thumb: If your Java and .NET components make more than 1,000 cross-language calls per second, Pattern 1 or 2 is the way to go.

Docker Best Practices for Dual Runtimes

Multi-Stage Builds

# Stage 1: Build .NET
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS dotnet-build
WORKDIR /src
COPY src/DotNetApp/ .
RUN dotnet publish -c Release -o /app/publish

# Stage 2: Build Java
FROM eclipse-temurin:21-jdk AS java-build
WORKDIR /src
COPY src/JavaLib/ .
RUN ./gradlew shadowJar

# Stage 3: Production
FROM mcr.microsoft.com/dotnet/aspnet:9.0-noble
RUN apt-get update && apt-get install -y \
    openjdk-21-jre-headless \
    && rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
WORKDIR /app
COPY --from=dotnet-build /app/publish .
COPY --from=java-build /src/build/libs/*.jar ./java-libs/
ENTRYPOINT ["dotnet", "MyIntegrationApp.dll"]

Production image: ~350-450MB (vs 1GB+ with full SDKs).

JVM Container Tuning

ENV JAVA_OPTS="\
  -XX:MaxRAMPercentage=75.0 \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=200 \
  -XX:+UseStringDeduplication \
  -XX:+ExitOnOutOfMemoryError"

The key: MaxRAMPercentage=75.0 — leaves 25% for .NET + OS overhead. And ExitOnOutOfMemoryError ensures K8s detects OOM via exit code instead of a zombie process.

Health Checks for Dual-Runtime Pods

When both runtimes are in the same pod, your health check must verify both:

public class BridgeHealthCheck : IHealthCheck
{
    private readonly IJavaBridge _bridge;

    public BridgeHealthCheck(IJavaBridge bridge) => _bridge = bridge;

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken ct = default)
    {
        try
        {
            // Run the blocking bridge call off the request thread
            var result = await Task.Run(
                () => _bridge.InvokeJavaMethod("com.company.HealthService", "ping"), ct);
            return result == "pong"
                ? HealthCheckResult.Healthy("Bridge active")
                : HealthCheckResult.Unhealthy("Bridge unresponsive");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Bridge failed", ex);
        }
    }
}
# Use startupProbe — JVM can take 10-30s to init
startupProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 5
  failureThreshold: 12  # Allow 60s for JVM + bridge startup
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  periodSeconds: 15
  failureThreshold: 3

Pro tip: Use startupProbe instead of a large initialDelaySeconds on liveness. It prevents premature restarts during JVM init while still catching failures fast once running.
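A readinessProbe alongside the two probes above takes a degraded pod out of Service endpoints without restarting it, which is usually what you want when the bridge hiccups briefly. A sketch, reusing the same endpoint (path assumed):

```yaml
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
```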

Resource Limits — The Common Mistake

resources:
  requests:
    memory: "1.5Gi"   # JVM 768MB + .NET 512MB + OS 256MB
    cpu: "1000m"
  limits:
    memory: "2.5Gi"   # Room for GC spikes
    cpu: "2000m"

Both the JVM and .NET CLR allocate beyond their heap — native memory, thread stacks, code caches, GC overhead. Rule: limit = 1.5× (JVM heap + .NET heap). I've seen too many teams set tight limits and get mysterious OOMKills.
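As a quick sanity check on that rule, plugging in the heap figures from the sidecar manifest earlier:

```python
# Worked example of the sizing rule: limit = 1.5 x (JVM heap + .NET heap).
# Heap numbers match the sidecar manifest above (768Mi Java, 512Mi .NET).
jvm_heap_mb = 768
dotnet_heap_mb = 512
limit_mb = 1.5 * (jvm_heap_mb + dotnet_heap_mb)
print(limit_mb)  # 1920.0 -> round up to ~2Gi
```

The 2.5Gi limit in the snippet above rounds up further still, buying extra headroom for OS overhead and GC spikes.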

Performance: Cutting Startup Time

JVM startup is the biggest bottleneck. How to fix it:

  1. AppCDS (Class-Data Sharing): Pre-generate a shared archive → JVM startup drops from 10-30s to 3-8s
  2. GraalVM native images: If you don't need full JVM features → startup under 1 second
  3. .NET ReadyToRun: Publish with -p:PublishReadyToRun=true to eliminate JIT at startup
  4. Warm-up endpoints: Pre-load frequently accessed Java classes during the startup probe phase
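Item 1 can be wired into the Java build with JDK 13+'s dynamic archiving, which dumps a class archive when a training run exits. A sketch for the build stage, where app.jar and the --smoke-run flag are illustrative:

```dockerfile
# Training run: execute a short workload, dump the class archive on exit
FROM eclipse-temurin:21-jre AS cds
COPY build/libs/app.jar /app/app.jar
RUN java -XX:ArchiveClassesAtExit=/app/app.jsa -jar /app/app.jar --smoke-run

# At runtime, point the JVM at the archive:
#   java -XX:SharedArchiveFile=/app/app.jsa -jar /app/app.jar
```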

5 Pitfalls That'll Bite You

  1. No MaxRAMPercentage → the JVM falls back to defaults: 25% of container RAM for heap on modern container-aware JVMs (wasted memory), or host-RAM-based sizing on older ones (OOMKill)
  2. Hardcoded hostnames → Container IPs change on every restart. Use env vars or K8s service names.
  3. No startup probe → K8s kills pod before JVM initializes → infinite restart loop
  4. Single replica → Integration services are critical path. Always 2+ replicas with a PodDisruptionBudget.
  5. Debug config in prod → JMX ports and debug logging should never reach production. Use ConfigMaps.
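The PodDisruptionBudget from pitfall 4 is a few lines. A minimal sketch for the sidecar deployment earlier, assuming its app: integration-service label:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: integration-service-pdb
spec:
  minAvailable: 1          # keep at least one pod up during voluntary disruptions
  selector:
    matchLabels:
      app: integration-service
```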

Real-World Example

A financial services company running a .NET trading platform with a Java risk calculation engine:

  1. Phase 1: Containerize as-is in a single container. In-process bridging with JNBridgePro maintains sub-millisecond latency for thousands of risk calcs/second.
  2. Phase 2: Split to sidecar. Java gets its own container with dedicated memory, preventing GC pauses from impacting the .NET frontend.
  3. Phase 3: Separate deployments. K8s HPA scales Java pods based on CPU during market hours.

The key: JNBridgePro supports all three patterns with config changes, not code rewrites.


TL;DR

  • >10K calls/sec? Single container with in-process bridge
  • 1K-10K calls/sec? Sidecar pattern with localhost communication
  • <1K calls/sec? Separate services with gRPC/REST
  • Always set MaxRAMPercentage for the JVM
  • Always use startupProbe (not just liveness)
  • Always run 2+ replicas with PodDisruptionBudget

What pattern are you using for polyglot containers? I'd love to hear about your setup in the comments.
