Introduction
In a microservices architecture, a request may pass through 10+ services. Without distributed tracing, debugging latency issues is nearly impossible.
With telemetry data, you can correlate:
- Traces → where the request spent time.
- Metrics → how the system is performing overall.
- Logs → detailed debugging info, tied to the same trace/span IDs.
 
This “three pillars of observability” approach lets teams quickly detect, triage, and resolve production issues.
What is OpenTelemetry (OTel)?
OpenTelemetry is a set of APIs, SDKs, libraries, and integrations that aims to standardize the generation, collection, and management of telemetry data (logs, metrics, and traces).
OpenTelemetry is a CNCF (Cloud Native Computing Foundation) project created from the merger of OpenCensus (from Google) and OpenTracing (from Uber). It is rapidly emerging as the industry standard for observability.
In short:
telemetry data = logs, metrics, and traces
OpenTelemetry = a set of APIs, SDKs, libraries, and integrations that standardizes the generation, collection, and management of telemetry data.
Before OpenTelemetry
Traditional APM Tools (2000s-2010s):
- New Relic (2008) - One of the first SaaS APM platforms
- AppDynamics (2008) - Enterprise APM with deep code-level visibility
- Dynatrace (1998, evolved from Compuware) - AI-powered full-stack monitoring
- Splunk (2003) - Log analysis and machine data platform
- Datadog (2010) - Cloud-scale monitoring platform
 
Open Source Solutions:
- Zipkin (2012) - Distributed tracing system by Twitter
- Jaeger (2016) - Distributed tracing by Uber
- Prometheus (2012) / Micrometer (2017) - Metrics monitoring
- Grafana (2014) - Visualization and dashboards
 
Competing Standards:
OpenTracing and OpenCensus - Two separate projects that were created to solve the same problem: the lack of a standard for how to instrument code and send telemetry data.
Problems with the Pre-OpenTelemetry Landscape:
1. Vendor Lock-in
- Each APM vendor had proprietary agents and APIs
- Switching tools meant rewriting instrumentation code
- Organizations became dependent on specific vendors

2. Fragmented Standards
- OpenTracing focused on distributed tracing APIs
- OpenCensus provided both tracing and metrics
- No unified approach across the ecosystem

3. High Costs
- Enterprise APM tools were extremely expensive
- Per-host/per-transaction pricing models
- Limited flexibility in data export

4. Limited Interoperability
- Data couldn't be easily moved between tools
- Each vendor used different data formats
- Difficult to use best-of-breed solutions together

5. Complex Instrumentation
- Manual instrumentation was vendor-specific
- Inconsistent approaches across languages
- High maintenance burden
Why OpenTelemetry Was Created
- Standardization: OTLP provides a consistent way to export observability data, ensuring compatibility and reducing integration complexity.
- Unified model: traces, metrics, and logs are captured with the same SDKs and standards, so you don't need separate instrumentation for each vendor.
- Flexibility: OTel doesn't lock you into a single APM (like Datadog, New Relic, or Grafana). You can export to multiple backends simultaneously (Jaeger, Tempo, Zipkin, Prometheus, any OTLP endpoint, etc.).
- Future-proofing: as the observability landscape evolves, OTLP keeps your tracing data accessible and usable across different systems.
- Cost efficiency: open-source instrumentation, freedom to choose storage/analysis tools, and reduced vendor dependency.
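
To make the flexibility point concrete, here is a minimal sketch of how the backend choice becomes pure configuration when the Java agent is attached. The service name and endpoint below are illustrative, not from this article's demo:

```shell
# Backend selection is configuration, not code: the same instrumented
# application can target a different backend by changing these variables.
export OTEL_SERVICE_NAME=checkout-service            # illustrative name
export OTEL_TRACES_EXPORTER=otlp                     # or "zipkin", "console", ...
export OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
# Then start the app as usual, e.g.:
# java -javaagent:opentelemetry-javaagent.jar -jar checkout-service.jar
```

Swapping Jaeger for Tempo or Zipkin means changing only these values; the application binary and its instrumentation stay untouched.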
 
Different Ways to Integrate OpenTelemetry
Here are the various options to include OpenTelemetry in your applications:
1. Auto-Instrumentation (Agent-based)
- Java Agent: download and attach the OTel Java agent JAR
- No code changes required
- Automatically instruments popular libraries
- Run with: java -javaagent:opentelemetry-javaagent.jar -jar myapp.jar
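
As a sketch, fetching and attaching the agent from the command line might look like this (the URL follows the project's GitHub releases convention, also used in the Dockerfile later; pin a specific version in production rather than "latest"):

```shell
# Download the latest published agent jar (network access required).
AGENT_URL="https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar"
curl -sSL -o opentelemetry-javaagent.jar "$AGENT_URL" \
  || echo "download failed; retry or pin an explicit version"
# Attach it at JVM startup; no application code changes are needed:
# java -javaagent:opentelemetry-javaagent.jar -jar myapp.jar
```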
 
2. SDK Integration (Manual)
- Add OTel SDK dependencies to your project
- Manual instrumentation in code
- Full control over what gets traced
- Requires code modifications

3. Spring Boot Starter
- Spring-specific auto-configuration
- Combines auto and manual approaches
- Easy integration with the Spring ecosystem
- Dependency: opentelemetry-spring-boot-starter

4. Framework-Specific Integrations
- Micrometer Bridge: connect existing Micrometer metrics
- Zipkin/Jaeger Bridge: migrate from existing tracing
- Actuator Integration: Spring Boot health/metrics endpoints
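For option 3, the starter is pulled in as a regular Maven dependency. A sketch of the pom.xml entry, using the coordinates published by the OpenTelemetry project (verify the current version and consider importing the matching instrumentation BOM before use):

```xml
<!-- OpenTelemetry Spring Boot starter; check the latest released version -->
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-spring-boot-starter</artifactId>
</dependency>
```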
 
Tutorial: Tracing a Spring Boot App with OpenTelemetry Java Agent (No Code Changes Needed) & Jaeger (Dockerized)
In this example, we'll look at how to enable OpenTelemetry in a Spring Boot application using the OTel Java agent, with no code changes needed. With just a simple startup configuration, the agent automatically instruments common libraries and frameworks. We'll then configure the setup to forward traces to a backend like Jaeger, so you can visualize request flows, latency, and errors in a distributed system.
Dockerfile
# -------- Stage 1: Build with Maven--------
# Use Eclipse Temurin JDK 17 with Alpine Linux
FROM eclipse-temurin:17-jdk-alpine AS builder
# Set working directory
WORKDIR /app
# Copy pom.xml and the Maven wrapper so dependencies can be downloaded
COPY ./pom.xml ./pom.xml
COPY ./mvnw ./mvnw
COPY ./.mvn ./.mvn
# Make Maven wrapper executable and download dependencies
RUN chmod +x ./mvnw && ./mvnw dependency:go-offline
# Copy source files and build
COPY src ./src/
# Build the application
RUN ./mvnw clean package -DskipTests && mv target/docker-demo-0.0.1.jar docker-demo.jar && rm -rf target
# Download agent with verification
RUN wget -q https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar \
    -O opentelemetry-javaagent.jar
# -------- Stage 2: Runtime --------
FROM eclipse-temurin:17-jre-alpine AS runtime
# Set the working directory and make it writable by the non-root user
WORKDIR /app
# Define build arguments for user and group
ARG USER_ID=1001
ARG GROUP_ID=1001
ARG USERNAME=springuser
ARG GROUPNAME=springuser
# Create group and user using ARGs
RUN addgroup -g ${GROUP_ID} ${GROUPNAME} \
    && adduser -u ${USER_ID} -G ${GROUPNAME} -s /bin/sh -D ${USERNAME}
# Copy built JAR from builder stage
COPY --from=builder --chown=springuser:springuser /app/docker-demo.jar docker-demo.jar
COPY --from=builder --chown=springuser:springuser /app/opentelemetry-javaagent.jar opentelemetry-javaagent.jar
# Switch to non-root user
USER ${USERNAME}
# Expose application port
EXPOSE 8080
# Health check using wget (already available in the Alpine base image)
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health || exit 1
ENTRYPOINT ["java","-javaagent:/app/opentelemetry-javaagent.jar","-jar", "/app/docker-demo.jar"]
Build the Docker image with:
docker build -t docker-demo .
docker-compose.yml
version: '3.8'
services:
  # Jaeger - Tracing Backend
  jaeger:
    image: jaegertracing/all-in-one:1.51
    container_name: jaeger
    ports:
      - "16686:16686"    # Jaeger UI
      - "14250:14250"    # Jaeger gRPC
      - "4318:4318"     # OTLP HTTP
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    networks:
      - app-network
  # Your Spring Boot Application
  docker-demo:
    image: docker-demo:latest
    container_name: docker-demo
    ports:
      - "8080:8080"
    environment:
      # OpenTelemetry Configuration
      - OTEL_SERVICE_NAME=docker-demo
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318
      # The Java agent defaults to gRPC on 4317; use http/protobuf for port 4318
      - OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
      - OTEL_TRACES_EXPORTER=otlp
      - OTEL_METRICS_EXPORTER=none
      - OTEL_LOGS_EXPORTER=none
      - OTEL_TRACES_SAMPLER=always_on  # Sample ALL traces
      - OTEL_INSTRUMENTATION_COMMON_DEFAULT_ENABLED=true
      - OTEL_INSTRUMENTATION_HTTP_ENABLED=true
      - OTEL_INSTRUMENTATION_SPRING_WEB_ENABLED=true
      - OTEL_LOG_LEVEL=DEBUG
    depends_on:
      - jaeger
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
Build and start your containers (Spring Boot app + Jaeger) with:
docker-compose up --build
You should see logs from both services. The app's startup logs will confirm that the OpenTelemetry Java agent is active and sending traces to Jaeger.
Open the Jaeger UI
Once everything is up, head to the Jaeger UI in your browser:
http://localhost:16686/search
This UI is where you can search and explore your traces. Initially, the service list may be empty — because traces are only sent once you actually call your Spring Boot app.
Call your API
Now hit your Spring Boot endpoint:
http://localhost:8080/api/customers
This request will:
- Be intercepted by the OpenTelemetry Java Agent.
- Generate a trace for the request.
- Export the trace via OTLP to the Jaeger collector.
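
To give Jaeger several traces to show, you can loop over the endpoint from the shell (the /api/customers path comes from this demo app; adjust host and port to your setup):

```shell
# Fire a few requests so Jaeger has several traces to display.
URL="http://localhost:8080/api/customers"
for i in 1 2 3; do
  curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" "$URL" \
    || echo "request $i failed (is the app up?)"
done
```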
 
View your traces in Jaeger
Go back to the Jaeger UI http://localhost:16686/search
In the Service dropdown, you should now see your application (e.g., docker-demo).
Select it and click Find Traces.
You’ll see traces corresponding to the requests you made to /api/customers.
Click on a trace to expand and see spans (individual operations), timings, and dependencies.
Summary
At this point:
- Your Spring Boot app is running with the OpenTelemetry Java Agent attached.
- Traces are exported via OTLP to Jaeger.
- You can interact with your app and instantly see distributed traces in the Jaeger UI.
 
Next Steps:
- Add more endpoints or services to see how traces propagate.
- Connect additional backends (Prometheus, Tempo, Zipkin, etc.) by just changing environment variables, thanks to OpenTelemetry's vendor-agnostic design.
 
References & Credits
AI tools were used to assist with research and writing, but the final content was reviewed and verified by the author.