Alex Spinov

Grafana Tempo Has a Free API — Distributed Tracing Without Sampling

Grafana Tempo is an open-source distributed tracing backend built to store 100% of traces at scale. Unlike Jaeger or Zipkin, it needs only object storage (S3, GCS, Azure Blob, or local disk for testing) rather than a dedicated database such as Cassandra or Elasticsearch.

Free and open source, from Grafana Labs. It pairs naturally with Loki (logs) and Prometheus (metrics).

Why Use Tempo?

  • 100% trace capture — no sampling, store every trace
  • Cheap storage — uses S3/GCS/Azure Blob, not databases
  • TraceQL — SQL-like query language for traces
  • Multi-protocol — accepts Jaeger, Zipkin, OTLP, and OpenCensus
  • Grafana integration — seamless dashboards and trace-to-log correlation
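To give a taste of what "SQL-like" means here, a couple of illustrative TraceQL queries (the service and attribute names are examples, not from this post):

```
# Spans slower than 500ms from one service
{ resource.service.name = "web-scraper" && duration > 500ms }

# Server-error spans anywhere in the system
{ span.http.status_code >= 500 }
```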

Quick Setup

1. Install

# Docker — mount a tempo.yaml and expose the HTTP API plus both OTLP ports
docker run -d --name tempo \
  -v "$(pwd)/tempo.yaml:/etc/tempo.yaml" \
  -p 3200:3200 -p 4317:4317 -p 4318:4318 \
  grafana/tempo:latest -config.file=/etc/tempo.yaml

# Helm
helm repo add grafana https://grafana.github.io/helm-charts
helm install tempo grafana/tempo --set tempo.storage.trace.backend=s3
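The Docker command above expects a `tempo.yaml` to exist. A minimal single-binary config might look like the sketch below (local-disk storage for testing; swap `backend: local` for `s3`/`gcs` in production — check the Tempo docs for the exact storage keys of your version):

```shell
# Write a minimal tempo.yaml: OTLP receivers plus local-disk storage (a sketch)
cat > tempo.yaml <<'EOF'
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
        http:

storage:
  trace:
    backend: local
    wal:
      path: /var/tempo/wal
    local:
      path: /var/tempo/blocks
EOF
```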

2. Send Traces (OTLP)

# Using curl with OTLP HTTP
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "web-scraper"}}]},
      "scopeSpans": [{
        "spans": [{
          "traceId": "5B8EFFF798038103D269B633813FC60C",
          "spanId": "EEE19B7EC3C1B174",
          "name": "scrape-page",
          "kind": 1,
          "startTimeUnixNano": "1711612800000000000",
          "endTimeUnixNano": "1711612801500000000",
          "attributes": [{"key": "url", "value": {"stringValue": "https://example.com"}}]
        }]
      }]
    }]
  }'
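The hardcoded IDs and timestamps in the curl example go stale quickly (Tempo's search window won't find spans dated in the past). A small Python helper — an illustration, not part of any Tempo SDK — can build the same OTLP/JSON payload with fresh random IDs and current timestamps:

```python
import os
import time

def make_otlp_payload(service, span_name, duration_s=1.5, attributes=None):
    """Build an OTLP/JSON trace payload with fresh IDs and current timestamps."""
    now = time.time_ns()
    attrs = [{"key": k, "value": {"stringValue": v}}
             for k, v in (attributes or {}).items()]
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}}
            ]},
            "scopeSpans": [{
                "spans": [{
                    "traceId": os.urandom(16).hex(),  # 32 hex chars
                    "spanId": os.urandom(8).hex(),    # 16 hex chars
                    "name": span_name,
                    "kind": 1,
                    # OTLP/JSON carries nanosecond timestamps as strings
                    "startTimeUnixNano": str(now - int(duration_s * 1e9)),
                    "endTimeUnixNano": str(now),
                    "attributes": attrs,
                }]
            }]
        }]
    }

# POST it to the OTLP HTTP receiver (assumes Tempo on localhost:4318):
# import requests
# requests.post("http://localhost:4318/v1/traces", json=make_otlp_payload(
#     "web-scraper", "scrape-page", attributes={"url": "https://example.com"}))
```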

3. Query Traces

TEMPO="http://localhost:3200"

# Get trace by ID
curl -s "$TEMPO/api/traces/5B8EFFF798038103D269B633813FC60C" | jq '.batches[].scopeSpans[].spans[] | {name: .name, duration_ms: ((.endTimeUnixNano | tonumber) - (.startTimeUnixNano | tonumber)) / 1e6}'

# Search with TraceQL (service.name is a resource attribute, so scope it with "resource.")
curl -s -G "$TEMPO/api/search" \
  --data-urlencode 'q={resource.service.name="web-scraper" && duration > 1s}' | jq '.traces[] | {traceID: .traceID, rootServiceName: .rootServiceName, duration: .durationMs}'

# Search tags
curl -s "$TEMPO/api/search/tags" | jq '.tagNames'
curl -s "$TEMPO/api/search/tag/service.name/values" | jq '.tagValues'

Python Example

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Setup — service.name must be set as a resource attribute, or the
# TraceQL queries below won't match anything
provider = TracerProvider(resource=Resource.create({"service.name": "web-scraper"}))
provider.add_span_processor(BatchSpanProcessor(
    OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("web-scraper")

# Create spans
with tracer.start_as_current_span("scrape-site") as span:
    span.set_attribute("site", "example.com")
    with tracer.start_as_current_span("fetch-page"):
        pass  # HTTP request here
    with tracer.start_as_current_span("parse-html"):
        pass  # Parsing here

import requests
TEMPO = "http://localhost:3200"

# Flush buffered spans so they reach Tempo before we query
provider.force_flush()

# Query traces
result = requests.get(f"{TEMPO}/api/search", params={
    "q": '{resource.service.name="web-scraper"}'
}).json()

for t in result.get("traces", []):
    print(f"Trace: {t['traceID']} | Service: {t['rootServiceName']} | Duration: {t['durationMs']}ms")
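Once you have a trace ID from search, you can pull the full trace and compute per-span durations yourself. A small helper, assuming the `{"batches": [...]}` response shape that `/api/traces/{traceID}` returns (the same shape the jq example above walks):

```python
def span_durations_ms(trace_json):
    """Flatten a Tempo trace-by-ID response into (span name, duration in ms) pairs.

    Expects the {"batches": [...]} structure from /api/traces/{traceID}.
    """
    out = []
    for batch in trace_json.get("batches", []):
        for scope in batch.get("scopeSpans", []):
            for span in scope.get("spans", []):
                # Timestamps arrive as nanosecond strings; convert to ms
                dur = (int(span["endTimeUnixNano"])
                       - int(span["startTimeUnixNano"])) / 1e6
                out.append((span["name"], dur))
    return out

# Usage, combined with the search result above:
# trace_json = requests.get(f"{TEMPO}/api/traces/{t['traceID']}").json()
# for name, ms in span_durations_ms(trace_json):
#     print(f"{name}: {ms:.1f}ms")
```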

Key Endpoints

Endpoint                        Description
/api/traces/{traceID}           Get trace by ID
/api/search                     Search traces (TraceQL)
/api/search/tags                List available tags
/api/search/tag/{tag}/values    List values for a tag
/v1/traces (HTTP :4318)         OTLP HTTP receiver
:4317 (gRPC)                    OTLP gRPC receiver
/ready                          Readiness check
/metrics                        Prometheus metrics

Need a custom data extraction or scraping solution? I build production-grade scrapers for any website. Email: Spinov001@gmail.com | My Apify Actors
