Ayub Shah

Posted on • Originally published at mlopslab.org

What is LLM Observability? The ML Engineer's Practical Guide (2026)

Originally published at mlopslab.org/llm-observability — updated weekly. 0 sponsors, 0 affiliate links.


⚡ Quick answer: LLM observability is the practice of collecting metrics, traces, and logs from large language model applications to monitor behavior, catch failures, control costs, and improve output quality — in real time. Unlike traditional APM, it handles non-deterministic outputs, prompt/response pairs, token costs, hallucination rates, and multi-step agent chains that standard monitoring tools were never built for.


Table of Contents

  1. LLM observability: the actual definition
  2. Why traditional APM fails for LLMs
  3. Why it matters in 2026
  4. The three pillars: metrics, traces, logs
  5. Key LLM observability metrics
  6. Best LLM observability tools (2026)
  7. How to implement it in Python — step by step
  8. RAG observability: what's different
  9. Common mistakes to avoid
  10. FAQ

1. LLM observability: the actual definition

LLM observability is the ability to understand what your large language model is doing, why it's doing it, and whether it's doing it well — while it's running in production.

The formal definition: it's the process of instrumenting LLM applications to collect structured data (metrics, traces, logs) about inputs, outputs, latency, token usage, and downstream behavior — then making that data queryable and actionable.

But here's the part most definitions skip: LLMs are non-deterministic. The same prompt can produce different outputs. That single fact breaks every assumption traditional application monitoring was built on.

💡 Note: "Observability" comes from control theory — a system is observable if you can infer its internal state from its outputs. For LLMs, the "internal state" is opaque by design. Observability is how you compensate for that opacity.

A complete LLM observability setup lets you answer questions like:

  • Why did this prompt return garbage output on Tuesday at 3pm?
  • How many tokens did we burn last week, and on which features?
  • Is our retrieval step actually finding relevant context, or just noise?
  • Which user flows are generating the most hallucinations?
  • Did our prompt change last Wednesday improve or hurt response quality?

Without observability, you're guessing at all of the above.


2. Why traditional APM fails for LLMs

You might already have Datadog, New Relic, or Prometheus running. They're great tools. They will not help you monitor an LLM application properly. Here's why:

Traditional APM vs LLM Observability

| Dimension | Traditional APM | LLM Observability |
| --- | --- | --- |
| Output nature | Deterministic — same input → same output | Non-deterministic — same prompt → different outputs |
| Failure mode | Binary (HTTP 200 vs 500) | Output can be grammatically correct but factually wrong |
| Performance definition | Speed + uptime | Relevance, factual accuracy, coherence |
| Quality | Not applicable | First-class concern with dedicated metrics |
| Tracing | Fixed execution paths | Spans across prompt → retrieval → generation → re-ranking |
| Cost tracking | Not needed | Token cost per request is critical (it's your AWS bill) |
| Errors | Clear: stack traces, exceptions | "Silent failures" — plausible-sounding wrong answers |

The most dangerous failure mode in LLM production is the silent failure: the model returns a 200 OK with a confident, fluent, completely wrong answer. Your APM sees green. Your users are getting misinformation. You have no idea.

That's the problem LLM observability is built to solve.


3. Why it matters in 2026

1. You're paying per token — and it adds up fast

GPT-4o charges ~$5 per million input tokens. Claude Opus is $15. If you're running a RAG pipeline that sends 3,000-token prompts for every user query, and you have 10,000 daily active users, you're burning through tokens fast.

Without observability, you have zero visibility into:

  • Which features are expensive
  • Which prompts are bloated
  • Which retrieval chunks are redundant

A 40% cost reduction is realistic just from instrumenting your token usage and trimming waste.
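To make "adds up fast" concrete, here is a quick back-of-envelope sketch using the numbers above. The price is an assumption for illustration; check your provider's current rate card.

```python
# Back-of-envelope token cost estimate for the RAG scenario above.
# The price below is an illustrative assumption, not a quoted rate.
PRICE_PER_M_INPUT = 5.00  # USD per 1M input tokens (assumed GPT-4o-class rate)

def daily_input_cost(tokens_per_request: int, requests_per_day: int,
                     price_per_m: float = PRICE_PER_M_INPUT) -> float:
    """Estimated daily spend on input tokens alone."""
    daily_tokens = tokens_per_request * requests_per_day
    return daily_tokens / 1_000_000 * price_per_m

# 3,000-token prompts x 10,000 daily users = 30M input tokens/day
cost = daily_input_cost(3_000, 10_000)
print(f"${cost:.2f}/day")  # $150.00/day, before output tokens
```

At $150/day that is roughly $4,500/month on input tokens alone, which is why trimming bloated prompts and redundant chunks pays off so quickly.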

2. Hallucinations don't throw exceptions

When a SQL query fails, you get an error. When an LLM confidently fabricates a legal clause, a medical dosage, or a product spec — you get a 200 OK.

The only way to catch this is output evaluation: either automated (LLM-as-judge, assertion checks) or via user feedback signals — both of which require an observability layer to collect and route.

3. LLM apps are increasingly multi-step

A modern RAG agent might do:

```
query rewriting → vector search → reranking → generation → post-processing → tool calls
```

Any step can fail silently. Without distributed tracing across all those steps, you have no way to know which node in the chain is degrading your quality.

Tip: If you're already logging prompts and responses to a database, you have the raw material for LLM observability. The difference is structure, aggregation, and making that data queryable — which is what proper tooling does.


4. The three pillars: metrics, traces, logs

LLM observability, like traditional observability, rests on three data types. But each has LLM-specific meaning:

📊 Metrics — aggregated numbers over time

Latency percentiles, token consumption per day, error rates, hallucination rate, TTFT (time to first token), user thumbs-up/down ratio.

These are your dashboards — the signals that tell you whether the system is healthy at a glance.

🔍 Traces — the execution path of a single request

A trace for an LLM request spans every step:

```
input received → prompt constructed → retrieval triggered → chunks fetched → LLM called → response parsed → returned
```

Traces tell you where time and tokens were spent on a specific request and let you drill into failures.

📋 Logs — raw structured records of events

Every prompt sent, every response received, every retrieved chunk, every tool call. Logs are the ground truth — unsampled, timestamped, filterable.

They're what you reach for during incident investigation when metrics tell you something is wrong but not exactly what.

A mature LLM observability setup collects all three and links them: a metric spike points you to a trace, a trace links to the logs of that specific exchange.
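As a sketch of what the "ground truth" log records can look like, here is a minimal structured log entry that carries the trace and span IDs needed to link the three pillars. The field names are illustrative, not a standard schema:

```python
import json
import time
import uuid

# One structured log record per LLM exchange. The trace_id/span_id fields
# are the glue that lets a metric spike point to a trace, and a trace
# point to the raw logs of that specific exchange.
def make_log_record(trace_id: str, span_id: str, prompt: str,
                    response: str, input_tokens: int, output_tokens: int,
                    latency_ms: float) -> str:
    record = {
        "timestamp": time.time(),
        "trace_id": trace_id,   # links this log line to its trace
        "span_id": span_id,     # and to the specific step within that trace
        "prompt": prompt,
        "response": response,
        "usage": {"input_tokens": input_tokens, "output_tokens": output_tokens},
        "latency_ms": latency_ms,
    }
    return json.dumps(record)

line = make_log_record(str(uuid.uuid4()), "span-generation",
                       "What is MLflow?", "MLflow is...", 412, 96, 830.5)
```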

⚠️ Warning: Logging raw prompts and responses raises data privacy and compliance considerations. If users send PII, it ends up in your logs. Make sure you have a redaction or anonymization strategy before you log at full fidelity in production.


5. Key LLM observability metrics

These are the metrics that actually matter — not the generic list you'll find everywhere, but the ones that show up when something goes wrong.

⏱️ Latency metrics

| Metric | What it measures | Why it matters |
| --- | --- | --- |
| TTFT (Time To First Token) | Latency before streaming starts | User-perceived speed — low TTFT feels fast even if total latency is high |
| TPS (Tokens Per Second) | Generation speed | Degrades under load — track p50, p95, p99 |
| End-to-end latency | Total request time including retrieval + generation | What SLAs are measured against |
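TTFT and TPS are easy to compute yourself if your tooling doesn't report them. A minimal sketch that works with any iterator of streamed text chunks; the fake stream below stands in for a real streaming LLM response:

```python
import time
from typing import Iterable, Tuple

def measure_stream(token_stream: Iterable[str]) -> Tuple[float, float]:
    """Consume a token stream and return (ttft_seconds, tokens_per_second)."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
        count += 1
    total = time.perf_counter() - start
    gen_time = total - (ttft or 0.0)            # generation time after first token
    tps = count / gen_time if gen_time > 0 else float("inf")
    return ttft or 0.0, tps

# Simulated stream standing in for a real streaming response
def fake_stream():
    time.sleep(0.05)           # model "thinking" before the first token
    for tok in ["MLflow ", "is ", "an ", "ML ", "platform"]:
        time.sleep(0.01)
        yield tok

ttft, tps = measure_stream(fake_stream())
```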

💸 Cost metrics

| Metric | What it measures | Why it matters |
| --- | --- | --- |
| Input tokens/request | Prompt tokens per call | Where cost bloat hides — long system prompts, noisy chunks |
| Cost per request | Input + output tokens × model price | Unit economics for your feature |
| Daily token burn rate | Total tokens across all requests | Set alerts here — a loop bug shows up here before your bill does |
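Cost per request is simple arithmetic once you log token usage. A sketch, with illustrative prices (substitute your provider's current rate card):

```python
# (input, output) USD per 1M tokens. These numbers are illustrative
# assumptions, not quoted rates.
PRICES = {
    "gpt-4o":      (5.00, 15.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def cost_per_request(model: str, input_tokens: int, output_tokens: int) -> float:
    """Unit cost of a single call, from logged token counts."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A 3,000-token RAG prompt with a 400-token answer on gpt-4o:
c = cost_per_request("gpt-4o", 3_000, 400)  # 0.015 + 0.006 = ~$0.021
```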

🎯 Quality metrics

| Metric | What it measures | Why it matters |
| --- | --- | --- |
| Faithfulness | Does the answer stay grounded in retrieved context? | Unfaithful answers are hallucinations |
| Relevance score | Is the answer relevant to what was asked? | Factually correct but wrong-topic answers still fail |
| User feedback rate | Thumbs up/down, ratings, correction events | Highest-signal quality metric — direct from users |

💡 Note: Quality metrics are the hardest to collect automatically. Start with user feedback signals (explicit) and retry/abandon rate (implicit). Then layer in automated evaluation once you have a baseline.


6. Best LLM observability tools (2026)

Honest breakdown. I've tested all of these. No affiliate links, no vendor bias.

🦜 Langfuse — Best open-source default

Self-hostable, developer-first LLM tracing. Best OSS option if you want full data control and a clean SDK.

Pros:

  • Self-hostable via Docker (free)
  • SDKs for Python, JS, LangChain, LlamaIndex
  • Prompt management + version tracking
  • Dataset + evaluation workflows

Best for: Most teams. Start here.


🔥 Arize Phoenix — Best for embedding analysis

ML observability platform with strong LLM support.

Pros:

  • OpenInference tracing standard
  • Embedding drift & cluster visualization
  • Built-in evals (hallucination, toxicity)
  • Works fully offline / local

Best for: Teams already using Arize for traditional ML monitoring.


⚡ Helicone — Fastest to set up

Proxy-based approach — zero SDK changes. One header = instant logging.

Pros:

  • One-line integration (proxy URL swap)
  • Real-time cost dashboard
  • Request caching (reduces cost)
  • 10k req/month free

Best for: Cost tracking, teams that want zero implementation overhead.


🌊 W&B Weave — Best if you're already on W&B

Weights & Biases' LLM observability layer.

Pros:

  • Native W&B integration
  • Automatic function tracing via decorator
  • Evaluation pipelines built-in
  • Free for individual use

Best for: Teams using W&B for experiment tracking.


📡 OpenTelemetry — Most flexible, most work

Vendor-neutral observability standard. Build your own pipeline.

Pros:

  • Vendor-neutral (ship to any backend)
  • OpenLLMetry SDK for LLM spans
  • Works with Jaeger, Tempo, Datadog

Best for: Enterprise, multi-backend infrastructure.


🐕 Datadog LLM Observability — Enterprise grade, enterprise price

Pros:

  • Unified with existing Datadog APM
  • Auto-instrumentation for OpenAI/Anthropic
  • Cluster analysis for prompt patterns

Best for: Existing Datadog shops with budget.


Quick comparison table

| Tool | Open Source | Self-hostable | RAG support | Evals built-in | Best for |
| --- | --- | --- | --- | --- | --- |
| Langfuse | ✅ | ✅ | ✅ | ✅ | Most teams — best OSS default |
| Arize Phoenix | ✅ | ✅ | ✅ | ✅ | Embedding analysis, ML teams |
| Helicone | ✅ | ⚠️ | ⚠️ | ⚠️ | Cost tracking, fastest setup |
| W&B Weave | ✅ | ❌ | ✅ | ✅ | W&B users, experiment correlation |
| OpenTelemetry | ✅ | ✅ | ⚠️ | ❌ | Enterprise, multi-backend |
| Datadog LLM Obs | ❌ | ❌ | ✅ | ✅ | Existing Datadog shops |

Recommendation: Start with Langfuse. Open source, self-hostable with Docker in 5 minutes, clean Python SDK, covers 90% of what you need. Graduate to OpenTelemetry when you need unified tracing across complex multi-service infra.


7. How to implement it in Python — step by step

Enough theory. Here's how you actually do it. We'll use Langfuse — the best open-source option — for the full flow from a simple LLM call to a RAG pipeline with spans, scores, and cost tracking.

Step 1: Set up Langfuse (self-hosted via Docker)

```bash
# Clone and start Langfuse locally
git clone https://github.com/langfuse/langfuse.git
cd langfuse
docker compose up -d

# Langfuse UI will be at http://localhost:3000
# Create a project and grab your API keys

# Install the Python SDK
pip install langfuse openai
```

Step 2: Basic LLM call with full tracing

```python
from langfuse import Langfuse
from langfuse.openai import openai  # drop-in replacement for the openai module

# Init — pass keys explicitly, or set LANGFUSE_PUBLIC_KEY,
# LANGFUSE_SECRET_KEY, and LANGFUSE_HOST as environment variables
langfuse = Langfuse(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="http://localhost:3000"  # or https://cloud.langfuse.com
)

# This single import swap gives you automatic tracing
# of every OpenAI call: prompt, response, tokens, latency, cost
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful MLOps assistant."},
        {"role": "user", "content": "What is MLflow used for?"}
    ],
    # Optional: tag this trace for filtering in the UI
    name="mlops-qa",
    metadata={"feature": "chat", "user_id": "u_123"}
)

print(response.choices[0].message.content)
# All trace data is now visible in the Langfuse UI — zero extra code needed
```

The import swap is the key. `from langfuse.openai import openai` patches the OpenAI client and captures everything automatically: token counts, cost, latency, the full prompt and response.

Step 3: Custom spans for multi-step pipelines

```python
from langfuse import Langfuse
from langfuse.openai import openai
from langfuse.decorators import langfuse_context, observe

langfuse = Langfuse()

# @observe creates a span for this function automatically
@observe()
def retrieve_chunks(query: str, top_k: int = 5) -> list:
    """Simulated vector store retrieval"""
    # In production: call your Chroma / Pinecone / Weaviate here
    chunks = [
        {"text": "MLflow is an open source platform for ML lifecycle management...", "score": 0.92},
        {"text": "MLflow Tracking logs parameters, metrics, and artifacts...", "score": 0.87},
    ]
    # Log retrieval metadata to the span
    langfuse_context.update_current_observation(
        input=query,
        output=chunks,
        metadata={"top_k": top_k, "chunk_count": len(chunks)}
    )
    return chunks

@observe()
def build_prompt(query: str, chunks: list) -> str:
    """Assemble the final prompt from query + retrieved context"""
    context = "\n\n".join([c["text"] for c in chunks])
    return f"""Answer using only the context below.

Context:
{context}

Question: {query}
Answer:"""

@observe()  # The root trace — wraps the whole pipeline
def rag_answer(query: str) -> str:
    # Step 1: retrieve — traced as a child span
    chunks = retrieve_chunks(query)

    # Step 2: build prompt — traced as a child span
    prompt = build_prompt(query, chunks)

    # Step 3: generate — traced via patched OpenAI client
    response = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
    answer = response.choices[0].message.content

    # Step 4: score the output quality (0-1 scale)
    langfuse_context.score_current_trace(
        name="answer_quality",
        value=1.0,  # replace with your eval logic
        comment="Auto-scored: retrieval found relevant chunks"
    )

    return answer

# Run it
result = rag_answer("What is MLflow used for?")
print(result)

# Flush traces before script exits
langfuse.flush()
```

Step 4: Automated quality scoring (LLM-as-judge)

```python
import json

from openai import OpenAI

raw_client = OpenAI()  # unpatched — don't trace the judge calls

def evaluate_faithfulness(question: str, context: str, answer: str) -> tuple[float, str]:
    """
    LLM-as-judge: score whether the answer is faithful to the retrieved context.
    Returns a score from 0.0 (hallucination) to 1.0 (fully grounded).
    """
    judge_prompt = f"""You are evaluating an AI assistant's answer for faithfulness.

RETRIEVED CONTEXT:
{context}

QUESTION: {question}

ANSWER: {answer}

Task: Score whether the answer is ONLY based on the retrieved context (not hallucinated).
Respond with JSON only: {{"score": 0.0-1.0, "reason": "brief explanation"}}
0.0 = completely hallucinated | 1.0 = fully grounded in context"""

    response = raw_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": judge_prompt}],
        response_format={"type": "json_object"}
    )

    result = json.loads(response.choices[0].message.content)
    return result["score"], result["reason"]

# Post the score back to Langfuse for any trace
# (question, context, and answer come from your pipeline)
score, reason = evaluate_faithfulness(question, context, answer)

langfuse.score(
    trace_id="your-trace-id",  # from langfuse_context.get_current_trace_id()
    name="faithfulness",
    value=score,
    comment=reason
)
```

Step 5: Capture user feedback signals

```python
from langfuse import Langfuse

langfuse = Langfuse()

def handle_user_feedback(trace_id: str, thumbs_up: bool, comment: str | None = None):
    """Record user feedback against the trace that generated the response"""
    langfuse.score(
        trace_id=trace_id,
        name="user_feedback",
        value=1 if thumbs_up else 0,
        comment=comment
    )

# Example: in your FastAPI endpoint
# @app.post("/feedback")
# async def feedback(trace_id: str, positive: bool, comment: str | None = None):
#     handle_user_feedback(trace_id, positive, comment)
#     return {"status": "recorded"}
```

After this implementation, your Langfuse dashboard shows: every trace, constituent spans (retrieval → prompt build → generation), token counts, latency by step, faithfulness scores, and user feedback — all correlated.

Pro tip: Get the current `trace_id` inside any `@observe`-decorated function with `langfuse_context.get_current_trace_id()`. Store this in your response payload so you can link user feedback back to the exact trace.


8. RAG observability: what's different

RAG pipelines have unique failure modes that generic LLM observability doesn't capture.

RAG-specific metrics to track

| Metric | What it measures | Good range | Bad signal |
| --- | --- | --- | --- |
| Context precision | Are retrieved chunks actually relevant? | > 0.8 | Low → noisy retrieval, poor embedding |
| Context recall | Did retrieval find all needed chunks? | > 0.75 | Low → answer is incomplete |
| Faithfulness | Is the answer grounded in context? | > 0.85 | Low → hallucination |
| Answer relevance | Does the answer address what was asked? | > 0.8 | Low → model answering wrong question |
| Retrieval latency | Time spent in vector search | < 200 ms | High → index needs optimization |
| Chunk token count | Avg tokens per retrieved chunk | 200–600 | Too high → inflated cost, diluted signal |

The RAG failure nobody talks about: context stuffing

The most common undetected RAG failure: retrieval returns chunks that look semantically similar to the query but don't contain the actual answer. The model then either hallucinates or returns a plausible-sounding non-answer.

Context precision catches this. Track it per query, and set an alert if it drops below 0.6 for more than 5% of requests.
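Both the metric and the alert rule are straightforward to compute once you have per-chunk relevance labels, whether from an LLM judge or human annotation. A minimal sketch:

```python
def context_precision(relevance_labels: list) -> float:
    """Fraction of retrieved chunks judged relevant (1) vs not (0).
    Labels come from an LLM judge or human annotation."""
    if not relevance_labels:
        return 0.0
    return sum(relevance_labels) / len(relevance_labels)

def should_alert(precisions: list, threshold: float = 0.6,
                 max_bad_fraction: float = 0.05) -> bool:
    """Alert if more than 5% of recent requests fall below the threshold."""
    if not precisions:
        return False
    bad = sum(1 for p in precisions if p < threshold)
    return bad / len(precisions) > max_bad_fraction

# 3 of 4 retrieved chunks were relevant for this query:
context_precision([1, 1, 0, 1])  # 0.75
```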

Measuring RAG quality with RAGAS

```python
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall
)
from datasets import Dataset

# Collect your RAG pipeline outputs
data = {
    "question":     ["What is MLflow used for?"],
    "answer":       ["MLflow is used for experiment tracking..."],
    "contexts":     [["MLflow is an open source platform...", "MLflow Tracking logs..."]],
    "ground_truth": ["MLflow manages the ML lifecycle including tracking..."]
}

dataset = Dataset.from_dict(data)

# Run RAGAS evaluation — gives you all 4 RAG metrics at once
result = evaluate(
    dataset=dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall]
)

print(result)
# {'faithfulness': 0.92, 'answer_relevancy': 0.88,
#  'context_precision': 0.94, 'context_recall': 0.81}

# Then post these scores to Langfuse for the corresponding trace
```

9. Common mistakes to avoid

❌ Logging everything with no retention policy

Storing every raw prompt and response forever will balloon your storage costs. Set a 30–90 day retention window. Sample high-volume low-value traces (e.g., 1 in 10 for healthy routine calls), and keep 100% of error traces and scored traces.
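A sampling policy like that fits in a few lines. A sketch, using the keep-all conditions and rate suggested above:

```python
import random

def should_keep_trace(has_error: bool, has_score: bool,
                      sample_rate: float = 0.1) -> bool:
    """Keep 100% of error traces and scored traces; sample the rest.
    sample_rate=0.1 keeps roughly 1 in 10 healthy routine traces."""
    if has_error or has_score:
        return True
    return random.random() < sample_rate
```

Call this before writing a trace to storage; the high-value traces always survive while routine volume is cut by ~90%.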

❌ Treating latency as the only quality signal

Fast bad answers are worse than slow good ones. Build quality metrics from day one — even if it's just a user thumbs-up/down. Don't let "it's fast" become your proxy for "it's working."

❌ Adding observability as an afterthought

If you retrofit tracing into a production system with no span structure, you'll get a flat blob of logs with no actionable signal. Instrument at the architecture level — define your spans (retrieval, generation, eval) from the first prototype.

❌ Not separating judge calls from production traces

If you're using an LLM to evaluate your LLM's outputs, those evaluation calls must use an unpatched client. Otherwise: recursive tracing, inflated token counts, meaningless cost data.

❌ Ignoring PII in logs

Users will send email addresses, names, medical info into your LLM app. In production, run a PII redaction pass before writing traces to storage. This is not optional if you're handling EU users (GDPR).
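As a starting point, a minimal regex-based redaction pass might look like this. This is a sketch, not a complete PII solution; production systems typically use a dedicated tool such as Microsoft Presidio:

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
# SSN comes before PHONE so the more specific pattern wins on overlap.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

redact("Contact jane.doe@example.com or 555-123-4567")
# -> 'Contact [EMAIL] or [PHONE]'
```

Run a pass like this over prompts and responses before they are written to trace storage, not after.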


10. FAQ

"What's the difference between LLM monitoring and LLM observability?"

Monitoring tracks predefined metrics (latency, error rate) and alerts when they cross thresholds.

Observability is broader — it's the ability to ask arbitrary questions about your system's behavior from its outputs, including things you didn't anticipate when you set up the system.

In practice: monitoring tells you something is wrong, observability helps you figure out why and what.

"Can I use Prometheus and Grafana for LLM observability?"

Yes, for system-level metrics (latency, throughput, error rate, token counts). Expose these via a /metrics endpoint and scrape with Prometheus.

But you'll still need a purpose-built tool like Langfuse or Phoenix for prompt/response tracing, RAG-specific metrics, and quality evaluation. Prometheus doesn't understand the semantic content of LLM outputs.
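For illustration, here is what the Prometheus exposition format behind a /metrics endpoint looks like, sketched without any client library. In practice you would use the official `prometheus_client` package; the class and metric names below are illustrative:

```python
from collections import defaultdict

class MetricsRegistry:
    """Tiny counter registry that renders Prometheus exposition format."""
    def __init__(self):
        self.counters = defaultdict(float)

    def inc(self, name: str, value: float = 1.0, **labels):
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        key = f"{name}{{{label_str}}}" if label_str else name
        self.counters[key] += value

    def render(self) -> str:
        """Text body a GET /metrics handler would return."""
        return "\n".join(f"{key} {val}" for key, val in sorted(self.counters.items()))

reg = MetricsRegistry()
reg.inc("llm_requests_total", model="gpt-4o")
reg.inc("llm_tokens_total", 3400, model="gpt-4o", kind="input")
print(reg.render())
# llm_requests_total{model="gpt-4o"} 1.0
# llm_tokens_total{kind="input",model="gpt-4o"} 3400.0
```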

"How do you detect hallucinations automatically?"

Three main approaches:

  1. Faithfulness scoring — use an LLM judge to check if the answer is grounded in retrieved context
  2. Assertion checks — programmatic rules for your domain (e.g., "answer must not contain dates before 2020")
  3. Semantic similarity — compare answer embedding to context embedding; low similarity suggests "off-context" generation

None of these are perfect. Start with LLM-as-judge faithfulness scoring combined with user feedback signals.
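Approach 2 is the cheapest to start with. A sketch of domain assertion checks; the rules here are illustrative examples, not a real policy:

```python
import re
from typing import List

def check_answer(answer: str) -> List[str]:
    """Run programmatic domain rules on an answer; return any violations."""
    violations = []
    # Example rule: no dates before 2020 (e.g. stale training-data leakage)
    for year in re.findall(r"\b(19\d{2}|20[01]\d)\b", answer):
        violations.append(f"suspicious year: {year}")
    # Example rule: dosage claims must carry a consult-a-professional caveat
    if re.search(r"\bdosage\b", answer, re.IGNORECASE) and "consult" not in answer.lower():
        violations.append("dosage mentioned without a consult-a-professional caveat")
    return violations

check_answer("The framework was released in 2015.")
# -> ['suspicious year: 2015']
```

Log the violations as scores on the trace, and you get a cheap, deterministic hallucination signal alongside the LLM-as-judge one.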

"Is LLM observability the same as MLOps?"

MLOps is the broader practice of operationalizing machine learning — including training pipelines, experiment tracking, model deployment, and monitoring.

LLM observability is a specific subset focused on monitoring LLM-powered applications in production. It overlaps with MLOps but has different tooling: token costs, prompt management, output quality evaluation vs. model drift, retraining pipelines.

"What's the cheapest way to start?"

Self-host Langfuse via Docker (free). Use the Python SDK with the OpenAI import swap (5 lines of code). You'll have full tracing, token tracking, and a queryable UI for $0.

Your only cost is the server running Langfuse — a $5/month DigitalOcean droplet is enough for early-stage projects.

"Does LLM observability work with open-source models (Llama, Mistral)?"

Yes. Langfuse and Phoenix work with any model via their generic SDK (you manually log inputs/outputs). For models served via vLLM or Ollama with an OpenAI-compatible API, the OpenAI import swap works directly.

Token cost tracking requires manual calculation since open-source model servers don't report costs.


Conclusion

LLM observability isn't optional at production scale. The "it works in testing" mindset breaks fast when real users send unexpected inputs, when retrieval quality degrades silently, when a token-hungry prompt pattern starts inflating your inference bill.

The stack to start with:

  • Langfuse for tracing
  • RAGAS for RAG quality metrics
  • User feedback signals for ground truth

That combination gives you 80% of what you need with maybe a day of implementation work.

Don't build the perfect observability system before shipping. Instrument as you build. Add quality metrics when you have baseline data to compare against. The value compounds.

🔗 Next step: Set up Langfuse locally → instrument one LLM call → check the trace in the UI. That's the first 20 minutes. Everything else follows from having that first trace visible.



Written by Ayub Shah — ML Engineering student, MLOps enthusiast. Testing every tool so you don't have to. No sponsors, no affiliate links.

→ More at mlopslab.org
