DEV Community

# observability

Gaining deep insights into system behavior through metrics, logs, and traces.

Posts

- Wire OpenTelemetry Around Your Anthropic Python Calls (6 min read)
- Why I Built a Self-Hosted Alternative to Helicone (and What I Learned) (5 min read)
- Two-Thirds of AI-Agent Security Incidents Share One Pattern (8 min read)
- The Vercel Breach Began at a Compromised AI Tool. Here's the Lesson. (7 min read)
- Google's TurboQuant: 6x KV Cache Compression Without Retraining (8 min read)
- Claude Code's Prompt Cache TTL Dropped From 1h to 5m (6 min read)
- The 5 Distributed System Failures That Show Up in 80% of Postmortems (8 min read)
- eBPF for SREs: Observability Without Agents (3 min read)
- 97% Expect a Major AI Agent Incident This Year. Are You in the 3%? (8 min read)
- Inside Datadog's Log Pipeline: How "Logging without Limits" Actually Works (3 min read)
- Your RAG Eval Set Is Probably Wrong. The Test That Catches It. (7 min read)
- Stop Caching the Whole LLM Response. Cache the Embedding. (8 min read)
- The 3 Alerts Every LLM Team Should Have Set Up by Tomorrow (7 min read)
- The 6-Line Postgres Migration That Halved a Team's LLM Bill (7 min read)
- The 100-Line LLM Cache That Pays For Itself in a Week (8 min read)