
Anil Prasad

Posted on • Originally published at open.substack.com

How We Cut LLM API Costs by 94%: A 3-Layer Caching Strategy

Last month, our LLM API bills hit $47,000.

This month: $2,800.

Same product. Same user experience. Same performance.

94% cost reduction without sacrificing quality.

Here's the architecture that made it possible.


The Wake-Up Call

CFO's message: "Fix this or we shut down the AI features."

We had 90 days.

Most teams would panic and start cutting features. We treated it as an architecture problem, not a budget problem.


The Solution: 3-Layer Caching + Intelligent Routing

(Diagram: cost optimization architecture)


Layer 1: Prompt Caching (68% hit rate)

Problem: Every request pays for the same tokens repeatedly.

Standard system prompts, documentation, static context—all charged every time.

Solution: Claude's native prompt caching.

import anthropic

client = anthropic.Anthropic(api_key="your-key")

# Mark cacheable content with cache_control
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are a helpful AI assistant for our healthcare platform...",
            "cache_control": {"type": "ephemeral"}  # Cache this
        },
        {
            "type": "text", 
            "text": f"Current user context: {user_context}"  # Don't cache (changes per user)
        }
    ],
    messages=[{"role": "user", "content": query}]
)

Economics:

  • Input tokens: $3.00 / 1M tokens
  • Cached input tokens: $0.30 / 1M tokens (10x cheaper!)
  • Cache write: $3.75 / 1M tokens (one-time cost)

Example:

First request (cache write):

5,000 token system prompt
Cost: $0.01875 (5K tokens × $3.75/1M)

Next 100 requests (cache hits):

Same 5,000 token system prompt
Cost: $0.15 (5K tokens × $0.30/1M × 100)

Total: $0.16875 for 101 requests
Without caching: $1.515 (5K × $3/1M × 101)
Savings: ~88.9%
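This arithmetic is easy to get wrong, so here's a quick sanity check using the prices from the table above:

```python
# Prices per token, from the economics table above
PRICE_INPUT = 3.00 / 1_000_000        # regular input tokens
PRICE_CACHED = 0.30 / 1_000_000       # cached input tokens
PRICE_CACHE_WRITE = 3.75 / 1_000_000  # one-time cache write

tokens = 5_000                            # system prompt size
cache_write = tokens * PRICE_CACHE_WRITE  # first request: $0.01875
cache_hits = 100 * tokens * PRICE_CACHED  # next 100 requests: $0.15

with_caching = cache_write + cache_hits   # $0.16875
without_caching = 101 * tokens * PRICE_INPUT  # $1.515

print(f"savings: {1 - with_caching / without_caching:.1%}")  # → savings: 88.9%
```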

Our hit rate: 68%


Layer 2: Semantic Caching (15% hit rate)

Problem: Exact-match caching doesn't catch similar queries.

"How do I reset my password?" vs "Password reset help?" are semantically identical but lexically different.

Solution: Semantic similarity matching.

import time

import numpy as np
from sentence_transformers import SentenceTransformer

class SemanticCache:
    def __init__(self, similarity_threshold=0.95):
        self.model = SentenceTransformer('all-MiniLM-L6-v2')
        self.cache = {}  # {embedding tuple: (query, response, timestamp)}
        self.threshold = similarity_threshold

    def _embed(self, query: str) -> np.ndarray:
        # Normalized embeddings make the dot product a true cosine similarity
        return self.model.encode(query, normalize_embeddings=True)

    def get(self, query: str):
        """Check if a semantically similar query exists in the cache"""
        query_embedding = self._embed(query)

        for cached_embedding, (cached_query, response, timestamp) in self.cache.items():
            similarity = float(np.dot(query_embedding, cached_embedding))

            if similarity >= self.threshold:
                print(f"Cache HIT: '{query}' ≈ '{cached_query}' (similarity: {similarity:.3f})")
                return response

        return None

    def set(self, query: str, response: str):
        """Store a query-response pair keyed by its embedding"""
        embedding = self._embed(query)
        # numpy arrays aren't hashable, so store the embedding as a tuple
        self.cache[tuple(embedding)] = (query, response, time.time())

# Usage
cache = SemanticCache(similarity_threshold=0.95)

# First query
response = llm.complete("How do I reset my password?")
cache.set("How do I reset my password?", response)

# Similar query (cache hit!)
cached_response = cache.get("Password reset help?")
# Returns the cached response, no LLM call

Additional 15% cache hit rate on top of prompt caching.
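The linear scan in `get` is fine for a few thousand entries. Past that, stacking the cached embeddings into one matrix lets NumPy do all comparisons in a single matrix-vector product. A sketch (the function and names here are ours, not part of the cache class above):

```python
import numpy as np

def best_match(query_embedding: np.ndarray,
               cached_embeddings: np.ndarray,  # shape (n, d), rows unit-normalized
               threshold: float = 0.95):
    """Return (index, similarity) of the closest cached query, or None."""
    if cached_embeddings.shape[0] == 0:
        return None
    sims = cached_embeddings @ query_embedding  # all cosine similarities at once
    idx = int(np.argmax(sims))
    return (idx, float(sims[idx])) if sims[idx] >= threshold else None

# Toy example with two cached unit vectors
cached = np.array([[1.0, 0.0], [0.0, 1.0]])
q = np.array([0.99, (1 - 0.99**2) ** 0.5])  # unit vector close to row 0
hit = best_match(q, cached)                 # → (0, 0.99)
```

For very large caches, an approximate-nearest-neighbor index (e.g. FAISS) is the usual next step, but the vectorized scan covers a surprisingly wide range.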


Layer 3: Result Caching (10% hit rate)

Problem: Identical queries hit the LLM multiple times.

Solution: Cache complete responses with smart TTL.

import redis
import hashlib
import json

class ResultCache:
    def __init__(self):
        self.redis = redis.Redis(host='localhost', port=6379, db=0)

    def get_cache_key(self, query: str, context: dict) -> str:
        """Create deterministic cache key"""
        cache_input = json.dumps({
            'query': query,
            'context': context
        }, sort_keys=True)
        return hashlib.sha256(cache_input.encode()).hexdigest()

    def get(self, query: str, context: dict):
        """Get cached response if exists"""
        key = self.get_cache_key(query, context)
        cached = self.redis.get(key)
        return json.loads(cached) if cached else None

    def set(self, query: str, context: dict, response: str, ttl: int = 3600):
        """Cache response with TTL

        TTL strategy:
        - Stable content: 24 hours (86400s)
        - Dynamic content: 1 hour (3600s)
        - Real-time data: 5 minutes (300s)
        """
        key = self.get_cache_key(query, context)
        self.redis.setex(
            key,
            ttl,
            json.dumps(response)
        )

    def invalidate(self, pattern: str):
        """Invalidate cache on data updates"""
        for key in self.redis.scan_iter(pattern):
            self.redis.delete(key)

# Usage
cache = ResultCache()

# Check cache first
cached = cache.get(query, context)
if cached:
    return cached  # Cache hit!

# Cache miss - call LLM
response = llm.complete(query, context)

# Cache the result
cache.set(query, context, response, ttl=3600)

# Invalidate on data update
cache.invalidate("user:123:*")  # Clear all caches for user 123

Final 10% cache hit rate.
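The TTL strategy from the docstring can live in one small helper so call sites don't hard-code magic numbers. A sketch (the content-class labels are our own):

```python
# TTLs from the strategy above, keyed by a content class the caller supplies
TTL_BY_CONTENT = {
    'stable': 86_400,   # 24 hours
    'dynamic': 3_600,   # 1 hour
    'realtime': 300,    # 5 minutes
}

def ttl_for(content_class: str) -> int:
    # Unknown classes fall back to the conservative 1-hour default
    return TTL_BY_CONTENT.get(content_class, 3_600)
```

Call sites then read as `cache.set(query, context, response, ttl=ttl_for('dynamic'))`, and retuning the TTLs becomes a one-line change.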

Combined: 73% effective cache hit rate (the three layers overlap, so 68% + 15% + 10% don't simply sum)


Intelligent Model Routing

Caching alone isn't enough.

67% of our queries work perfectly with Haiku. That's a 60x price difference vs Opus.

from enum import Enum

class ModelTier(Enum):
    HAIKU = "claude-haiku-4-20250514"    # $0.25/1M input
    SONNET = "claude-sonnet-4-20250514"  # $3/1M input
    OPUS = "claude-opus-4-20250514"      # $15/1M input

def route_to_model(query: str, context: str) -> ModelTier:
    """
    Route based on complexity

    Indicators for Haiku (simple):
    - Short queries (<50 tokens)
    - FAQ-style questions
    - Retrieval tasks

    Indicators for Sonnet (analysis):
    - "analyze", "compare", "evaluate"
    - Multi-step reasoning
    - Longer context (>2K tokens)

    Indicators for Opus (complex):
    - "design", "architect", "strategy"
    - Creative tasks
    - Critical business decisions
    """
    tokens = len(query.split())  # rough word count as a token proxy

    # Simple queries → Haiku
    if tokens < 50 and not any(word in query.lower() for word in ['analyze', 'compare', 'design']):
        return ModelTier.HAIKU

    # Analysis tasks → Sonnet
    if any(word in query.lower() for word in ['analyze', 'compare', 'evaluate', 'explain']):
        return ModelTier.SONNET

    # Complex reasoning → Opus
    if any(word in query.lower() for word in ['design', 'architect', 'strategy', 'create']):
        return ModelTier.OPUS

    # Default to Sonnet
    return ModelTier.SONNET

# Usage
model = route_to_model(user_query, context)
response = llm.complete(user_query, model=model.value)

Our distribution:

  • 67% Haiku ($0.25/1M)
  • 28% Sonnet ($3/1M)
  • 5% Opus ($15/1M)
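Given that distribution, the blended input price lands well below an all-Sonnet baseline:

```python
prices = {'haiku': 0.25, 'sonnet': 3.00, 'opus': 15.00}  # $/1M input tokens
share  = {'haiku': 0.67, 'sonnet': 0.28, 'opus': 0.05}   # our traffic split

# Weighted average price per 1M input tokens
blended = sum(share[m] * prices[m] for m in prices)
print(f"blended input price: ${blended:.2f}/1M tokens")  # ≈ $1.76 vs $3.00 all-Sonnet
```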

The Complete System

class OptimizedLLMClient:
    def __init__(self):
        self.prompt_cache = PromptCache()      # Layer 1
        self.semantic_cache = SemanticCache()  # Layer 2
        self.result_cache = ResultCache()      # Layer 3
        self.client = anthropic.Anthropic()

    def complete(self, query: str, context: dict):
        # Layer 3: Check result cache
        cached_result = self.result_cache.get(query, context)
        if cached_result:
            return cached_result

        # Layer 2: Check semantic cache
        semantic_result = self.semantic_cache.get(query)
        if semantic_result:
            return semantic_result

        # Layer 1: Prompt caching + model routing happens in LLM call
        model = route_to_model(query, context)

        response = self.client.messages.create(
            model=model.value,
            max_tokens=1024,
            system=[{
                "type": "text",
                "text": context.get('system_prompt'),
                "cache_control": {"type": "ephemeral"}  # Prompt caching
            }],
            messages=[{"role": "user", "content": query}]
        )

        # Cache the result
        self.result_cache.set(query, context, response.content, ttl=3600)
        self.semantic_cache.set(query, response.content)

        return response.content

# Usage
llm = OptimizedLLMClient()
answer = llm.complete("What's my account balance?", context)

The Results

Before:

  • $47K/month API costs
  • P95 latency: 2.1s
  • No optimization strategy

After:

  • $2.8K/month (-94%)
  • P95 latency: 340ms (84% faster!)
  • 73% cache hit rate

Key Insights

1. Infrastructure > Model Selection

Opus with naive setup: $47K/month
Haiku with optimization: $2.8K/month

A well-architected system with Haiku outperforms naive Opus at 1/16th the cost.

2. Cache Hit Rate Math

Without caching: 100% requests hit LLM
With 73% cache hit: 27% requests hit LLM
Cost reduction: 73% from caching alone
Additional savings: 67% of remaining 27% uses cheap Haiku
Total: 94% cost reduction
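Treating the two effects as independent gives a rough model of the combined savings (a simplification: real savings depend on token mix and cache-write costs, but it lands in the same ballpark as our observed 94%):

```python
hit_rate = 0.73
llm_fraction = 1 - hit_rate  # only 27% of requests reach a model at all

# Blended model price vs a naive all-Opus baseline ($15/1M input)
share  = {'haiku': 0.67, 'sonnet': 0.28, 'opus': 0.05}
prices = {'haiku': 0.25, 'sonnet': 3.00, 'opus': 15.00}
blended = sum(share[m] * prices[m] for m in prices)  # ≈ $1.76/1M

cost_vs_baseline = llm_fraction * blended / 15.00
print(f"remaining cost: {cost_vs_baseline:.1%} of baseline")  # a few percent
```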

3. Speed as Side Effect

Caching doesn't just save money. It's faster:

  • Cache hit: 50ms (Redis lookup)
  • LLM call: 2,100ms (P95)

42x faster for cached requests.


Implementation Checklist

  • [ ] Enable prompt caching (10x savings on repeated context)
  • [ ] Add semantic similarity cache (15% additional hits)
  • [ ] Implement result caching with smart TTL
  • [ ] Route queries to appropriate model tier
  • [ ] Monitor cache hit rates and adjust thresholds
  • [ ] Set up cache invalidation on data updates

Monitoring Dashboard

def get_cache_metrics():
    return {
        'prompt_cache_hit_rate': 0.68,
        'semantic_cache_hit_rate': 0.15,
        'result_cache_hit_rate': 0.10,
        'combined_hit_rate': 0.73,
        'model_distribution': {
            'haiku': 0.67,
            'sonnet': 0.28,
            'opus': 0.05
        },
        'cost_per_1k_requests': 2.80,
        'p95_latency_ms': 340
    }

Track these weekly. Optimize based on data, not assumptions.


What's Next

We're open-sourcing our cost optimization framework:

  • Complete caching implementation
  • Model routing logic
  • Monitoring dashboards
  • Cost calculation tools

Follow @anilsprasad or Ambharii Labs for the release.


Your Turn

What's your LLM API bill?

Drop it in the comments and I'll tell you which optimization would have the highest ROI for your use case.

Common wins:

  • Prompt caching: 10x savings on repeated context
  • Model routing: 60x price difference (Haiku vs Opus)
  • Semantic caching: 15% additional hits

Let's make LLMs affordable for everyone. 💰


Tags: #ai #performance #optimization #tutorial
