High-Throughput GPU Inference Batching System Design

Abstract: How do you build a system that supports high-concurrency requests against an API you cannot change? This article walks through a complete infrastructure design for a GPU inference batching system — one that optimizes GPU utilization via a server-side batching mechanism that intelligently balances latency and throughput. From clarifying questions to deep trade-off analysis, this is a FAANG-level deep dive into one of the hardest infrastructure problems in applied ML.


Table of Contents

  1. Clarifying Questions
  2. Crash Strategy & Key Points
  3. Elite Bonus Points (FAANG Rubrics)
  4. Functional Requirements
  5. Non-Functional Requirements
  6. Back-of-Envelope Estimation
  7. High-Level Design
  8. Low-Level Design
  9. Trade-offs, Alternatives & Optimizations

1. Clarifying Questions

Before designing anything, you need to nail down assumptions. Here are the key questions — and the assumptions we'll carry forward:

| Question | Assumption |
| --- | --- |
| What is the peak QPS and the target latency SLO? | 10,000 QPS with a p99 latency requirement of < 500ms |
| What is the maximum batch size supported by the fixed inference API? | Max batch size is 64 requests |
| What is the payload size for input and output? | Text-based, ~2KB per request |
| Is client communication synchronous or asynchronous? | Clients expect a synchronous-like experience; we use an async-polling or long-polling pattern internally |
| Do we need to handle request priorities (e.g., premium vs. free users)? | No — FIFO for the MVP |

Clarifying questions are not just a formality. Each assumption here directly shapes an architectural decision downstream. The batch size cap (64) determines our batcher's flush trigger. The 500ms SLO sets our tolerable wait window. Always clarify before drawing boxes.


2. Crash Strategy & Key Points

The Core Bottleneck

When requests arrive individually at a GPU worker, two problems emerge: under-utilization (GPUs thrive on parallelism) and memory overhead (per-request context switching). The naive design — one request in, one inference out — kills throughput.

The Key Strategy: Dynamic Batching

The solution is a Dynamic Batching Service that acts as a buffer between your high-concurrency HTTP API and the fixed GPU workers. It's a traffic shaper: absorb the spikes, group requests intelligently, dispatch in bulk.

Progressive Problem Decomposition

Think through this layer by layer:

  1. How do we ingest 10k+ requests without blocking?
    Use a distributed message queue. Each incoming request is a lightweight enqueue operation.

  2. How do we group them efficiently?
    The Batcher implements "Wait-or-Full" logic — flush when batch size hits 64, or when 50ms elapses, whichever comes first.

  3. How do we deliver results back to the user?
    A Result Store (Redis) holds completed inference outputs. Clients poll with a task_id.

  4. How do we scale the Batcher itself?
    Partition-based batching — each Batcher instance consumes from a dedicated partition of the queue, eliminating global locks and contention.


3. Elite Bonus Points (FAANG Rubrics)

These are the insights that separate a good design from a great one:

Adaptive Batching

Dynamically adjust the wait_time based on current traffic volume. During low-traffic periods, flush quickly to minimize latency. During spikes, extend the wait window to fill batches and maximize GPU throughput. A simple EWMA (Exponentially Weighted Moving Average) on the arrival rate drives this.
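A minimal sketch of what EWMA-driven adaptation could look like. The class name, the `high_rate` saturation threshold, and the linear interpolation between the wait bounds are illustrative assumptions, not part of the article's design:

```python
class AdaptiveWait:
    """Sketch: scale the batcher's flush window with an EWMA of the arrival rate.

    Policy from the text: flush quickly when traffic is low, extend the wait
    window (up to max_wait_ms) as traffic approaches saturation.
    """

    def __init__(self, alpha=0.2, min_wait_ms=5.0, max_wait_ms=50.0, high_rate=5000.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.rate = 0.0             # smoothed arrivals per second
        self.min_wait_ms = min_wait_ms
        self.max_wait_ms = max_wait_ms
        self.high_rate = high_rate  # rate at which we treat traffic as "spiking"

    def observe(self, arrivals, interval_s):
        """Fold one measurement window into the EWMA."""
        sample = arrivals / interval_s
        self.rate = self.alpha * sample + (1 - self.alpha) * self.rate

    def wait_ms(self):
        """Interpolate the wait window between min and max by traffic saturation."""
        saturation = min(1.0, self.rate / self.high_rate)
        return self.min_wait_ms + (self.max_wait_ms - self.min_wait_ms) * saturation
```

With these numbers, an idle system flushes after 5ms while a sustained spike pushes the window toward the full 50ms budget.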

Zero-Copy Serialization

Use Protobuf or Apache Arrow for internal data transfer. This reduces CPU overhead during batch construction and deconstruction — critical when you're processing millions of requests per hour.

GPU Backpressure Propagation

Implement a feedback loop: if the GPU Worker's internal queue or memory utilization exceeds 90%, the Batcher slows ingestion. This prevents the queue from becoming a buffer for an already-saturated backend. Your system should degrade gracefully, not catastrophically.
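One way to sketch that feedback loop is a gate the Batcher checks before pulling the next batch. The `poll_utilization` callable and the hysteresis band (trip at 90%, resume below 75%) are illustrative assumptions:

```python
class BackpressureGate:
    """Sketch of GPU backpressure: pause ingestion while the worker reports
    high memory/queue utilization. `poll_utilization` is a hypothetical
    callable, e.g. wrapping a worker health endpoint, returning 0.0-1.0."""

    def __init__(self, poll_utilization, high=0.90, low=0.75):
        self.poll = poll_utilization
        self.high = high       # stop pulling above this utilization
        self.low = low         # resume only once utilization falls below this
        self.paused = False

    def may_consume(self):
        """Return True if the Batcher should pull the next batch from the queue."""
        u = self.poll()
        if self.paused:
            if u < self.low:   # hysteresis: require a real recovery before resuming
                self.paused = False
        elif u > self.high:
            self.paused = True
        return not self.paused
```

The hysteresis band prevents the gate from flapping open and closed when utilization hovers near the threshold — that flapping is exactly the catastrophic-degradation mode the text warns about.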

Locality-Aware Batching

At global scale, ensure batching happens at the edge or within the same Availability Zone. Cross-region data transfer costs are real, and cross-AZ latency adds up quickly under a 500ms SLO.


4. Functional Requirements

Core Use Cases

  • Users submit inference requests via a REST API.
  • Requests are batched and processed by the GPU model.
  • Users retrieve the inference result via polling.

Scope Control

| In-Scope | Out-of-Scope |
| --- | --- |
| API Gateway | Model training |
| Request Queue | Model optimization (TensorRT/ONNX) |
| Batching Logic | User authentication service |
| Result Storage | |

Scope control is not laziness — it's clarity. Defining the boundary prevents scope creep and lets you go deep on what matters.


5. Non-Functional Requirements

| Dimension | Requirement |
| --- | --- |
| Scale | Handle 10k QPS, scale horizontally |
| Latency | Batching overhead < 50ms; total E2E latency < 500ms |
| Availability | 99.9% uptime; requests must not be lost on worker failure (at-least-once delivery) |
| Consistency | Eventual consistency for results; strict ordering within a batch is not required |
| Fault Tolerance | Dead-letter queues (DLQ) for failed inference attempts |

6. Back-of-Envelope Estimation

Let's do the math to validate our architecture can actually work.

Traffic

  • 10,000 requests/sec

Storage

  • 10,000 req/s × 2KB/req = 20 MB/s
  • For 1 hour of retention: ~72 GB

Bandwidth

  • Ingress: 10,000 × 2KB = 20 MB/s
  • Egress (Results): ~20 MB/s

GPU Worker Count

  • One batch of 64 requests takes ~200ms to process
  • One GPU worker handles: 64 / 0.2s = 320 QPS
  • Workers needed: 10,000 / 320 ≈ 32 GPU workers

This gives us a concrete target — 32 GPU workers — and validates that the batching approach is load-bearing, not cosmetic.
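The estimation above can be checked in a few lines. All inputs come straight from the assumptions in the article:

```python
import math

qps = 10_000              # peak requests per second
payload_kb = 2            # per-request payload size
batch_size = 64           # max batch the fixed GPU API accepts
batch_latency_ms = 200    # one GPU call on a full batch

bandwidth_mb_s = qps * payload_kb / 1000               # ingress (and egress) MB/s
retention_gb = bandwidth_mb_s * 3600 / 1000            # 1 hour of retained payloads
worker_qps = batch_size * 1000 / batch_latency_ms      # throughput of one worker
workers = math.ceil(qps / worker_qps)                  # workers needed at peak
```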


7. High-Level Design

Design Summary

A high-throughput pipeline using a distributed queue to decouple request ingestion from GPU execution, featuring a dedicated Batcher Service for optimal GPU utilization.

Major Components

| Component | Purpose |
| --- | --- |
| API Gateway | Entry point — SSL termination, request validation, rate limiting |
| Request Queue (Redis Streams) | Fast, in-memory buffer for incoming inference tasks |
| Batcher Service | Core logic — aggregates N messages or waits T milliseconds before calling the GPU API |
| Result Store (Redis) | Short-lived storage for finished inference results |

System Architecture Diagram

```
┌────────┐      ┌─────────────┐      ┌───────────────┐      ┌──────────────────┐      ┌───────────────┐
│ Client │ ───► │ API Gateway │ ───► │ Request Queue │ ───► │ Batcher Service  │ ───► │ GPU Worker API│
└────────┘      └─────────────┘      │ (Redis Stream)│      │  (Wait-or-Full)  │      └───────┬───────┘
     ▲                ▲              └───────────────┘      └──────────────────┘              │
     │                │                                                                        ▼
     └────────────────┴──────────────────────────────────── Result Store (Redis) ◄────────────┘
```

Simplicity Audit

This architecture intentionally avoids complex stream-processing frameworks like Apache Flink. A lightweight consumer-group-based batcher is easier to deploy, debug, and scale for an MVP — and can be upgraded later if needed.


8. Low-Level Design

8.1 Edge Layer

  • Traffic Routing: A Global Server Load Balancer (GSLB) routes traffic to the nearest regional API Gateway.
  • Security: The API Gateway handles JWT validation and rate limiting — 1,000 requests per user per minute — to protect the GPU cluster from abuse.

8.2 Service Layer

Topology: Stateless API instances deployed in Kubernetes, auto-scaling on CPU (70% threshold).

API Schema:

```
POST /v1/inference
Body: { "input": "...", "client_id": "..." }
Response: { "task_id": "abc-123" }

GET /v1/result/{task_id}
Response: { "status": "PENDING" | "SUCCESS", "output": "..." }
```
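The client side of this schema is a submit-then-poll loop. A sketch, kept transport-agnostic by injecting a `fetch` callable (the `requests`-style lambda in the docstring is an illustrative assumption, not part of the design):

```python
import time

def poll_result(fetch, task_id, timeout_s=5.0, interval_s=0.05):
    """Poll GET /v1/result/{task_id} until status is SUCCESS or we time out.

    `fetch` is any callable that returns the decoded response body as a dict,
    e.g. lambda tid: requests.get(f"{BASE}/v1/result/{tid}").json()
    (the HTTP client shown is a hypothetical example).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        body = fetch(task_id)
        if body["status"] == "SUCCESS":
            return body["output"]
        time.sleep(interval_s)   # still PENDING: back off briefly, then retry
    raise TimeoutError(f"result for {task_id} not ready within {timeout_s}s")
```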

Resilience: 3 retries with exponential backoff for the Batcher when calling the GPU API.

8.3 Storage Layer

Access Pattern: High write/read (1:1 ratio). Data is transient — results expire after 10 minutes.

Result Table (Redis Hash):

  • Key: task_id
  • Fields: status, output, timestamp

Distribution: Partitioning by task_id using Redis Cluster to handle 20k+ operations per second.

8.4 Cache Layer

Deduplicate identical inference requests (e.g., the same prompt submitted multiple times) to save GPU cycles.

  • Key: SHA256(input_payload)
  • Value: task_id or cached_result
  • TTL: 5 minutes
  • Failure Handling: If Redis fails, bypass the cache and go straight to the queue.
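A sketch of that cache-aside flow. A plain dict stands in for Redis here; a real deployment would use GET/SETNX with the 5-minute TTL, and the `enqueue` callable is a stand-in for submitting to the Request Queue:

```python
import hashlib

def cache_key(input_payload: str) -> str:
    """Deduplication key: SHA-256 of the input payload, as described above."""
    return hashlib.sha256(input_payload.encode("utf-8")).hexdigest()

def lookup_or_enqueue(cache: dict, input_payload: str, enqueue):
    """Return a cached task_id/result for a duplicate request, or enqueue a
    new task on a miss. `cache` is a dict standing in for Redis."""
    key = cache_key(input_payload)
    hit = cache.get(key)
    if hit is not None:
        return hit                       # duplicate prompt: reuse existing work
    task_id = enqueue(input_payload)     # miss: submit to the queue
    cache[key] = task_id
    return task_id
```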

8.5 Messaging Layer

Topic Schema — inference-requests:

```json
{
  "task_id": "abc-123",
  "payload": "...",
  "ts": "2026-03-29T19:50:55Z"
}
```

Throughput: Redis Streams with 16 shards to allow parallel Batcher consumers.

Why Redis Streams? Low latency, built-in consumer groups for at-least-once delivery, and operational simplicity compared to Kafka for this scale.

8.6 Data Processing Layer — The Batcher (Core Design)

This is the heart of the system. The Batcher Service uses a hybrid trigger model:

| Trigger | Condition |
| --- | --- |
| Size Trigger | 64 messages accumulated |
| Time Trigger | 50ms elapsed since the first message in the current window |

Whichever fires first wins. This guarantees that no request waits longer than 50ms regardless of traffic volume.
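A minimal single-partition sketch of this hybrid trigger. The class shape, the `dispatch` callable (standing in for the GPU Worker API call), and the injectable `clock` are illustrative assumptions:

```python
import time

class WaitOrFullBatcher:
    """Wait-or-Full: flush when the buffer reaches max_size, or when max_wait_s
    has elapsed since the first message of the current window."""

    def __init__(self, dispatch, max_size=64, max_wait_s=0.050, clock=time.monotonic):
        self.dispatch = dispatch      # called with the full batch on flush
        self.max_size = max_size
        self.max_wait_s = max_wait_s
        self.clock = clock            # injectable for testing
        self.buffer = []
        self.window_start = None      # arrival time of the window's first message

    def add(self, msg):
        if not self.buffer:
            self.window_start = self.clock()
        self.buffer.append(msg)
        if len(self.buffer) >= self.max_size:
            self.flush()              # size trigger

    def tick(self):
        """Call periodically; fires the time trigger for partially full batches."""
        if self.buffer and self.clock() - self.window_start >= self.max_wait_s:
            self.flush()

    def flush(self):
        batch, self.buffer = self.buffer, []
        self.window_start = None
        self.dispatch(batch)
```

One instance of this runs per stream partition, which is what keeps the design lock-free.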

Processing DAG:

```
Read from Stream
      │
      ▼
Accumulate in Memory (per-partition buffer)
      │
  Size=64 OR Time=50ms
      │
      ▼
Call Fixed GPU Worker API (one HTTP/gRPC call with full batch)
      │
      ▼
Disperse Results → Write each result to Redis by task_id
      │
      ▼
ACK Stream (mark messages as processed)
```

Scalability: Multiple Batcher instances consume from different partitions of the Redis Stream — no shared state, no global locks.

8.7 Infrastructure & Observability

Key Metrics to Track:

  • batch_size_distribution — Are we consistently filling batches? Or flushing early on timeouts?
  • gpu_worker_latency — Are workers saturating?
  • queue_depth — Leading indicator of system stress.

Distributed Tracing: Jaeger for end-to-end tracing from the API Gateway through the Batcher to the GPU API response. Essential for diagnosing latency regressions.

Technology Stack: Redis Streams, Redis Cluster, Kubernetes, Envoy, gRPC, Prometheus, Jaeger, JWT, mTLS.


9. Trade-offs, Alternatives & Optimizations

The Fundamental Trade-off: Latency vs. Throughput

We accept up to 50ms of additional latency per request in exchange for dramatically higher GPU utilization. A single request arriving at an idle batcher window waits up to 50ms. In return, the system can sustain 10x the load of a naïve pass-through design.

This is the right trade-off for batch inference workloads, where throughput matters more than tail latency for individual requests.

Reliability: At-Least-Once Delivery

Redis Streams' Consumer Groups ensure durability. If a Batcher instance crashes mid-processing, unacknowledged messages are automatically re-delivered to another healthy instance (NACK mechanism). No request is silently dropped.

Bottleneck Analysis

The Fixed GPU API is the ultimate bottleneck. If it slows down, the Request Queue depth grows. Two mitigations:

  1. TTL on the Queue: Drop stale requests before they age out of user tolerance.
  2. Backpressure: Signal upstream components to slow ingestion when GPU memory > 90%.

Security

  • All internal communication between the Batcher and GPU API uses mTLS.
  • Input sanitization is performed at the API Gateway to prevent prompt injection attacks.

The "Hot Key" Problem: Request Collapsing

When many users submit the same inference request (e.g., a trending query), the Batcher can implement request collapsing: identify duplicate payloads within the same batch using a hash, send only one to the GPU, then replicate the result to all matching task_id entries. This is a powerful optimization at scale.
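A sketch of request collapsing inside one batch. `infer` stands in for the fixed GPU batch API (list of payloads in, list of outputs out in the same order); the function and variable names are illustrative:

```python
import hashlib
from collections import defaultdict

def collapse_and_infer(batch, infer):
    """Send each distinct payload to the GPU once, then fan the result back
    out to every task_id that submitted it.

    batch: list of (task_id, payload) tuples.
    Returns: dict mapping every task_id to its output.
    """
    groups = defaultdict(list)      # payload hash -> task_ids awaiting it
    unique_payloads = {}            # payload hash -> payload (first-seen order)
    for task_id, payload in batch:
        h = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        groups[h].append(task_id)
        unique_payloads.setdefault(h, payload)

    hashes = list(unique_payloads)
    outputs = infer([unique_payloads[h] for h in hashes])  # one GPU call, deduped

    results = {}
    for h, out in zip(hashes, outputs):
        for task_id in groups[h]:   # replicate the result to all duplicates
            results[task_id] = out
    return results
```

For a trending query duplicated across the batch, this turns N GPU slots into one, freeing the rest of the batch for distinct work.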


Summary

| Layer | Technology | Key Design Decision |
| --- | --- | --- |
| API Gateway | Envoy / K8s Ingress | Rate limiting, JWT validation |
| Request Queue | Redis Streams (16 shards) | Partitioned for parallel consumers |
| Batcher | Custom service, per-partition | Wait-or-Full hybrid trigger (N=64, T=50ms) |
| GPU Worker | Fixed external API | Called with full batch payload |
| Result Store | Redis Cluster | TTL=10min, keyed by task_id |
| Observability | Prometheus + Jaeger | Batch size distribution, queue depth, GPU latency |

The architecture centers on one insight: GPUs are expensive, and idle GPUs are waste. Every design decision — the queue, the batcher, the partitioning strategy, the cache — serves the goal of keeping GPU workers saturated while keeping the user experience fast and reliable.


Designed to handle 10,000 QPS with a p99 latency under 500ms. 32 GPU workers. Zero request loss. One core idea: batch everything.

For more system-design articles, check the InterviewGPT blog.
