
HK Lee

Posted on • Originally published at pockit.tools

REST vs GraphQL vs tRPC vs gRPC in 2026: The Definitive Guide to Choosing Your API Layer

You're starting a new project. You open a blank file. And immediately, the debate begins.

"Should we use REST? Everyone knows REST." Then someone on the team mentions GraphQL, says they used it at their last job and it was "amazing." A third engineer mutters something about tRPC being the future. And the senior backend dev, arms crossed, insists gRPC is the only serious choice for microservices.

This argument happens in every engineering team, on every new project, and in 2026, the answer is still "it depends" — but now we have much better data to make that decision.

This guide breaks down REST, GraphQL, tRPC, and gRPC based on how they actually work in production today — not how they looked in a 2020 tutorial. We'll cover architecture, performance, developer experience, and the real costs nobody talks about. Then we'll walk through a decision framework so you can stop arguing and start building.


The Landscape Has Changed

If your mental model of these technologies is stuck in 2022, you're working with outdated assumptions:

What changed since 2022:

REST:
  → OpenAPI 3.1 is now universal (JSON Schema aligned)
  → Fetch API is everywhere (Node, Deno, Bun, browsers)
  → HTMX brought REST back into frontend discourse

GraphQL:
  → Federation v2 matured (Apollo, Grafbase, WunderGraph)
  → Relay Compiler integrates with React Server Components
  → Subscriptions still awkward; most teams use SSE instead

tRPC:
  → v11 released: React Server Components native
  → TanStack Start + tRPC is the new full-stack meta
  → Still TypeScript-only (that's the point)

gRPC:
  → gRPC-Web stabilized; Connect protocol gained adoption
  → Buf.build + ConnectRPC made DX dramatically better
  → Protocol Buffers → TypeScript codegen is now painless

The point: no option is universally better. Each optimizes for a different constraint. The mistake is choosing based on hype instead of your actual requirements.


How They Actually Work (30-Second Refresher)

Let's align on fundamentals before comparing:

REST

Client: GET /api/users/123
Server: { "id": 123, "name": "Alice", "email": "alice@example.com" }

Client: GET /api/users/123/orders?limit=5
Server: [{ "id": 1, "product": "Widget", "total": 29.99 }, ...]

Resource-oriented. One URL per resource. HTTP verbs (GET, POST, PUT, DELETE) define operations. The server decides what data to return.

GraphQL

query {
  user(id: 123) {
    name
    email
    orders(limit: 5) {
      product
      total
    }
  }
}

Query language over HTTP. Single endpoint (/graphql). The client decides what data to fetch. Server resolves fields through a type system.

tRPC

// Server (router definition)
export const appRouter = router({
  user: router({
    getById: publicProcedure
      .input(z.object({ id: z.number() }))
      .query(async ({ input }) => {
        return db.users.findUnique({ where: { id: input.id } });
      }),
  }),
});

// Client (direct function call — no codegen, no fetch)
const user = await trpc.user.getById.query({ id: 123 });
//    ^? { id: number, name: string, email: string }

End-to-end type safety through TypeScript inference. No schema definition language. No code generation. The router is the API contract.

gRPC

// user.proto
service UserService {
  rpc GetUser (GetUserRequest) returns (User);
  rpc ListOrders (ListOrdersRequest) returns (stream Order);
}

message User {
  int32 id = 1;
  string name = 2;
  string email = 3;
}

Binary protocol (Protocol Buffers) over HTTP/2. Schema-first with code generation. Supports streaming natively. Designed for service-to-service communication.


The Real Comparison: What Actually Matters

Performance

Here's what nobody shows you — measured latency and payload sizes for the same operation (fetch user + 5 orders):

Protocol        Payload (bytes)  Serialization   Latency (p50)  Latency (p99)
──────────────  ───────────────  ──────────────  ─────────────  ─────────────
REST (JSON)     1,247            ~0.3ms          12ms           45ms
GraphQL         834              ~0.5ms          15ms           55ms
tRPC (JSON)     1,180            ~0.2ms          11ms           40ms
gRPC (proto)    312              ~0.1ms          4ms            12ms

Notes:
  - REST over-fetches (~30% unused fields in this example)
  - GraphQL adds resolver overhead (field-level resolution)
  - tRPC has near-zero overhead vs raw REST
  - gRPC wins on wire size but requires HTTP/2
  - All measured on Node.js 22, same machine, same DB

Key insight: For browser-to-server calls, the performance difference between REST, GraphQL, and tRPC is negligible. Network latency dominates. gRPC only shines in service-to-service communication where you control both endpoints and make thousands of calls per second.
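A methodological aside: the p50/p99 columns are just percentiles over many recorded request latencies. A minimal sketch of that aggregation (nearest-rank method; the simulated numbers are illustrative, not the benchmark data above):

```typescript
// Nearest-rank percentile over recorded latencies (in ms).
// Sketch only: real benchmarks also need warmup runs and many samples.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: 100 simulated latencies of 1..100 ms.
const latencies = Array.from({ length: 100 }, (_, i) => i + 1);
console.log(percentile(latencies, 50)); // 50
console.log(percentile(latencies, 99)); // 99
```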

Type Safety

This is where the real differences emerge:

Protocol     Schema Source        Client Types       Runtime Validation
───────────  ──────────────────   ────────────────   ─────────────────
REST         OpenAPI (optional)   Codegen needed     Manual
GraphQL      SDL (required)       Codegen needed     Schema validation
tRPC         TypeScript itself    Automatic (infer)  Zod built-in
gRPC         Protobuf (required)  Codegen needed     Proto validation
// REST: You write the type yourself (hope it's correct)
const res = await fetch('/api/users/123');
const user = await res.json() as User; // 🤷 trust me bro

// GraphQL: Codegen from schema (one extra build step)
const { data } = useQuery(GET_USER); // typed if codegen ran

// tRPC: Types flow automatically (zero extra steps)
const user = await trpc.user.getById.query({ id: 123 });
//    ^? inferred from server-side Zod schema + return type

// gRPC: Codegen from .proto (one extra build step)
const user = await client.getUser({ id: 123 }); // typed from proto

tRPC's killer advantage: Change a field name on the server → your client code has a red squiggly instantly. No build step. No codegen. No "did I regenerate the types?" anxiety.

tRPC's killer disadvantage: It only works when both client and server are TypeScript in the same repo (or share a package).
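To close part of that gap on the REST side, teams validate responses at runtime. Here's a minimal hand-rolled sketch; in practice most teams reach for Zod (as tRPC does server-side), and `getUser` plus the endpoint shape are illustrative, reusing this article's example route:

```typescript
// A REST response is untyped at runtime; a guard function restores safety.
interface User {
  id: number;
  name: string;
  email: string;
}

function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "number" &&
    typeof v.name === "string" &&
    typeof v.email === "string"
  );
}

async function getUser(id: number): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  const body: unknown = await res.json();
  if (!isUser(body)) throw new Error("Unexpected /users response shape");
  return body; // now typed as User, and verified at runtime
}
```

The difference from `as User` is that a drifted server response fails loudly at the boundary instead of propagating bad data into the UI.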

Developer Experience

Let's be honest about what daily life looks like with each:

REST:
  ✅ Everyone knows it (zero learning curve)
  ✅ Curl-friendly (easy to debug)
  ✅ Incredible tooling ecosystem
  ❌ No automatic type safety
  ❌ Over-fetching / under-fetching by default
  ❌ Versioning is a mess (v1, v2, v3...)
  ❌ N+1 endpoint problem for complex UIs

GraphQL:
  ✅ Client-driven queries (fetch exactly what UI needs)
  ✅ Self-documenting schema
  ✅ Great for complex, nested data
  ❌ Caching is hard (goodbye HTTP caching)
  ❌ N+1 query problem at resolver level
  ❌ Mutations feel bolted-on
  ❌ Learning curve is steep for the full stack
  ❌ File uploads are painful

tRPC:
  ✅ Zero-overhead type safety
  ✅ No schema language to learn
  ✅ Incredible monorepo DX
  ✅ Mutations feel natural
  ❌ TypeScript only (both ends)
  ❌ Not suitable for public APIs
  ❌ Tight coupling between client and server
  ❌ Smaller ecosystem than REST/GraphQL

gRPC:
  ✅ Best raw performance
  ✅ Native streaming (bidirectional)
  ✅ Strong backwards compatibility story
  ✅ Multi-language codegen
  ❌ Not browser-native (needs proxy / Connect)
  ❌ Protobuf is another language to learn
  ❌ Debugging is painful (binary protocol)
  ❌ Steep learning curve

Caching

This is where REST has a massive structural advantage:

REST:
  HTTP caching just works™
  - CDN caching with Cache-Control headers
  - Browser caching (ETags, conditional requests)
  - Proxy caching (Varnish, Nginx)
  - Each URL = unique cache key

GraphQL:
  HTTP caching is essentially broken
  - POST to single endpoint = no URL-based caching
  - Need persisted queries for GET-based caching
  - Need specialized caching layers (Apollo, Stellate)
  - Cache invalidation is complex (normalized cache)

tRPC:
  HTTP caching works (GET for queries)
  - TanStack Query handles client caching
  - CDN-cacheable with proper headers
  - Cache key = procedure path + input

gRPC:
  No HTTP caching (binary protocol)
  - Need custom caching infrastructure
  - Often solved at the service mesh level (Envoy, Istio)
  - Cache by request message hash

If your API serves content that benefits from CDN caching (public data, rarely changing resources), REST is hard to beat.
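For concreteness, here's roughly what REST's caching story looks like in code: a sketch of ETag plus Cache-Control handling. The Express-style handler and `loadUser` are illustrative assumptions, shown as comments:

```typescript
import { createHash } from "node:crypto";

// Derive a strong ETag from a serialized response body.
function etagFor(body: string): string {
  return `"${createHash("sha256").update(body).digest("hex").slice(0, 16)}"`;
}

// Express-style usage sketch (route and loadUser are illustrative):
// app.get("/api/users/:id", (req, res) => {
//   const body = JSON.stringify(loadUser(req.params.id));
//   const tag = etagFor(body);
//   res.set("Cache-Control", "public, max-age=60"); // CDN + browser cache
//   res.set("ETag", tag);
//   if (req.headers["if-none-match"] === tag) {
//     return res.status(304).end(); // conditional request: no body sent
//   }
//   res.type("application/json").send(body);
// });
```

Every intermediary (browser, CDN, reverse proxy) understands these headers with zero extra infrastructure, which is the structural advantage the section above describes.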


The N+1 Problem: Everyone Has It, Everyone Solves It Differently

The N+1 problem is the most common performance pitfall across all API styles. Here's how each handles it:

REST N+1

Client needs:
  - User profile
  - User's 10 latest orders
  - Shipping status for each order

REST approach (naive):
  GET /api/users/123               → 1 request
  GET /api/users/123/orders        → 1 request
  GET /api/orders/1/shipping       → 1 request
  GET /api/orders/2/shipping       → 1 request
  ... (10 more)                    → 10 requests
  Total: 12 HTTP requests  😱

REST approach (smart):
  GET /api/users/123?include=orders.shipping  → 1 request
  (or a BFF endpoint that aggregates)
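The "smart" aggregation above can be sketched as a small BFF function: parallel fetches plus one batched shipping lookup. The `?orderIds=` batch endpoint and the injected `fetchJson` are assumptions for illustration, not REST conventions:

```typescript
// Collapse N per-order shipping lookups into one batched request URL.
function batchShippingUrl(orderIds: number[]): string {
  return `/api/shipping?orderIds=${orderIds.join(",")}`;
}

// BFF-style aggregation: one round trip for the client, three for the server.
async function getUserDashboard(
  userId: number,
  fetchJson: (url: string) => Promise<any>
) {
  // Independent resources fetched in parallel, not as a waterfall.
  const [user, orders] = await Promise.all([
    fetchJson(`/api/users/${userId}`),
    fetchJson(`/api/users/${userId}/orders?limit=10`),
  ]);
  const shipping = await fetchJson(
    batchShippingUrl(orders.map((o: { id: number }) => o.id))
  );
  return { user, orders, shipping };
}
```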

GraphQL N+1

# Client sends ONE request (nice!)
query {
  user(id: 123) {
    name
    orders(last: 10) {
      id
      shipping { status, eta }  # ← This triggers N+1 at resolver level
    }
  }
}
// Server-side problem:
const resolvers = {
  Order: {
    shipping: (order) => db.shipping.findByOrderId(order.id)
    // Called 10 times! One per order!
  }
}

// Solution: DataLoader
const shippingLoader = new DataLoader(
  (orderIds) => db.shipping.findByOrderIds(orderIds)
);

const resolvers = {
  Order: {
    shipping: (order) => shippingLoader.load(order.id)
    // Batched into ONE query 
  }
}

tRPC N+1

// tRPC largely avoids this problem by default, because you control
// the full query inside one purpose-built procedure:
const userWithOrders = await trpc.user.getWithOrders.query({ id: 123 });

// Server-side: one query with JOINs or batched loading; you write the
// data-fetching logic. Illustrative Prisma-style procedure body:
//   db.users.findUnique({
//     where: { id: input.id },
//     include: { orders: { include: { shipping: true } } },
//   })

gRPC N+1

// gRPC solves this at the service boundary:
rpc GetUserWithOrders(GetUserRequest) returns (UserWithOrders);

// Or use streaming:
rpc StreamOrderUpdates(OrderRequest) returns (stream OrderUpdate);

Key takeaway: GraphQL moves the N+1 problem from the client to the server. REST puts it on the client. tRPC and gRPC avoid it by letting you define purpose-built procedures/RPCs.
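Whichever style you choose, the underlying fix is always batching. The DataLoader pattern shown earlier boils down to a few lines; here is a simplified, generic sketch (the real DataLoader additionally handles caching, error propagation, and per-request scoping):

```typescript
// A minimal DataLoader-style batcher: collects keys within one microtask
// tick and issues a single batched fetch for all of them.
class Batcher<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush() {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    // One batched call; results must align index-for-index with keys.
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}
```

Two `load` calls issued in the same tick are coalesced into a single `batchFn` invocation, which is exactly the resolver-level fix GraphQL needs.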


Real-World Architecture Patterns

Pattern 1: The Full-Stack TypeScript App (tRPC)

Best for: SaaS apps, dashboards, internal tools

┌──────────────────────────────────────┐
│  Next.js / TanStack Start Frontend   │
│  (React + TanStack Query)            │
│          │                           │
│     tRPC Client                      │
│          │ (type inference)          │
│          ▼                           │
│     tRPC Server (Zod validation)     │
│          │                           │
│     Database (Prisma / Drizzle)      │
└──────────────────────────────────────┘

Why it works:
  - Change a DB column → types break at the UI layer instantly
  - Zero API documentation needed (TypeScript IS the docs)
  - Zod validates inputs; Prisma validates outputs
  - One repo, one language, one type system

Pattern 2: The Public API Platform (REST + OpenAPI)

Best for: Developer platforms, public APIs, multi-client apps

┌────────────┐   ┌────────────┐   ┌────────────┐
│ Web Client │   │ Mobile App │   │ 3rd Party  │
└─────┬──────┘   └──────┬─────┘   └──────┬─────┘
      │                 │                 │
      └────────────┬────┴─────────────────┘
                   ▼
            ┌──────────────┐
            │  REST API    │
            │ (OpenAPI 3.1)│
            │  + Swagger   │
            └──────┬───────┘
                   │
            ┌──────▼───────┐
            │  Services    │
            └──────────────┘

Why it works:
  - Any language/platform can consume it
  - OpenAPI generates SDKs for all languages
  - HTTP caching + CDN = free scaling
  - Everyone understands REST

Pattern 3: The Data-Heavy Dashboard (GraphQL)

Best for: Analytics dashboards, CMS, multi-entity admin panels

┌─────────────────────────────────────────┐
│        Admin Dashboard (React)          │
│                                         │
│  ┌─────────┐  ┌──────────┐  ┌────────┐  │
│  │ Users   │  │ Analytics│  │ Content│  │
│  │ Panel   │  │ Charts   │  │ Editor │  │
│  └────┬────┘  └────┬─────┘  └───┬────┘  │
│       │            │            │       │
│       └─────── GraphQL ─────────┘       │
│               (one query per view)      │
└────────────────────┬────────────────────┘
                     ▼
            ┌───────────────┐
            │ GraphQL Server│
            │ (Federation)  │
            ├───────────────┤
            │ Users Service │
            │ Analytics DB  │
            │ CMS Service   │
            └───────────────┘

Why it works:
  - Each panel fetches exactly what it needs
  - One request per view (no waterfall)
  - Federation lets teams own their schemas
  - Schema = automatic documentation

Pattern 4: The Microservices Backend (gRPC)

Best for: High-throughput backends, polyglot services, real-time systems

┌──────────────┐
│ API Gateway  │ (REST/GraphQL for external clients)
└──────┬───────┘
       │ gRPC (internal)
       ▼
┌──────────────┐     ┌──────────────┐
│ User Service │◄───►│ Order Service│
│   (Go)       │     │  (Rust)      │
└──────┬───────┘     └──────┬───────┘
       │                    │
       │ gRPC               │ gRPC
       ▼                    ▼
┌──────────────┐     ┌──────────────┐
│ Auth Service │     │ Payment Svc  │
│  (Python)    │     │  (Java)      │
└──────────────┘     └──────────────┘

Why it works:
  - Binary protocol = 5-10x less bandwidth
  - Streaming for real-time updates
  - Proto schema = contract across languages
  - Service mesh handles discovery + load balancing

The Hybrid Approach: What Production Actually Looks Like

Here's the truth nobody puts in their "REST vs GraphQL" blog posts: most production systems use more than one.

The typical 2026 SaaS architecture:

External:
  ┌─────────────────┐
  │ Public REST API │  (for integrations, webhooks, SDKs)
  └────────┬────────┘
           │
Internal:
  ┌────────▼────────┐
  │ tRPC / GraphQL  │  (for your own frontend)
  └────────┬────────┘
           │
Backend:
  ┌────────▼────────┐
  │   gRPC / REST   │  (service-to-service)
  └─────────────────┘

This isn't over-engineering — each layer serves a different consumer with different needs:

  • Public API consumers need stability, documentation, and language-agnostic access → REST + OpenAPI
  • Your own frontend needs maximum DX and type safety → tRPC (or GraphQL if multiple clients)
  • Internal services need performance and schema evolution → gRPC (or REST if it's simpler)

The Decision Framework

Stop arguing. Use this flowchart:

START: Who consumes your API?

├── External developers / public API
│   └── REST + OpenAPI 3.1
│       (universal, cacheable, well-understood)
│
├── Your own frontend (TypeScript monorepo)
│   ├── Simple data needs?
│   │   └── tRPC
│   │       (zero overhead, maximum type safety)
│   └── Complex nested data / multiple clients?
│       └── GraphQL
│           (flexible queries, client-driven)
│
├── Service-to-service (internal microservices)
│   ├── Need streaming / high throughput?
│   │   └── gRPC
│   │       (binary protocol, native streaming)
│   └── Simple CRUD between few services?
│       └── REST
│           (keep it simple)
│
└── Not sure / prototyping?
    └── Start with REST
        (you can always migrate later)

The "Wrong Choice" Scenarios

Sometimes the best advice is knowing what not to pick:

❌ DON'T use GraphQL when:
  - Your data is simple and flat (CRUD apps)
  - You need aggressive HTTP caching
  - Your team has zero GraphQL experience
  - You have one frontend with predictable data needs

❌ DON'T use tRPC when:
  - Your client isn't TypeScript
  - You need a public API
  - Client and server are in different repos with different deploy cycles
  - You have mobile apps consuming the same API

❌ DON'T use gRPC when:
  - You only have browser clients (it works, but it's painful)
  - You have < 5 services (overkill)
  - Your team doesn't want to learn Protocol Buffers
  - You need humans to read the wire format for debugging

❌ DON'T use REST when:
  - Your frontend has deeply nested, variable data requirements
  - You're building a monorepo TypeScript app (tRPC is strictly better)
  - You need real-time bidirectional streaming

Migration Paths: You're Not Locked In

One of the biggest fears is picking wrong and being stuck. Here's the good news: migration paths exist and are well-trodden:

REST → GraphQL

// Wrap your existing REST endpoints as GraphQL resolvers
const resolvers = {
  Query: {
    user: async (_, { id }) => {
      const res = await fetch(`${REST_BASE}/users/${id}`);
      return res.json();
    },
    orders: async (_, { userId }) => {
      const res = await fetch(`${REST_BASE}/users/${userId}/orders`);
      return res.json();
    },
  },
};

// Gradually migrate resolvers to direct DB access
// Client migration: one query at a time

REST → tRPC

// tRPC can coexist with REST in the same server
import { createExpressMiddleware } from '@trpc/server/adapters/express';

const app = express();

// Existing REST routes continue to work
app.get('/api/v1/users/:id', existingHandler);

// New tRPC router mounted alongside
app.use('/trpc', createExpressMiddleware({ router: appRouter }));

// Migrate endpoints one by one

GraphQL → tRPC

// If you're in a TypeScript monorepo, the move is straightforward:
// 1. Define tRPC procedures matching your GraphQL queries
// 2. Migrate one component at a time
// 3. Remove GraphQL resolvers as they become unused

// Before (GraphQL):
const { data } = useQuery(gql`
  query GetUser($id: ID!) {
    user(id: $id) { name, email }
  }
`);

// After (tRPC):
const { data } = trpc.user.getById.useQuery({ id });
// Same result, zero codegen, instant type feedback

Cost Analysis: The Hidden Expenses

Beyond developer hours, each protocol has infrastructure cost implications:

Infrastructure cost comparison (at scale: 10M requests/day):

                    REST        GraphQL      tRPC         gRPC
──────────────────  ──────────  ──────────   ──────────   ──────────
CDN caching         Excellent   Poor         Good         N/A
Bandwidth           Baseline    -20-30%      ~Baseline    -60-80%
Server CPU          Baseline    +20-40%      ~Baseline    -10-20%
Tooling costs       Free        $$           Free         $
Monitoring          Standard    Specialized  Standard     Specialized
Gateway/proxy       Standard    GraphQL GW   Standard     gRPC proxy

Hidden costs:
  REST:      API versioning maintenance
  GraphQL:   Query complexity analysis, rate limiting by query cost
  tRPC:      None beyond TypeScript dependency
  gRPC:      Proto management, service mesh

GraphQL's hidden cost: At scale, you need query complexity analysis, persisted queries, depth limiting, and specialized APM tools. This infrastructure tax is real and often overlooked.
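Depth limiting, for instance, is conceptually simple. Production servers enforce it on the parsed AST (libraries such as graphql-depth-limit exist for this); the naive string-based sketch below just shows the idea:

```typescript
// Naive depth check on the raw query text: tracks `{` nesting.
// Sketch only: real servers parse the AST and also weigh per-field cost.
function maxQueryDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    else if (ch === "}") depth--;
  }
  return max;
}

const q = "query { user { orders { shipping { eta } } } }";
console.log(maxQueryDepth(q)); // 4

// Reject before executing:
// if (maxQueryDepth(q) > 10) throw new Error("Query too deep");
```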

gRPC's hidden bandwidth savings: If service-to-service traffic is your biggest bill (common in microservices), gRPC's binary encoding can cut bandwidth costs by 60-80%.


The Verdict for 2026

Here's the short answer for the impatient:

Scenario                                   Best Choice      Runner Up
─────────────────────────────────────────  ───────────────  ───────────────────────
Public API                                 REST + OpenAPI   GraphQL
TypeScript monorepo SaaS                   tRPC             REST
Multi-platform (web + mobile + 3rd party)  GraphQL          REST
Microservices (internal)                   gRPC             REST
Simple CRUD app                            REST             tRPC
Real-time data (bidirectional)             gRPC             GraphQL (subscriptions)
Data-heavy admin dashboard                 GraphQL          tRPC
Prototyping / MVP                          REST             tRPC

The most important thing to understand: this isn't a religion. The best teams in 2026 use multiple protocols for different layers. Your public API can be REST while your internal frontend uses tRPC and your backend microservices communicate over gRPC. These are tools, not identities.

Stop arguing about which protocol is "objectively better." Start asking: "Who is consuming this API, what are their constraints, and what does my team already know?"

That question — not a comparison table — is what should drive your decision.


🚀 Explore More: This article is from the Pockit Blog.

If you found this helpful, check out Pockit.tools. It’s a curated collection of offline-capable dev utilities. Available on Chrome Web Store for free.
