Posted on DEV Community by WunderGraph
GraphQL vs REST: 18 Claims Fact-Checked with Primary Sources (2026)

What happens if we take a scientific approach to analyzing the most common claims about GraphQL vs REST? You might think you already know the answers to most of these claims; I thought I did before I started this research.

SPOILER ALERT: "GraphQL is inferior to REST because it breaks HTTP caching" is actually misleading. The infamous N+1 problem? It's real, but REST has it too, say multiple high-profile sources.

Some of these claims are repeated so often that they become accepted wisdom, and people stop questioning them. I always knew that some of them were wrong, but what I didn't expect was how even the most widely repeated claims often turned out to be misleading or incomplete when I traced them back to their original sources.

Disclaimer

I run WunderGraph. We build GraphQL Federation infrastructure. I have a horse in this race, and you should know that upfront.

That said, my personal opinion is that both GraphQL and REST are valuable tools for different use cases. I would always recommend that our customers use both solutions side by side when they complement each other, but ultimately, it's about finding the right technical solution to a business problem, not about picking a side in a technology debate.

I traced these commonly repeated assertions to their original sources where a verifiable primary source was available, and used secondary or vendor sources where primary evidence was not accessible.

Where only vendor benchmarks or secondary sources are available, those are labeled explicitly and treated as directional rather than conclusive.

Summary

Here is what the evidence suggests.

Part I: What the Evidence Says

This section focuses on verifiable claims and their sources.

Every claim is sourced where a verifiable reference is available. A fair comparison requires evaluating both technologies under equivalent assumptions: default vs. default and optimized vs. optimized.

In this analysis, “default” means out‑of‑the‑box behavior, while “production” means commonly adopted patterns in mature or scaled deployments.

Where an original claim is right, I say so; where the evidence disagrees, I show why.

The Origin Story: How Facebook Actually Built GraphQL

Claim 1: "Facebook built GraphQL for hundreds of internal microservices"

Verdict: Historically imprecise.

The 2012 timeline, the iOS News Feed context, and the mobile bandwidth constraints are all real.

I could not find a primary source describing Facebook's 2012 backend as a system composed of "hundreds of microservices," or stating that GraphQL was created to unify such a system. The GraphQL Foundation's own Federation page states:

Meta (formerly Facebook), where GraphQL was created, has continued to use a monolithic GraphQL API since 2012.

Facebook ran a massive PHP codebase, scaled through HipHop (a PHP-to-C++ transpiler) and later HHVM (a just-in-time compiler for PHP). Keith Adams, Facebook's Chief Architect and HHVM team lead, describes how Facebook scaled a large, unified PHP application in Software Engineering Daily interviews and InfoQ presentations.

The strategy emphasized scaling PHP through compilation rather than prioritizing early service decomposition. On top of the PHP application sat TAO, a graph-structured data layer built on MySQL and memcached. TAO provided a unified way to access social graph data. The backend included multiple systems and data sources, but primary sources do not describe it as a microservice-oriented architecture.

Meta's engineering blog describes "existing databases and business logic," rather than framing the problem as a microservice-oriented API layer. The common argument is that GraphQL was built to manage hundreds of microservices. There is no primary source clearly supporting that framing of the original problem GraphQL was designed to solve, and its later use in microservice architectures does not change that original motivation.

In short, microservices later became a use case for GraphQL, not the original design driver; saying GraphQL was “built for hundreds of microservices” reverses that causal order.

Claim 2: "Lee Byron, Dan Schafer, and Nick Schrock built GraphQL"

Verdict: Generally accurate, but slightly simplified.

Primary sources and historical accounts consistently credit Lee Byron, Dan Schafer, and Nick Schrock as the key creators of GraphQL. Nick Schrock is widely described as writing the initial prototype, reportedly called "SuperGraph." Together with Dan Schafer and Lee Byron, they developed GraphQL and brought it into production at Facebook around 2012.

Nordic APIs interviewed Lee Byron about the creation process, and Datanami covers the contributions of all three.

However, this framing simplifies the broader effort. GraphQL was developed within a larger team at Facebook, and multiple engineers contributed to its design, implementation, and adoption. The claim is accurate at a high level, but should be understood as identifying the primary contributors rather than the full set of people involved.

Claim 3: "Facebook open-sourced GraphQL in 2015"

Verdict: Accurate.

Facebook publicly released GraphQL on September 14, 2015, following its public introduction at the React Europe conference in Paris earlier that year. In 2018, GraphQL was moved to the GraphQL Foundation, where it is now governed under the Linux Foundation.

The GraphQL N+1 Problem: Does REST Have It Too?

Claim 4: "Fetching 25 users with posts in GraphQL results in 26 database queries"

Verdict: Partially true (implementation-dependent, not inherent to GraphQL).

The arithmetic is correct for one specific scenario, and the official GraphQL.js documentation confirms it:

In GraphQL.js, each field resolver runs independently. There's no built-in coordination between resolvers, and no automatic batching.

This behavior is not defined by the GraphQL specification, but by how resolver-based servers like GraphQL.js execute queries. When you query for 25 users and their posts, the server may first execute one query to fetch all 25 users. Then, for each user, it may execute a separate query to fetch that user's posts. That's 25 + 1 = 26 database queries, which is the classic "N+1 problem": N queries for related data, plus 1 for the initial list.
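A minimal sketch of how this plays out in a resolver-based server (the resolver map and `db` helper here are hypothetical, for illustration only):

```javascript
// A sketch of the naive resolver pattern (helper names are hypothetical).
// Each field resolver runs independently, with no batching between them.
const resolvers = {
  Query: {
    // 1 query: fetch the list of users
    users: (_, args, { db }) => db.query('SELECT * FROM users LIMIT 25'),
  },
  User: {
    // N queries: one per user row in the result
    posts: (user, args, { db }) =>
      db.query('SELECT * FROM posts WHERE user_id = ?', [user.id]),
  },
}

// Simulate executing { users { posts } } against a fake db that counts queries.
const db = {
  count: 0,
  query(sql) {
    this.count++
    return sql.includes('FROM users')
      ? Array.from({ length: 25 }, (_, i) => ({ id: i + 1 }))
      : []
  },
}

const users = resolvers.Query.users(null, null, { db })
users.forEach((u) => resolvers.User.posts(u, null, { db }))
console.log(db.count) // 26 queries: 1 for users + 25 for posts
```

The counter makes the arithmetic visible: one list query plus one query per returned user.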

For servers built on GraphQL.js without DataLoader, this is a common outcome in resolver-based implementations. In practice, many GraphQL deployments introduce batching layers, such as DataLoader, or use engines that compile queries into optimized database queries to avoid this pattern. But the picture is incomplete in two ways.

First, some GraphQL engines addressed this years ago. Tools like Hasura and PostGraphile don’t rely on per-field resolvers for database-backed queries. They compile GraphQL queries directly into optimized SQL with JOINs. In these systems, the 25-users-with-posts query can often be executed as a single database query. In its own benchmarks, Hasura reports higher throughput than hand-written Node.js code using DataLoader. These are vendor benchmarks and should be treated as directional rather than conclusive.

Second, REST can exhibit a similar N+1 pattern at the network layer, as shown in the next claim.

Claim 5: "In REST, fetching 25 users with their posts is two calls"

Verdict: Misleading.

A common claim is that REST handles this in two calls:

GET /users and GET /users/:id/posts.

But look at that second URL. The :id is a path parameter.

Without a dedicated batch or embedding mechanism, the REST client has to call GET /users/1/posts, GET /users/2/posts, ... GET /users/25/posts, which results in 26 HTTP round-trips. In many systems, 26 HTTP round-trips (each requiring a full network request-response cycle) is going to be more expensive than 26 database queries behind a single HTTP call.
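To make the round-trip count concrete, here is the request list a naive client ends up issuing (a sketch; the endpoint paths come from the claim above):

```javascript
// The requests a naive REST client must issue for 25 users with posts.
const userIds = Array.from({ length: 25 }, (_, i) => i + 1)

const requests = [
  'GET /users', // the initial list
  ...userIds.map((id) => `GET /users/${id}/posts`), // one call per user
]

console.log(requests.length) // 26 HTTP round-trips
```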

The REST API Tutorial documents this exact problem. Microsoft's Azure Architecture Center calls this the Chatty I/O antipattern and explicitly names it "the N+1 problem." N+1 is not a GraphQL-specific problem. It manifests at different layers.

To get it down to two calls, REST needs a purpose-built batch endpoint like GET /users/posts?ids=1,2,...,25. That endpoint either uses POST (typically uncacheable in practice) or GET with dynamic query parameters where every unique combination of user IDs produces a different URL.

And it's not just the combination that matters; it's also the order, unless it is normalized.

?ids=1,2,3 and ?ids=3,1,2 are different URLs to a CDN, which means different permutations of the same IDs can produce different cache keys. In practice, many REST architectures introduce aggregation layers or backend-for-frontend services to avoid this pattern. We'll come back to why this matters in the caching section.
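One mitigation is to normalize the parameter before building the URL, so every permutation of the same IDs maps to a single cache key. A sketch (the endpoint path is the hypothetical batch endpoint from above):

```javascript
// Sort (and dedupe) the ids so every permutation yields one URL.
function batchUrl(ids) {
  const normalized = [...new Set(ids)].sort((a, b) => a - b)
  return `/users/posts?ids=${normalized.join(',')}`
}

console.log(batchUrl([3, 1, 2])) // "/users/posts?ids=1,2,3"
console.log(batchUrl([1, 2, 3]) === batchUrl([3, 1, 2])) // true
```

This only helps with permutations; different subsets of IDs still produce different cache keys.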

The fair comparison is either naive-to-naive (26 HTTP calls vs. 26 DB queries) or optimized-to-optimized (batch endpoint vs. DataLoader or query compilation, both producing 1-2 queries). The original claim oversimplifies the design trade‑offs by mixing optimized REST with naive GraphQL.

Claim 6: "DataLoader is not built into GraphQL and is not part of the specification"

Verdict: True (but often misunderstood in practice).

The statement is technically correct at the specification level, but incomplete in how GraphQL is used in practice.

DataLoader was originally developed at Facebook, and its copyrights were later transferred to the GraphQL Foundation. It is now maintained as part of the broader GraphQL ecosystem. It batches and deduplicates data fetches within a single request, does not share results across requests, and is not a replacement for application-level caching.

When multiple resolvers request the same data (e.g., "fetch user 1", "fetch user 5", "fetch user 12"), DataLoader collects them into a single batch call and issues one combined fetch. Instead of 25 individual database queries, a single batched query can often fetch all 25 users at once.
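The core of the pattern can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the idea, not the actual DataLoader API:

```javascript
// Collect keys requested in the same tick, then issue one batched fetch.
function createLoader(batchFn) {
  let queue = []
  return {
    load(key) {
      return new Promise((resolve) => {
        queue.push({ key, resolve })
        // Schedule a single flush for the first key of each batch.
        if (queue.length === 1) {
          queueMicrotask(async () => {
            const batch = queue
            queue = []
            const results = await batchFn(batch.map((item) => item.key))
            batch.forEach((item, i) => item.resolve(results[i]))
          })
        }
      })
    },
  }
}

// Usage: three load() calls collapse into one batched call.
let calls = 0
const userLoader = createLoader(async (ids) => {
  calls++
  return ids.map((id) => ({ id, name: `user-${id}` }))
})

Promise.all([userLoader.load(1), userLoader.load(5), userLoader.load(12)])
  .then((users) => console.log(calls, users.length)) // 1 batched call, 3 results
```

The real DataLoader adds per-request caching, deduplication, and error handling on top of this batching core.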

DataLoader is generic and not specific to GraphQL; the same pattern can be used in REST APIs, microservices, or any system with N+1-style access patterns.

For example, Netflix DGS provides @DgsDataLoader annotations in Java, gqlgen documents DataLoader integration for Go, and GraphQL-Ruby has GraphQL::Dataloader built in. Query compilation engines (Hasura, PostGraphile) bypass the need for DataLoader by generating optimized SQL directly.

DataLoader reduces redundant fetches, but it introduces additional complexity in resolver design and requires careful scoping and key management.

REST does not define a standard batching mechanism at the protocol level. When batching is needed, it is handled through API design (such as bulk endpoints), infrastructure, or framework-specific solutions. Some specifications attempt to address this, such as OData’s batch format or JSON:API’s compound documents, but adoption is inconsistent.

These differences reflect where batching is handled. GraphQL solutions often address it at the execution layer, while REST implementations handle it through API design or infrastructure. DataLoader is therefore just one of several batching patterns in the GraphQL ecosystem, not the only way to avoid N+1.

GraphQL vs REST Performance: What Benchmarks Actually Show

Claim 7: "REST delivers nearly half the latency and 70% more requests per second"

Verdict: Context-dependent and often misapplied.

These numbers come from a Medium benchmark of a Hello World‑style endpoint. It is representative of a broader class of “Hello World” REST vs GraphQL benchmarks, not necessarily the exact one cited by any particular critic:

REST at ~7.68ms latency vs. GraphQL at ~13.51ms, and ~11,972 RPS vs. ~7,085 RPS.

That kind of workload mostly measures protocol overhead on a flat response, not the multi-resource retrieval problem GraphQL was designed to address. It is also informal evidence from a single implementation, not a peer-reviewed study. For relational or multi-resource data retrieval, several studies report the opposite result under some workloads.

A peer-reviewed study by Seabra, Nazário, and Pinto, published at SBCARS '19 (proceedings in the ACM Digital Library), found that migrating from REST to GraphQL reduced latency in two of three tested applications, often by reducing the number of network round-trips. Under workloads above 3,000 requests, however, REST outperformed GraphQL.

A study by Jin, Cordingly, Zhao, and Lloyd at UW Tacoma, presented at WoSC10 (December 2024), evaluated a 10.9‑million‑row CMS dataset on AWS, and reported 25–67% lower average latency for GraphQL on several data-intensive operations, while REST retained an advantage at the highest concurrency levels.

A separate study by Lawi, Panggabean, and Yoshida found REST was faster on response time and throughput in that test setup, while GraphQL used about 37-40% less CPU and memory. That supports a real resource-efficiency tradeoff, though the result is specific to that workload.

Elghazal, Aneiba, and Shahra presented at WEBIST 2025 and found REST ~3.7x faster on response time in a Go-based microservices setup, while GraphQL transferred significantly less data overall, reducing total data volume from several gigabytes to a few hundred megabytes. The 3.7x latency gap is larger than what other studies report, which suggests strong sensitivity to implementation and workload.

Across these studies, a general pattern emerges:

Performance varies by workload, and neither approach is consistently faster. REST often has lower per-request overhead for simple payloads and at high concurrency. GraphQL can have lower total latency when it reduces or eliminates multiple round-trips, transfers less data, or uses fewer server resources.

The right choice depends on the workload, including schema design, query patterns, and implementation details. Citing a flat CRUD benchmark to evaluate a system designed for relational queries is technically correct, but it measures a different use case.

GraphQL Caching vs REST Caching: The Real Tradeoffs

Claim 8: "GraphQL uses POST to a single endpoint; HTTP caching does not work"

Verdict: Partially true (default vs production setups).

Standard GraphQL implementations use POST to /graphql. POST requests are not cached by browsers, CDNs, or reverse proxies by default. This is a real limitation and worth understanding.

HTTP caching relies on two things: the request URL being stable across clients, and the HTTP method being GET in most common caching setups. When every GraphQL request is a POST with a body that is not used as part of the cache key by intermediaries, caches cannot reliably match requests to stored responses.

More precisely, the issue is a lack of stable cache keys rather than POST itself, since POST responses can be cached when explicitly configured. By default, this gives REST an advantage for simple, per-resource caching, while GraphQL requires additional setup to achieve similar behavior.

But the comparison with REST is asymmetric.

As we established in Claim 5, REST serving relational data often either makes multiple individual GET requests (cacheable but N+1 round-trips) or uses a batch endpoint (efficient but less cacheable in practice). In many production systems, REST avoids this trade-off by relying on per-resource caching combined with client-side or edge aggregation (for example, a BFF or gateway), rather than batching at the CDN layer.

A REST batch endpoint like GET /users/posts?ids=1,2,3,...,25 produces a different URL for every unique combination of requested users. And as we noted in Claim 5, it's not just the combination that matters, but also the order: ?ids=1,2,3 and ?ids=3,1,2 are different cache keys to a CDN. CDN cache hit rates for these parameterized URLs can be lower in practice, depending on usage patterns.

Even if you do manage to cache batch URLs, invalidation becomes expensive. When a single entity changes (say, user 7 updates their profile), you need to invalidate every cached batch URL that contains that user's ID. Most CDNs have no built-in way to compute which URLs contain which IDs without additional tagging or indexing. A single entity change can invalidate an unpredictable number of cached responses. This comparison mixes HTTP-level caching with client-side caching, which solve different problems but are often combined in real systems.

Compare this to GraphQL's normalized client cache (Apollo Client, Relay), where invalidation happens at the entity level. This operates at the client layer and does not replace HTTP-level caching. User 7's data is stored once by its cache ID. When it changes, only that entity updates, and every view referencing it refreshes automatically. No URL permutation problem. No batch invalidation cascade.
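The idea behind a normalized cache can be sketched in plain JavaScript. This illustrates the concept only; Apollo Client and Relay implement it with far more machinery (typename policies, subscriptions, garbage collection):

```javascript
// Entities are stored once, keyed by a stable cache ID.
const cache = new Map()
const cacheId = (entity) => `${entity.__typename}:${entity.id}`

// Writes merge into the existing record for that entity.
function write(entity) {
  cache.set(cacheId(entity), { ...cache.get(cacheId(entity)), ...entity })
}

// Views hold references (cache IDs), not copies of the data.
write({ __typename: 'User', id: 7, name: 'Ada' })
const avatarView = 'User:7'
const commentView = 'User:7'

// A single update is visible to every view referencing the entity.
write({ __typename: 'User', id: 7, name: 'Ada L.' })
console.log(cache.get(avatarView).name) // "Ada L."
console.log(cache.get(commentView).name) // "Ada L."
```

Because the entity is stored once, there is no set of URLs to invalidate; updating the record updates every view.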

These trade-offs can make it difficult to simultaneously optimize REST for both batch efficiency (for the N+1 comparison) and high HTTP cache reuse (which requires stable, per-resource URLs). These approaches are in tension. Optimizing for batch efficiency reduces round-trips, but tends to lower CDN cache hit rates and complicate invalidation.

A common production solution: Persisted Queries.

Also called trusted documents, persisted queries are a recommended production pattern for GraphQL APIs. Instead of sending the full query string in a POST body, the client sends a hash: GET /graphql?extensions={"persistedQuery":{"sha256Hash":"abc123"}}.

This approach is most effective for first-party APIs, where the set of allowed operations can be controlled.

It can be implemented as a GET request with a more stable URL when queries are pre-registered. This requires maintaining a registry of allowed operations and coordinating deployments between client and server. CDNs can cache it like other GET requests when configured appropriately.
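A sketch of how a client might build such a request, assuming the operation hash has been pre-registered with the server (the hash value here is a placeholder, and the wire format follows the shape shown above):

```javascript
// Build a persisted-query GET URL; the sha256 hash identifies a
// pre-registered operation (placeholder value for illustration).
function persistedQueryUrl(hash, variables = {}) {
  const params = new URLSearchParams({
    extensions: JSON.stringify({ persistedQuery: { sha256Hash: hash } }),
    variables: JSON.stringify(variables),
  })
  return `/graphql?${params}`
}

const url = persistedQueryUrl('abc123', { id: 7 })
// Cacheable by CDNs like any other GET request:
// fetch(url).then(r => r.json())
```

Note that variables are still part of the URL, so different inputs produce different cache keys.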

However, this approach is less flexible for highly dynamic queries and is harder to apply in public or third-party API scenarios. Variables still affect cache keys, so different inputs produce different cache entries, and fragmentation can still occur depending on usage patterns. Adobe Experience Manager, for example, recommends persisted queries as the preferred way to enable CDN caching for GraphQL.

Persisted queries can help address the caching problem (GET requests, stable URLs), the security problem (only pre-registered queries execute), and the client size problem (works with plain fetch()).

While some systems allow GET with query‑string GraphQL, it is brittle in practice, which is why POST‑by‑default is the effective pattern.

Claim 9: "56% of teams report caching challenges with GraphQL"

Verdict: Partially true (source unclear, underlying concern valid).

The 56% figure is widely cited, attributed variously to an "Apollo 2024 survey" or a "JetBrains Developer Survey." I could not locate a clear primary source for this exact number. Given the lack of a verifiable primary source, this statistic should be treated as anecdotal rather than evidence‑based.

Even if the number were accurate, it would still need to be interpreted carefully. Regardless of that specific percentage, the underlying concern is real: GraphQL caching is more complex by default, because it often handles multi-resource queries that do not map cleanly to per-resource HTTP caching and pushes more responsibility onto the client and gateway layers. This means teams must make explicit architectural choices to get effective caching, rather than relying on default HTTP behavior.

REST systems can avoid some of this complexity through per-resource caching, but face their own trade-offs when aggregating data or batching requests, including reduced cache reuse and more complex invalidation. At the same time, the problem is well understood, and there are established patterns to address it: persisted queries for CDN caching (see Claim 8), and normalized client caches for application-level caching (see Claim 10).

Claim 10: "Apollo Client weighs 43 KB gzipped"

Verdict: Misleading comparison (incomplete framing).

I often see comparisons framed as Apollo Client (43 KB) versus fetch() (0 KB), implying GraphQL requires a heavy client library while REST uses native browser APIs.

Historically, measurements around 40–45 KB gzipped were reasonable for older Apollo Client bundles, but recent v4.x releases have reduced bundle size and made features more modular. Regardless, that is not a cost of GraphQL itself. The same concern exists for feature‑rich REST clients like SWR or TanStack Query. It is a misleading comparison because both REST and GraphQL can be implemented with nothing more than fetch(). Therefore, the theoretical baseline for both styles is 0 KB.

GraphQL is an HTTP request:

const res = await fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ users { name } }' })
})
const data = await res.json()

No library is strictly required.

With persisted queries, it can be simpler at runtime: a GET request with a hash parameter. This requires pre-registering operations and coordinating between client and server.

In practice, many GraphQL applications adopt libraries like Apollo Client or Relay because they solve common problems around caching, state management, and request orchestration. The additional bundle size in libraries like Apollo Client pays for these capabilities.

Similar trade-offs exist in REST clients, though GraphQL clients often include more built-in behavior due to the structure of the query model. In both cases, these libraries shift complexity from the API layer to the client. Without it, you often end up rebuilding similar logic yourself or pulling in REST-side alternatives like TanStack Query or SWR.

Apollo Client provides a normalized cache that stores entities by ID and enables automatic UI updates when cached data changes. When a mutation returns updated data, Apollo can update the cache and reflect those changes in many relevant queries without requiring manual refetching.

On top of the normalized cache, Apollo provides additional features.
These include optimistic updates, pagination helpers, request deduplication, and partial error rendering.
It also includes local state management and DevTools for inspecting cache data and queries.

Relay (by Meta) provides additional compile-time guarantees. Its ahead-of-time compiler runs at build time, catching malformed queries before the app ships. Fragment colocation lets each component declare the data it needs, and data masking ensures components only access the data defined in their fragments, preventing accidental coupling between components.

Relay includes configurable garbage collection for unreferenced records in its normalized store, helping limit memory growth in long-running applications. These features address common data consistency and state management problems.

In applications where the same entity appears in multiple places (e.g., avatar, comments, profile), a normalized cache with automatic mutation propagation reduces stale-data bugs. Building this yourself means maintaining a manual entity store with subscription and notification logic.

That's the role these libraries serve.

Real-world REST applications pay similar costs, though they can sometimes defer this complexity longer depending on access patterns. TanStack Query, Axios, and SWR introduce additional bundle size for features like caching and request management.

A fairer comparison is either naive-to-naive (no clients, just fetch()) or full-featured to full-featured: Apollo Client and tools like TanStack Query both add bundle size in exchange for caching, deduplication, and state management.

Claim 11: "fetch() ships with every browser at 0 KB"

Verdict: Confirmed.

True. The Fetch API is built into all modern browsers and is available natively in recent versions of Node.js. But as we showed in Claim 10, GraphQL also works with fetch() at 0 KB. This is often presented as a REST advantage, even though it applies equally to both.

Is GraphQL Less Secure Than REST?

Claim 12: "A 128-byte nested query can consume 10 seconds of CPU time"

Verdict: Partially true (context-dependent).

Security research shows that small, deeply nested GraphQL queries can consume disproportionate CPU time.

This applies primarily to resolver-based implementations without depth or cost controls. In a GraphQL schema with circular references (e.g., a User has friends who are also User objects), an attacker can construct a query like:

{ user { friends { friends { friends { friends { friends { ... } } } } } } }

In a worst case scenario with an unoptimized implementation, if each user has 10 friends, 6 levels of nesting could theoretically result in up to 10^6 = 1,000,000 resolver calls. The query string itself is tiny, but the computational cost can grow quickly — in some cases exponentially — depending on schema design and resolver behavior.
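The arithmetic behind that worst case: with a uniform fan-out of f per level and a nesting depth of d, a fully expanded query touches on the order of f^d leaf resolvers. A quick check:

```javascript
// Worst-case leaf resolver calls for a query with uniform fan-out.
const leafResolverCalls = (fanout, depth) => fanout ** depth

console.log(leafResolverCalls(10, 6)) // 1000000
console.log(leafResolverCalls(10, 7)) // 10000000: one more level, 10x the work
```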

The specific numbers depend on schema complexity and server hardware, but the principle is documented in security writeups and research.

Scope: first-party vs public APIs matters.

For first-party APIs, trusted documents can significantly reduce this attack vector in controlled environments, but do not address inefficient resolver behavior within allowed operations. When a GraphQL server only executes pre-registered operations, arbitrary queries, including malicious depth attacks, can be rejected before execution. The official GraphQL.org Security page confirms this.

For public APIs, this remains a real concern. GraphQL.org explicitly states that "trusted documents can't be used for public APIs because the operations sent by third-party clients won't be known in advance." Public APIs need depth limiting, cost analysis, and rate limiting. Many implementations also use batching techniques (e.g., DataLoader) to reduce redundant resolver execution.

The attack scenario is valid for public APIs. But many GraphQL deployments are first-party, and persisted queries can prevent arbitrary depth attacks by restricting execution to pre-registered operations. These risks depend heavily on whether the API accepts arbitrary queries or restricts execution to known operations.

Claim 13: "80% of GraphQL APIs are vulnerable to denial-of-service through query depth"

Verdict: Misleading framing (misinterpreted statistic).

The 80% figure is commonly misinterpreted from Escape's State of GraphQL Security 2024.

The reported DoS vulnerability rate is 69% of scanned public GraphQL endpoints. 80% comes from a separate finding in the same report: "80% of issues could have been resolved by implementing best practices", which measures remediability, not prevalence.

The 69% and 80% figures come from different parts of the same report and measure different things; they are not interchangeable.

Public APIs accepting arbitrary third-party queries have a different security profile than first-party APIs using trusted documents. For first-party APIs, the depth-DoS attack vector can be significantly reduced when using persisted queries, though inefficient or overly expensive allowed operations can still create performance risks.

API security challenges are not unique to GraphQL.

Salt Security's 2024 report, based on a survey of 250 IT and security professionals, found that 37% experienced an API security incident in the past 12 months (up from 17% in 2023), 95% encountered security issues in production APIs, and 23% suffered an API-related breach.

These findings apply to APIs broadly and are not specific to GraphQL.

Important caveat: Escape's 69% measures vulnerability scan results on 160 public endpoints. Salt's figures measure self-reported incidents across 250 organizations. They are not directly comparable because different methodologies are being used to measure different things. Even so, the broader point is: API security is an industry-wide challenge, not specific to GraphQL.

Claim 14: "Most frameworks ship with no default depth limit"

Verdict: Partially true (implementation defaults).

Many GraphQL server frameworks do not enable depth limiting by default. The official GraphQL.org Security page recommends it, stating: "Even when the N+1 problem has been remediated through batched requests to underlying data sources, overly nested fields may still place excessive load on server resources." This is not defined at the GraphQL specification level and is left to server implementations.

This is a real concern and should be taken seriously, even with DataLoader or query compilation.

The plugin ecosystem makes it relatively easy to add these controls. Libraries such as GraphQL Armor and Envelop plugins provide depth and cost controls. But it is opt-in, not opt-out.
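To show the shape of such a control: real tools like GraphQL Armor walk the parsed query AST, but the idea can be approximated by counting brace nesting in the raw query string. This is a deliberately simplified sketch, not production-grade validation:

```javascript
// Simplified depth check: counts brace nesting in the raw query string
// as a rough stand-in for AST-based depth limiting.
function exceedsMaxDepth(query, maxDepth) {
  let depth = 0
  let max = 0
  for (const ch of query) {
    if (ch === '{') max = Math.max(max, ++depth)
    if (ch === '}') depth--
  }
  return max > maxDepth
}

console.log(exceedsMaxDepth('{ user { name } }', 5)) // false
console.log(
  exceedsMaxDepth(
    '{ user { friends { friends { friends { friends { id } } } } } }',
    5
  )
) // true: nesting depth 6 exceeds the limit
```

A server would reject the query before executing any resolvers, which is what makes the control cheap relative to the attack.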

However, the comparison should be symmetric.

Many REST frameworks also ship with limited security controls enabled by default. Express.js, a minimal web framework, does not include rate limiting or input validation out of the box and relies on middleware for these concerns. Django REST Framework includes throttling features, but they are not enabled by default.

"Insecure defaults" is not a GraphQL-specific problem; it is a common pattern across web frameworks.

HTTP Behavior and Error Handling

Claim 15: "GraphQL returns HTTP 200 always, even when errors occur"

Verdict: Misleading framing.

This conflates GraphQL (the query language and execution engine) with one specific transport binding.

GraphQL is designed to be transport-independent. It runs over HTTP, WebSockets, Server-Sent Events, multipart responses, and other transport mechanisms. This is a deliberate architectural choice. GraphQL is similar to SQL in that it is a query language and does not dictate how queries are transported. You can run SQL over a TCP socket, a named pipe, or an HTTP API. GraphQL works the same way.

GraphQL reports errors in the response body's errors array rather than relying solely on transport-level status codes. A single GraphQL response can contain both data and errors, representing partial success.

The current working draft of the GraphQL specification defines this explicitly: a field that errors out returns null and adds an entry to the errors array, while sibling fields resolve successfully.
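A sketch of what such a partial-success response body looks like (the field names here are illustrative, not from any particular API):

```javascript
// A partial-success GraphQL response: sibling fields resolve,
// the failing field is null, and the errors array explains why.
const response = {
  data: {
    user: {
      name: 'Ada',          // resolved successfully
      paymentHistory: null, // this field's resolver errored
    },
  },
  errors: [
    {
      message: 'Payment service unavailable',
      path: ['user', 'paymentHistory'],
    },
  ],
}

console.log(response.data.user.name) // "Ada"
console.log(response.errors[0].path.join('.')) // "user.paymentHistory"
```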

This is a defined behavior of the GraphQL execution model rather than a limitation. HTTP status codes do not cleanly express partial success: a request is typically represented as either 200 OK or 4xx/5xx, but not both.

GraphQL can tell a client "here's 90% of what you asked for, and here's exactly which field failed and why." In practice, many GraphQL-over-HTTP implementations return HTTP 200 responses with errors encoded in the response body. However, transport-level errors (e.g., invalid requests, authentication failures, or server errors) may still return appropriate non-200 status codes.

Under the commonly used application/json media type, the GraphQL-over-HTTP specification recommends returning 200 OK for any well‑formed GraphQL response, even when the errors field is populated. More nuanced status codes, including non-200 responses, are defined when using the application/graphql-response+json media type.

Claim 16: "The OWASP GraphQL Cheat Sheet reads like a confession of design decisions that were never made"

Verdict: Misleading framing (not unique to GraphQL).

The OWASP GraphQL Cheat Sheet documents standard security hardening practices, similar to other OWASP cheat sheets. But OWASP publishes similar cheat sheets for many technologies.

The REST Security Cheat Sheet covers input validation, output encoding, HTTPS enforcement, access control, rate limiting, and error handling. These are not defined by REST itself, but are implemented at the framework or application level. The REST Assessment Cheat Sheet goes further, documenting how to test REST APIs for vulnerabilities.

OWASP also publishes cheat sheets for Node.js , Django , Ruby on Rails , and dozens of other technologies.

Characterizing the GraphQL cheat sheet as a "confession" while ignoring equivalent guidance for REST can imply that GraphQL is uniquely insecure compared to other technologies. The existence of an OWASP cheat sheet reflects community attention to security and ecosystem maturity, rather than indicating an inherent design flaw in the technology itself.

Adoption and Tooling

Claim 17: "83% of web services use REST"

Verdict: Misleading framing (usage ≠ exclusivity).

The 83% figure is often attributed to the RapidAPI Developer Survey 2024, which other sources describe as measuring public APIs using REST, but I was not able to locate the original survey report directly. The Postman 2025 State of the API Report reports REST usage at 93% among surveyed developers, not 83%. Postman's survey allows multiple selections, so 93% reflects developers who use REST, not developers who use REST exclusively.

The same survey shows 33% of developers use GraphQL alongside REST, making it one of the most commonly used API styles in the survey. Multiple factors contribute to REST's widespread use beyond active evaluation:

  • Historical momentum: REST has been the default since the mid-2000s. Most APIs were built before GraphQL was available or widely adopted.

  • GraphQL adoption is increasing: Gartner has predicted that by 2027, more than 60% of enterprises will use GraphQL in production, up from less than 30% in 2024.

  • They coexist: Companies such as GitHub, Shopify, Netflix, Airbnb, Twitter, and The New York Times run GraphQL APIs in production, often alongside REST.

Using REST's market share as evidence against GraphQL reflects historical adoption patterns more than a direct comparison of technical fit or capability.

Claim 18: "REST with OpenAPI 3.0 offers self-documenting, typed client generation, HTTP caching built in"

Verdict: Misleading framing (omits comparable GraphQL capabilities).

Everything stated about OpenAPI is true. However, GraphQL has comparable capabilities in each area, with different trade-offs.

Self-documenting: A GraphQL server requires a GraphQL schema. The schema is not optional, not a nice-to-have, not a separate file you maintain on the side. It is a required part of the server.

GraphQL servers can expose their full type system through introspection , which lets any client query available types, fields, and arguments at runtime. Meta's engineering blog describes this as a core design goal: "Introspective: A GraphQL server can be queried for the types it supports."

With REST, an OpenAPI spec is optional. You often need to generate or maintain the spec separately and keep it in sync with your implementation. Some REST APIs still ship without a machine-readable description. An OpenAPI spec can drift from the implementation because they are separate artifacts.

In GraphQL, the schema is part of the server itself, tied directly to the running implementation. In practice this makes drift much harder, though not impossible.
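Introspection is itself just a GraphQL query. Here is a hedged sketch: the query document follows the standard `__schema` meta-fields, but the response below is a hand-written stand-in rather than output from a real server, and the helper function is illustrative.

```python
# A minimal sketch of GraphQL introspection: any compliant server answers
# this query with its type system. The response here is a hand-written
# stand-in for a real server's answer (a real client would POST the query).

INTROSPECTION_QUERY = """
{
  __schema {
    queryType { name }
    types { name kind }
  }
}
"""

def list_object_types(introspection_result: dict) -> list[str]:
    """Extract user-facing object type names, skipping __-prefixed internals."""
    types = introspection_result["data"]["__schema"]["types"]
    return [t["name"] for t in types
            if t["kind"] == "OBJECT" and not t["name"].startswith("__")]

mock_response = {
    "data": {"__schema": {
        "queryType": {"name": "Query"},
        "types": [
            {"name": "Query", "kind": "OBJECT"},
            {"name": "User", "kind": "OBJECT"},
            {"name": "__Type", "kind": "OBJECT"},
            {"name": "String", "kind": "SCALAR"},
        ],
    }},
}
print(list_object_types(mock_response))  # ['Query', 'User']
```

This is the mechanism tooling such as IDE autocompletion and client generators builds on: the schema is queryable at runtime, so no separate description file has to stay in sync.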

Typed client generation

GraphQL Codegen (by The Guild) and tools like genqlient produce compile-time validated, fully typed client code. GraphQL.org's blog documents that operation-based codegen produces types specific to each query, which can be more precise than endpoint-level types because you get types matching exactly the fields you requested.

OpenAPI also supports typed client generation, but typically relies on maintaining a separate specification. In many cases, OpenAPI-based clients are simpler to generate and require less build-time integration.
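To make the "types specific to each query" point concrete, here is a sketch in Python terms of what operation-based codegen conceptually produces. The class and field names are illustrative, not the output of any particular tool.

```python
# A sketch of operation-based codegen, in Python terms: one type per
# query, matching exactly the requested fields rather than the whole
# User type. Names are illustrative, not output of a specific tool.

from typing import TypedDict

# Query: { user(id: $id) { name email } } -> only these two fields typed
class GetUserProfileResult(TypedDict):
    name: str
    email: str

# Query: { user(id: $id) { name } } -> a narrower type for a narrower query
class GetUserNameResult(TypedDict):
    name: str

def render_greeting(user: GetUserNameResult) -> str:
    return f"Hello, {user['name']}!"

print(render_greeting({"name": "Ada"}))  # Hello, Ada!
```

Endpoint-level OpenAPI types, by contrast, describe the full resource shape even when a caller only consumes a few fields of it.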

HTTP caching

As discussed in Claims 8-9, persisted queries can enable HTTP caching for GraphQL. And as discussed in Claim 5, REST's HTTP caching can be less effective for some relational or batch-style queries, which is the use case being compared.

By default, GraphQL does not align as directly with HTTP caching as REST.
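The persisted-query mechanism mentioned above can be sketched in a few lines. The registry, hashing scheme, and URL layout here are illustrative assumptions, not any vendor's exact protocol.

```python
# A minimal sketch of persisted queries: the client registers a query once,
# then sends only its hash in a GET request, which ordinary HTTP caches can
# key on. The registry and URL scheme are illustrative.

import hashlib
from urllib.parse import urlencode

def persist(query: str, registry: dict[str, str]) -> str:
    """Store the query server-side, keyed by its SHA-256 hash."""
    digest = hashlib.sha256(query.encode()).hexdigest()
    registry[digest] = query
    return digest

def build_get_url(endpoint: str, query_hash: str, variables: str) -> str:
    # A GET with a stable hash plus variables is cacheable by URL,
    # unlike a POST carrying the full query in its body.
    return f"{endpoint}?{urlencode({'hash': query_hash, 'variables': variables})}"

registry: dict[str, str] = {}
h = persist("{ user(id: 1) { name } }", registry)
url = build_get_url("https://api.example.com/graphql", h, '{"id":1}')
print(url)
print(registry[h])  # the server can look the full query back up by hash
```

Once requests are GETs with stable URLs, CDNs and browser caches treat them like any other cacheable resource, which is how persisted queries recover much of REST's HTTP-caching story.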

Totals (by verdict): 3 confirmed, 6 partially true, and 9 misleading or incomplete.

In practice, REST remains a strong default for simpler, resource‑oriented services, while GraphQL shines in complex, client‑driven, multi‑resource scenarios. The best choice depends on the workload, not on which protocol is “better” in the abstract.

Part II: What the Evidence Means

This section is opinionated. Part I was about what the sources say. This part is about what I think it means for the industry based on my experience building API infrastructure for the past several years. Take it as one perspective among many.

Where REST Genuinely Wins

REST is the right choice for a lot of problems.

  • Simple CRUD with flat resources.
  • Public APIs where every resource has a stable URL.
  • Systems where HTTP caching on individual endpoints matters.
  • Scenarios where the data model is straightforward, and each client needs more or less the same response shape.

REST's ecosystem is mature, and nearly every developer already knows it. The tooling is battle-tested. OpenAPI provides typed client generation and machine-readable documentation. For these use cases, adding GraphQL would be adding complexity without proportional benefit.

If you have 12 REST endpoints serving flat resources to a single frontend, and your team is productive, keep what you have.

If you have a single frontend talking to a single backend, and both are TypeScript, do you need REST at all? Tools like tRPC give you end-to-end type safety with zero schema definition, no OpenAPI spec, and no code generation step. Just TypeScript functions on the server, typed calls on the client. For that specific architecture, tRPC is simpler than both REST and GraphQL. The "12 endpoints and a fetch() call" scenario might not need REST any more than it needs GraphQL.

Where GraphQL Genuinely Wins

The problems GraphQL was designed to solve are real, and REST still has no good answer for them.

Relational data across multiple sources. When a single client view needs data from users, orders, products, and reviews, GraphQL fetches it in one typed request. REST either makes multiple round-trips or requires purpose-built batch endpoints that are bespoke, uncacheable, and expensive to maintain.
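The round-trip difference is easy to see side by side. Both the REST paths and the GraphQL document below are illustrative, not a real API.

```python
# A sketch of the same client view expressed both ways. REST needs
# sequential round-trips (later calls depend on earlier responses);
# GraphQL expresses the whole view in one typed request.

REST_CALLS = [
    "GET /users/42",
    "GET /users/42/orders",
    "GET /products?ids=...",        # ids known only after the orders call
    "GET /reviews?product_ids=...", # ids known only after the products call
]

GRAPHQL_QUERY = """
{
  user(id: 42) {
    name
    orders { id product { title reviews { rating } } }
  }
}
"""

print(len(REST_CALLS), "sequential REST round-trips")  # 4
print(1, "GraphQL round-trip")
```

On a high-latency mobile connection, those sequential dependencies, not payload size, are usually what dominates perceived load time.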

Self-describing schemas and codegen. The GraphQL schema is the documentation. Because it is tightly coupled to the implementation, drift is much harder in practice. GraphQL Codegen produces compile-time validated, fully typed client code specific to each query. OpenAPI can do similar things, but requires maintaining a separate spec file.

Mobile and bandwidth-constrained clients. This is the original problem Facebook solved in 2012. Clients request exactly the fields they need. No over-fetching and no under-fetching. The peer-reviewed study by Seabra et al. found that GraphQL improved performance in two out of three tested applications under moderate workloads, but it did not outperform REST consistently, especially under higher load.

GraphQL Federation Changes the Question Entirely

You may see the argument framed as "12 REST endpoints versus GraphQL complexity." That framing assumes your API surface stays small. For many organizations, it won't.

GraphQL Federation allows organizations to start with a single monolithic GraphQL API and gradually decompose it into independently owned subgraphs as teams grow. All subgraphs compose into a unified supergraph that clients consume as one schema.

Apollo Federation pioneered this pattern, and today multiple vendors provide production-ready Federation routers: Apollo Router (Rust), WunderGraph Cosmo Router (Go, open-source Apache 2.0), The Guild's Hive Gateway (TypeScript), ChilliCream's Hot Chocolate (C#/.NET), and AWS AppSync (managed service). Competition between vendors means teams have choices without being locked into a single provider.

The Composite Schemas Working Group , an official subcommittee of the GraphQL Foundation, is working to standardize how subgraph schemas compose into a unified API.

The specification defines vendor-neutral composition and distributed execution rules, so that subgraphs written against the spec work with any compliant router. This is Federation becoming an open standard, not a single vendor's product.

REST has no equivalent. When REST-based organizations need to compose multiple services behind a unified API, they choose between API Gateways (vendor-specific routing), BFF patterns (bespoke aggregation code per frontend), or OpenAPI merging (flat endpoint lists with no cross-service entity relationships). Each approach is either vendor-specific or bespoke, and none handles the question: "A User from Service A has Orders from Service B."

Federation answers that question by design.
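The cross-service question can be sketched in miniature. This is a toy model of Federation-style entity resolution under assumed names: the users subgraph owns `User`, the orders subgraph extends it by key, and the router merges the two, roughly the way a subgraph's `_entities` resolver works. All functions and data here are illustrative.

```python
# A toy model of Federation entity resolution: the users subgraph owns
# User, the orders subgraph resolves User entities by key (id) and
# attaches the fields it owns, and the router merges the results.

def users_subgraph(user_id: int) -> dict:
    return {"__typename": "User", "id": user_id, "name": "Ada"}

def orders_subgraph_entities(representations: list[dict]) -> list[dict]:
    # Resolves User entity representations by key, the way a subgraph's
    # _entities resolver does, returning only the fields this service owns.
    orders_by_user = {42: [{"id": "o-1", "total": 99}]}
    return [{"orders": orders_by_user.get(rep["id"], [])}
            for rep in representations]

def router_fetch_user_with_orders(user_id: int) -> dict:
    user = users_subgraph(user_id)                # query plan, step 1
    [ext] = orders_subgraph_entities(             # query plan, step 2
        [{"__typename": "User", "id": user["id"]}]
    )
    return {**user, **ext}                        # merge by entity key

print(router_fetch_user_with_orders(42))
```

The point of the standardization effort is exactly this contract: any subgraph that can resolve its entities by key works with any compliant router, regardless of vendor.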

And here is what I think many people miss: you can have Federation as the composition layer and REST as the consumption layer. Build a federated supergraph to unify APIs across teams. Then expose a REST API on top for clients that want it. This is a pattern we see growing in enterprise deployments. Federation and REST are not opposing choices. They operate at different levels of the stack.

AI Agents Need Structured, Self-Describing APIs

The API consumer is changing. Increasingly, the client calling your API is an AI agent , not a human developer reading docs.

Agents need to discover capabilities, understand types, and request specific fields, all within a limited context window. GraphQL's typed, introspectable schema was built for exactly this interaction pattern. An agent can query the schema to understand what's available, then construct a precise request for exactly the data it needs.

REST with OpenAPI can serve agents, too. But exposing thousands of REST endpoints to an agent overwhelms its context window. A structured graph lets agents search and select precisely what they need.

The industry is moving in this direction. The GraphQL AI Working Group is developing standards for how LLM agents interact with GraphQL APIs.

That said, AI agents will also call REST APIs. Tools like MCP are protocol-agnostic.

The question is how you organize your APIs so agents can find what they need. A federated supergraph provides that organization. A flat list of REST endpoints does not.

The Right Question Is Not Which One Wins

The REST vs. GraphQL debate keeps producing articles that pick one side and build the case. Some pick REST. Plenty of others pick GraphQL. Both sides select favorable benchmarks, ignore inconvenient tradeoffs, and declare the other technology dead or dying.

The evidence does not support either extreme.

REST at 93% adoption (Postman 2025 ) is not under threat from GraphQL. GraphQL, with Gartner predicting 60%+ enterprise adoption by 2027 and 33% of developers already using it alongside REST, is not a niche technology. They coexist because they solve different problems.

The right question is: what shape is your data, who are your consumers, and how will your API surface grow?

For public APIs where third-party developers need to integrate: REST. REST makes it easy to generate SDKs in any language, and every developer already knows how to call a REST endpoint. For flat resources, stable URLs, and simple CRUD: also REST. For a single TypeScript frontend and backend: maybe tRPC. For relational data, multiple consumers with different needs, and cross-team composition: GraphQL. For organizations scaling from one team to many: Federation. For all of the above at the same time: both.

The next time you read a blog post that makes specific technical claims about either technology (including one of my posts), check the sources. Read the original documentation. Look at the methodology behind the benchmarks. The evidence is out there. Use it.

This article was originally published on the WunderGraph blog.
