ASafaeirad
The Myth of GraphQL

It's often said that GraphQL fixes the problems of under-fetching and over-fetching. But is that really the case? In theory, it sounds promising. In practice, however, you might be trading problems for a pile of new ones that even the most sophisticated frameworks struggle to solve.

The Temptation to Put Everything in a Single Request

Imagine you're at an all-you-can-eat buffet, and someone advises you,

Just load up your plate with everything you might possibly want in one go; it'll save you trips!

Sounds efficient, right? That's akin to what GraphQL suggests:

Pack as much data as you need into a single request to avoid under-fetching, and I'll let you specify exactly what you want to prevent over-fetching.

Let's follow the advice!
To achieve this in a React application, we might hoist our data-fetching logic up to the highest level and pass down this huge data object to our presentational components.

Here is an example of the data schema we get for a query using Hasura and GraphQL-Codegen:

export type ProjectQuery = {
  __typename?: 'query_root',
  project_by_pk?: {
    __typename?: 'project',
    id: string,
    name: string,
    description?: string | null,
    status: SchemaTypes.ProjectStatusEnum,
    start_date?: string | null,
    due_date?: string | null,
    created_at: string,
    updated_at: string,
    households: Array<{
      __typename?: 'household_project',
      household: {
        __typename?: 'household',
        id: string,
        name: string,
        status: SchemaTypes.HouseholdStatusEnum,
        severity: SchemaTypes.HouseholdSeverityEnum,
        code?: string | null,
        created_at: string,
        updated_at: string,
        members_count?: number | null
      }
    }>
  } | null
};

Now, instead of neat, modular components with their own data queries, we have a massive, monolithic object—with an ad-hoc schema full of random nulls.

Step 1: We've just sacrificed co-location! But on the bright side, we have presentational components instead 🎉

The Quest for a Meaningful Schema

You're right to say: "Why is the schema ad-hoc? That's a skill issue. Can't we make some meaningful entities?"
One approach is to create fragments like ProjectStatusFragment, HouseholdIdentityFragment, and HouseholdMembersFragment, and enforce their usage across the team.

But wait—do we need all the data behind these fragments every time? Fragments are meant to be reusable, but reusability can lead to over-fetching, which contradicts GraphQL's main promise:

Query exactly what you need on the client—no more, no less.

In the real world, use cases are infinite. To create meaningful fragments without overfetching, we'd need to create an infinite number of fragments. That's neither practical nor efficient. So we default back to flexible schemas, letting each use case decide what data it needs.
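To make the tension concrete, here's a sketch in plain strings (the HouseholdIdentity fragment and its fields are hypothetical, not from a real schema). Any component that reuses the fragment pays for every field in it, even if it renders only one:

```typescript
// A hypothetical reusable fragment, written as a plain string for illustration.
// Reusability means its field set is fixed for every consumer.
const HouseholdIdentityFragment = `
  fragment HouseholdIdentity on household {
    id
    name
    code
    status
    severity
  }
`;

// A view that only renders the household's name still ships the whole fragment.
const householdNameQuery = `
  query HouseholdName($id: uuid!) {
    household_by_pk(id: $id) {
      ...HouseholdIdentity
    }
  }
  ${HouseholdIdentityFragment}
`;

// Fields this particular view actually needs vs. fields the fragment fetches:
const needed = ["id", "name"];
const fetched = ["id", "name", "code", "status", "severity"];
const overFetched = fetched.filter((field) => !needed.includes(field));
// overFetched now lists the fields paid for but never rendered.
```

Every extra consumer of the fragment widens this gap, which is exactly the over-fetching GraphQL set out to eliminate.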

This leads us back to square one with a lesson:

Every layer of abstraction and reuse introduces over-fetching, contradicting GraphQL's core promise.

The Problem of Nulls

Why are there so many random nulls in our data? The answer lies in GraphQL's design decisions regarding nullability:

TL;DR

In GraphQL, every field and every type is nullable by default. ... By defaulting every field to nullable, any field failure may result in just that field returning "null" rather than a complete failure for the request.

This means our schemas are riddled with optional fields, leading to a data structure filled with nulls. It's not necessarily a bad design choice, but it's the reality when we work with GraphQL.
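Here's roughly what those nulls cost on the consuming side. The type below is a trimmed-down, hand-written stand-in for the generated ProjectQuery above; notice how every access must decide what each null means:

```typescript
// Hand-written stand-in for the generated query type (illustrative only).
type ProjectQueryLike = {
  project_by_pk?: {
    name: string;
    due_date?: string | null;
    households: Array<{ household: { members_count?: number | null } }>;
  } | null;
};

function dueDateLabel(data: ProjectQueryLike): string {
  const project = data.project_by_pk;
  if (!project) return "Project not found"; // null: missing row? failed resolver? we can't tell
  if (!project.due_date) return "No due date"; // null: genuinely unset, or a partial failure?
  return new Date(project.due_date).toDateString();
}

function totalMembers(data: ProjectQueryLike): number {
  // Every nullable hop needs its own fallback.
  return (data.project_by_pk?.households ?? []).reduce(
    (sum, h) => sum + (h.household.members_count ?? 0),
    0,
  );
}
```

The guards aren't wrong, but they multiply across every component that touches the data.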

Returning to the Core Issue

Now, without any skill issues, we're stuck with a massive data object with an ad-hoc, partial schema. We need to pass this data to our presentational components, but how?

Option 1: Prop Drilling

One option is prop drilling. But is it practical to pass such a data schema without losing our sanity? Not really.

Consider the purpose of presentational components: they are free of side effects, loosely coupled, and therefore reusable and easy to test. By passing down this enormous, loosely typed object, we're tightly coupling our components to a specific query structure.

type Props = {
  households: Array<{
    __typename?: 'household_project',
    household: {
      __typename?: 'household',
      id: string,
      name: string,
      status: SchemaTypes.HouseholdStatusEnum,
      severity: SchemaTypes.HouseholdSeverityEnum,
      code?: string | null,
      created_at: string,
      updated_at: string,
      members_count?: number | null
    }
  }> | null
}

const HouseholdList = ({ households }: Props) => {}

Tight dependency isn't just about what a component uses or imports. In software development, dependency means "What information is this part of the code aware of?" When a piece of code is aware of specific information, it becomes responsible for reacting whenever that information changes. This means our HouseholdList component isn't just using the data; it is coupled to the exact structure of our query results. As a result, any change in the query triggers a change in our component's high-level API.

Is it tightly coupled? Absolutely.
Is it easy to test? Not at all.
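To see the testing cost, consider what even a trivial unit test for HouseholdList has to construct. The fixture below is hypothetical, but its shape is exactly the Props type above:

```typescript
// A hypothetical test fixture: to check that one name renders, the test
// must rebuild the entire query shape, __typename noise and all.
const fixture = {
  households: [
    {
      __typename: "household_project" as const,
      household: {
        __typename: "household" as const,
        id: "h-1",
        name: "Smith",
        status: "ACTIVE",
        severity: "LOW",
        code: null,
        created_at: "2024-01-01T00:00:00Z",
        updated_at: "2024-01-01T00:00:00Z",
        members_count: 4,
      },
    },
  ],
};

// The only thing this test actually cares about:
const renderedNames = fixture.households.map((h) => h.household.name);
```

One meaningful assertion buried under a dozen lines of structural ceremony, and every change to the query breaks every fixture.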

Presentational components aren't free. They depend on their parent components to handle responsibilities and side effects like data fetching. By shifting this responsibility away from the components themselves, we introduce duplication. Every time we reuse these components in different contexts, we have to replicate the same data-fetching logic in their parent components.

In this scenario, we get the worst of both worlds: we don't reap the benefits of presentational components, but we still pay the costs.

And let's not forget, our data is littered with nulls. The bigger question is: should our components accept nullable values just because our I/O isn't reliable?

Here's the next lesson:

Passing a raw query result as props couples our components to the unpredictability of I/O.

Searching for Meaningful Interfaces

To untangle this mess, we might try to create meaningful, decoupled interfaces. We'll map our unwieldy data to what each component needs, embracing abstraction.

But here's the kicker: good abstraction clashes with the "just query what you need" approach.

Why?

Let's attempt to create a Project entity and a mapper function:

type Project = {
  id: string;
  name: string;
  dueDate?: Date;
}

function toProject(data: X): Project { /* ... */ }

But what is X? If we assume it's the generated Project type from our GraphQL schema, we're in trouble. Consider this query:

const { data } = useQuery(gql`{ projects { id, dueDate } }`);

This data lacks the fields needed to map to our Project entity. We can't reliably map partial data to a full entity without risking runtime errors or inconsistent state.
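One way to surface the problem is to validate at the boundary and refuse partial data outright. This is only a sketch (RawProject and toProject here are illustrative, not from a real codebase), and it shows the trade-off: a missing selection becomes a runtime crash rather than a compile-time error:

```typescript
type Project = {
  id: string;
  name: string;
  dueDate?: Date;
};

// What a raw query result really guarantees: possibly nothing.
type RawProject = {
  id?: string | null;
  name?: string | null;
  dueDate?: string | null;
};

// A hypothetical mapper that rejects partial data instead of silently
// producing a half-built entity.
function toProject(data: RawProject): Project {
  if (data.id == null || data.name == null) {
    throw new Error("Partial query result cannot be mapped to Project");
  }
  return {
    id: data.id,
    name: data.name,
    dueDate: data.dueDate ? new Date(data.dueDate) : undefined,
  };
}
```

The `{ projects { id, dueDate } }` query above would blow up here at runtime: the type system has no idea which fields a given query actually selected.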

Option 2: Using Context

Okay, maybe crafting meaningful interfaces is off the table, but we can prevent our component interfaces from getting polluted by avoiding prop drilling altogether. "Aha! We'll use React's Context API!" We set up a provider and pass our data through context!

const Page = () => {
  const query = usePageQuery();

  return (
    <MyContext.Provider value={query}>
      <MyChildren />
    </MyContext.Provider>
  );
};

const MyChildren = () => {
  const { data, loading, error } = use(MyContext);
  // ...
};

But hold on—aren't we just coupling MyChildren to usePageQuery via context? The coupling is less transparent because it happens through dependency injection, but we'll get to that in a second. There's a bigger problem: Apollo Client already provides a cache through ApolloProvider, so our context adds a redundant layer on top of it.

Simplifying our code, we might write:

const Page = () => {
  usePageQuery();
  return <MyChildren />;
};

const MyChildren = () => {
  const { data } = usePageQuery({ fetchPolicy: "cache-only" });
  // Component logic
};

Now you see me! Context doesn't solve our fundamental problem; it just obscures it.

The Challenge of Render-As-You-Fetch

In many cases, we don't need all the data upfront to start rendering. When we combine everything into one huge request, we make it harder to render parts of our application as soon as their data arrives.

Yes, we can use directives like @defer, but implementing them adds layers of complexity to both the client and server.

Additionally, sometimes we need different strategies for different data. For instance, we might want to render part of the data on the server and the rest on the client. In this case, we need to break our query into at least two separate queries. (Did I just miss dynamic and static data 🤔)

const Page = () => {
  const serverQuery = useServerPageQuery();
  const clientQuery = useClientPageQuery({ ssr: false });
  /* ... */
}


Cache Invalidation: The Hidden Beast

When we mutate data, we need to update our cache. Sometimes, optimistic updates and manual cache manipulation aren't feasible. The safest route is often to refetch.

But with our all-in-one query, refetching means fetching the entire dataset again—a heavy, inefficient operation.

Is there a solution? Perhaps, but it would require sophisticated infrastructure that goes beyond what most app developers should implement. We're talking about systems that can intelligently manage partial cache invalidation.
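The core idea behind such infrastructure is a normalized cache: entities stored once, keyed by a stable id, so a mutation result can patch a single entry instead of refetching the page query. The toy sketch below illustrates the concept only; it is not Apollo's or Relay's actual implementation:

```typescript
// A toy normalized cache (illustrative, not a real library's API).
type Entity = { id: string } & Record<string, unknown>;

class NormalizedCache {
  private entities = new Map<string, Entity>();

  // Merge partial results (e.g. a mutation payload) into the stored entity.
  write(entity: Entity): void {
    const existing = this.entities.get(entity.id) ?? { id: entity.id };
    this.entities.set(entity.id, { ...existing, ...entity });
  }

  read(id: string): Entity | undefined {
    return this.entities.get(id);
  }
}

const cache = new NormalizedCache();
cache.write({ id: "p-1", name: "Alpha", status: "active" });
// A mutation that returns only the changed field patches the entry in place:
cache.write({ id: "p-1", status: "archived" });
```

Building this for real means solving globally unique id schemes, garbage collection, and list invalidation, which is exactly the infrastructure most app teams don't have.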

The Cost of Chasing Zero Over-Fetching and Under-Fetching

Let's tally up the costs of striving for zero over-fetching and under-fetching:

  • Coupled Presentational Components
  • No Co-location
  • Low Signal-to-Noise Ratio: Massive generated types and null handling clutter our codebase.
  • Complex Render Strategies
  • Cache Management Nightmares

thanos meme

Is it worth it?

A Reality Check

In practice, many teams abandon the ideal of crafting minimal, all-encompassing queries. Instead, they opt for smaller, reusable data-fetching hooks like useUser, useComments, and useWhatever. They also leverage fragments to promote reusability and define cohesive entities within their GraphQL schemas.

But wasn't GraphQL's main selling point that it's a query language for the client—allowing us to request data in exactly the shape we need? Yet, in practice, we're using it more like a simple SDK, making straightforward data requests. Aren't we just replicating what could be achieved with RPC or REST calls, but with added complexity?
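To see the "SDK" point concretely, compare the request each style ends up building once the query is fixed (endpoints and field names here are hypothetical):

```typescript
// With a fixed query, "GraphQL as SDK" builds the same rigid request
// every time: a POST carrying a constant query document.
function userRequestGraphQL(id: string) {
  return {
    url: "/graphql",
    method: "POST" as const,
    body: JSON.stringify({
      query: "query User($id: ID!) { user(id: $id) { id name email } }",
      variables: { id },
    }),
  };
}

// The REST equivalent of exactly the same call:
function userRequestRest(id: string) {
  return { url: `/api/users/${id}`, method: "GET" as const };
}
```

Both fetch the same fixed shape; the GraphQL version just carries extra machinery (a query document, a codegen pipeline, a client cache) to do it.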

And yes, I recognize that GraphQL isn't inherently bad—it does solve certain problems more effectively than other solutions. It offers flexibility, strong typing, and a unified interface for data fetching. However, as app developers, I believe it's time to rethink what we truly gain from using GraphQL before adopting it.

If you're a tech giant like Facebook, equipped to build and maintain the sophisticated frameworks required to harness GraphQL's full potential, then by all means, leverage it.

However, for most small to medium-sized enterprises, adopting GraphQL without the necessary resources leads to complexity and frustration. Based on my experience, it often results in a tangled mess rather than streamlined data management.

Top comments (24)

Steven Brown • Edited

I'd like to start this by noting that I absolutely love GraphQL, but it has its place and that place is not everywhere. I work as a staff engineer, team lead, and subgraph owner at one of the largest federated GraphQL implementations to date, at Walmart Global Tech.

Most of what you stated is pretty accurate from a front end perspective, as you primarily focused on under/over-fetching, but there are definitely ways to mitigate some of the headaches. Your type generation tooling is probably one of your best friends when consuming a GraphQL API, but that relies on another important factor: schema design.

Schema design can make or break a frontend engineer. The schemas are the gatekeeper to your sanity.

  • when or when not to use nullable fields
  • how you return errors - application versus GraphQL errors.
  • how you organize your graphs (namespacing for larger graphs)
  • consistency in naming conventions
  • when to use ENUMs and when not to

Federation:

  • how and when to use reference resolvers.
  • Creating better reference resolvers for near-perfect error communication

And the list really goes on.

One of the things mentioned was nullable fields. That is a very important part of schema design.

Take an array definition in GraphQL (I'm doing this on my phone so please excuse any minor syntax issues if I miss an autocorrect)

type arrayExample {
 propName: [String]
}

This is a horrible array design here. It's a nightmare for the front end.
propName could have the following outputs:

- null
- [null]
- []
- ["some value"]

A better practice would be to stick to the convention of:

type arrayExample {
 propName: [String!]!
}

Adding the ! after String means the array may not contain null entries, and the ! after the array means propName itself must not be null (it must always return an array). Together, propName may only be an empty array or an array containing one or more strings.

That reduces us to the following potential outputs:

- []
- ["some value"]

This is just one example but it makes a night and day difference to a front end developer.
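To make that concrete on the client side, the generated TypeScript types end up looking roughly like this (hand-written stand-ins, not exact codegen output):

```typescript
// [String]: array nullable, elements nullable.
type LoosePropName = Array<string | null> | null;

// [String!]!: non-null array of non-null strings.
type StrictPropName = string[];

// Consuming the loose shape needs a guard at every step:
function joinLoose(prop: LoosePropName): string {
  return (prop ?? []).filter((s): s is string => s !== null).join(", ");
}

// The strict shape needs none:
function joinStrict(prop: StrictPropName): string {
  return prop.join(", ");
}
```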

leob

Great response from someone with actual deep experience with this stuff - main takeaway:

"it has it place and that place is not everywhere"

I'm comparing it a bit with the NoSQL "hype" of a few years ago - once the hype was over, we were able to ask the question: "which use cases does NoSQL fit, and which does it NOT fit?"

Turns out that in most cases you just want an RDBMS, and I guess the same goes for GraphQL - in most cases you just want simple REST, but that doesn't mean GraphQL is useless ... as always, use the right tool for the job!

Steven Brown

You are 100% correct! At our scale, I couldn't imagine our front end teams being able to accomplish what we have using REST and still being able to maintain the allowable thresholds for downtime or production issues.

One of my personal projects, I opted for a combination of GraphQL and REST using NestJS with nuxt 3 for the frontend app.

Nuxt because I love working with Vue.
Nest because I can get up and running quickly while keeping some modularity, so it's easy to pull one module out into a separate service later (this is planned for the future).
REST because it works best for exchanging auth codes for tokens as well as receiving webhook events from my auth provider.
GraphQL for a few reasons:

  • I use it every day and am very comfortable with it for my use case, so I'll be able to get to market faster
  • easier to adapt if I do end up splitting my services
  • I'm quicker at building GraphQL schemas than I am building OpenAPI docs 🤷🏼

Your NoSQL analogy is dead on!

leob

Thanks ... those are great choices, I'm a Vue fan as well !

Ben Sinclair

My main takeaway is "I'm doing this on my phone". I abandon anything I'm answering on my phone if it looks like I'll need more than a couple of sentences because it's such a painful process compared to using a real computer. Props.

TheThirdRace • Edited

A better practice would be to stick to the convention of:

type arrayExample {
    propName: [String!]!
}

I 100% agree with this. But you wouldn't believe how much I had to fight for the backend devs to adjust their practice accordingly.

And it's not just some random devs; I worked for a worldwide entertainment company that does business in pretty much every country in the world. Every damn backend dev I interacted with was screeching and wailing just at the idea of having to manage their types correctly in the response 🤷‍♂️

GraphQL is made to have null on every property, manage it on the frontend
-- Every. Single. Backend. Dev

So, while you bring a lot of good points, and I do agree GraphQL is definitely not the problem in all situations, I can say with confidence that I would not recommend using GraphQL, unless of course you are extremely strict on the typings and manage to rein in your people and get them on the program.

My experience, which is only my experience, has been that devs are lazy. They will jump at every opportunity to avoid some work. Then slowly but surely, you're now managing a ton of typing idiosyncrasies in the frontend...

Unless you're transferring an extreme amount of data, the trade-offs are not worth it.

Daniel Lo Nigro

@TheThirdRace Fields are usually nullable in GraphQL schemas mostly to handle errors - the field will be replaced with a null if an error is thrown server-side. This is finally changing with the "semantic nullability" proposal. If that goes well (it's currently an experimental feature in Relay and I think Apollo too), I think GraphQL best practices will change.

ASafaeirad

Hi @krd8ssb,

Thank you for taking the time to read my article and for sharing your valuable insights! I really appreciate hearing from someone with extensive experience in GraphQL, especially at the scale you're working with.

You're absolutely right—this article is heavily client-focused, and I agree that schema design is pivotal in GraphQL and can make or break the developer experience on the front end. However, some of the points I mentioned have roots in GraphQL's inherent design decisions and cannot be entirely solved through better schema design alone.

In my article, I aimed to highlight the reality of how many companies use GraphQL in practice, even when following best practices. To better illustrate this, I used some industry standard tools (like Hasura and GraphQL Code Generator) and avoided random GraphQL schemas and practices.

Also, poor schema design can indeed lead to issues like unnecessary nulls. However, even with a perfectly designed GraphQL schema, we still have to have nulls due to the inherent nature of dealing with I/O—as GraphQL's own best practices suggest.

I've tried my best to demonstrate that these problems are inherent in GraphQL's design and not merely the result of skill issues.

However, I still agree there can be way more benefits if we increase the scope beyond the client side only.

Steven Brown

Your article was great and I don't disagree with you at all. You did a great job demonstrating a few of the pitfalls/difficulties experienced from the front end perspective. It is definitely not for everyone and every scenario. I find myself switching between REST, GraphQL, and GRPC depending on the systems or services I'm building.

IMHO, it comes down to understanding your tool kit and in knowing what tool to use for what job. The more you know about each, the better, quicker, and more confidently you can make your decisions.

Thank you for the write up!

Saurabh Rai

Here, GraphQL. It does this:

[meme image]

Peshal Bhardwaj

Accurate one 🤣

Daniel Lo Nigro

There's nothing in the GraphQL spec that requires this behaviour... Blame whatever server-side library you use :)

Anmol Baranwal

Interesting read. I think both have their own merits (based on how huge stuff we are working on) but we often end up complicating things more than we realize. Still, companies like Medium use GraphQL for obvious reasons.

Doug Wilson

Man, I wish this were called out more often, and the examples you provide are first rate. Thank you for sharing! Bookmarking!

Ricardo Esteves

Nice article @frontendmonster! True, I agree. A nice solution to get the best of both worlds is EdgeDB. Tried it the other day and it's amazing how it works, to be honest.

Mahdi Sheibak

Thank you, very useful article!

Daniel Lo Nigro

Have you tried Relay? It solves most of the problems you've mentioned.

With Relay, you create one GraphQL fragment per component. The Relay compiler automatically generates either a TypeScript or Flow type for each fragment, and the useFragment hook will only return the fields that component asks for in its fragment. This means you're not passing down a loosely typed object. You still pass data via props, but each component only sees the data that it requests, not the data the other components need.

For cache invalidation, Relay will automatically update its cache based on data returned from a mutation, even if it's just partial data (for example, if the mutation only updates one field, and only that field is in the query). Relay's cache is keyed by object ID (which is expected to be globally unique across your whole app), so it knows the right cache entry to update. It also has strongly-typed optimistic updates, and even strongly-typed manual cache updates in the rare case that you need them.

Regarding nulls, it's being worked on as part of semantic nullability:

Relay (and I think Apollo?) has implemented the experimental @semanticNonNull annotation, which makes fields non-nullable if the only reason they can be null is due to an error (and errors will throw an exception client-side instead of nulling out the field).

ASafaeirad

Thank you for sharing this! Yes, IMO Relay’s approach is "the" proper way to use GraphQL in components. However, for some reason, there seems to be an unwritten rule that Relay isn’t intended for the community. I’m not sure why I have this feeling, but I noticed Apollo and TheGuild solutions being used everywhere and Relay became an internal tool for Meta (like Flow).

I also wasn’t aware of @semanticNonNull. It looks like a game-changer—thanks for highlighting it!

Chua Kang Ming • Edited

I have to say, the issues this article states are quite valid, but I'm not seeing any solution. Is it trying to say that REST has no such issues? These are not the cost of changing from REST to GraphQL, because all these issues happen in REST too, so I think this is quite a bad article.

GraphQL at least gives frontend developers a reliable data contract that we can use to do codegen with TypeScript; that single reason is powerful enough for every system to use GraphQL.

ASafaeirad

Thanks for your comment! The article isn’t about promoting REST or denying its issues but rather highlighting what we need to sacrifice when adopting GraphQL, particularly on the client side. While GraphQL offers benefits like introspection and reliable type generation, these aren’t exclusive to GraphQL—technologies such as OpenAPI and Orval for REST or tRPC can provide almost similar type-safety and can be utilized in appropriate scenarios.

Cole

I used to work for a Fortune 500 and one of my tasks was building an API to serve current state data for IoT devices. The catches were that the devices were incredibly rich in the amount of information they would provide and creating new endpoints was a multiple team effort concerning API gateways and intermediary BFF APIs that would add context to requests.

The frontend needed something flexible to query information, layers of APIs away, that needed to be spun up in a short amount of time, support filtering, sorting, and pagination. We opted to build a REST API which ultimately behaved very similar to GraphQL. Everything was nullable because devices may not have reported any given property at any given moment. We built our backend in Java, so everything was strongly typed. We favored composition over inheritance but even so there weren't many levels of depth to the schema.

Looking back we could have definitely leveraged GraphQL, but it was a fun exercise in building something purpose built to behave like GraphQL. The hardest parts were safe dynamic query building and query complexity monitoring. In order to fetch our data we leveraged runtime reflection and a little bit of caching alongside it in order to determine if a provided filter was applicable to a given field based on the field's type. Each filter was then dynamically added to a query and provided inputs were sanitized and parameterized. Each requested property would add some level of complexity to the query which was completely tuneable. Same with filters and sorting. The more filtering and sorting requested, the higher the complexity score. At a certain threshold we would deny requests.

Overall, I'd agree with your position. GraphQL and one stop shops are hard to code and don't fit all cases, but in the cases where they make sense, boy do they save some work, and wow are they cool.

Thad Guidry • Edited

Your use case is why we created and open-sourced DB2Rest (db2rest.com), which automatically creates a REST API for your database and safely and securely acts as an API gateway. Responsibilities for data management are thus placed most effectively with the data team (DBAs, etc.), and querying and filtering are left to the frontend team, who don't even need to learn SQL thanks to DB2Rest's easy URL parameter syntax.

Tomas Rehak

Just use the Node interface and proper state management/caching, like the Apollo cache. The problems you have come from a lack of understanding of GraphQL in its full power. No worries, it takes time! We are organizing a GraphQL FE workshop if you would like to join.