
Why People Like GraphQL

Stephen Mizell · Originally published at smizell.me · 2 min read

People like GraphQL, but I'm unconvinced it's primarily for technical reasons. I think it's something more human-related. People choose tools because they resolve some underlying frustration, and the current implementations of GraphQL solve some of the bigger frustrations people feel working with APIs.

The beginning of the web

Tim Berners-Lee provides an answer to the question "What made you think of the WWW?":

Well, I found it frustrating that in those days, there was different information on different computers, but you had to log on to different computers to get at it. Also, sometimes you had to learn a different program on each computer. So finding out how things worked was really difficult. Often it was just easier to go and ask people when they were having coffee.

It's interesting—though not surprising—the web was born out of frustration. Berners-Lee was tired of writing software to interact with so many different systems. "Can't we convert every information system so that it looks like part of some imaginary information system which everyone can read?" he asked. He says this is what became the WWW.

GitHub's reason for GraphQL

There has been a lot of writing about the technical reasons to choose GraphQL, but I believe those are secondary to the frustrations people feel working with APIs. Look at why GitHub moved to GraphQL.

[We] heard from integrators that our REST API also wasn’t very flexible. It sometimes required two or three separate calls to assemble a complete view of a resource. It seemed like our responses simultaneously sent too much data and didn’t include data that consumers needed.

The technical issue described here is what's called "overfetching" or "underfetching." But if you read into the quote, you can almost hear their customers saying, "I know what I want. Why do I have to do all this work to get it?" Or better yet, "Can't you convert all of your API resources so that they look like part of some imaginary information system which everyone can read?"
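To make that contrast concrete, here's a toy sketch in Python (the data and field names are hypothetical) of the difference: a REST endpoint returns whatever shape the server chose, while a GraphQL-style selection set keeps only what the consumer asked for.

```python
# Toy illustration with made-up data: a REST endpoint sends the whole
# resource, while a GraphQL-style selection returns only requested fields.

rest_response = {  # everything the server decided to send ("overfetching")
    "id": 42,
    "name": "Ada",
    "email": "ada@example.com",
    "avatar_url": "https://example.com/a.png",
    "created_at": "2019-01-01",
}

def select(resource, fields):
    """Mimic a GraphQL selection set: keep only the fields asked for."""
    return {f: resource[f] for f in fields}

print(select(rest_response, ["name", "email"]))
# {'name': 'Ada', 'email': 'ada@example.com'}
```

A real GraphQL server resolves each requested field rather than filtering a full response, but the consumer-side effect is the same: no extra fields and no extra round trips to assemble a view.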

Each web API today feels different. They all have their own SDK, their own client, their own generated tools, and their own nuances for exposing data over HTTP. Even within a single API, it can feel repetitive to request a list of resources to accomplish common goals. GraphQL tries to resolve this, and GitHub saw this.

GraphQL unifies a fragmented world

GraphQL lets people think about data and how it's related. It saves developers from piecing together a list of resources to understand how to get related data. It allows consumers to forget about the nuances of HTTP methods. People ask for what they want and that's what they get. This comes with the cost of losing out on some of HTTP's time-tested superpowers, but for most people, it's worth the cost.

The next generation of web APIs

I hope the next generation of web APIs will follow this example and solve some of these frustrations. I think that step will be more pragmatic and more human-centered than it will be technically superior.


Stephen Mizell

@smizell

I'm a software developer, and I spend a lot of time thinking about APIs, best practices, and culture.

Discussion


People like GraphQL, but I'm unconvinced it's primarily for technical reasons.

But you end up talking about technical reasons:

It allows consumers to forget about the nuances of HTTP methods.

I prefer to think about the nature of the data and the evolution of information systems, something like the reasons you cited from Tim Berners-Lee. The world has almost moved past the stage in which innovation relied on building new applications to implement well-known business processes. The next stage is to integrate those applications and profit from the large amount of data collected.

The very existence of software changes the dynamics of the human/app ecosystem: users learn more about what is possible with the information, and developers learn more about the problem domains where the information is generated. That results in new requirements and in applications evolving at greater speed.

Relational databases used to be a good fit for database-centric applications that don't change much. Their performance relied on indexes and other mechanisms that favor writing less and reading more.

But that changed with the advent of social networks and other apps from this stage of evolution. At Facebook, for example, the amount of information written is greater than the amount shown to friends. They needed something different. If implemented with relational databases, the engine would spend most of its time updating indexes, and performance would not be as expected.

A quick solution arose with NoSQL databases, but they required planning very carefully, at the moment of designing the storage, how the queries would be made. So they were not optimal for a constantly evolving app and internet ecosystem.

Furthermore, integrating information from disparate systems and presenting it with its relationships started to give shape to new business models. Think about booking systems, for example. New useful information might arise every day. Darcy DiNucci coined the term Web 2.0 for this stage of the web in her article "Fragmented Future."

It turns out that graph databases fit both cases:

  1. easy addition of information: The data schema is described in a simple structure and is itself data. Internally, everything takes the form [nodeA]-relation->[nodeB], represented by triples (infoA, relation, infoB) that can easily be added in order to merge and relate info from different systems. This is useful for integrating information from disparate systems while minimizing offline time.

  2. flexible querying: Once the information is in the graph, it can be queried however you need.

It's for this last point that a query language is needed.
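As a minimal sketch (with made-up facts), the triple idea can be shown in a few lines of Python: adding information is just appending a triple, and querying is pattern-matching with wildcards.

```python
# Minimal sketch of the (infoA, relation, infoB) idea: facts are triples,
# and new facts can be merged in without a schema migration.

triples = [
    ("alice", "knows", "bob"),
    ("bob", "works_at", "acme"),
    ("alice", "works_at", "acme"),
]

# Easy addition of information: appending another triple.
triples.append(("acme", "located_in", "berlin"))

def query(subject=None, relation=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [
        (s, r, o) for (s, r, o) in triples
        if subject in (None, s) and relation in (None, r) and obj in (None, o)
    ]

# Flexible querying: who works at acme?
print(query(relation="works_at", obj="acme"))
# [('bob', 'works_at', 'acme'), ('alice', 'works_at', 'acme')]
```

Real graph databases index these patterns for performance, but the shape of the data and of the queries is essentially this.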

I think this is a consequence of the evolution of the whole ecosystem of internet apps, but the causes are surely mostly technical.

I mean, it was not a functional requirement; it had nothing to do with the design of the business. It has to do with the technological (non-functional) requirements imposed on the IT that needed to support all of this: mostly the database systems, but surely the web services as well.

You can see a whole example developed step by step from the old paradigm to the new one in the book Programming the Semantic Web, which I consider a must-read for understanding the paradigm shift. The problem domain is related to a restaurant.

 

Wow, thanks for this comment. I was not expecting such a detailed response! I hope it's OK if I respond to a few things, though I think your comment could stand on its own. You should turn it into a post here if you haven't already :)

for sure the causes are mostly technical.

First, I believe the problems GraphQL arose from are different from the problems a majority of people face in their day-to-day work. Facebook needed things like GraphQL and React because they had dozens of teams working on the same page used by millions. This isn't the case for most. Many times it's one team working on the frontend.

But second, I think all solutions arise because of human needs, desires, frustrations, anxieties, tensions, pain points, etc. People moved to NoSQL because it's hard to evolve a database, and that skillset became valuable career-wise. Performance is important because people want what they want and they want it now. It's why I say technical issues are not the primary reason.

So I'm not arguing here that GraphQL is a non-technical choice, but rather people change to it or choose it because it makes it easier for people to work together, or it's easier for people to reason about the domain.

But you end talking about technical reasons

What I'm saying in my comment is that GraphQL allows people to forget about these nuances. In other words, they like it for what they don't have to do, and they realize they could get rid of many difficulties and frustrations with modern APIs. Being "easy" or "flexible" are the flip side of frustrations like, "This is too hard," and "I can't deliver quickly or safely change what I've built."

The world has almost surpassed already the stage on which innovation relied on building new applications to implement well known business processes. The next stage is to integrate those applications and make profit of the big amount of data collected.

I really like this thought. I've borrowed the phrase "transcend and include" where we take the things that exist and mash them together to create something new.

Thanks again for the great comment and also the book recommendation. :)

 

What I'm saying in my comment is that GraphQL allows people to forget about these nuances. In other words, they like it for what they don't have to do, and they realize they could get rid of many difficulties and frustrations with modern APIs.

As far as I understand, the GraphQL community feels very good about the current state of its toolset and capabilities:

  1. quickly exposing the data (system communications, HTTP and JS toolset, etc.)

  2. querying and combining info (its capabilities as a query language)

I just wanted to point out that the difficulties and frustrations most likely come from 2.

I remember that 15 years ago the Java and .NET communities had a very mature toolset (SOAP tools) and conceptual basis (SOA). I can assure you that I never felt such frustrations about 1 (the communication stack), just about 2 (querying and integrating data across disparate systems).

I really liked that the XML-based messages (SOAP) could be validated against published schemas (WSDL), which could be used to generate servers and clients. I still feel that things like OpenAPI are not as mature as WSDL.

I think the shift happened because microservices appeared for IoT; they needed short messages and less formal contracts. REST was minimalist and very useful for that. Given that it was based on JSON, the JavaScript community took control and started to play with integration from JS in the browser, removing the need for extra code to process the response before showing the results to the user.

But even with those advantages, I still think the past was not so frustrating for old API programmers regarding 1.

I'm still thinking about the advantages in 2. ;-)

Thank you for bringing this topic.

 

Won't it still be technically superior because of roundtrip/overfetching/underfetching reasons?

 

If the main metric for success for an API is payload size, then the implementations of GraphQL are tough to beat. Though there is nothing stopping you from doing the same thing with REST (what some call sparse field sets), GraphQL already has a solid implementation of reducing those responses. And it has great tooling. That doesn't exist for most web APIs.

GraphQL also has its own complexities. Its caching story is not great. Pagination is not as nice as adding next and prev links the way an API like GitHub's does. Errors are lacking compared to what HTTP allows.

I suppose my long-winded point here in the context of the post is that while those are great technologies, I'm still unconvinced people are going out to search for solutions for overfetching or underfetching. And I'm not sure these technologies make it conceptually a better choice than REST. REST has a lot of ideas outlined in the original spec that most people don't use. That makes it harder to compare for me.

But if you're building a JavaScript frontend or mobile app, wow, it's hard to compete with GraphQL.

 

I think you're confusing GraphQL with a certain implementation of it. It's as flexible as you build it to be. There's a convention Relay uses for pagination, but that's completely optional. If you want to add next and prev links and a page number, go for it. The same goes for caching. You may be using an implementation that doesn't handle caching the way you want, but caching has nothing to do with GraphQL itself, nor do any errors other than syntax errors. It's up to the one managing the API service to handle these things, as they define how GraphQL interfaces with their systems. GraphQL isn't a JavaScript or PHP module; it's just a more abstract language specification.

I’m speaking more about the entire GraphQL ecosystem in my comment. The stories around what I mentioned are fuzzy and developing over time. You are right, the implementation is up to the providers and the issues are not addressed in the spec.

The HTTP ecosystem on the other hand has several widespread specs and implementations for these things.

 

So, why are you skeptical about GraphQL?

 

I wouldn't say I'm skeptical of it. I think it's a phenomenal spec with many great implementations like Apollo. I don't think it's the solution for all API needs, but it's a great solution for many use cases. It does come with some costs when giving up on what HTTP provides.

 

Would you expand on these costs (perhaps in an additional post)? It would be really interesting to see another perspective on this.

I'm currently on the fence about GraphQL (the drawback I see is the major adaptation work required on the server side to map the relations: description, definition, implementation) and would like to know more about the pitfalls before jumping in.

Thanks for the post!