Andy Richardson

GraphQL requests over HTTP/s are slow

HTTP/s?

What year do you think this is - 2020?

It's 2021, and I'm here to tell you about a way you can make your GraphQL requests faster and more reliable - using WebSockets! 🚀

Traditional transport mechanisms

Most people familiar with GraphQL are accustomed to using HTTP/s for query and mutation operations, and there's good reason for that! HTTP requests are easy to manage and scale thanks to their simple call-and-response nature.

WebSockets on the other hand, while often more performant, require the management of many sustained connections. For "atomic" operations like queries and mutations, the runtime complexity and infrastructure costs introduced by a long-running socket have traditionally been an understandable deterrent.

What if I were to tell you there's a way to have the best of both worlds?

Managed sockets

As serverless technology has continued to evolve, stateless solutions have been introduced to address the otherwise stateful task of managing sockets.

In the case of AWS, Big Jeff's managed WebSocket offering (API Gateway WebSocket APIs) was designed to solve this problem. At the time of writing, one million "connection minutes" costs a measly $0.25.

That isn't to say this is a perfect solution. The switch to a SaaS (sockets as a service) model requires a shift to a completely new API for working with sockets, and plug-and-play solutions are only just beginning to appear.
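
To make "stateless sockets" a little more tangible, here's a rough sketch of what a message handler can look like with API Gateway WebSocket APIs and the AWS SDK v3. The route, the echo behaviour, and the `WebSocketEvent` type are illustrative choices of mine, not something prescribed by AWS or by this post:

```ts
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from "@aws-sdk/client-apigatewaymanagementapi";

// Only the parts of the API Gateway WebSocket event used below.
type WebSocketEvent = {
  requestContext: { connectionId: string; domainName: string; stage: string };
  body?: string;
};

// Handler for a WebSocket route (e.g. "$default").
// API Gateway owns the long-lived connection; this function only sees
// individual messages plus a connectionId it can post replies back to.
export const handler = async (event: WebSocketEvent) => {
  const { connectionId, domainName, stage } = event.requestContext;

  const api = new ApiGatewayManagementApiClient({
    endpoint: `https://${domainName}/${stage}`,
  });

  // Echo the incoming message back down the caller's socket.
  await api.send(
    new PostToConnectionCommand({
      ConnectionId: connectionId,
      Data: Buffer.from(event.body ?? ""),
    })
  );

  return { statusCode: 200, body: "" };
};
```

The key point is that the function itself is stateless - the connection lives in API Gateway, and you only pay for connection minutes and messages.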

Improved performance (in theory)

Before considering a switch from HTTP/s to WebSocket, it's best to run through why this might be worthwhile.

With some exceptions, every query or mutation made over HTTP/s requires a new connection to be established - and that isn't free.

Opening and closing a connection comes with overhead (DNS resolution, TCP handshake, TLS negotiation), which adds latency and makes GraphQL requests take longer.

By instead using a WebSocket to communicate with a GraphQL endpoint, a single connection is sustained for the lifetime of the client - removing the per-request overhead found with HTTP/s.

You can think of it like this:
    HTTP/s: 100 queries/mutations -> 100 connections
    WebSocket: 100 queries/mutations -> 1 connection
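
As a rough TypeScript sketch of the difference, assuming a server that speaks the graphql-ws protocol (the URLs and query below are placeholders):

```ts
import { createClient } from "graphql-ws";

// HTTP/s: every operation is an independent request
// (and, without keep-alive, a brand new connection).
async function queryOverHttp() {
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query: "{ __typename }" }),
  });
  return res.json();
}

// WebSocket: one connection is opened lazily and reused for every operation.
const socket = createClient({ url: "wss://example.com/graphql" });

function queryOverSocket(): Promise<unknown> {
  return new Promise((resolve, reject) => {
    let result: unknown;
    socket.subscribe(
      { query: "{ __typename }" },
      {
        next: (value) => (result = value),
        error: reject,
        complete: () => resolve(result),
      }
    );
  });
}
```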

For lower-level details, check out this Stack Overflow discussion.

Performance isn't the only advantage. WebSocket connections typically have better fault tolerance, meaning clients on unstable connections should have an easier time sending and receiving messages.

Testing the theory

While the theory makes sense, I wanted to see whether a measurable difference could be observed when making requests over a single sustained connection.

To capture the real-world impact and feasibility, rather than just the protocol overhead alone, I created an end-to-end project and benchmarked both transports.
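
The full setup lives in the repo linked at the end, but the shape of the comparison is roughly this - a minimal sketch with placeholder URLs, query, and iteration count rather than the ones used in the actual benchmark:

```ts
import { createClient } from "graphql-ws";

const QUERY = "{ __typename }";
const ITERATIONS = 100;

async function timeHttp(url: string): Promise<number> {
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ query: QUERY }),
    });
  }
  return performance.now() - start;
}

async function timeSocket(url: string): Promise<number> {
  // In Node you may also need to pass `webSocketImpl` (e.g. from the "ws" package).
  const client = createClient({ url });
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    await new Promise((resolve, reject) => {
      client.subscribe(
        { query: QUERY },
        { next: () => undefined, error: reject, complete: () => resolve(undefined) }
      );
    });
  }
  const elapsed = performance.now() - start;
  await client.dispose();
  return elapsed;
}

async function main() {
  console.log(`HTTP/s    : ${Math.round(await timeHttp("https://example.com/graphql"))}ms`);
  console.log(`WebSocket : ${Math.round(await timeSocket("wss://example.com/graphql"))}ms`);
}

main().catch(console.error);
```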

Performance results

Results when using a 5G connection

After testing this a number of times, I was relieved to see a consistent performance improvement. But with a difference of only ~100ms, I was a little disappointed.

Realizing that this was still a roughly 30% improvement in speed, I decided to explore whether the latency reduction would be more evident on slower connections.

Results when using a simulated 3G connection

At this point, the impact became much more evident! With little to no effort or additional cost, I measured an improvement of over half a second (~600ms).

Making the switch

So your GraphQL endpoint is already on serverless infrastructure, and you want to take the leap - what needs to be done?

If you're already using GraphQL subscriptions (on serverless infrastructure) for push-based data, first off - give yourself a pat on the back, you trendsetter 👏! There's no work required other than configuring your client to send requests via the socket rather than HTTP/s.
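
As one example of what that client-side switch can look like - a minimal sketch using Apollo Client's GraphQLWsLink together with graphql-ws, where a single link carries queries, mutations, and subscriptions over the same socket (the endpoint URL is a placeholder, and your client of choice will have its own equivalent):

```ts
import { ApolloClient, InMemoryCache } from "@apollo/client";
import { GraphQLWsLink } from "@apollo/client/link/subscriptions";
import { createClient } from "graphql-ws";

// One sustained socket connection for queries, mutations, and subscriptions alike.
const wsLink = new GraphQLWsLink(
  createClient({ url: "wss://example.com/graphql" })
);

export const client = new ApolloClient({
  link: wsLink, // no HTTP link at all - everything goes over the socket
  cache: new InMemoryCache(),
});
```

If you'd still like some operations to go over HTTP/s, most clients let you split traffic by operation type (e.g. Apollo's split utility).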

More likely, however, your endpoint isn't using GraphQL subscriptions. In the case of AWS, the serverless socket offering has been around for a few years, but the work required to integrate it with existing GraphQL-over-WebSocket sub-protocols has been fairly substantial.

I've been working to change this and created Subscriptionless, a library designed to make socket-based GraphQL (queries, mutations, and subscriptions) simpler to implement on AWS's serverless infrastructure.

If you want to take the leap, check out the repo for a guide and example project. You can also try out the end-to-end project repo which was used to do this performance comparison.

Conclusion

So there you have it - network performance improvements at little to no cost!

Do we even need HTTP/s for GraphQL?

Do you see yourself using WebSockets more?

Share your thoughts below 💬


Thanks for reading!

If you enjoyed this post, be sure to react 🦄 or drop a comment below with any thoughts 🤔.

You can also hit me up on Twitter - @andyrichardsonn

Disclaimer: All thoughts and opinions expressed in this article are my own.

Oldest comments (4)

Aman Gautam

Really interesting. It will be exciting to see how it impacts server resources at a decent scale. For serverless, it will play an important role.

Robert Myers

Does using persistent connections (not websockets, just Connection: keep-alive or not having Connection: close) affect the http numbers at all?

Andy Richardson

Thanks for asking!

I gave it a quick shot this morning and it looks to close the gap a little, although there's still a measurable difference between the two.

Maarten Van Neyghem

100 connections for 100 requests isn't entirely true. When using HTTP/2, connections are reused across multiple requests made within a given timespan. You also can't underestimate the performance benefits of HTTP caching for queries that don't change often, which isn't available when using sockets.

Using serverless also comes with its own performance downsides, like long cold-start delays.