
Sachin Thakur

Originally published at bitoverflow.net

Securing GraphQL API from malicious queries

Building GraphQL APIs has become easy with all the libraries and online communities around GraphQL, but you probably still have some questions: how do we actually secure our server? How do we restrict or whitelist only certain queries to run on our server?

Now, if you have ever used GraphQL, you might be aware of query loops. Let's look at an example.

{
  author{
    name
    books{
        name
        author{
          books{
            name
            author{
              name
            }
          }
        }
    }
  }
}

Now, do you see any issue with the above query? We can nest it indefinitely, and if someone runs a query like this against our server it can definitely crash it or create a DoS-style attack. This is a real problem: a malicious user can craft a very deeply nested query that will hurt your backend. There are many approaches to solving this problem. Let's look at a few of them.


Size Limiting

One very naive approach would be to limit the size of the query in raw bytes, since in GraphQL all requests are treated as POST requests and every query is part of the body as a stringified object. This might not work in all cases and can end up hurting you instead, as some of your valid queries with long field names might end up failing.

// Maximum allowed raw query length; the env value arrives as a string, so parse it.
const QUERY_SIZE_ALLOWED = parseInt(process.env.QUERY_SIZE_ALLOWED, 10) || 2000;
const query = req.body.query || '';
if (query.length > QUERY_SIZE_ALLOWED) {
  // logic for handling error.
}

You can run the above check inside a middleware; it will then run for every request coming into your GraphQL server, validate each query, and reject any query that is too long.
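To make the wiring concrete, here is a minimal sketch of that check as Express middleware. The route path, status code, and error shape are assumptions for illustration, not from the original:

const express = require('express');

const app = express();
app.use(express.json());

// Maximum allowed query length in characters; 2000 is just a default assumption.
const QUERY_SIZE_ALLOWED = parseInt(process.env.QUERY_SIZE_ALLOWED, 10) || 2000;

app.use('/graphql', (req, res, next) => {
  const query = (req.body && req.body.query) || '';
  if (query.length > QUERY_SIZE_ALLOWED) {
    // Reject oversized queries before they ever reach the GraphQL executor.
    return res.status(413).json({ error: 'Query is too large.' });
  }
  next();
});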


Depth Limiting

Another approach would be to limit nesting to an nth level. You define to what depth you allow a query to execute and reject anything nested beyond that level. One really good package for this is graphql-depth-limit, which lets us define the maximum depth of query we want to allow on our server. graphql-depth-limit works really well with both Express and Koa, and even if you are using Apollo Server it works well with that too.

const depthLimit = require('graphql-depth-limit');

// The env value arrives as a string, so parse it into a number.
const QUERY_LIMIT = parseInt(process.env.QUERY_LIMIT, 10) || 5;
app.use('/graphql', graphqlHTTP((req, res) => ({
  schema,
  // Reject any query nested deeper than QUERY_LIMIT levels.
  validationRules: [depthLimit(QUERY_LIMIT)],
})))
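Since the article mentions Apollo Server as well, here is a minimal sketch of the same rule plugged into it (the 5-level cap mirrors QUERY_LIMIT above and is just an assumption):

const depthLimit = require('graphql-depth-limit');
const { ApolloServer } = require('apollo-server');

// Apollo Server accepts the same graphql-js validation rules.
const server = new ApolloServer({
  schema,
  validationRules: [depthLimit(5)],
});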

Query Cost Analysis

Now, with depth limiting we restrict queries to the nth level, but that might not suit every case: sometimes the depth can be much smaller, yet the cost of computing the query can be very high. This happens when we fetch a lot of data in a single query and put a lot of load on our backend or database server. Such queries might look something like this.

{
  author(first:40){
    name
    books(first:40){
      similar(first:10){
        name
        author{
          name
        }
      }
      name
      id
    }
  }
}

Even though this query is only two levels deep, you can see its complexity: the amount of data it will request from the database server and the computation happening on the backend. Neither depth limiting nor size limiting would catch this, so we need something more robust that can handle these kinds of queries.

In such cases we need query cost analysis, where our server computes the cost of each query and decides whether to allow it or reject it. We need to analyze each query before running it, and if it is too complex or too expensive, block it. There are numerous open-source libraries built by some really smart people for this; one of them is graphql-validation-complexity, which is really helpful for doing just that. You can define complexity separately for each field, such as different complexities for scalar types and for objects. There is also graphql-query-complexity, which calculates complexity field by field, unlike graphql-validation-complexity, which calculates it based on types. Adding query cost analysis using either of these libraries is pretty straightforward.

const { createComplexityLimitRule } = require('graphql-validation-complexity');

const apolloServer = new ApolloServer({
  schema,
  // Reject any query whose computed cost exceeds 1000.
  validationRules: [createComplexityLimitRule(1000)],
});
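graphql-query-complexity plugs in much the same way. Here is a minimal sketch assuming its createComplexityRule and simpleEstimator exports; the exact API has varied between versions, so check the one you have installed:

const { createComplexityRule, simpleEstimator } = require('graphql-query-complexity');

const complexityRule = createComplexityRule({
  // Reject queries whose total computed cost exceeds 1000.
  maximumComplexity: 1000,
  // Assume a cost of 1 per field when no explicit estimate is provided.
  estimators: [simpleEstimator({ defaultComplexity: 1 })],
  onComplete: (complexity) => console.log('Query complexity:', complexity),
});

const apolloServer = new ApolloServer({
  schema,
  validationRules: [complexityRule],
});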

Now, before you start implementing query cost analysis on your server, make sure your server really needs it; otherwise it will just be overhead and you will end up wasting resources and time. If your server does not do any complex relation fetching, you might be better off with just size limiting and depth limiting.


Query Whitelisting

Query whitelisting is a little complicated and can sometimes be a double-edged sword. Let me explain it in simple real-world terms: every restaurant assigns a name or number to each dish, so that instead of saying the whole name of the dish, like "cheese pizza with a double cheeseburger with olives and fries on the side", you can just say "Number 2", saving you both time and effort. In this case you are only saving a few words, but you are saving something. When it comes to requests from your client to your server, you can save a lot of request data if you don't send the entire query, just a hash of the query.

This is known as "persisted queries" in GraphQL terms; it saves you some data per request and protects your GraphQL API against malicious queries being executed on your server. So, what you basically need to do is compile a list of all the allowed queries ahead of time and check every incoming query against this list. You can even generate a hash for each query and just send the hash value in the request.

https://www.somewebsite.com/graphql/query/?query_hash=ad99dd9d364ewe6cc3c0dda65debcd266a7&variables=%7B%22user_id%22%3A%22221121370912475

The request will look something like the example above. No one can actually know which schema the server is running or which queries or mutations are being executed; it's just a hash. If your queries are totally static and you are not using a library like Relay to generate them dynamically, this might be the most reliable approach for you. You can even automate the entire process of hashing the queries and shipping them inside your production application, and you won't need query validation on the server, since you already know all the queries being run against it.
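To make this concrete, here is a minimal sketch of a persisted-query lookup as Express middleware. The persistedQueries map, the SHA-256 choice, and the error shape are illustrative assumptions, not a standard API:

const crypto = require('crypto');

// Hypothetical allow-list built at deploy time, mapping hash -> query text.
// At build time: persistedQueries[hashQuery(text)] = text;
const persistedQueries = {};

const hashQuery = (query) =>
  crypto.createHash('sha256').update(query).digest('hex');

app.use('/graphql', (req, res, next) => {
  const query = persistedQueries[req.query.query_hash];
  if (!query) {
    // Anything not in the allow-list is rejected outright.
    return res.status(400).json({ error: 'Unknown query hash.' });
  }
  // Swap the hash for the real query text before the GraphQL executor runs.
  req.body = { ...req.body, query };
  next();
});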

But before you go ahead and implement query whitelisting, be aware of a few limitations and analyze whether it will be good for you or not.

  1. It will be really difficult for you to add, remove, or modify any query on your server, since you now have to communicate with all your clients and give them new hashes, and if anyone runs a slightly modified query it will result in query failure.
  2. If you are building public APIs that are accessible to developers other than your own team, it's really not a good idea to go with this approach.
  3. Unexpected slight changes in your queries can cause your application to crash if there is ever poor communication between the teams.

Conclusion

To summarize everything we have discussed in this article: I would recommend depth limiting as something probably every GraphQL server should have by default. After that, you can build on top of it to add more layers and make your server more secure. Query whitelisting, I feel, fits only a very specific type of application, and you should analyze it properly before implementing it. Another less talked-about approach is query timeouts, so your queries don't run forever and crash the server. Query cost analysis is a little more complicated, but it protects your server the most against malicious queries.
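On query timeouts, a common pattern is to race execution against a timer. This is a sketch, not from the original article; the 10-second limit and the withTimeout helper are assumptions, and note that it bounds response time without cancelling work already started on the backend:

const { graphql } = require('graphql');

// Reject the promise if it does not settle within ms milliseconds.
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('Query timed out.')), ms)),
  ]);

// Usage (inside an async handler), assuming graphql-js v16's object signature:
// const result = await withTimeout(graphql({ schema, source: query }), 10000);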
