Learn to Architect and Test GraphQL Servers by Observing Spectrum
Something that has held my interest recently has been finding better ways to build and test JavaScript applications, particularly those that speak GraphQL.
Say I have a GraphQL server written in Node.js: how should I arrange my folder structure? Where should I put my schema and resolvers? Should my type definitions be co-located with their respective resolvers?
What is a good way to go about testing my /graphql endpoint for all of my different queries and mutations?
Recently, spectrum.chat open sourced their whole stack. That means you and I can head on over to their repo and study their source code. My plan was to observe how they architect their JavaScript applications and steal a few ideas for my own apps. Hopefully we’ll be able to answer some of my questions posed above.
By diving into this open source classroom you can learn how to work with these technologies like a pro (shamelessly stolen from their readme):
- RethinkDB: Data storage
- Redis: Background jobs and caching
- GraphQL: API, powered by the entire Apollo toolchain
- Flowtype: Type-safe JavaScript
- PassportJS: Authentication
- React: Frontend and mobile apps
- Expo: Mobile apps (React Native)
- DraftJS: WYSIWYG writing experience on the web
Today, we’ll start by taking a look at the way they lay out their GraphQL API.
GraphQL Folder Structure
The first thing we’ll take a look at is how Spectrum’s folder structure works.
server/
├── loaders
├── migrations
├── models
├── mutations
├── queries
├── routes
├── subscriptions
├── test
├── types
│   └── scalars.js
├── README.md
├── index.js # Runs the actual servers
└── schema.js
Let’s begin by noting that there is already documentation in place that describes what each part of the application handles. There, you’ll also be able to learn about the strange Greek naming convention for all of their backend services.
Loaders implement Facebook’s DataLoader for each of Spectrum’s resources in order to batch and cache requests. Optimization stuff, but we’re just getting started, so let’s not worry about it.
Migrations allow the developer to seed data in order to test the application. It contains a bunch of static default data but it also uses the faker library, allowing you to fake a whole bunch of data like users, channels, and message threads.
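To get a feel for what that seeding looks like, here’s a tiny sketch of my own in the same spirit; the fields and helper names are made up for illustration, not Spectrum’s actual migration code:

import faker from 'faker';

// Generate one fake user document (the shape here is hypothetical).
const generateUser = () => ({
  id: faker.random.uuid(),
  username: faker.internet.userName(),
  name: faker.name.findName(),
  createdAt: faker.date.past(),
});

// Build however many dummy users we want to insert into the test database.
export const seedUsers = (count = 20) => Array.from({ length: count }, generateUser);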
Models describe how the API interfaces with the database; for each resource (users, channels, etc.) there exists a set of functions that can be used to query or mutate that data in the database.
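To picture what one of those modules might contain, here’s a rough sketch of my own (Spectrum talks to RethinkDB, but the db import and function names below are assumptions, not their actual code):

// models/channel.js (hypothetical): thin query functions over the database driver.
import db from './db'; // assumed shared rethinkdbdash instance

// Fetch a single channel document by its primary key.
export const getChannelById = channelId => db.table('channels').get(channelId).run();

// Fetch every channel that belongs to a given community.
export const getChannelsByCommunity = communityId =>
  db.table('channels').getAll(communityId, { index: 'communityId' }).run();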
Queries holds the resolver functions that describe how to fetch data: which items and fields to return, and how to paginate them.
Mutations holds the resolver functions that describe how to create new data and how to update or delete existing data.
Resolvers are a neat way to describe functions that call the proper services in order to fetch the data demanded by the client. For example, consider this query:
query GetChannelsByUser {
  user(id: "some-user-id") {
    channels {
      members
    }
  }
}
This particular query fetches a single user by ID, while also fetching all of the channels that they are a part of and the members of those channels. To figure out how to do that, well, that is the role of the resolver functions.
In this case, there are three resolver functions: one to get the user, one to fetch that user’s channels, and another to fetch all of the members for each of the channels fetched. That last resolver function may even run n times, once for each channel.
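To make that concrete, here’s a minimal sketch of what those three resolvers could look like. This is my own illustration, and the models helpers on the context are hypothetical, not Spectrum’s actual resolver code:

const resolvers = {
  Query: {
    // 1. Resolve the top-level user(id: ...) field.
    user: (_, { id }, { models }) => models.user.getById(id),
  },
  User: {
    // 2. Resolve the channels the fetched user is a part of.
    channels: (user, _, { models }) => models.channel.getByUserId(user.id),
  },
  Channel: {
    // 3. Resolve a channel's members; this one runs once per channel from above.
    members: (channel, _, { models }) => models.user.getByChannelId(channel.id),
  },
};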
You might notice that this query can get very heavy. What if there are thousands of members in multiple channels? That’s where the loaders would come in handy. But we’ll not go there today.
Subscriptions allow the server to push messages and notifications down to the users on the mobile or web clients using a WebSocket server.
Test contains tests for the queries and mutations themselves, run against an actual database. We’ll go through a couple later.
Types refer to GraphQL schema types, the fields you can query by and the relations between them. When the server is started, the schema is created by merging the types together.
Routes contains the route handlers and the middleware for the more conventional RESTful webhooks. Examples include Slack integrations and email unsubscribing.
On the same level as each of these folders is the schema.js file, which merges together all of the type definitions and resolvers into a usable GraphQL schema.
Finally, there is the index.js, which fires up our backend API as well as the WebSocket server for handling subscriptions. This last file isn’t as interesting to me; I already know how to set up a Node.js server with middleware.
Schema-First Development
According to Facebook, you should build out your schema before you get started on any business logic. If your schema is done well, you can be more confident in executing your business logic.
Extending the Root Types
Let’s take a look at the root schema.js file, where all of the queries, mutations, and type definitions are imported into the project. I want to note the shape of the root types:
type Query {
  dummy: String
}

type Mutation {
  dummy: String
}

type Subscription {
  dummy: String
}

schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}
As the project owners note in their comments, these root types are just placeholders; each resource merely extends them when it defines its own types! This is amazing, because until I saw this project, I was doing something like this:
type Query {
  contents(offset: Int = 0, limit: Int = 10): [Content]
  tags(offset: Int = 0, limit: Int = 10): [Tag]
  users(offset: Int = 0, limit: Int = 20, field: String): [User]
  # And many more queries...
}

type Mutation {
  createContent(text: String): Content
  updateContent(id: ID!, text: String): Content
  deleteContent(id: ID!): Content
  createUser(username: String!): User
  updateUser(id: ID!, username: String!): User
  # I don't want to write all of these here...
}
As much as I like spaghetti, a schema like this is bound to get out of hand in a big application. Here is how Spectrum extends their queries instead; you could probably learn this from reading the docs all the way to the end, too.
extend type Query {
  channel(id: ID, channelSlug: String, communitySlug: String): Channel @cost(complexity: 1)
}

extend type Mutation {
  createChannel(input: CreateChannelInput!): Channel
  editChannel(input: EditChannelInput!): Channel
  deleteChannel(channelId: ID!): Boolean
  # ...more Channel mutations
}
Defining Input Types
Something else you may notice about the above gist is that their mutations do not list out every single argument they require (like mine did above 😮).
Rather, they create specific types for each different mutation that takes more arguments than a mere ID. These types are defined in GraphQL schemas as input types.
input CreateChannelInput {
  name: String!
  slug: String!
  description: String
  communityId: ID!
  isPrivate: Boolean
  isDefault: Boolean
}

input EditChannelInput {
  name: String
  slug: String
  description: String
  isPrivate: Boolean
  channelId: ID!
}
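The payoff shows up when you actually call the mutation: the whole payload travels as a single input variable instead of a long argument list. Something like this (my own example, with made-up values):

const CREATE_CHANNEL = /* GraphQL */ `
  mutation CreateChannel($input: CreateChannelInput!) {
    createChannel(input: $input) {
      id
      slug
    }
  }
`;

// The variables object mirrors the CreateChannelInput shape exactly.
const variables = {
  input: {
    name: 'General',
    slug: 'general',
    communityId: 'some-community-id',
    isPrivate: false,
  },
};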
Sure enough, if I had actually read all of the docs, I might have seen this. As I was writing GraphQL APIs, I thought some parts were funny: “Why must I write all of these input fields here?!”
It really helps you learn when you do something with brute force a bunch of times and then find out the right way to do it later.
This applies to many things in the realm of software development and beyond. It’s like when you find out your table tennis stroke was wrong all along even though it won you a few games. Well, my stroke is still wrong but at least I’m aware of it. 😅
Connections and Edges
Well-built GraphQL APIs tend to have a sort of interface for the items in their dataset, one that helps with cursors or pagination when fetching data. For example, say we want to grab all of the members in a particular channel:
type Channel {
  id: ID!
  createdAt: Date!
  modifiedAt: Date
  name: String!
  description: String!
  slug: String!
  memberConnection(first: Int = 10, after: String): ChannelMembersConnection! @cost(complexity: 1, multiplier: "first")
  memberCount: Int!
  # other fields omitted for brevity
}
By specifying that the member field is a connection, the consumer of the API knows that they are dealing with a custom, non-primitive type, one that conforms to the way the API’s cursors work.
In the Spectrum API, they use the arguments first and after to handle cursoring.
- first is just a number telling the query how many items to fetch; some APIs use limit for this.
- after is a string that acts as the offset: if I specify a string of “some-item-id”, the query will fetch the first n items after that item. In the Spectrum API, that cursor string is actually base64-encoded.
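In Node, those encode and decode helpers can be as small as this; a sketch of mine built on Buffer, so Spectrum’s actual utilities may differ:

// Turn a plain cursor string like "user-abc-4" into its base64 form, and back.
export const encode = str => Buffer.from(str, 'utf8').toString('base64');
export const decode = cursor => Buffer.from(cursor, 'base64').toString('utf8');

// e.g. encode('user-abc-4') === 'dXNlci1hYmMtNA=='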
The ChannelMembersConnection type looks like this:
type ChannelMembersConnection {
  pageInfo: PageInfo!
  edges: [ChannelMemberEdge!]
}

type ChannelMemberEdge {
  cursor: String!
  node: User!
}
When one of the types we defined in GraphQL references another custom type, like how our Channel references a member (which is just a User), we define connection and edge types like these in order to work with those other types. The data we probably care about lives inside the node field of each edge, where “edge” is just a fancy term for an item that was fetched.
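In the response data, a single edge ends up looking roughly like this (values are made up):

const exampleEdge = {
  // A base64-encoded "<user id>-<index>" string; here, "some-user-id-1".
  cursor: 'c29tZS11c2VyLWlkLTE=',
  // The actual User we asked for.
  node: {
    id: 'some-user-id',
    name: 'Some User',
  },
};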
The connection’s pageInfo brings back some metadata about whether there is a next or previous page in the set. Now let’s see this membersConnection in action.
Example Query: membersConnection
// (Imports for encode/decode, getMembersInChannel, and the Flow types are omitted in this excerpt.)
export default (
  { id }: DBChannel,
  { first, after }: PaginationOptions,
  { loaders }: GraphQLContext
) => {
  const cursor = decode(after);
  const lastDigits = cursor.match(/-(\d+)$/);
  const lastUserIndex =
    lastDigits && lastDigits.length > 0 && parseInt(lastDigits[1], 10);
  return getMembersInChannel(id, { first, after: lastUserIndex })
    .then(users => loaders.user.loadMany(users))
    .then(result => ({
      pageInfo: {
        hasNextPage: result && result.length >= first,
      },
      edges: result.filter(Boolean).map((user, index) => ({
        cursor: encode(`${user.id}-${lastUserIndex + index + 1}`),
        node: user,
      })),
    }));
};
When we send up a query to grab a Channel and ask for the membersConnection, the server will execute this resolver function.
You’ll notice that it has some strange syntax in the function arguments at the top. No need to be alarmed; they use FlowType.
This function begins by decoding the after parameter into a cursor and then searching for the digits at the end of the decoded string. It uses these digits to figure out where to begin the query.
It then calls a function from the layer that handles interactions with the database. When the database query is executed, this function takes the results and builds the pageInfo and edges we noted earlier.
You can also get a glimpse of how the cursor is encoded; each edge builds a string out of the item’s id and the index at which it appears in the query results. That way, when the cursor is decoded, the resolver knows which item and which index it is looking at.
Testing GraphQL Queries
Something that has been on my mind recently is how I should go about testing my GraphQL server. Should I just unit test the resolver functions, or what? Looking to Spectrum, they actually test their queries against the test database directly. According to their team, when the test suite is run:
Before running the tests this will set up a RethinkDB database locally called "testing". It will run the migrations over it and then insert some dummy data. This is important because we test our GraphQL API against the real database, we don't mock anything, to make sure everything is working 100%.
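I won’t dig into exactly how they wire that up, but one way to hang this kind of setup onto Jest is a globalSetup script that creates and seeds the database before the suite runs; the path below is hypothetical, not Spectrum’s actual config:

// jest.config.js (sketch)
module.exports = {
  testEnvironment: 'node',
  // Runs once before the whole suite: create the "testing" database,
  // run the migrations, and insert the dummy seed data.
  globalSetup: '<rootDir>/server/test/setupTestDatabase.js',
};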
After they do this, they can utilize a request utility function that executes queries directly against the schema, standing in for what would otherwise hit the API’s /graphql route.
// @flow
import { graphql } from 'graphql';
import createLoaders from '../loaders';
import schema from '../schema';

type Options = {
  context?: {
    user?: ?Object,
  },
  variables?: ?Object,
};

// Nice little helper function for tests
export const request = (query: mixed, { context, variables }: Options = {}) =>
  graphql(
    schema,
    query,
    undefined,
    { loaders: createLoaders(), ...context },
    variables
  );
With this utility, we can now execute automated test queries against our server. Here’s an example query that could test the membersConnection query we checked out earlier.
import { request } from '../../utils';
import { SPECTRUM_GENERAL_CHANNEL_ID } from '../../../migrations/seed/default/constants';

it('should fetch a channels member connection', async () => {
  const query = /* GraphQL */ `
    {
      channel(id: "${SPECTRUM_GENERAL_CHANNEL_ID}") {
        id
        memberConnection(after: null) {
          pageInfo {
            hasNextPage
            hasPreviousPage
          }
          edges {
            cursor
            node {
              id
              name
              contextPermissions {
                communityId
                reputation
              }
            }
          }
        }
      }
    }
  `;
  expect.assertions(1);
  const result = await request(query);
  expect(result).toMatchSnapshot();
});
Assuming their test data is the same between executions, we can actually take advantage of snapshots here! I thought this was a really neat use case for it; given some default data set, you will always expect the query to return a specific shape of data.
If one of the resolver functions relating to that query is changed, Jest will alert us to the diff in the snapshot.
How neat is that?
That about does it for me. I definitely learned a lot about building better GraphQL servers from combing through Spectrum’s API.
There are several things I didn’t really cover, like subscriptions, directives, or authentication.
If you’re itching to learn about those subjects, perhaps check out these links:
- “Securing Your GraphQL API from Malicious Queries” by Max Stoiber
- “A Guide to Authentication in GraphQL” by Jonas Helfer
- “Reusable GraphQL Schema Directives” by Ben Newman
- “GraphQL Subscriptions in Apollo Client” by Amanda Liu
Curious for more posts or witty remarks? Follow me on Medium, Github and Twitter!