
Building a Job Posting Platform with FaunaDB and Apollo

Author: Tigran Terian
Date: September 26, 2019
Originally posted on the Fauna blog.


TalentHub started as a small weekend project and evolved into a full-fledged open source job posting platform. The idea came from a simple problem my friends had while looking for a job: the lack of a clean, intuitive job board with location support. In the future, we plan to include applicant tracking functionality as well.

Ultimately, the mission is to make a single open source platform for employers and employees to help them solve job and talent matching challenges.

You can view the app in production here. TalentHub's code is 100% open source on GitHub; you can look through it here.

TalentHub Stack

TalentHub uses Apollo Server as a thin middleware between FaunaDB and Apollo Client. The main reasons for having a middleware layer were security and more control over resolvers: it hides auth tokens that would otherwise live in the client, and it gives more flexibility in resolvers, such as hashing passwords and generating tokens.
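
To make that concrete, here is a minimal sketch of what such a resolver layer could look like. Everything specific in it (the signupUser and createUser mutations, the passwordHash field, the JWT secret) is illustrative rather than TalentHub's actual schema:

import { ApolloServer, gql } from 'apollo-server'
import { GraphQLClient } from 'graphql-request'
import bcrypt from 'bcryptjs'
import jwt from 'jsonwebtoken'

// The Fauna key lives only on the server; the browser never sees it.
const fauna = new GraphQLClient(process.env.FAUNADB_API, {
  headers: { authorization: `Bearer ${process.env.FAUNADB_KEY}` },
})

const typeDefs = gql`
  type AuthPayload { token: String! }
  type Query { ping: String }
  type Mutation { signupUser(email: String!, password: String!): AuthPayload! }
`

const resolvers = {
  Query: { ping: () => 'pong' },
  Mutation: {
    signupUser: async (_, { email, password }) => {
      // Work Fauna's GraphQL endpoint won't do for us: hash the password first.
      const passwordHash = await bcrypt.hash(password, 10)

      // Forward the prepared data to a createUser mutation on the Fauna side.
      await fauna.request(
        `mutation ($email: String!, $passwordHash: String!) {
          createUser(data: { email: $email, passwordHash: $passwordHash }) { _id }
        }`,
        { email, passwordHash }
      )

      // Issue an app-level token, so the Fauna key itself never reaches the client.
      return { token: jwt.sign({ email }, process.env.JWT_SECRET) }
    },
  },
}

new ApolloServer({ typeDefs, resolvers }).listen(4000)

Apollo Client on the front end then talks only to this server, and the Fauna key and password handling stay behind it.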


Why TalentHub chose FaunaDB

Let's start with the need. Initially, TalentHub had its own back-end server that I wrote in Ruby on Rails. It was originally hosted on Heroku, and I also tried DigitalOcean. However, maintaining the server side, juggling pricing tiers, and managing performance became pretty cumbersome. Eventually, it was clear to me that having my own server slowed down the product delivery cycle and prevented me from shipping new features as fast as I would have liked.

So I started to look for database as a service (DBaaS) providers. As a bonus, I wanted it to have a GraphQL API so I could reuse a lot of components in the front-end.

The services I came across after some research were:

  • Firebase
  • graph.cool
  • FaunaDB
  • Scaphold.io

I decided against Firebase because of the incident with Parse (I mean its shutdown). I felt there was no guarantee that Firebase would not meet the same fate. Similarly, Scaphold.io was acquired by Amazon. Also, I love experimenting with new tech (I'm an early adopter) and supporting teams doing something new.

So, at the end of the day, I was left to choose between graph.cool and FaunaDB. While graph.cool looks cool (as its name claims) and has a lot of supporters, blog posts, and examples, I preferred FaunaDB's architecture, community, and pricing strategy. Specifically, the serverless nature of FaunaDB and its guaranteed global transactions appealed to me from an architectural standpoint.

Another decision-making factor was that FaunaDB has a user-friendly GraphQL interface with a visual dashboard for managing keys and databases and sanity-checking everything. It is also worth noting that FaunaDB has a nice and clean CLI, and it was pretty easy to set up and start using the system.

Finally, I discovered that the community around FaunaDB is awesome. It is still small, but I am sure that you will not get the same level of support and care anywhere else. This was definitely something I considered while choosing the service. I would recommend joining FaunaDB’s Community Slack to stay up to date and ask any questions you have.

So, I would say a friendly CLI, native GraphQL support, a nice community, and reasonable pricing were the main FaunaDB selling points.

What's next for TalentHub and FaunaDB?

While I have been very happy with FaunaDB, there are a few features I would like to see in the future. For example, the list includes stuff like:

  • Granular permission levels for access keys:
    At some point I had to share FaunaDB access keys in the front end and needed to prevent users from taking control of the DB. So one thing I wanted to see was a flexible ABAC system. I've learned that the Fauna team recently added this feature in version 2.7, in response to requests from the community (a small sketch of such a role follows the middleware example below).

  • Flexible field filtering:
    Again, I needed to perform fuzzy matching on certain fields in the database. That is not possible right now, so I ended up filtering on the client side (a sketch of that workaround also follows the middleware example below).

  • More control over field resolvers in GraphQL interface:
    Sometimes, one may need to enforce a field value in a mutation or have a conditional query, which is not achievable right now. A possible workaround here could be writing a thin middleware as a Netlify function that will suck raw data from FaunaDB, process it, and spit it out to the client.

An example of this kind of middleware could look like this:

import { ApolloServer } from 'apollo-server-lambda'
import { GraphQLClient } from 'graphql-request'
import { typeDefs } from './utils/schema'
import { customQuery } from './utils/queries'

// Talks to Fauna's GraphQL endpoint; the key stays in the function's environment.
const client = new GraphQLClient(process.env.FAUNADB_API, {
  headers: {
    authorization: `Bearer ${process.env.FAUNADB_KEY}`,
  }
})

const resolvers = {
  Query: {
    // Pull raw data from Fauna, ready to be processed before returning it.
    customQuery: async () => {
      const response = await client.request(customQuery)
      return response.customQuery
    }
  }
}

const server = new ApolloServer({
  typeDefs,
  resolvers
})

// Expose the server as a Netlify (AWS Lambda) handler.
exports.handler = server.createHandler()

This code snippet uses ApolloServer to build a GraphQL interface on top of Fauna. An obvious drawback is the extra hop: it slows down the response and adds unnecessary latency to every request.
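
Coming back to the first item on the wish list: with ABAC in 2.7, keys can be scoped to roles instead of having full database access. Here is a minimal sketch of what that could look like with the JavaScript driver (the collection name, role name, and environment variable are made up for illustration):

import faunadb from 'faunadb'

const q = faunadb.query

// An admin key is required to manage roles; it should never reach the browser.
const admin = new faunadb.Client({ secret: process.env.FAUNADB_ADMIN_KEY })

const setupPublicRole = async () => {
  // A role that can only read job postings and nothing else.
  await admin.query(
    q.CreateRole({
      name: 'public_job_reader',
      privileges: [
        {
          resource: q.Collection('jobs'),
          actions: { read: true, create: false, write: false, delete: false },
        },
      ],
    })
  )

  // A key scoped to that role is far safer to expose in the front end.
  const key = await admin.query(q.CreateKey({ role: q.Role('public_job_reader') }))
  console.log(key.secret)
}

setupPublicRole().catch(console.error)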
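
As for the flexible field filtering item, the client-side workaround I mention is nothing sophisticated; conceptually it boils down to something like this (the job fields and sample data are made up for illustration):

// Case-insensitive substring match across a few fields, done in the browser
// because Fauna cannot do fuzzy matching server-side yet.
const allJobs = [
  { title: 'Frontend Engineer', company: 'Acme', location: 'Yerevan' },
  { title: 'Data Analyst', company: 'Globex', location: 'Berlin' },
]

const matchesQuery = (job, query) => {
  const needle = query.trim().toLowerCase()
  return [job.title, job.company, job.location]
    .filter(Boolean)
    .some(field => field.toLowerCase().includes(needle))
}

console.log(allJobs.filter(job => matchesQuery(job, 'front')))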

Conclusion

In the early stages of a startup, when you still need to validate your product/market fit hypothesis, you want to ship a product as fast as possible. Ideally, you want to spend zero minutes on things that don't bring direct value to the customer, like setting up and maintaining back-end infrastructure.

Of course, you can scaffold a server in your favorite language and framework, but it can quickly become time-consuming and costly to maintain. You may argue that over time it will pay off and you'll have more control over it. Agreed, but that's not your goal right now. You need to ship a product as fast as possible with minimal effort. You need to be lean. You need to be agile.

That’s where FaunaDB makes perfect sense to me. With this service, I can go fully production-ready with a database-as-a-service in less than an hour, while it would have taken a day or two to containerize and self-host my own data and back-end, plus the headaches of maintaining it.

FaunaDB allowed me to build my service without worrying about the data infrastructure and all the operational burdens that historically come with it.
