Wojciech Olejnik for Netguru

Commentable.rs - Building a Serverless Comment System in Rust

In this article I'm going to write about my journey of building Commentable.rs - a privacy-oriented, Disqus-like comment hosting solution written in Rust and running on AWS Lambda.

Motivation

A couple of months ago I was working on our internal Software Radar - a web app showcasing the technologies we use and tracking the ones we're experimenting with or watching. One of the ideas for this project was that our employees should be able to discuss a particular technology in its comment section, without having to look for an appropriate channel on Slack. Normally we'd integrate such a comment system into the existing backend, but at the time Radar's API was just a couple of Google Cloud Functions acting as a proxy to Salesforce. We decided to look at existing solutions and compare their capabilities, but we couldn't find a perfect candidate. The closest to our needs was Disqus, the leading choice in this field. But Disqus raised some major concerns among our team, such as the lack of data ownership, every comment being public (accessible by URL) and a separate sign-in/sign-up process. Disqus also proved hard to customise, and even though we managed to override its stylesheet to match our website's theme, it still didn't look very good.

With so much room for improvement, it's hard not to be inspired to build something better: a comment hosting solution that is privacy-oriented and easily customisable. In the spirit of our current backend architecture, I decided to try the Serverless approach. I also decided to use Rust, mostly because I'm a huge fan of it, but also because of its speed and safety. 😉

Development

Getting started

I began by doing some research on how to even run Rust code on AWS Lambda. It turns out it's quite easy thanks to a great library that became the backbone of this project - lambda_http. This crate is an extension of the lambda_runtime library and is effectively all we need to quickly develop HTTP-based microservices. A basic hello world example would look like this:

use lambda_http::{lambda, Response, Body, http::StatusCode};

fn main() {
  lambda!(|request, _| Ok(
    Response::builder()
      .status(StatusCode::OK)
      .body(Body::Text(String::from("Hello World!")))
      .unwrap()
  ));
}

This incredibly small amount of boilerplate makes me happy, but unfortunately we need to do a bit more to actually run this code on the server. The next part is going to differ depending on your operating system, because we need to cross-compile our code for the target architecture AWS uses to run Lambda functions (unless you're using Linux, in which case you're pretty much set!). I was using a Mac and this step gave me a headache, so here's a short summary of what needs to be done:

# 1) Install the cross-compilation tools (WARNING: can take a long time):
$> brew install filosottile/musl-cross/musl-cross
# 2) Add the new target for compilation:
$> rustup target add x86_64-unknown-linux-musl
# 3) Create a soft-link for `musl-gcc`
$> ln -s /usr/local/bin/x86_64-linux-musl-gcc /usr/local/bin/musl-gcc

The last piece of cross-compilation setup is a .cargo/config file (relative to your project directory) with the following contents:

[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"

After that you should be able to compile your project using:

$> CC_x86_64_unknown_linux_musl=x86_64-linux-musl-gcc cargo build --release --target x86_64-unknown-linux-musl

If you don't like headaches (or you're using Windows), the best choice is to use Docker. It works great, but the downside is that build times can get really long - which effectively made me stick to the cross-compilation approach.

Testing and deploying

With the code compiled, we could now zip the binary and upload it to AWS Lambda through the AWS Management Console. That's not very elegant though, and certainly not feasible for rapid development. This is where the second pillar of this project comes into play: the AWS Serverless Application Model (SAM). AWS SAM is a technology that allows us to describe the architecture of our application using a YAML (or JSON) file, create a local instance of the app using Docker and, finally, publish the app to a production environment with a single command. To make use of it we need two command-line utilities: aws-cli and aws-sam-cli. Their documentation will guide you through installation and configuration.

The final step is to create the AWS SAM template. Here's a very basic one that would work with our hello world example:

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: provided
    Handler: rust.binary
    Timeout: 3

Resources:
  # GET /hello
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # I'm assuming the binary name is hello
      CodeUri: target/x86_64-unknown-linux-musl/release/hello # or just target/release/hello on Linux/Docker
      Events:
        HelloWorldEndpoint:
          Type: Api
          Properties:
            Path: /hello
            Method: get

The most important parts are the rust.binary handler, the CodeUri (binary location) and the endpoint properties (path and HTTP method). There's a lot more you can do with SAM templates, and I recommend checking the documentation for more information. With the template ready, you should be able to test the application locally (Docker required) with:

$> sam local start-api --template PATH_TO_YOUR_SAM_TEMPLATE.YML

The output should contain a URL which you can visit to test your application. If everything works fine, we can deploy the app to the actual AWS servers with these two commands (make sure aws-cli is configured with your credentials first):

# assuming a bucket named "hello-rust-lambda" exists
# if not, you can create one with `aws s3 mb s3://hello-rust-lambda`
$> sam package --template-file PATH_TO_YOUR_SAM_TEMPLATE.YML --s3-bucket hello-rust-lambda --output-template-file package.yml
$> sam deploy --template-file package.yml --stack-name hello-rust-lambda --capabilities CAPABILITY_IAM

The first command packages the application and uploads it to S3. The second command instantiates or updates the required services and copies the code from S3 into a new Lambda function. Even though compiling and deploying takes only three commands, I ended up writing a sizeable Makefile to speed up this process. You can check it out here.

Going forward

At this point we have everything we need to implement a working API, but we still lack some kind of storage. This is where I introduce the third major component of Commentable.rs - AWS DynamoDB and the rusoto_dynamodb crate. I'm not a big fan of NoSQL databases, but this was the only choice that worked really well with AWS SAM and our Serverless approach - we can specify the database schema in the same YAML template and even set the billing mode so that we only pay for the database requests we actually make.

rusoto_dynamodb makes interacting with the database a breeze. It's not an ORM though, so I wrote a handy trait called DynamoDbModel containing methods such as new, create, find, delete etc. Here's a snippet showing how it encapsulates the methods from rusoto_dynamodb:

use rusoto_dynamodb::{
  DynamoDb,
  DynamoDbClient,
  PutItemInput,
  AttributeValue,
  // ... other imports
};

pub trait DynamoDbModel where Self: Sized + Serialize {
  // ...
  fn create(db: &DynamoDbClient, attributes: IntoDynamoDbAttributes) -> Result<Self, DbError> {
    let attributes: DynamoDbAttributes = attributes.into();
    db.put_item(PutItemInput {
      item: attributes.clone(),
      table_name: COMMENTABLE_RS_TABLE_NAME.to_string(),
      ..Default::default()
    }).sync()
      .map_err(|err| DbError::Error(err.to_string()))
      .and_then(|_| Self::new(attributes))
  }
  // ...
}

We can implement this trait for any struct in order to save it in the database. In our case we only need three models: User, Comment and Reaction. Here's the Comment model for example:

#[derive(Serialize, Debug)]
pub struct Comment {
  pub primary_key: CommentableId,
  pub id: CommentId,
  pub user_id: Option<UserId>,
  pub replies_to: Option<CommentId>,
  pub body: String,
  pub is_deleted: Option<bool>,
  pub created_at: DateTime<Utc>,
}

impl DynamoDbModel for Comment {
  fn new(mut attributes: DynamoDbAttributes) -> Result<Self, DbError> {
    Ok(Self {
      primary_key: attributes.string("primary_key")?,
      id: attributes.string("id")?,
      user_id: attributes.optional_string("user_id"),
      replies_to: attributes.optional_string("replies_to"),
      body: attributes.string("body")?,
      is_deleted: None,
      created_at: attributes.timestamp("created_at")?,
    })
  }
}
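
The string, optional_string and timestamp calls above come from a small helper that wraps the raw HashMap<String, AttributeValue> returned by DynamoDB. The real DynamoDbAttributes lives in the repository; below is a hedged, self-contained sketch of what such a wrapper can look like (the DbError enum here is just a stand-in for the project's own error type):

use std::collections::HashMap;

use chrono::{DateTime, Utc};
use rusoto_dynamodb::AttributeValue;

// Stand-in for the project's error type.
#[derive(Debug)]
pub enum DbError {
  Error(String),
}

// Wraps a raw DynamoDB item and extracts typed fields from it.
pub struct DynamoDbAttributes(HashMap<String, AttributeValue>);

impl DynamoDbAttributes {
  // Returns a required string field, or errors out when it's missing.
  pub fn string(&mut self, field: &str) -> Result<String, DbError> {
    self.0.remove(field)
      .and_then(|attribute| attribute.s)
      .ok_or_else(|| DbError::Error(format!("Missing attribute: {}", field)))
  }

  // Like `string`, but a missing field is not an error.
  pub fn optional_string(&mut self, field: &str) -> Option<String> {
    self.0.remove(field).and_then(|attribute| attribute.s)
  }

  // Parses a timestamp stored as an RFC 3339 string attribute.
  pub fn timestamp(&mut self, field: &str) -> Result<DateTime<Utc>, DbError> {
    self.string(field).and_then(|value| {
      value.parse::<DateTime<Utc>>()
        .map_err(|err| DbError::Error(err.to_string()))
    })
  }
}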

That's all there really is to DynamoDB from the Rust perspective. All we need to do now is to specify the database schema in our SAM template:

# ...
Resources:
  # ... Lambda functions definitions
  CommentableRsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: CommentableRsTable
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: primary_key
          AttributeType: S
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: primary_key
          KeyType: HASH
        - AttributeName: id
          KeyType: RANGE

We only need to define the key attributes here, so this is enough. We can also create indexes in the template, but for that I recommend reading the documentation.
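
To make the composite key concrete, here's a hedged sketch (not part of the project's code) of how you could query all the comments belonging to a single page with rusoto_dynamodb, using the table and attribute names defined above:

use std::collections::HashMap;

use rusoto_core::Region;
use rusoto_dynamodb::{AttributeValue, DynamoDb, DynamoDbClient, QueryInput};

fn count_comments(commentable_id: &str) -> Result<usize, String> {
  let db = DynamoDbClient::new(Region::default());
  // Bind the page identifier to the :primary_key placeholder.
  let mut values = HashMap::new();
  values.insert(
    String::from(":primary_key"),
    AttributeValue { s: Some(commentable_id.to_string()), ..Default::default() },
  );
  let output = db.query(QueryInput {
    table_name: String::from("CommentableRsTable"),
    key_condition_expression: Some(String::from("primary_key = :primary_key")),
    expression_attribute_values: Some(values),
    ..Default::default()
  })
  .sync()
  .map_err(|err| err.to_string())?;
  // Each returned item is a HashMap<String, AttributeValue> that a
  // DynamoDbModel::new-style constructor can turn back into a Comment.
  Ok(output.items.map_or(0, |items| items.len()))
}

Because all of a page's comments share the same partition key, a single query like this can fetch an entire thread in one request.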

Implementing a Create Comment endpoint

At this point there's nothing left to do apart from actually implementing the comment system logic. I'm going to show some snippets from the AddComment Lambda function, but I recommend checking the full source code on GitHub - there's a lot of glue code and handy utilities that I won't be able to show here.

Let's start with the entrypoint:

fn main() {
  lambda!(|request, _|
    AddComment::respond_to(request)
      .or_else(|error_response| Ok(error_response))
  );
}

This minimal piece of code immediately delegates execution to the custom AddComment service. I use this pattern for every Lambda function to help with code organisation and state management. The struct itself looks like this:

// HTTP request parameters
#[derive(Deserialize)]
struct Params {
  auth_token: AuthToken,
  replies_to: Option<CommentId>,
  body: String,
}

struct AddComment {
  db: DynamoDbClient,
  commentable_id: CommentableId,
  params: Params,
  current_user: Option<User>,
  comment: Option<Comment>,
}

The real action takes place in the impl block. I used a "pseudo" builder pattern to chain method calls on the AddComment struct. I like the readability of this approach and the bubbling error handling, but there are also some disadvantages: most methods have to take a mutable reference to self, and the methods have to be invoked in a certain order to work correctly.

impl AddComment {
  pub fn respond_to(request: Request) -> Result<Response<Body>, HttpError> {
    if let Some(commentable_id) = request.path_parameters().get("id") {
      Self::new(request, commentable_id.to_string())?
        .validate()?
        .fetch_current_user()?
        .check_reply()?
        .save()?
        .serialize()
    } else {
      Err(bad_request("Invalid params: 'id' is required."))
    }
  }

  pub fn new(request: Request, commentable_id: CommentableId) -> Result<Self, HttpError> {
    if let Ok(Some(params)) = request.payload::<Params>() {
      Ok(Self {
        db: DynamoDbClient::new(Region::default()),
        comment: None,
        current_user: None,
        commentable_id,
        params,
      })
    } else {
      Err(bad_request("Invalid parameters"))
    }
  }
  // ...
}

And here's the save method. It's mostly about preparing the item attributes and passing them to the database through a DynamoDbModel trait method.

impl AddComment {
  // ...
  pub fn save(&mut self) -> Result<&mut Self, HttpError> {
    let current_user_id = &self.current_user.as_ref().unwrap().id;
    let mut attributes = IntoDynamoDbAttributes {
      attributes: hashmap!{
        String::from("primary_key") => self.commentable_id.clone().into(),
        String::from("id") => comment_id(&self.commentable_id, current_user_id).into(),
        String::from("user_id") => current_user_id.clone().into(),
        String::from("body") => self.params.body.clone().into(),
        String::from("created_at") => Utc::now().to_rfc3339().into(),
      }
    };
    // replies_to can be empty so we have to insert it separately
    if let Some(parent_comment_id) = self.params.replies_to.clone() {
      attributes.attributes.insert(String::from("replies_to"), parent_comment_id.into());
    }
    match Comment::create(&self.db, attributes) {
      Ok(comment) => {
        self.comment = Some(comment);
        Ok(self)
      },
      Err(err) => Err(internal_server_error(err))
    }
  }
  // ...
}
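
One link in the respond_to chain that I haven't shown is serialize, which turns the saved comment into an HTTP response. The actual method is in the repository; purely as a hedged illustration, a generic JSON response helper built on lambda_http and serde_json could look like this:

use lambda_http::{Body, Response, http::StatusCode};
use serde::Serialize;

// Serializes any serde-serializable payload into a JSON HTTP response.
fn json_response<T: Serialize>(payload: &T) -> Response<Body> {
  let json = serde_json::to_string(payload).unwrap_or_else(|_| String::from("{}"));
  Response::builder()
    .status(StatusCode::OK)
    .header("Content-Type", "application/json")
    .body(Body::Text(json))
    .unwrap()
}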

And that's it! The rest of the code is written using the same general patterns.

Result

I'm overall very happy with how this project turned out. Rust has been a blast to work with, and its safety features have saved me multiple times from introducing bugs into the code. The library support for writing Serverless applications is very good. The most problematic part of development was understanding DynamoDB and getting around some of its limitations - I had to redesign the data layer quite a few times. Performance could also be a bit better - Rust itself is super fast, but it seems that the custom Lambda runtime introduces some overhead to the response times. As for the future of Commentable, we're aiming to release the client-side libraries soon - for React, Vue, and vanilla JS. After that it's only a matter of adding proper documentation and tests, and the product will be ready to ship.
