
Tameem Iftikhar

The Complete Guide to Blazing-Fast Performance in Rails

Ruby on Rails is a tremendous framework when you want development speed for your project or startup. It's useful right out of the box and comes with a plethora of behind-the-scenes magic to make your life easier. However, it's not considered the fastest framework out there in terms of raw performance, and you will find examples of individuals and companies drifting away from Rails in favor of something else. Despite this, plenty of companies have succeeded in scaling Rails: just take a look at Airbnb, GitHub, GitLab & Shopify.

So before you jump ship, consider keeping performance at the forefront of your mind while working with Rails, and you can succeed too. This article lists the most important tips & tricks I've learned over the years to make Rails run at blazing-fast speeds and scale to millions of requests per minute.

General Tips

First off, there are some general tips to implement in your Rails project to set yourself up for success.

Set Up an APM

You can't improve performance if you don't measure it first, so it's essential to have the right metrics tracked and monitored. You should be tracking load times, request times, and database query timings, among other things. Personally, I've found New Relic to be one of the best APM tools for Rails, but it is a tad on the pricey side. A more affordable alternative is Skylight, which offers a free trial.

Stay Current

I know firsthand how horrifying updating the Rails version of a project can be if it has been left stale for too long, and I sympathize with anyone who has had to go through that ordeal. So do yourself a favor and try to keep in sync with newer versions of Ruby and Rails. This will help you skip the pain of jumping multiple versions at once and ensures that you have all the latest performance enhancements.

Pareto Principle (the 80/20 rule)

This is a well-known rule when it comes to developing software: the Pareto Principle. Also known as the 'law of the vital few', it states that for many events, roughly 80% of the effects come from 20% of the causes. The idea is to not waste time on micro-optimizations while bigger issues still need resolution. You won't accomplish much by shaving milliseconds off serialization if your database queries are extremely slow. So pick your battles carefully.

Keep it Lean

Rails has an amazing community behind it and a library of gems that can help you accomplish complicated tasks with ease. But it's easy to get carried away adding gems to your project, causing bloat. Be careful when selecting which gems to add, and try to keep your dependencies lean.

Be Lazy — Do it in a Background Job

Whenever you need to do anything complicated or long-running, consider throwing it into a background worker: sending emails, push notifications, uploading pictures and the like. Minimizing work in the main thread will ensure a snappy response for users. The good news for us is that Rails has multiple options to achieve this easily, such as Sidekiq, Resque or Active Job.
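
For instance, here is a minimal Active Job sketch for moving email delivery off the main thread. WelcomeEmailJob and UserMailer are hypothetical names standing in for whatever your app defines:

  # app/jobs/welcome_email_job.rb
  class WelcomeEmailJob < ApplicationJob
    queue_as :default

    def perform(user_id)
      user = User.find(user_id)
      # UserMailer#welcome_email is assumed to exist in your app.
      UserMailer.welcome_email(user).deliver_now
    end
  end

  # Enqueue from a controller or model. This returns immediately,
  # and the email is sent later by the worker process.
  WelcomeEmailJob.perform_later(current_user.id)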

Change Your Serializer

If your project has an API, you are most likely using the ActiveModelSerializers gem for serializing your data. I would strongly suggest switching over to fast_jsonapi by Netflix. This gem is a whopping 25 times faster than the default ActiveModelSerializers, and I can personally vouch for that from experience.
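
For reference, a fast_jsonapi serializer looks roughly like this. The Movie model and its attributes are just placeholders:

  # app/serializers/movie_serializer.rb
  class MovieSerializer
    include FastJsonapi::ObjectSerializer

    attributes :name, :year
    has_many :actors
  end

  # In a controller action:
  render json: MovieSerializer.new(@movie).serialized_json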

Cache Me Outside

Sometimes, if you have a lot of static data or simply can't make things faster, another alternative is to use a cache. Rails makes this incredibly easy to do right out of the box. Here is an example of caching with an expiry time:

  # Let's say you have some categories you offer in your project.
  Rails.cache.fetch("categories", expires_in: 5.minutes) do
    Category.all.map(&:name)
  end

Active Record (Database)

ActiveRecord is the magical ORM (one of the best ever to exist) offered by Rails. It's easy to get caught up in its ease of use without understanding the details, which can lead to bottlenecks down the road.

Use Thy Database

This is something that goes unnoticed or unimplemented by a lot of developers, either because they rush to do everything in code or because they are intimidated by writing raw SQL. But whenever possible or convenient, you should use thy database. Processing and sorting data structures in Ruby can chew up CPU time, while your database can do the same work without breaking a sweat.
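
As a quick illustration (with hypothetical User and Order models), compare doing the work in Ruby versus letting the database do it:

  # Slow: loads every user into memory, then sorts in Ruby.
  User.all.sort_by(&:created_at).last(10)

  # Fast: the database sorts and limits, and Rails only instantiates 10 records.
  User.order(created_at: :desc).limit(10)

  # Same idea for aggregates: let SQL do the math.
  Order.sum(:total) # instead of Order.all.map(&:total).sum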

Peek Behind the Curtain

Don't just get enamored by the magic of Active Record without worrying about what's happening behind the scenes. To sweep out performance bottlenecks, you need to look at the actual queries that get triggered and understand them. In the development environment, Rails prints out every query it runs, which lets you spot any unnecessary ones. Here are a few tricks that will help you dig in a little deeper:

https://gist.github.com/tameem92/08579a3d2ac690fce395c21bedab4f11
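
The gist isn't embedded here, but a couple of the standard tricks look like this. to_sql and explain are built into Active Record, verbose query logs arrived in Rails 5.2, and the Blog model with its published column is a placeholder:

  # Print the SQL a relation would run, without executing it.
  puts Blog.includes(:comments).where(published: true).to_sql

  # Ask the database for its query plan (wraps EXPLAIN).
  puts Blog.where(published: true).explain

  # config/environments/development.rb -- log which line of application
  # code triggered each query (Rails 5.2+).
  config.active_record.verbose_query_logs = true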

SQL Wizardry

Don’t worry, I am not about to suggest you write everything in raw SQL queries. But learning the basics of SQL and database design will help you better understand what’s happening under the covers and allow you to optimize the queries as needed. It’s a valuable skill to have as a software developer and will go a long way in your career.

Be Stingy

One way to make your queries more efficient is to select only what you really need. Instead of doing a SELECT *, specify which columns you need the database to retrieve. By default ActiveRecord selects everything, but you can leverage select or pluck to fix this problem.

https://gist.github.com/tameem92/d219ccc9745d4bd11a7f7da9f8d743f6
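
Roughly speaking (again with a placeholder User model), the difference looks like this:

  # SELECT * -- loads every column into full ActiveRecord objects.
  User.all

  # SELECT only the columns you need, still as ActiveRecord objects.
  User.select(:id, :email)

  # Skip ActiveRecord objects entirely and get back a plain array.
  User.pluck(:email)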

N+1 Queries

This is a classic problem. If you load a list of blogs from the database and then fetch the comments for each blog inside a loop, you are forcing Rails to run a separate query for every blog. This can be eliminated by preloading the comments with Blog.includes(:comments), which avoids the N+1 query problem. Pro tip: take a look at the Bullet gem to help you find any N+1 query problems.
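
Here is what that looks like in practice, as a minimal sketch with Blog and Comment models:

  # N+1: one query for the blogs, then one query per blog for its comments.
  Blog.all.each do |blog|
    puts blog.comments.size
  end

  # Preloaded: two queries in total, no matter how many blogs there are.
  Blog.includes(:comments).each do |blog|
    puts blog.comments.size # size uses the preloaded records; count would re-query
  end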

Combining Queries

When working with bigger teams with multiple developers on a project, sometimes you will notice every person touching the codebase might be adding queries throughout the code path. More often than not, these queries can be combined into fewer queries at the top of the code path. This ensures there are no duplicated queries and allows the database to strategically do the heavy lifting.

Index, Index, Index

This is a database best practice that shouldn't be ignored. If you don't index properly, you will hurt database performance and cause unnecessary table scans. When building a new feature, think ahead about what will be queried and add the correct indexes. On an existing project you can always use Active Record Doctor or lol_dba to sniff out missing indexes.
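
Adding an index is a one-line migration. Continuing the blog/comments example above (the composite index is just an assumption about a common query pattern):

  class AddIndexesToComments < ActiveRecord::Migration[6.0]
    def change
      # Speeds up blog.comments lookups and the includes(:comments) preload.
      add_index :comments, :blog_id

      # Composite index for "recent comments per blog" style queries.
      add_index :comments, [:blog_id, :created_at]
    end
  end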

Migrations

When you are running at scale and have millions of records in your tables, the normal way of running migrations in Rails will, unfortunately, fail you. They will either error out or lock tables for extended periods of time, taking your site down. Having dealt with this in the past, I can recommend two tools that will solve this pain point for you:

  • Gh-ost: This is a triggerless online schema migration solution for MySQL and the only tool that worked for me at scale.

  • LHM: This will migrate your tables while they are still online without locking your table.

Extreme Tip: Use Raw SQL if you dare

If there is an endpoint where you desperately need performance, you can consider combining all the Active Record queries scattered through that code path into a single massive SQL query at the top that pulls out every record you need to fulfill the use case. Disclaimer: yes, a massive raw SQL query is hard to maintain and less readable. But I did mention this is only for cases of desperation.
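
A hedged sketch of what that can look like with find_by_sql. The schema and query are made up, so treat this as an illustration rather than a recipe:

  # One hand-written query instead of several ActiveRecord calls scattered
  # through the code path. Extra selected columns (comments_count here) come
  # back as attribute readers on the returned Blog objects.
  blogs = Blog.find_by_sql([<<~SQL, 10])
    SELECT blogs.*, COUNT(comments.id) AS comments_count
    FROM blogs
    LEFT JOIN comments ON comments.blog_id = blogs.id
    GROUP BY blogs.id
    ORDER BY comments_count DESC
    LIMIT ?
  SQL

  blogs.first.comments_count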

Servers & Hardware

When deploying a Rails project there are a few things to remember regarding the underlying infrastructure and architecture.

Choose a Scalable Architecture

With cloud infrastructure having come such a long way, gone are the days of having to build bare metal servers in order to scale your application. When deciding on your underlying architecture for a Rails project you should utilize a scalable cloud-based system. As an example, you can use AWS Fargate or Kubernetes to automatically scale your dockerized Rails application as needed.

Brotli Compression

Brotli is a newer compression algorithm that improves on gzip with a noticeably better compression ratio. It is now supported by most web servers, and enabling it is an easy way to optimize how your responses are compressed. And well, who doesn't want a free web performance improvement? (Reference: Brotli vs Gzip)

Give it More Juice

Rails is notorious for using a lot of memory, especially if you have multiple Puma workers running. So don't let your application go thirsty; give it some juice from the start. You can use memory-optimized instances on your cloud provider to set yourself up for success. And don't forget to keep an eye on swap usage on the server.

Tuning

Since Rails 5, the default web server has been Puma, and out of the box it runs a single process. As soon as you set up your Rails deployment, make sure to increase the number of workers to match the available cores on your machine, or something similarly reasonable.
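
A minimal config/puma.rb sketch. The numbers are assumptions, so tune them for your own instance sizes:

  # config/puma.rb
  workers ENV.fetch("WEB_CONCURRENCY") { 4 } # roughly one worker per CPU core
  threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
  threads threads_count, threads_count

  preload_app! # share memory between workers via copy-on-write

  port        ENV.fetch("PORT") { 3000 }
  environment ENV.fetch("RAILS_ENV") { "production" }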

HTTP/2

If you are utilizing a reverse proxy like Nginx make sure you have the HTTP/2 option turned on to give you all the performance advantages you get from it.

The Cherry on Top (Miscellaneous)

We are almost at the finish line — this section is all about some extra tips for performance.

Don’t Use Dynamic Methods

Rails magic comes with a price. A few of these methods consume a lot of resources, and it's better to skip them. For example, the dynamic finders like find_by_email() and find_all_by_email() are slow because they have to run through method_missing and parse the method name against the list of columns in the database.
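
In other words, prefer the hash form of the finder (User and its columns are placeholders here):

  # Dynamic finder: resolved through method_missing the first time it is called.
  User.find_by_email("jane@example.com")

  # Equivalent, without the method_missing detour:
  User.find_by(email: "jane@example.com")

  # And instead of the old find_all_by_* finders:
  User.where(admin: true)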

Throttling

As you scale you are bound to run into malicious attempts on various endpoints that can’t be cached and will cause expensive operations to chew up resources. To get around this, I would recommend adding a gem like rack-attack to implement throttling on endpoints like login, reset password or sign up.
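
A sketch of what that can look like with rack-attack. The paths, parameter names and limits below are assumptions, so adjust them to your own routes:

  # config/initializers/rack_attack.rb
  class Rack::Attack
    # At most 5 login attempts per IP per minute.
    throttle("logins/ip", limit: 5, period: 60.seconds) do |req|
      req.ip if req.path == "/login" && req.post?
    end

    # At most 3 password resets per email address per hour.
    throttle("password_resets/email", limit: 3, period: 1.hour) do |req|
      req.params["email"].to_s.downcase.presence if req.path == "/password" && req.post?
    end
  end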

Know your O(n) vs O(1)

As the data set grows, we need to heed the time complexity of our code. In a scenario where lots of data needs to be processed or looped over, consider using a hash or a set for lookups instead of defaulting to arrays.
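
A small example of the difference. users and blocked_ids are stand-ins for whatever collection you are checking against:

  require "set"

  # O(n) per lookup: Array#include? scans the whole array every time.
  users.select { |user| blocked_ids.include?(user.id) }

  # O(1) per lookup: build a Set (or Hash) once, then membership checks are constant time.
  blocked = blocked_ids.to_set
  users.select { |user| blocked.include?(user.id) }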

Always use a CDN

All your static assets should be fronted by a CDN, and make sure the cache policies are reasonable. If you want granular control over cache invalidation, look into using ETags and Cache-Control.
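
On the Rails side, conditional GET support gives you ETags with very little code. A minimal sketch with a placeholder PostsController:

  class PostsController < ApplicationController
    def show
      @post = Post.find(params[:id])

      # Sets ETag / Last-Modified headers and returns 304 Not Modified
      # when the client's cached copy is still current.
      fresh_when etag: @post, last_modified: @post.updated_at, public: true
    end
  end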

Advanced

In extreme cases, you can consider some advanced optimizations. I would argue if you’ve reached this point it might be time to consider other languages for certain parts of your project that are computationally intensive, so I will keep this section brief.

Brush up on C

The main implementation of Ruby is written in C, which means you can rewrite the slow parts of your code in C as an alternative, e.g. if you are running crypto algorithms to generate certificates. If you aren't excited about the idea of delving into C code, you can leverage third-party gems written in C by the Rails community.

Faster Background Jobs

With an increasing background workload, you can switch your background workers to a more performant language. By utilizing queues like Redis or Amazon SQS, the workers can be decoupled into their own microservice. Check out this Sidekiq-compatible go-workers library or a version of Sidekiq written in Crystal as two options.

Try Faster Versions of Ruby

There are a few implementations of Ruby aiming to increase performance. If you are interested in these variations, take a quick look at TruffleRuby or JRuby.

The Bottom Line

If we look at companies using Rails we see that they were able to utilize the amazing development speed of Rails to put their customers first, and also managed to improve performance as they scaled. In this article, we talked about multiple tips & tricks for increasing performance. I hope you found this guide helpful. Till next time.

Love talking tech or startups?

You’re in luck, me too! If you want to chat about cutting-edge technology, entrepreneurship or the perils of being a startup founder, find me on Twitter or on LinkedIn.
