Matouš Borák for NejŘemeslníci

Track and fix excessive Active Record instantiation

When working with Rails back-end code, grabbing too many records from the database is discouraged, for very good reasons:

  • your database server has to work hard to find the records,
  • a lot of data needs to be transferred from the database to your app,
  • and, last but not least, Active Record instantiates a huge number of objects, leading to memory bloat in your application.

All of this slows down the response time of your web app for the given request. But even worse, it also affects all future requests: memory newly allocated by the Rails process usually becomes fragmented and hard to release back to the system, unless you apply special tweaks.

OK, but is this a real issue?

Even though limiting database queries is a rather basic rule, mentioned early in the Rails Guides, we noticed that a huge SELECT still occasionally slips into our production code. How come? It turns out to be surprisingly easy, for several reasons:

  • developers usually work with a small dev database and forget about the scale of production data,
  • even though our dev team actually works with a large subset of production data, it’s too easy to forget to test a worst-case scenario,
  • the Active Record syntax is very succinct and lets an unsuspecting developer build huge JOINs very easily; for example, this innocent-looking query: User.recent_customers.eager_load(orders: :order_logs) can suddenly cause a gigantic data load when a power user with many orders falls into the recent_customers scope,
  • and, sometimes devs simply forget to have large data processed in batches.
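On that last point: Active Record’s batching methods (find_each, in_batches) load records in slices of 1,000 by default instead of all at once. In plain-Ruby terms, the difference is roughly map versus each_slice; this is a sketch with an array standing in for a table, not actual Active Record code:

```ruby
# A plain array standing in for a large database table (sketch only).
records = (1..2500).to_a

# All at once: every element is in memory before processing starts.
doubled = records.map { |r| r * 2 }

# In batches: only one slice of 1,000 is handled at a time, which is
# conceptually what find_each / in_batches do with id-ordered queries.
batch_sizes = records.each_slice(1000).map(&:size)
# The 2,500 "records" are processed as batches of 1000, 1000 and 500.
```

With real models, memory usage is bounded by the batch size rather than by the table size.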

We first became aware of the issue when we profiled some exceptionally slow requests in DataDog and noticed large Active Record instantiation spans, such as this one:

AR instantiation span

The trace shows that in this particular request, Active Record instantiated over 2,100 ZipCode model objects which delayed the response by ~160 ms (and this even excludes the time needed to run the query and transfer the results to the Rails app). That’s insane! We don’t think we need information about two thousand zip codes anywhere on our site.

Tracking the problem in production

After finding and fixing a few places, we decided we needed continuous monitoring of this problem in production. We could probably have done this in DataDog itself via a custom metric generated from the Indexed APM spans. Other APM systems may have different options, such as ScoutAPM’s memory bloat detection.

But in the end, we chose to build a custom solution whose output we could more easily send to our reporting system. It turns out that tracing the instantiations is very well supported via Rails instrumentation: each time Active Record instantiates objects after retrieving data from the database, it emits the instantiation.active_record event, which we can hook into.
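To get a feel for the mechanism, here is a toy pure-Ruby model of the publish/subscribe pattern that ActiveSupport::Notifications implements. This is a deliberately simplified sketch, not the real implementation; the actual API yields richer event objects with timing, IDs and more:

```ruby
# A minimal pub/sub model of Rails instrumentation (toy code, not ActiveSupport).
class ToyNotifications
  def initialize
    @subscribers = Hash.new { |hash, key| hash[key] = [] }
  end

  # Register a handler for a named event, e.g. "instantiation.active_record".
  def subscribe(name, &handler)
    @subscribers[name] << handler
  end

  # Run a block, measure its duration and notify all subscribers with the payload.
  def instrument(name, payload = {})
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield
    payload[:duration_ms] = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000
    @subscribers[name].each { |handler| handler.call(name, payload) }
    result
  end
end

notifier = ToyNotifications.new
seen = nil
notifier.subscribe("instantiation.active_record") { |_name, payload| seen = payload }

# Pretend Active Record just instantiated 2,100 ZipCode records:
notifier.instrument("instantiation.active_record",
                    record_count: 2100, class_name: "ZipCode") { :rows }

seen[:record_count] # => 2100
```

A real subscriber would call ActiveSupport::Notifications.subscribe (or, as below, subclass ActiveSupport::LogSubscriber) instead of this toy class.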

Below is the complete code for a custom log subscriber (i.e. a class to consume the given instrumentation events and log them) that processes "instantiation" events.

# app/subscribers/active_record_instantiation_subscriber.rb
require "active_support/log_subscriber"

# Send a Rollbar error when Active Record instantiates too many objects.
# Log all AR instantiations in the dev log.
class ActiveRecordInstantiationSubscriber < ::ActiveSupport::LogSubscriber
  MAX_TOLERATED_RECORDS = 2000
  MAX_TOLERATED_DURATION = 200 # in milliseconds

  def instantiation(event)
    return if Rails.env.test?

    payload = event.payload
    excessive_load = payload[:record_count].to_i > MAX_TOLERATED_RECORDS ||
                     event.duration > MAX_TOLERATED_DURATION

    if Rails.env.development?
      message = "  Instantiated #{payload[:record_count]} records
                 of class #{payload[:class_name]}
                 in #{event.duration} ms".squish
      excessive_load ? error(message) : debug(message)
    elsif excessive_load && Rails.env.production?
      Rollbar.error("Too many ActiveRecord objects instantiated",
                    record_count: payload[:record_count],
                    class_name: payload[:class_name],
                    duration: event.duration,
                    source_code: Rails.backtrace_cleaner.clean(caller))
    end
  end
end


The location of the log subscriber file is arbitrary because it is set up from a Rails initializer. Besides requiring the file, the initializer must also attach the subscriber to the :active_record instrumentation namespace:

# config/initializers/instantiation_monitoring.rb (any initializer name works)
require "./app/subscribers/active_record_instantiation_subscriber"

ActiveRecordInstantiationSubscriber.attach_to :active_record


In essence, the subscriber serves two purposes:

  1. It logs a message to the Rails log about each instantiation in the development environment, so that the developer can spot potential problems with creating too many model objects as early as possible.

    Instantiation log messages

  2. In production, the code reports a custom error to our tracking system if the number of instantiated records is especially high (above 2,000, as configured in the MAX_TOLERATED_RECORDS constant) or the instantiation is too slow (above 200 ms, see the MAX_TOLERATED_DURATION constant). This error message includes:

    • the number of instantiated objects,
    • the time the instantiation took (in ms),
    • the instantiated class,
    • and a stack trace pointing to the proper place in code.

    Instantiation reporting in Rollbar

This way, we can continuously monitor excessive instantiation from real traffic in our tracking system.
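Note that the two thresholds are combined with OR, so either condition alone triggers a report. In isolation, the check looks like this (the constants mirror those in the subscriber; pure Ruby, runnable on its own):

```ruby
MAX_TOLERATED_RECORDS = 2000
MAX_TOLERATED_DURATION = 200 # milliseconds

# Either too many records OR too slow an instantiation counts as excessive.
def excessive_load?(record_count, duration_ms)
  record_count > MAX_TOLERATED_RECORDS || duration_ms > MAX_TOLERATED_DURATION
end

excessive_load?(2100, 50)  # => true  (too many records)
excessive_load?(100, 350)  # => true  (instantiation too slow)
excessive_load?(100, 50)   # => false
```

The duration threshold catches cases where even a moderate record count is expensive to instantiate, e.g. models with many columns.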

How to fix the issues found

OK, excessive Active Record instantiation warnings successfully fill your reporting system so… what next? The general goal is to decrease the amount of data loaded from the database; the specific way to do that depends on why the data is loaded in the first place. Let’s have a look at some of the options:

  • Add a limit clause to your queries wherever it makes sense. Use pagination for data listings and index pages.

  • Use select or, even better, pluck: the select method still instantiates model objects but limits the instantiated attributes to those explicitly listed. The pluck method skips model object creation altogether and returns a plain array of the result data, saving a lot of memory and CPU cycles.

  • If you only need an aggregate value, use calculation methods instead of grabbing all records and aggregating them in Ruby.

  • If you really need to process a lot of records (e.g. in a rake task), use methods for loading the data in batches. The in_batches method can even be combined with pluck, for example. Or better yet, use mass update if appropriate.

  • Try rewriting complex (especially nested) eager load queries into multiple simpler queries that you can control more precisely.
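To see why pluck is the cheapest option, here is a rough pure-Ruby model of the select-versus-pluck difference, with a hypothetical Row struct standing in for a model class and a nested array standing in for the driver’s result set:

```ruby
# Hypothetical stand-in for an Active Record model class.
Row = Struct.new(:id, :name, :email)

# Raw rows as they come back from the database driver.
raw_rows = [[1, "Ada", "ada@example.com"], [2, "Joan", "joan@example.com"]]

# select-style: one Ruby object is still built per row
# (select merely limits which attributes each object carries).
objects = raw_rows.map { |values| Row.new(*values) }

# pluck-style: no model objects at all, just the plain values.
names = raw_rows.map { |_id, name, _email| name }
# => ["Ada", "Joan"]
```

With thousands of rows, skipping the per-row object allocation is what saves the memory and CPU time mentioned above.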

By the way, most of these tips are more thoroughly explained in the Complete Guide to Rails Performance by Nate Berkopec, a book we recommend with love. So, may your memory be free!

If you don’t want to miss future posts like this, follow me here or on Twitter. Cheers!
