
Quick hacks to lighten your database loads with minimal code

Ritikesh · Originally published at freshworks.com · 5 min read

[Graduating from startup to scaleup means having to constantly evolve your applications to keep pace with your customers’ growth. In this Freshworks Engineering series, Rails@Scale, we talk about some of the techniques we employ to scale up our products.]

Databases are a core component in most applications. But database loads can be a silent performance killer, causing sluggish application response times and leading to unsatisfied customers. While most database accesses are necessary, some can be avoided. The most obvious example would be ensuring that your application effectively leverages cached data. However, there can be oversights. We identified such missed opportunities and made minimal code changes to reduce a large number of queries fired to the database.

Leveraging the delegation pattern for cached accesses

Delegation is a common software design pattern from the object-oriented programming world. It uses object composition to achieve code reuse similar to what inheritance offers.

There are some very detailed articles on the delegation pattern and how to apply its principles in Ruby/Rails applications. In this blog, we talk about how you can optimize performance with small changes to your existing delegation code.

Freshservice, like other Freshworks products, is a multi-tenant SaaS web application. Each tenant is unique and can have configurations that are specific to them. We store some of these configurations in Redis and some of them in the good old RDBMS. The choice of storage is based on factors such as the requirement of ACID properties or constraints or relationships.

The RDBMS-based configuration table is backed by an ActiveRecord model called TenantConfigs and is directly related to our tenant model through a has_one relationship. To reduce the load on the database and for faster access to frequently read data, we use memcached as an LRU cache. The TenantConfigs objects are also cached per tenant for faster access.

The tenant model has a handful of attributes delegated to the TenantConfigs association, and we noticed that at least one of these attributes was accessed as part of every web request made to the application. The delegation was plain ActiveSupport delegation.
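The original code snippet did not survive publishing here, but the uncached setup can be sketched in plain Ruby. The attribute name locale comes from the article; the Forwardable stand-in, the query counter, and everything else below are assumptions used to make the sketch runnable without Rails:

```ruby
require "forwardable"

# Stand-in for the TenantConfigs ActiveRecord model.
TenantConfigs = Struct.new(:locale)

class Tenant
  extend Forwardable

  attr_reader :db_queries

  def initialize
    @db_queries = 0
  end

  # Simulates the has_one association: every call fires a DB query.
  def tenant_configs
    @db_queries += 1
    TenantConfigs.new("en")
  end

  # Rails equivalent: delegate :locale, to: :tenant_configs
  def_delegator :tenant_configs, :locale
end
```

Every tenant.locale call goes through tenant_configs, so each access costs one query.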

Every time we called tenant.locale, Rails would fire a DB query to fetch the locale information from the tenant_configs table. Since we were already caching this information, we wanted to leverage it for faster access and to reduce the load on the database. To achieve this, we simply changed the delegation target from tenant_configs to tenant_configs_from_cache.

The delegate method comes from ActiveSupport and only requires the delegation target to be a valid method within the scope of the defining class, so the cached accessor works as a drop-in replacement.
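A minimal plain-Ruby sketch of the swap, runnable without Rails. Memoization stands in for the memcached-backed lookup, the attribute name locale comes from the article, and the rest of the names are assumptions:

```ruby
require "forwardable"

TenantConfigs = Struct.new(:locale)

class Tenant
  extend Forwardable

  attr_reader :db_queries

  def initialize
    @db_queries = 0
  end

  # Simulates the has_one association: every call fires a DB query.
  def tenant_configs
    @db_queries += 1
    TenantConfigs.new("en")
  end

  # Stand-in for the memcached-backed read; only the first call hits the DB.
  def tenant_configs_from_cache
    @tenant_configs_cache ||= tenant_configs
  end

  # Was: delegate :locale, to: :tenant_configs
  # Now: delegate :locale, to: :tenant_configs_from_cache
  def_delegator :tenant_configs_from_cache, :locale
end
```

Repeated tenant.locale calls now resolve through the cache, so only the first access costs a query.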

As expected, the changes gave us a significant drop in the number of queries fired for the tenant_configs model, from almost 1,500 queries per minute (QPM) to near zero. This was achieved by merely appending two words of code.


A reactive approach to Rails’ uniqueness validations

Ruby on Rails lets you enforce attribute or functional uniqueness at the ORM level through the magical validates_uniqueness_of validation in ActiveRecord. This makes ActiveRecord fire a query to check whether the database already contains a record with the given attribute(s).

If it does, the validation fails and Rails doesn’t save the record. This is in line with Rails’ way of doing validations and works seamlessly. However, if you google validates_uniqueness_of, you’ll find numerous articles such as this one explaining why the validation is not entirely reliable. Briefly, the main pitfalls are:

  1. They are not reliable. Even with the uniqueness check, race conditions can still allow an attempt to insert duplicate values into unique columns;
  2. That scenario is not handled cleanly when relying on validates_uniqueness_of and throws a 500, even if you used the safer .save;
  3. Depending on the scale, it generates far too many SELECT 1 queries.
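The race in the first two pitfalls comes from the validation being a non-atomic check-then-insert. A toy sequential sketch makes it concrete (an array stands in for a table with no unique index, and the two "requests" are interleaved by hand rather than run in real threads):

```ruby
store = []  # stands in for the table; no unique index here

# Request 1 runs its uniqueness SELECT: no duplicate found.
request1_passed = !store.include?("a@b.com")

# Request 2 runs its SELECT before request 1 INSERTs: also no duplicate.
request2_passed = !store.include?("a@b.com")

# Both requests passed validation, so both INSERT: a duplicate slips in.
store << "a@b.com" if request1_passed
store << "a@b.com" if request2_passed
```

With a real unique index, the second INSERT would fail at the database instead, which is exactly the case the next section deals with.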

We were lucky enough not to face the race condition issue in production. However, with growing scale, the number of exists/SELECT 1 queries to our databases was increasing. Hence, we decided to move away from uniqueness validations (validates_uniqueness_of) for some of our core models.

Since our database already had unique indexes, all we had to do was remove the validates_uniqueness_of calls from the models. But this approach has a problem: when trying to save duplicates, the database would issue a rollback and Rails would raise an ActiveRecord::RecordNotUnique exception.

Normally, you would expect only bang methods like save! to raise exceptions; however, this exception gets raised even when using the regular save. If we were to proceed with removing validates_uniqueness_of from our models, our controllers would have to handle this exception explicitly.

To solve this problem and to maintain consistency with our current code flows, we ended up writing a gem (record_not_unique) to capture ActiveRecord::RecordNotUnique exceptions at the model level and add a validation error on the associated attribute(s).

The exception would be captured only for exception-safe methods such as save and update_columns; save! would continue to raise. After replacing validates_uniqueness_of with handle_record_not_unique, we measured the results using our log aggregator.
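The gem's exact API aside, the underlying technique — rescue the duplicate-key error inside the model and convert it into a validation error — can be sketched in plain Ruby. The class names, error class, and message below are stand-ins, not the real ActiveRecord API:

```ruby
# Stand-in for ActiveRecord::RecordNotUnique.
class RecordNotUnique < StandardError; end

class User
  attr_reader :errors

  def initialize(email, store)
    @email = email
    @store = store  # stands in for the users table with a unique index
    @errors = Hash.new { |h, k| h[k] = [] }
  end

  # The database-level INSERT: raises on a duplicate key, like a unique index.
  def insert!
    raise RecordNotUnique if @store.include?(@email)
    @store << @email
  end

  # save rescues the duplicate-key error and surfaces it as a validation
  # error, roughly what handle_record_not_unique wires up in the gem.
  def save
    insert!
    true
  rescue RecordNotUnique
    @errors[:email] << "has already been taken"
    false
  end

  # save! keeps raising, matching Rails conventions for bang methods.
  def save!
    insert!
    true
  end
end
```

Controllers keep their familiar "if record.save" flow: a duplicate simply returns false with errors populated, while save! still raises.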

https://gist.github.com/ritikesh/465b69314bce378a9bedaa17d2ffa8e5

Some of the noticeable results were as follows:

Downtrend in SELECT 1 queries on the tickets table after the changes were deployed to production: from 200,000+ queries over the week to zero.

Downtrend in SELECT 1 queries on the users table after the changes were deployed to production: from 57,000+ queries over the week to zero.

For other, less write-heavy models such as groups, SELECT 1 queries per table per week reduced by about 20,000.

This was a significant load reduction on our databases without making any changes to our existing code flows.

Bill Gates rightly said, “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” The gains we achieved with the above two techniques, with minimal lines (words) of code, are a testament to that.
