James Won
How we sped up the loading time of a critical experience by 3x

Recently we sped up the loading time of a critical experience in our application by over 3x.

Here's how we did this and some of the learnings I gained from the experience.

Background

The repo in question is a front-end platform that is our company's main customer-facing product. It's built with React, Next.js and Redux.

In this application there is a really important experience: the user table, where you can change the various settings of a particular user. User information is also used throughout our application.

There were three concepts of user in our application:

1. Auth0 user

When the codebase was first built, it relied on Auth0 to store and provide the user's permission role. Over time, further metadata was added to the user in Auth0. These attributes played important roles in our application, as users with different permission roles have different experiences within it.

2. The application 'member'

This was an overlapping but slightly different concept of a user. It contained attributes about a particular user of the platform, regardless of their permission role, and was stored in our own DynamoDB table. For example, an important attribute was the data role, which determines how the user's data is processed. It also included the user's email and other personal attributes.

3. The combined 'user state'

Because the user was incomplete without the combination of the first two user concepts, there was a third important concept: the user state.

This was a combination of (1) the Auth0 information and (2) the member, and was stored in Redux global state.

When the table first loaded, there was logic to retrieve both sources and store the data together as a combined object. For the purposes of the application, this was the most important source of information, as it was relied on by every experience needing user information.
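As a sketch, the old load-and-combine step might have looked something like this. All type and function names here are hypothetical, and matching on email is an assumption for illustration (the post notes there was no shared id between the two concepts):

```typescript
// Hypothetical shapes for the three user concepts; field names are illustrative.
interface Auth0User {
  auth0Id: string;
  email: string;
  permissionRole: string; // stored in Auth0
}

interface Member {
  memberId: string;
  email: string;
  dataRole: string; // stored in DynamoDB
}

interface UserState {
  auth0: Auth0User;
  member?: Member; // some users had no member record
}

// Merge both sources into the combined Redux state. Matching on email is
// purely illustrative, since there was no direct id linking the two concepts.
function combineUserState(auth0Users: Auth0User[], members: Member[]): UserState[] {
  const byEmail = new Map(members.map((m) => [m.email, m] as const));
  return auth0Users.map((a) => ({ auth0: a, member: byEmail.get(a.email) }));
}
```

The optional `member` field is the key detail: every consumer of this state had to handle the possibility that half of the data simply wasn't there.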

The problem

You probably have a good idea now where this is going.

This setup was great at first: there was no duplication of data and no need to keep things in sync. However, as experiences became more complex and the need to retrieve and update user information expanded, we started building significant logic that wove between the three user data concepts.

  • Any user-related action required complex logic to check and update the two sources of truth and then update the local state. There was additional logic specifically to update the local state without actually re-retrieving the two sources of truth (to avoid the long retrieval wait times).

  • To make this even more complicated, some forms of users didn't have 'member' attributes. Most members were Auth0 users, but not all of them, meaning there was no direct way to combine users through a 'member' id.
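To make the coordination cost concrete, an update under the old setup looked roughly like the sketch below. Every name here is hypothetical, and the real calls would be async API requests; the point is the three separate writes that had to stay consistent:

```typescript
// Hypothetical sketch of a single update under the two-sources-of-truth setup.
interface Stores {
  updateAuth0Role: (auth0Id: string, role: string) => void; // source of truth #1
  updateMember: (memberId: string, patch: { role: string }) => void; // source of truth #2
  patchLocalUserState: (auth0Id: string, patch: { role: string }) => void; // Redux state
}

function changeUserRole(
  stores: Stores,
  auth0Id: string,
  memberId: string | undefined, // not every user had a member record
  role: string,
): void {
  stores.updateAuth0Role(auth0Id, role);
  if (memberId !== undefined) {
    stores.updateMember(memberId, { role });
  }
  // Patch the combined Redux state by hand rather than refetching both
  // sources, to avoid the 5000ms+ reload.
  stores.patchLocalUserState(auth0Id, { role });
}
```

Three writes per action, one of them conditional, and the local patch duplicating the server-side logic: this is the "complex web" each new feature had to thread through.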

Long story short, this setup worked, mostly. But there were three huge tradeoffs:

  1. Long load times: the first load was immensely long (5000ms plus). After that it was mostly immediate, since it relied on the combined user state. But in some situations a full refresh was required and yes, that meant the full 5000ms+ again.

  2. Over time it became almost impossible to add even a basic interaction for a user without issues cropping up.

  3. When user-related bugs occurred, patching them meant adding more layers to an already complex web of logic.

Our solution

With the frequency of issues, we decided to act. Because data relating to a user touched all corners of our application, it got to a point where we just couldn't ignore the underlying cause.

We pinpointed the issue to be the two sources of truth, and the complexity that it spawned.

Single source of truth

We transitioned to storing all the attributes of a single 'member' concept (combining the prior 'member' and the Auth0 user) directly in a DynamoDB table.

This meant that we had one source of truth, the 'member'.

We got rid of the situation of a user without member information: by default, a 'member' always has a unique id, which can be used to identify a member's full details (including the former 'member' and Auth0 user details).
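As a sketch, the unified record might look something like this (the field names are assumptions, not the actual schema):

```typescript
// Hypothetical unified 'member': one record, one unique id, carrying both
// the former Auth0 attributes and the former 'member' attributes.
interface UnifiedMember {
  memberId: string; // always present and unique
  email: string;
  permissionRole: string; // formerly lived in Auth0
  dataRole: string; // formerly lived in the old member table
}
```

With no optional halves, consumers no longer need to handle a user whose 'member' side might be missing.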

Amending the backend and databases

This required massive changes to our backend (both our API spec and the micro-service connecting to our 'member' table), a new database, and a carefully coordinated database migration to combine the data sources into the new DynamoDB database. This was a huge effort on the part of our lead engineer, who led the data and backend changes.

Refactoring the front-end

On the front-end we went through a painstaking refactor. This included:

  • Rewriting the existing user table logic.
  • Identifying everywhere we used 'member', 'user' or the local combined state, and rewriting the logic to use the now-unified concept of 'member'.
  • Rewriting all the combined logic to retrieve, create or update users. We still needed to update Auth0 when permissions changed - however, we could now do this in a way that kept the 'member' state in sync with the update.
  • Erasing the concept of the combined user state; now we simply retrieve the updated 'member' list when an update takes place.
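The steps above boil the update flow down to something like this sketch (names are hypothetical, and the real calls would be async API requests):

```typescript
interface Member {
  memberId: string;
  permissionRole: string;
}

// Minimal stand-in for the member API.
interface MemberApi {
  updateMember(id: string, patch: Partial<Member>): void;
  listMembers(): Member[];
}

// One source of truth: write the change, then simply refetch the member
// list - no hand-patched combined state to keep in sync.
function changeRole(api: MemberApi, memberId: string, role: string): Member[] {
  api.updateMember(memberId, { permissionRole: role });
  return api.listMembers();
}
```

Compare this with the old flow's three coordinated writes: one write, one read, nothing to reconcile.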

The results

  • We no longer wait 5000ms for data to be combined. The data is retrieved on first login, and a reload takes under 1500ms.
  • Much simpler code - for example, one Redux action was over 400 lines with a tangle of switch and if statements. It has been rewritten to be much, much smaller with minimal conditionals.
  • Fewer bugs, and a much better understanding of the concept of a user. The time saved as a result keeps compounding for our dev team.
  • We deleted thousands of lines of code.
  • We got this done on time! Not a small feat given the scope of changes.

Learnings

1. Re-evaluate code often

Sometimes we make decisions that were entirely appropriate at the time but simply didn't grow with our needs. By regularly re-evaluating the viability of areas of the code you know are problematic, you can make painful decisions that turn out to be worth it.

2. Simple is always best

Some of the logic that existed before our refactor was quite clever. However, it created complexity that really wasn't needed - for example, the combined local user state had complex logic for updating itself accurately without retrieving the actual sources of truth. By simplifying to use the actual source of truth, we could delete that entire layer and use the data as-is.

3. Refactoring is a fact of life

Sometimes we go to great lengths to avoid a refactor. Yes, it's painful, but it's often better to face up to it and do the work. If it's an important piece of logic, the sooner you do it, the less complex the work will be.

4. TypeScript is awesome

Knowing that we would be pretty much rewriting the entire logic for users, I casually added in TypeScript for our repo before we got started with the refactor. I am so thankful I did this.

  • We were able to eliminate a whole class of issues in methods that transformed objects into new shapes.

  • It also allowed us to code with confidence knowing that the new objects coming through could be relied upon to contain the attributes that we needed.
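For instance, a typed transform function makes a dropped or misnamed attribute a compile-time error rather than a runtime surprise (the types here are illustrative, not the actual codebase):

```typescript
// Hypothetical raw shape from the API and the shape the app uses.
interface RawMember {
  member_id: string;
  data_role: string;
}

interface Member {
  memberId: string;
  dataRole: string;
}

// The declared return type forces every Member field to be produced;
// forgetting or misspelling one fails to compile instead of failing at runtime.
function toMember(raw: RawMember): Member {
  return { memberId: raw.member_id, dataRole: raw.data_role };
}
```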

If you are thinking of adopting TypeScript (and especially if you are contemplating a refactor like ours 😂) I'd highly recommend it - without it we would definitely have struggled more than we did.
