Mukhil Padmanabhan

Building High-Performance Full-Stack Apps with React, Node.js & MongoDB: A Journey in Scalability, Speed & Solutions

You open your production app and notice it’s grinding to a halt. The frontend is unresponsive. Backend APIs are timing out. MongoDB queries appear to be running indefinitely. Your inbox is flooded with user complaints. Your team huddles together trying to triage the situation.

Been there? Yeah, me too.

I’m a Senior Full Stack Developer, and I’m tired of apps that work fine when you’re the only user or the problem space is simple, but wilt and collapse under real traffic or a slightly more demanding workload.

Stay with me, and I’ll walk you through how I addressed these concerns using React, Node.js, and MongoDB.

I won’t just be giving you another plain old tutorial; I’ll be sharing a story. A story about tackling real-world problems and building a fast, highly scalable application that can stand the test of time and handle anything thrown at it.

1: When React Became the Bottleneck

At my job, we had just rolled out an update to our React web app. We were brimming with confidence, believing users would appreciate the new features.

However, it wasn’t long before we started receiving complaints: the app was loading extremely slowly, transitions were stuttering, and users were growing increasingly frustrated. The new features were beneficial, but they had inadvertently introduced performance issues. Our investigation revealed the problem: the app was bundling all of its components into a single package, forcing users to download everything every time they accessed the app.

The Fix: We implemented a very useful technique called lazy loading. I had come across the idea before, and it was exactly what we needed. We completely revamped the app’s structure so that it only loads components when they are actually required.

Here’s a glimpse into how we implemented this solution:

import React, { Suspense } from 'react';
import { Route } from 'react-router-dom';

// Each page becomes its own chunk, fetched only on first visit
const Dashboard = React.lazy(() => import('./Dashboard'));
const Profile = React.lazy(() => import('./Profile'));

// Suspense renders the fallback while a lazy chunk is downloading
<Suspense fallback={<div>Loading...</div>}>
  <Route path="/dashboard" component={Dashboard} />
  <Route path="/profile" component={Profile} />
</Suspense>


The Result: The impact of this change was remarkable. Our bundle size shrank by 30%, and users experienced a much faster initial load. The best part was that users had no idea certain parts of the app were still loading; we used Suspense wisely and showed a simple, non-intrusive loading message.

2: Taming the Beast of State Management in React

As we fast-forward a few months, our development team was hitting its stride and shipping lots of new functionality. But along with that growth, the app had quietly become much more complex, and Redux was turning from an aid into a liability, even for simple interactions.

So, I spent some time creating a POC for a better alternative. I documented it thoroughly and facilitated multiple knowledge-share meetings on what the approach could look like. As a group, we eventually decided to try React Hooks (and in particular useReducer) for managing state: we wanted simpler code and to avoid Redux’s boilerplate and runtime overhead for the many small, self-contained pieces of state we were managing.

The transformation that followed was nothing short of revolutionary. We found ourselves replacing dozens of lines of boilerplate code with concise, easy-to-understand hook logic. Here’s an illustrative example of how we implemented this new approach:

import React, { useReducer } from 'react';

const initialState = { count: 0 };

// Pure reducer: given the current state and an action, return the next state
function reducer(state, action) {
  switch (action.type) {
    case 'increment':
      return { count: state.count + 1 };
    case 'decrement':
      return { count: state.count - 1 };
    default:
      throw new Error(`Unknown action type: ${action.type}`);
  }
}

const CounterContext = React.createContext();

// The provider exposes state and dispatch to every descendant component
function CounterProvider({ children }) {
  const [state, dispatch] = useReducer(reducer, initialState);
  return (
    <CounterContext.Provider value={{ state, dispatch }}>
      {children}
    </CounterContext.Provider>
  );
}

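Any component below the provider can then read and update this state with the useContext hook. Here is a minimal sketch (CounterDisplay is an illustrative component, not from our actual codebase):

import React, { useContext } from 'react';

// Illustrative consumer: reads the shared state and dispatches actions
function CounterDisplay() {
  const { state, dispatch } = useContext(CounterContext);
  return (
    <div>
      <span>Count: {state.count}</span>
      <button onClick={() => dispatch({ type: 'increment' })}>+</button>
      <button onClick={() => dispatch({ type: 'decrement' })}>-</button>
    </div>
  );
}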

The Result: The impact of this transition was profound and far-reaching. Our application became significantly more predictable and easier to reason about. The codebase, now leaner and more intuitive, allowed our team to iterate at a much faster pace. Perhaps most importantly, our junior developers reported a marked improvement in their ability to navigate and understand the codebase. The end result was a win-win situation: less code to maintain, fewer bugs to squash, and a noticeably happier and more productive development team.

3: Conquering the Backend Battlefield — Optimizing Node.js APIs for Peak Performance

While we were able to introduce a lot of improvements on the frontend, we soon ran into multiple issues on the backend. Our API performance became poor, and a few endpoints in particular started performing abysmally. Those endpoints make a sequence of calls to different third-party services, and as the user base grew, the system could no longer handle the load.

What was wrong turned out to be common sense: we were NOT parallel. Requests within each endpoint were handled sequentially, so every call waited for the previous one to complete. At our scale (hundreds of thousands of requests), this proved disastrous.
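
For contrast, the sequential pattern we were moving away from looked roughly like this (a simplified sketch, not our actual endpoint code):

// Each await blocks the next call, so total latency is the sum of all three
const getUserDataSequential = async () => {
  const profile = await fetch('/api/profile').then((res) => res.json());
  const posts = await fetch('/api/posts').then((res) => res.json());
  const comments = await fetch('/api/comments').then((res) => res.json());
  return { profile, posts, comments };
};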

The Solution: To fix this, we rewrote a good deal of our code to use the power of Promise.all() and issue the API requests concurrently. Instead of launching a call, waiting for it to finish, and only then making the next one, Promise.all() launches everything at once and waits for all of it together, which is much faster.

Here’s a glimpse into how we implemented this solution:

// Launch all three requests at once and wait for them together
const getUserData = async () => {
  const [profile, posts, comments] = await Promise.all([
    fetch('/api/profile').then((res) => res.json()),
    fetch('/api/posts').then((res) => res.json()),
    fetch('/api/comments').then((res) => res.json())
  ]);
  return { profile, posts, comments };
};


The Result: The impact of this optimization was immediate and substantial. We observed a remarkable 50% reduction in response times, and our backend demonstrated significantly improved resilience under heavy load. Users no longer experienced frustrating delays, and we saw a dramatic decrease in the number of server timeouts. This enhancement not only improved the user experience but also allowed our system to handle a much higher volume of requests without compromising performance.
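
One caveat worth knowing: Promise.all() rejects as soon as any single request fails. Where partial data is acceptable, Promise.allSettled() is an alternative worth considering. A minimal sketch (getUserDataSafe is an illustrative name, not from our codebase):

// allSettled never rejects; it reports a status for each request instead
const getUserDataSafe = async () => {
  const results = await Promise.allSettled([
    fetch('/api/profile').then((res) => res.json()),
    fetch('/api/posts').then((res) => res.json()),
    fetch('/api/comments').then((res) => res.json())
  ]);
  // Fall back to null for any request that failed
  const [profile, posts, comments] = results.map((r) =>
    r.status === 'fulfilled' ? r.value : null
  );
  return { profile, posts, comments };
};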

4: The MongoDB Quest — Taming the Data Beast

As our application gained traction and our user base grew by orders of magnitude, we faced a new obstacle: scaling our data. Our once-responsive MongoDB instance started to choke when dealing with millions of documents. Queries that used to run in milliseconds now took seconds to complete, or timed out altogether.

We spent a few days with MongoDB’s performance analysis tools and identified the big bad guy: unindexed queries. Some of our most common queries (e.g. requests for user profiles) were scanning entire collections when they could have been served by indexes.
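
If you are hunting for similar culprits, MongoDB’s profiler is one way to surface slow queries. A quick sketch of what that can look like in the shell (the 100 ms threshold is an arbitrary example):

// Log any operation slower than 100 ms to the system.profile collection
db.setProfilingLevel(1, { slowms: 100 });

// Inspect the slowest recent operations
db.system.profile.find().sort({ millis: -1 }).limit(5);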

The Solution: With that information in hand, we knew that creating compound indexes on the most frequently requested fields would fix our lookup times for good. Here is how we did it for the username and email fields.

// Compound index: serves queries on username alone or on username + email
db.users.createIndex({ "username": 1, "email": 1 });

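To confirm the index is actually being used, you can inspect the query plan with explain() and look for an IXSCAN stage rather than a COLLSCAN (the username value here is illustrative):

// "executionStats" shows how the query ran, including the index used
db.users.find({ username: "alice" }).explain("executionStats");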

The Result: The impact of this optimization was nothing short of remarkable. Queries that had previously taken up to 2 seconds to execute were now completing in under 200 milliseconds — a tenfold improvement in performance. Our database regained its snappy responsiveness, allowing us to handle a significantly higher volume of traffic without any noticeable slowdown.

However, we didn’t stop there. Recognizing that our rapid growth trajectory would likely continue, we took proactive measures to ensure long-term scalability. We implemented sharding to distribute our data across multiple servers. This strategic decision allowed us to scale horizontally, ensuring that our ability to handle data grew in tandem with our expanding user base.
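
For the curious, enabling sharding from the mongo shell looks roughly like this; a minimal sketch, with the database name and hashed shard key chosen purely for illustration:

// Enable sharding for the database, then shard the collection
// on a hashed key to spread writes evenly across shards
sh.enableSharding("appdb");
sh.shardCollection("appdb.users", { _id: "hashed" });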

5: Embracing Microservices — Solving the Scalability Puzzle

As our user base continued to multiply, it was becoming more and more apparent that not only did we need to scale our infrastructure, but we had to evolve our application in order to be able to scale with confidence. The monolithic architecture suited us well when we were a smaller team, but over time it became quite cumbersome. We knew that we needed to take the leap and start building towards a microservices architecture — an intimidating task for any engineering team, but one with a great deal of scalability and reliability upside.

One of the biggest problems was communication between services. Synchronous HTTP requests simply didn’t work for our case: they left us with yet another bottleneck, as huge numbers of operations sat waiting on responses, and a single overloaded service could stall everything that depended on it. At that point a message queue was the obvious answer, so we adopted RabbitMQ without much hesitation.

Here’s a glimpse into how we implemented this solution:

const amqp = require('amqplib/callback_api');

amqp.connect('amqp://localhost', (err, conn) => {
  if (err) throw err;
  conn.createChannel((err, ch) => {
    if (err) throw err;
    const queue = 'task_queue';
    const msg = 'Hello World';

    // Durable queue + persistent messages survive a broker restart
    ch.assertQueue(queue, { durable: true });
    ch.sendToQueue(queue, Buffer.from(msg), { persistent: true });
    console.log(`Sent ${msg}`);
  });
});

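For completeness, the worker on the consuming side of the queue might look something like this; a minimal sketch using the same callback API, with the actual processing logic omitted:

const amqp = require('amqplib/callback_api');

amqp.connect('amqp://localhost', (err, conn) => {
  if (err) throw err;
  conn.createChannel((err, ch) => {
    if (err) throw err;
    const queue = 'task_queue';

    ch.assertQueue(queue, { durable: true });
    // Hand this worker only one unacknowledged message at a time
    ch.prefetch(1);
    ch.consume(queue, (msg) => {
      console.log(`Received ${msg.content.toString()}`);
      ch.ack(msg); // acknowledge so RabbitMQ can safely discard it
    });
  });
});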

The Result: The transition, with inter-service communication flowing through RabbitMQ, worked even better than we had hoped, and the numbers confirmed it. We ended up with loosely coupled microservices, each of which could be scaled on its own. A traffic spike hitting one service no longer meant fearing the whole system would go down; failures stopped cascading, and the remaining services simply carried on unaffected. Maintenance became easier too, and adding new features or shipping updates became faster, more confident operations.

Conclusion: Plotting a course for future innovation

Each step along this thrilling journey was a lesson, reminding us that full-stack development is more than writing code. It’s about understanding and solving complicated, interrelated problems: making frontends faster, building backends that withstand failures, and dealing with databases that must scale as your user base explodes.

As we look to the second half of 2024 and beyond, the demands on web applications will not be slowing down. If we stay focused on building scalable, performance-optimized, well-architected applications, we are positioned to solve today’s problems and to meet the challenges still ahead. These real-life experiences have greatly shaped how I approach full-stack development, and I can’t wait to see where they continue to push our industry!

But how about you? Have you faced similar roadblocks or had luck with other creative ways of overcoming these issues? I’d love to hear your stories or insights — let me know in the comments or connect with me!
