
Gerus Lab

Stop Blaming Your Framework: Your App Is Slow Because You Stopped Caring About Performance

Let me start with a confession.

Three years ago, our team at Gerus-lab deployed a SaaS dashboard that took 4.2 seconds to load on a decent laptop with a fast connection. The backend was processing simple aggregation queries. The frontend was React with a bunch of libraries we "needed." The client was unhappy. We were embarrassed.

We blamed webpack. We blamed the cloud provider. We blamed the data size.

We were wrong. We had simply stopped thinking about what our code actually does to hardware.

This is a story that's repeating itself across the industry — and nobody seems to want to talk about it honestly.


Hardware Got Faster. Software Got... Heavier.

Here's a thought experiment. Open a text editor built in the 1990s. It launches in under a second on a machine with 64MB RAM. Now open a modern note-taking app. It'll use 300-600MB of RAM and take 2-3 seconds to start. What changed? Not the fundamental task — you're still editing text. What changed is that we buried the actual operation under 15 layers of abstraction.

The typical modern web app stack looks something like this:

User → React/Vue/Angular → State Management → API Layer → 
Node.js → ORM → Database → …and back

Every layer exists for a reason. But every layer also has a cost — and most developers today have no idea what those costs actually are.

A memory access to RAM takes 200-300 CPU cycles. An L1 cache hit? 4 cycles. The difference is roughly the same as walking to the coffee machine versus flying to Colombia to pick the beans yourself. When your code scatters data across memory with no regard for cache locality — and most high-level code does — you're paying that Colombia tax on every operation.
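To make locality concrete, here's a contrived sketch (not from the project above) contrasting the usual array-of-objects layout with a contiguous typed array. The names are invented for illustration; the point is that the typed-array version keeps all the scores next to each other in memory, which is what the prefetcher and cache lines reward.

```javascript
// Array-of-structs: each score read chases a pointer to a separate heap object,
// so consecutive reads can land anywhere in memory.
const players = [
  { id: 1, name: "a", score: 10 },
  { id: 2, name: "b", score: 20 },
  { id: 3, name: "c", score: 30 },
];
let total = 0;
for (const p of players) total += p.score;

// Struct-of-arrays: the scores sit contiguously in one typed array,
// so the CPU streams them through cache line by line.
const scores = new Float64Array([10, 20, 30]);
let totalSoA = 0;
for (let i = 0; i < scores.length; i++) totalSoA += scores[i];
```

Both loops compute the same sum; on a few thousand elements the difference is invisible, but on hot paths over large datasets the contiguous layout is the one that stays cache-friendly.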


The Framework Isn't the Problem. You Are.

I'll say the controversial part out loud: frameworks are not to blame for slow apps. Developers are.

I've watched junior engineers reach for useEffect chains that trigger 6 re-renders for a single state change. I've seen Node.js services that reconstruct 50,000-object arrays on every request because "we'll optimize later." I've seen Solidity smart contracts gas-optimized to perfection by the same developers who then ship a React frontend that downloads 3MB of JavaScript to display a login form.

The problem isn't that React is slow. The problem is that React makes it very easy to be slow without noticing it.

When we were building a GameFi project for a client (see our portfolio at gerus-lab.com), we had a leaderboard component that queried 10,000 records, passed them through two mapping operations and a filter, and rendered a list of 20. Every sort. Every filter. Every time a user moved up the rankings, the whole chain ran again.

The fix took 90 minutes: memoize the sorted dataset, paginate the query, debounce the updates. Load time dropped from 1.8s to 180ms. We removed more code than we wrote.
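The shape of that fix can be sketched in a few lines. This is not the client's actual code — the function names and data are invented — but it shows the two pure pieces: select only what renders, and collapse bursts of updates into one.

```javascript
// Top-N selection: sort a copy, keep only the rows that actually render.
function topN(records, n) {
  return [...records].sort((a, b) => b.score - a.score).slice(0, n);
}

// Minimal trailing-edge debounce: a burst of calls becomes one call.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const leaderboard = topN(
  [{ score: 5 }, { score: 42 }, { score: 17 }],
  2
);
// leaderboard holds the two highest scores: 42, then 17
```

In the React component, `topN` lives inside a `useMemo` keyed on the paginated query result, and the ranking updates feed through the debounced handler instead of firing on every event.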


What Actually Happens When Your Code Runs

Most developers never look at this. I get it — you're shipping features, not writing a CS thesis. But here's the thing: a little hardware awareness goes a long way.

Take this contrived but very real scenario:

// This runs on every render. Don't do this.
const processedData = rawData
  .filter(item => item.active)
  .map(item => ({ ...item, displayName: `${item.first} ${item.last}` }))
  .sort((a, b) => b.score - a.score);

If rawData has 5,000 items and this lives in a component that re-renders on any state change, you're running two O(n) passes plus an O(n log n) sort potentially dozens of times per second. Each of those operations creates new objects, which means heap allocations, which means GC pressure.

// This is better. Compute once, memoize.
const processedData = useMemo(() => 
  rawData
    .filter(item => item.active)
    .map(item => ({ ...item, displayName: `${item.first} ${item.last}` }))
    .sort((a, b) => b.score - a.score),
  [rawData]
);

This isn't advanced stuff. This is just thinking about what the code does.


The Business Case for Caring About Performance

Here's where it gets real. The reason this conversation is uncomfortable is that the industry has quietly decided that developer time is worth more than CPU cycles. Servers are cheap. Engineers are expensive. "Just add another instance" is a real strategy that works at small scale.

Until it doesn't.

We had a client — a fintech startup — whose backend was consuming 8GB RAM on a simple transaction history service. Three developers had worked on it over 18 months, each adding features without ever looking at what came before. We audited and refactored the service in six weeks. Memory usage dropped to 900MB. Infrastructure costs dropped by ~70%. The performance work paid for itself in three months.

That 8GB service wasn't built by bad engineers. It was built by good engineers who were never asked to care about performance. Nobody's performance review ever said "your API response time improved by 40ms."

But the compounding cost is real. Every architectural laziness you ship today becomes technical debt that multiplies over time.


What We Actually Do Differently

At Gerus-lab, we've shipped 14+ products — from Web3 infrastructure to AI-powered SaaS to GameFi platforms. Here's what we've learned to do differently:

1. Profile before you optimize. Don't guess. Use Chrome DevTools, React DevTools Profiler, or perf on the backend. You'll almost always be surprised by where the time actually goes.

2. Measure bundle size like a feature. We treat JavaScript bundle size as a feature metric. If a new dependency adds 40KB, we want to know. We want to know why. Tools like webpack-bundle-analyzer and source-map-explorer live in our CI pipeline.

3. Think about data shapes early. The struct-of-arrays vs array-of-structs pattern from systems programming applies to JavaScript too. If you're only ever reading one field from a large object array, rethink your data model.

4. Database queries are not free. An ORM will let you write user.posts.comments.likes and silently issue 4 separate queries. Always look at what queries your ORM generates. Use EXPLAIN ANALYZE. A composite index added in 20 minutes can eliminate a timeout that's costing you customers.

5. Async doesn't mean fast. async/await doesn't make things faster — it makes them non-blocking. If you await 10 independent API calls one at a time, you've serialized them; Promise.all starts them all at once and waits together. This is basic, but we see it missed constantly.
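Point 4 is easiest to see with a toy data layer that counts queries. Everything here is hypothetical — an in-memory stand-in for the database and ORM — but the query counts mirror exactly what an N+1 access pattern does against a real database.

```javascript
// Hypothetical in-memory "database" with a query counter, for illustration only.
let queryCount = 0;
const db = {
  posts: [{ id: 1, userId: 7 }, { id: 2, userId: 7 }, { id: 3, userId: 7 }],
  commentsByPost: { 1: ["a"], 2: ["b", "c"], 3: [] },
};
function query(fn) { queryCount++; return fn(); }

// N+1 pattern: one query for the posts, then one more per post for its comments.
function commentsNPlusOne(userId) {
  const posts = query(() => db.posts.filter(p => p.userId === userId));
  return posts.map(p => query(() => db.commentsByPost[p.id]));
}

// Batched: one query for posts, one IN-style query for all their comments.
function commentsBatched(userId) {
  const posts = query(() => db.posts.filter(p => p.userId === userId));
  const ids = posts.map(p => p.id);
  return query(() => ids.map(id => db.commentsByPost[id]));
}

queryCount = 0;
commentsNPlusOne(7);
const nPlusOneQueries = queryCount; // 1 + 3 = 4 queries

queryCount = 0;
commentsBatched(7);
const batchedQueries = queryCount;  // 2 queries, regardless of post count
```

With 3 posts the gap is 4 vs 2; with 3,000 posts it's 3,001 vs 2. That's the multiplier a lazy-loading ORM hides from you until you look at the generated SQL.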
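And point 5, sketched with a simulated API call (the delay and the function name are invented): both versions return identical results, but the sequential one waits roughly N delays while the concurrent one waits roughly one.

```javascript
// Simulated API call: resolves after a short delay (stand-in for a real fetch).
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(id * 2), 20));

// Sequential: each await blocks until the previous call finishes (~N × 20ms total).
async function sequential(ids) {
  const out = [];
  for (const id of ids) out.push(await fetchItem(id));
  return out;
}

// Concurrent: all calls start immediately; total wait is ~one call (~20ms).
async function concurrent(ids) {
  return Promise.all(ids.map(fetchItem));
}
```

The results are the same either way — only the wall-clock time differs, and only when the calls are genuinely independent of each other.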


The Uncomfortable Truth About "Ship Fast" Culture

"Move fast and break things" was fine when you were finding product-market fit. It's less fine when you have paying customers.

We've seen startups rebuild their entire backend at Series A because the original "move fast" codebase couldn't handle 10x more users. We've seen mobile apps that built up a reputation for draining batteries because nobody profiled the background tasks. We've seen smart contract bugs that cost real money because the dev team thought "gas optimization is for later."

Performance isn't a feature you add at the end. It's a discipline you maintain throughout.

The companies that win aren't the ones who shipped the fastest. They're the ones who shipped fast and built something that scales without falling apart.


A Quick Checklist

Before you ship your next feature, run through this:

  • Did you open the Network tab and check what's being downloaded?
  • Are there any unnecessary re-renders in React DevTools?
  • Did you check what SQL queries your ORM generated?
  • Is there any computation that could be memoized or moved server-side?
  • Does your bundle size make sense for the value delivered?
  • Would this component handle 10x the data it currently gets?

None of this is exotic. It's just discipline. And discipline compounds.


Final Thoughts

The title of this post is provocative because the situation deserves some provocation. We've built an industry where you can ship a 49MB webpage (yes, that's a real number from a major news site) and nobody gets fired for it.

But users notice. Bounce rates notice. Your infrastructure bill notices.

Hardware has gotten dramatically faster over the past 20 years. Our software hasn't kept pace — not because the tools got worse, but because we stopped demanding that it keep up.

Start asking "why is this slow?" before "how do I add this feature?" You'll be surprised how often the answer is simple.


Need help building products that are actually fast? We've shipped 14+ products at Gerus-lab — Web3 platforms, AI tools, SaaS dashboards — and performance engineering is part of every project from day one. If you're dealing with slow load times, scaling pain, or just want to build it right the first time, let's talk → gerus-lab.com
