
Darian Vance

Posted on • Originally published at wp.me

Solved: Why does every email builder still feel so slow?

🚀 Executive Summary

TL;DR: Email builders often suffer from sluggish performance due to I/O bottlenecks, not CPU or RAM limitations, as applications spend significant time waiting for disk operations. Solutions involve upgrading disk IOPS for immediate relief, offloading static assets to object storage like S3 for a permanent architectural fix, or implementing in-memory caches like Redis for extreme performance needs.

🎯 Key Takeaways

  • Application slowness, despite healthy CPU and RAM, frequently indicates an I/O bottleneck, where the application is ‘starved’ waiting for disk operations, identifiable with tools like iotop.
  • Offloading static assets (images, templates) to dedicated object storage services like Amazon S3 or Google Cloud Storage is the ‘correct architecture’ for long-term performance, freeing up the server’s local disk I/O for application logic.
  • Implementing an in-memory cache like Redis or Memcached can provide sub-millisecond access for frequently accessed ‘hot data’ but introduces significant architectural complexity, particularly concerning cache invalidation, and should be reserved for extreme performance requirements.

Tired of sluggish email builders? Uncover the hidden I/O bottlenecks killing your performance and learn three practical DevOps solutions—from quick disk upgrades to robust architectural refactoring—to make your tools fly again.

Why is Our Email Builder Still So Slow? A DevOps War Story

I still remember the 3 AM PagerDuty alert. It wasn’t for a downed server; it came in as a Slack message from our Head of Marketing with the subject “URGENT: Black Friday Campaign Launch Blocked.” I jumped on a call and found the entire marketing team staring at a loading spinner. Our internal email builder, the one we built to give them creative freedom, was taking minutes—literally minutes—to load a template or save a small image change. The servers looked fine: CPU was idling, memory was plentiful. Yet the application felt like it was running through mud. That night, I learned a lesson that every DevOps engineer eventually learns the hard way: it’s almost always the I/O.

The Real Culprit: You’re Starved for I/O, Not CPU

When an application like an email builder feels slow but the main server metrics (CPU, RAM) look healthy, it’s easy to get confused. We instinctively want to throw more processing power at the problem. But in my experience, nine times out of ten, the bottleneck isn’t processing; it’s the time the application spends waiting for the disk.

Think about what an email builder does:

  • It reads template files from a disk.
  • It writes new versions of those templates to a disk.
  • It uploads, processes, and saves images to a disk.
  • It pulls user data from a database, which itself is constantly reading and writing to its own disk.

These are all Input/Output (I/O) operations. When your application is running on a server with a slow or over-utilized disk (like a standard, general-purpose cloud volume), every one of these actions joins a queue. Your powerful CPU sits there, twiddling its thumbs, waiting for the disk to deliver the data it needs. The application isn’t slow; it’s starved. We confirmed it with a simple iotop command on the server, which showed our main application process at 99% I/O wait.
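If you want to confirm this on your own box, a rough one-shot snapshot of system-wide I/O wait can be pulled straight from `/proc/stat` on Linux; `iostat -x` and `iotop -o` remain the better live views:

```shell
# One-shot snapshot of aggregate iowait (Linux): field 6 of the
# "cpu" line in /proc/stat counts jiffies spent waiting on I/O.
awk '/^cpu /{total=0; for(i=2;i<=NF;i++) total+=$i; printf "iowait: %.1f%% of CPU time\n", 100*$6/total}' /proc/stat

# For the live, per-device and per-process view:
#   iostat -x 1     # %util and await per disk, refreshed every second
#   iotop -o        # only the processes currently doing I/O
```

A consistently high iowait share with an idle CPU is the signature of the “starved, not slow” pattern described above.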

The Fixes: From a Band-Aid to a Cure

Okay, so we know the problem is disk I/O. How do we fix it? I’ve seen teams handle this in a few ways, ranging from a quick-and-dirty fix to a proper architectural overhaul. Here’s my playbook.

1. The Quick Fix: Upgrade The Disk

This is the “stop the bleeding” approach. If your application is running on a cloud VM, the fastest way to alleviate I/O pain is to provision a faster disk. In AWS, this means moving from a General Purpose SSD (like gp2 or gp3) to a Provisioned IOPS SSD (io1 or io2). You’re essentially paying for a dedicated, high-speed lane for your data.

It’s a straightforward infrastructure change, often requiring little to no downtime. You’re just telling your cloud provider, “Give me more disk speed, and send me the bill.”
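On AWS, the whole change can be a single CLI call against the live volume (the volume ID and IOPS figure below are placeholders; moving gp3 to a higher provisioned IOPS tier is also an option as a cheaper middle step):

```shell
# Hypothetical volume ID; size your IOPS from the observed queue depth.
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --volume-type io2 \
  --iops 10000
```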

Pro Tip: This is a perfectly valid short-term solution. When the marketing team is blocked from sending a multi-million dollar campaign, you don’t have time to refactor code. You apply the expensive band-aid, get them working, and then you plan the real fix.

2. The Permanent Fix: Offload Assets to Object Storage

The real, long-term solution is to stop treating your server’s local disk as a filing cabinet for everything. Your web server’s primary job is to run application code, not to be a high-performance file server for static assets like images, CSS, and HTML templates.

The correct architecture is to offload all of that static content to a dedicated object storage service, like Amazon S3 or Google Cloud Storage.

  1. The application logic is changed. When a user uploads an image, the app doesn’t save it to /var/www/uploads. Instead, it uses the cloud SDK to upload it directly to an S3 bucket.
  2. The database (our trusty prod-db-01) doesn’t store the asset; it stores a pointer to it—the S3 object key.
  3. When a template is loaded, the application fetches the assets from S3, not the local disk.

This change fundamentally frees up your server’s disk I/O to do what it’s supposed to: run the application and serve dynamic requests. The heavy lifting of storing and serving files is moved to a service built for exactly that purpose.
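As a sketch of steps 1 and 2, here is roughly what the upload path looks like. The names (`store_upload`, `s3_put`, `db_insert`) are illustrative, not a real SDK; the storage and database calls are injected as plain callables so the sketch stays self-contained, and in production `s3_put` would wrap something like boto3’s `put_object`:

```python
import uuid
from datetime import datetime, timezone

def store_upload(filename, data, s3_put, db_insert):
    """Upload image bytes to object storage and record only a pointer.

    s3_put(key, data) writes to the bucket; db_insert(row) records the
    pointer. Both are injected here purely for illustration.
    """
    # Build a collision-free object key instead of a local path
    # like /var/www/uploads/<filename>
    key = f"uploads/{datetime.now(timezone.utc):%Y/%m/%d}/{uuid.uuid4().hex}-{filename}"
    s3_put(key, data)                                  # bytes go to the bucket...
    db_insert({"filename": filename, "s3_key": key})   # ...the DB stores only the key
    return key
```

The key point is the second line of the body: the database row carries the S3 object key, never the asset itself.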

3. The ‘Nuclear’ Option: Implement an In-Memory Cache

What if even S3 isn’t fast enough? This can happen with extremely high-traffic builders where the same few templates are accessed thousands of times per minute. The latency of fetching from S3, while low, adds up at that volume. For this scenario, we bring in an in-memory cache like Redis or Memcached.

The logic becomes a waterfall:

def get_template(template_id):
    # 1. Check the super-fast in-memory cache first
    data = redis.get(f"template:{template_id}")
    if data:
        return data  # Cache hit! Sub-millisecond.

    # 2. On a miss, fetch from the reliable source of truth (S3)
    data = fetch_from_s3(f"templates/{template_id}.html")
    if data:
        # 3. Populate the cache for the *next* request, with a
        #    one-hour time-to-live (ex=3600 seconds) so stale copies age out
        redis.set(f"template:{template_id}", data, ex=3600)

    return data

Warning: Don’t jump to this solution first. It adds significant complexity to your architecture. Cache invalidation (“How do I make sure the cache is cleared when a template is updated?”) is one of the classic hard problems in computer science. Only use this when you have a clear performance need for sub-millisecond access to hot data.
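For the invalidation question in that warning, the simplest workable answer is delete-on-write: whenever a template is saved, drop its cache key so the next read misses, refetches from S3, and repopulates with the fresh copy. A minimal, self-contained sketch, where `DictCache` is an in-memory stand-in for a real Redis client:

```python
class DictCache:
    """Stand-in for a Redis client (get/set/delete), for illustration only."""
    def __init__(self):
        self._d = {}
    def get(self, k):
        return self._d.get(k)
    def set(self, k, v, ex=None):   # TTL ignored in this stub
        self._d[k] = v
    def delete(self, k):
        self._d.pop(k, None)

def save_template(cache, store, template_id, html):
    # Persist to the durable store first, then invalidate: a reader that
    # races this write either sees the old cached copy briefly or misses
    # the cache and refetches the new version.
    store[f"templates/{template_id}.html"] = html
    cache.delete(f"template:{template_id}")
```

Write-through (updating the cache instead of deleting the key) is the other common choice, but delete-on-write is harder to get wrong when multiple app servers share one cache.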

Comparing The Solutions

To make it clearer, here’s how I’d break down the options for my team:

| Solution | Effort | Cost | Long-Term Viability |
| --- | --- | --- | --- |
| 1. Upgrade Disk (IOPS) | Low (infra change) | Medium to High (recurring) | Poor (it’s a band-aid) |
| 2. Offload to S3 | Medium (code change) | Low (S3 is cheap) | Excellent (correct architecture) |
| 3. In-Memory Cache (Redis) | High (new service + code change) | Medium (Redis server cost) | Situational (extreme performance needs) |

That night, we went with Option 1 to get the campaign out the door. But the very next sprint, we implemented Option 2. The builder has been flying ever since, and my PagerDuty has been wonderfully quiet. The moral of the story? Next time your application feels slow, stop looking at the CPU meter and start investigating your I/O. Your sanity will thank you for it.



👉 Read the original article on TechResolve.blog


☕ Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance
