Nikola
Removing Third-Party Dependencies Made My Status Page Faster (Here’s How)

When I wrote about why a status page shouldn’t depend on third-party CDNs, the immediate response was predictable:

“Sure, but CDNs are faster.”

That assumption is so common it rarely gets tested. So I tested it.

I removed all third-party runtime dependencies from my status pages.

Nothing broke.

Performance improved — materially.

Not because of a trick, but because I removed architectural friction.

  • Fewer requests
  • Smaller payloads
  • More predictable render behavior
  • Lower TTFB

To make this concrete:

  • Europe TTFB dropped from ~3s to ~181ms
  • America dropped to ~530ms
  • Asia Pacific dropped to ~861ms
  • JavaScript bundle shrank from ~1MB to ~45KB

Europe benefited the most because it’s closest to the primary infrastructure. Other regions are still bounded by geography.

What changed wasn’t physics.
It was server-side inefficiency.

The architectural bottlenecks are gone.

What Didn't Change

Geography still matters.

Layered caching removes server-side bottlenecks.
It does not remove physical distance.

The goal wasn't to beat latency physics.
It was to remove unnecessary self-inflicted delays.

This isn’t an anti-CDN rant. CDNs are incredibly useful for many workloads. But a status page is not “many workloads.” It’s a very specific type of system with very specific goals.


What Comes Next

Layered caching removed most self-inflicted latency.

What remains is mostly geography.

The next step is introducing read-only replicas in additional regions to reduce cross-region database round-trip time.

Caching removes unnecessary work.
Regional replicas reduce distance.

They solve different problems.

Layered caching made the system efficient.
Replicas will make it geographically closer.

They’re complementary, not interchangeable.


How I Actually Sped It Up

There wasn’t one magic change. I removed a chain of small delays.

The biggest wins came from:

  • Letting Caddy handle more of the boring HTTP work (compression + headers)
  • Layered caching (browser → Redis → in-process → filesystem)
  • Shrinking what the browser downloads and executes

The result wasn’t just better synthetic metrics. It felt faster.

Before these changes, the cache hit rate was effectively 0% and the performance grade reflected that. After introducing layered caching and proper HTTP policies, repeated requests became dramatically cheaper.

PageSpeed.dev test results for status.statuspage.me (screenshot)


1. Caddy Tuning: Compression + Explicit Cache Policy

Caddy is great out of the box. But status pages are perfect for aggressive HTTP fundamentals.

example.com {
    encode zstd gzip

    # HTML: short-lived or revalidated
    @html path / /status* /incidents* /history*
    header @html Cache-Control "public, max-age=0, s-maxage=30, must-revalidate"

    # Fingerprinted assets: cache "forever"
    @assets path /assets/* /static/*
    header @assets Cache-Control "public, max-age=31536000, immutable"

    file_server
}

Most of my pages behave like “mostly static + occasional updates.”

So HTML is short-lived. Assets are immutable.

That alone reduced repeat downloads to near-zero.


2. Static Files: Make Repeat Visits Cheap

If fonts, icons, JS bundles, and CSS aren’t cached hard, you pay the same cost on every visit.

@fonts path /fonts/* /assets/fonts/*
header @fonts Cache-Control "public, max-age=31536000, immutable"

@images path /img/* /assets/img/*
header @images Cache-Control "public, max-age=604800"

Static delivery is the cheapest request your server can handle.
Ideally it never reaches the app at all.

After fixing headers, repeat-view performance improved dramatically.


3. Redis: Cache Expensive Shared Computations

Redis wasn’t about caching everything.

It was about preventing repeated fan-out queries:

  • Region summaries
  • Uptime aggregates
  • Incident rollups

// Hot path: serve the cached summary when present.
// redis.Get / redis.Set are thin helpers around the Redis client;
// computeRegionSummaryFromDB runs the expensive fan-out queries.
key := "status:public:v1:region_summary"
if v, ok := redis.Get(ctx, key); ok {
    return v
}

data := computeRegionSummaryFromDB(ctx)

// TTL + jitter to prevent cache stampedes
ttl := 30*time.Second + time.Duration(rand.Intn(10))*time.Second
redis.Set(ctx, key, data, ttl)

return data

Redis acted as a shock absorber between traffic spikes and the database.

Short TTLs. Shared expensive work cached. Fresh enough, stable enough.


4. In-Process Cache: Stop Hitting Redis for Ultra-Hot Keys

Once Redis is in place, the next bottleneck can be Redis itself.

For ultra-hot keys, a tiny in-memory cache helps.

if v, ok := memCache.Get(key); ok {
    return v
}

v := redisOrDB()
memCache.Set(key, v, 2*time.Second)
return v

It’s a small change, but shaving a few milliseconds off thousands of requests per minute adds up.


5. Filesystem Cache: Serve Pre-Rendered Output

For the hottest pages, I cached rendered artifacts on disk.

Conceptually:

  • Render final HTML
  • Write to disk with version/timestamp key
  • Serve directly when fresh
  • Regenerate in background when stale

Sometimes “read file and return it” beats “query + render + serialize.”


6. HTTP Revalidation: Cheap Refreshes (304 > 200)

Even when HTML can’t be cached long-term, refreshes can still be cheap.

Short-lived HTML + ETag / Last-Modified means many refreshes become:

304 Not Modified

That’s huge during incidents when users refresh constantly.


7. Asset Minification + Smaller Hydration Surface

I treated asset size like a performance budget.
The JavaScript bundle alone went from ~1MB to ~45KB after removing unnecessary hydration and minifying aggressively.

  • Minified JS (Terser) and CSS
  • Removed unused CSS
  • Compressed with zstd/gzip
  • Cached fingerprinted assets as immutable

And most importantly:

I reduced client-side hydration.

Status pages are content-first.
They don’t need SPA-level JavaScript.

If the page works without JavaScript, it’s already fast.
Then you add JS only where needed.


The Layered Caching Model

What I ended up with:

  • Browser cache for immutable assets
  • Short-lived HTML + revalidation
  • Redis for shared expensive reads
  • In-process cache for ultra-hot keys
  • Optional filesystem cache for rendered artifacts

Each layer reduces work for the layer beneath it.

That’s why the gains stack instead of overlapping.
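The server-side layers compose as a simple fall-through chain. A sketch of that wiring in Go (the `Layer` type and stub lookups are mine, purely for illustration):

```go
package main

import "fmt"

// A Layer tries to produce the page body; ok=false means "fall through
// to the next, slower layer".
type Layer struct {
	name string
	get  func() (string, bool)
}

// lookup walks the layers in order and returns the first hit,
// reporting which layer served it.
func lookup(layers []Layer) (string, string) {
	for _, l := range layers {
		if v, ok := l.get(); ok {
			return v, l.name
		}
	}
	return "", "none"
}

func main() {
	layers := []Layer{
		{"in-process", func() (string, bool) { return "", false }}, // miss
		{"redis", func() (string, bool) { return "", false }},      // miss
		{"filesystem", func() (string, bool) { return "<h1>OK</h1>", true }},
		{"render", func() (string, bool) { return "<h1>OK</h1>", true }},
	}
	body, servedBy := lookup(layers)
	fmt.Println(servedBy, body) // filesystem <h1>OK</h1>
}
```

In a real handler each hit would also backfill the faster layers above it, which is what makes the gains stack: every layer shields the one beneath.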


The Takeaway

For many systems, third-party CDNs absolutely make sense.

For incident communication paths, control often beats theoretical edge distribution.

Removing third-party runtime dependencies did not hurt performance.

It removed hidden latency.
It reduced the failure surface.
It improved repeat-visit behavior.

For status pages, predictability beats theoretical edge performance.

If a status page exists for when things break, it should be built like things will break.

Top comments (7)

Ned C

Those TTFB numbers are wild. 3s down to 181ms just by removing CDN round-trips is a good reminder that "add a CDN" isn't always the answer.

For a status page specifically this makes total sense. The one thing that absolutely needs to load when everything else is on fire shouldn't depend on anything external. Self-contained is the right call here.

Nikola

Appreciate it.

Quick nuance: the biggest win wasn’t removing CDN round-trips as much as removing self-inflicted work (cache policy + layered caching + smaller hydration surface). Geography still shows up in US/APAC, but the server-side bottlenecks are gone!

For status pages, I’m optimizing for predictable rendering under degraded conditions, not just best-case edge perf.

Ned C

good distinction. optimizing for degraded conditions is a completely different design goal than optimizing for best-case speed. most monitoring/status pages I've seen just chase lighthouse scores without thinking about what happens when things are actually breaking, which is when you need the page most.

Nikola

Exactly.

Lighthouse optimizes for ideal conditions. Status pages need to optimize for partial failure and high-refresh scenarios.

During incidents, users refresh constantly. If every refresh triggers heavy server work or full client hydration, you’re compounding the problem.

Designing for degraded conditions changes almost every tradeoff.

Ned C

The constant-refresh point is underrated. Most performance benchmarks assume a single page load, but during an incident your status page might get hit hundreds of times per minute by the same users. If each load triggers a full hydration cycle or expensive server-side work, you're basically DDoSing yourself at the worst possible time.

Designing for that scenario first and working backwards to normal conditions seems like the right order of operations.

Ned C

really good point about designing for degraded conditions. most performance work assumes everything is healthy, but the patterns you need under load or partial failure are completely different. that refresh storm problem alone makes most standard caching strategies useless.

Nikola

This wasn’t about “CDNs are bad.”

It was about removing self-inflicted latency on a read-heavy, incident-critical surface.

The biggest gain came from layered caching and shrinking the hydration surface, not from chasing synthetic scores.


Curious how others approach cross-region reads in similar setups. Do you lean more on aggressive caching, regional replicas, or both?