Gavin Clive

Posted on • Originally published at gavinclive.hashnode.dev

Frontend Engineers Should Care More About Infrastructure

One 304 Not Modified is easy to ignore. A page full of them is latency.

We migrated CDN providers that quarter.

The new provider was meaningfully cheaper, and its edge coverage in Southeast Asia looked comparable. The migration itself went smoothly. Traffic moved over, error rates stayed normal, and the infra team signed it off.

One thing had changed underneath us: the Cache-Control headers our old provider had been injecting at the edge did not carry over.

Before that migration, I did not think about those headers much. Not because they were unimportant, but because they had already been decided below the application layer. The asset pipeline emitted files, the CDN served them, the browser cached them, and the page worked.

The new provider was still serving our static images correctly. ETag was present. Repeat loads returned 304 Not Modified. Nothing obvious was being re-downloaded. At a glance, the network tab looked fine.

Nothing was broken.

That was what made it hard to spot.

The browser was doing the responsible thing. The server was doing the responsible thing. DevTools showed small transfer sizes, and the status code looked like evidence that caching worked. If you were scanning for wasted bytes, you would move on.

The LCP regression showed up two weeks later in RUM data.

What finally made it click was a Sentry trace from an icon-heavy page. It was not one suspicious request. It was a page full of tiny validations, all individually harmless-looking, sitting in the same trace. Together, they made the page feel like it was checking its pockets before every step.


Without Cache-Control or Expires, the browser falls back to heuristic caching. That means a short, unpredictable freshness window; the usual heuristic is a fraction of the time since Last-Modified, typically 10%.
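
To make that concrete, here is a reconstruction of the kind of response our new provider was sending. These are illustrative headers, not our actual ones, but the shape is the same: no explicit freshness policy at all.

```
HTTP/1.1 200 OK
Content-Type: image/png
Last-Modified: Mon, 01 Jan 2024 00:00:00 GMT
ETag: "abc123"
```

Under the 10% heuristic, an image last modified a day before the fetch gets roughly 2.4 hours of freshness. One modified an hour earlier gets six minutes. Anything recently deployed has the browser back asking the edge almost immediately.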

So repeat visits kept sending conditional GET requests with If-None-Match to ask whether the image had changed. The server replied 304. The image body was not downloaded again, but the browser still paid for a round trip before rendering the largest contentful element.
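
Each of those validations is a full request-response round trip, even though nothing comes back. Roughly this exchange, with an illustrative path:

```
GET /icons/cart.svg HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
```

No body on the wire, but the browser still waited on the edge before it could use the copy it already had.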

In a Jakarta office on a fast connection, this was invisible.

On a mid-range Android device on 4G, it compounded.

LCP does not care that the requests ended in 304. It cares that rendering waited.

I already knew a 304 was not free. What I had not internalized was how invisible the aggregate could be. A validated cache hit saves bandwidth, but if a page is full of them, the latency still shows up in the trace.


The distinction worth remembering is simple: ETag asks, "has this changed?" Cache-Control asks, "do I need to ask right now?"

Without an explicit freshness policy, the browser can keep asking that question much sooner than you expect.

For static product and UI images served at stable URLs, we should have been sending something like:

```
Cache-Control: max-age=604800, stale-while-revalidate=86400
```

Seven days of freshness, then a one-day window in which the stale copy can still be served while the browser revalidates in the background. Repeat visits skip the validation request entirely, so the browser does not need to ask the edge before it can render.

The important constraint is stable URLs. If the same URL can point to a different image tomorrow, a long max-age is a footgun. But for versioned assets, product images with controlled invalidation, and UI images that do not change under the same path, freshness is the point.
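
For fingerprinted build artifacts, where the hash in the filename changes whenever the content does, the usual pattern goes further still. A sketch of the common convention, not our exact config:

```
Cache-Control: public, max-age=31536000, immutable
```

immutable tells the browser not to revalidate even on a normal reload. It is only safe because a changed file gets a new URL, which is the same stable-URL constraint stated more aggressively.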

This is now one of the checks I wish I had treated as frontend work. Not just whether the image loads. Not just whether the status is 200 or 304. Check the trace for repeated validation spans. Check the response headers on the actual LCP resource. Check whether those spans sit before the LCP mark.
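
You can do a rough version of that check from the console. A minimal sketch, assuming a Chromium browser, since responseStatus on resource timing entries is not available everywhere yet:

```
// Log the LCP entry, then list every 304 that finished before it.
new PerformanceObserver((list) => {
  const lcp = list.getEntries().at(-1);
  console.log('LCP at', Math.round(lcp.startTime), 'ms:', lcp.url || lcp.element);

  const validations = performance
    .getEntriesByType('resource')
    .filter((r) => r.responseStatus === 304 && r.responseEnd <= lcp.startTime);

  console.table(
    validations.map((r) => ({ url: r.name, ms: Math.round(r.duration) }))
  );
}).observe({ type: 'largest-contentful-paint', buffered: true });
```

A healthy page shows an empty table on repeat visits.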


The infra team did nothing wrong. The migration was correct.

The old provider had simply been doing us a quiet favor, and we did not know enough to notice when it stopped.

That was the real bug.

If I had paired that icon-heavy Sentry trace with the missing Cache-Control header sooner, I would have found this in an afternoon instead of two weeks into a KPI cycle.

After restoring explicit freshness on those media responses, repeat visits no longer had to validate the same image URLs before rendering.
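
Verifying the fix is a one-liner from the console, assuming the CDN allows cross-origin reads; Cache-Control is a CORS-safelisted response header, so it is readable when the fetch succeeds. The URL here is hypothetical:

```
// Spot-check the freshness policy on an asset
fetch('https://cdn.example.com/images/hero.png', { method: 'HEAD' })
  .then((res) => console.log(res.headers.get('cache-control')));
```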

Sentry will tell you LCP is 4.2s. Chrome DevTools will show you the waterfall. But if the response headers read like a foreign language, you can spend weeks looking for a frontend bug that is not in the codebase.

The browser is the last mile.

Infrastructure is the road under it.

Frontend engineers should know what it is made of.
