A 500 error is loud. Your monitoring fires. Your on-call gets paged. Someone fixes it within the hour.
A 200 that serves broken content is silent. Your monitoring dashboard stays green. Your users see a blank page, a broken checkout, a form that submits to nothing. And nobody on your team knows until a customer complains — or churns.
Modern web stacks fail in ways a simple status code cannot describe. Here are six common patterns where 200 OK actively hides broken websites.
1. The Phantom Deploy: Missing Bundle Hashes
Modern frontend frameworks produce hashed filenames: main.a4f2c.js, styles.b7e91.css. On every deploy, these hashes change. The HTML document references the new filenames. The old files get cleaned up.
Here's the failure mode:
You deploy at 2pm. The build produces vendor.c8d13.js and app.7fb2e.css. Your CDN edge nodes in Frankfurt still serve the HTML from the previous build — which references vendor.9a1b0.js and app.3de4f.css. Those files are gone. The document loads fine. Every asset 404s. Users in Europe see a blank screen for 45 minutes until the CDN cache expires.
This is especially common on:
- Netlify with atomic deploys but CDN propagation delays
- Vercel with ISR pages serving stale HTML
- Any CDN where HTML cache TTLs are longer than deploy frequency
The fix is cache invalidation discipline — but the point is that your uptime monitor never sees the problem. It fetches the document, gets 200, and moves on. It never checks whether main.a4f2c.js actually exists.
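The only real defense on the monitoring side is to treat the document as a manifest and verify every asset it references. A minimal sketch of the extraction step, using only the Python standard library (the markup and URLs below are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class AssetExtractor(HTMLParser):
    """Collect the script src and stylesheet href URLs a document references."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("src"):
            self.assets.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(urljoin(self.base_url, attrs["href"]))

def extract_assets(html, base_url):
    """Return every script/stylesheet URL referenced by the HTML document."""
    parser = AssetExtractor(base_url)
    parser.feed(html)
    return parser.assets
```

A production check would then issue a HEAD request for each extracted URL and alert on anything other than 200, catching the vendor.9a1b0.js that no longer exists even though the document itself loaded fine.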
2. The Silent MIME Rejection
Browsers enforce strict MIME type checking for scripts and stylesheets. If your server responds with Content-Type: text/html for a JavaScript file, the browser silently blocks execution. No visible error. Your SPA just doesn't boot.
How does a JS file end up served as HTML?
- A CDN or reverse proxy intercepts the request and returns an error page (HTML) instead of the actual file
- A misconfigured Nginx try_files directive falls through to index.html for any path, including .js files
- Cloudflare's custom error pages replace a 404 with a branded HTML error page, served as text/html
The server logs show 200. The CDN metrics show successful delivery. The browser silently ignores the script. Your monitoring sees 200 and reports everything healthy. The only way to catch this is to verify the Content-Type header against the expected MIME type for each critical asset.
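A sketch of that verification, deriving the expected type from the file extension via Python's mimetypes table (mime_mismatch is a hypothetical helper, not a library API):

```python
import mimetypes
from urllib.parse import urlparse

def mime_mismatch(url, content_type):
    """Return True if the served Content-Type contradicts what the URL's
    extension implies, e.g. a .js file served as text/html."""
    path = urlparse(url).path
    expected, _ = mimetypes.guess_type(path)
    if expected is None:
        return False  # unknown extension: nothing to compare against
    # Drop parameters: "text/html; charset=utf-8" -> "text/html"
    actual = content_type.split(";")[0].strip().lower()
    return actual != expected
```

Run this against every critical script and stylesheet, not just the document, since the document is exactly the response that will look healthy.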
3. The Redirect Ouroboros
A redirect loop is usually obvious — the browser shows ERR_TOO_MANY_REDIRECTS. But most monitoring tools follow a limited number of redirects and report the final status, some don't follow redirects at all, and the loop might only trigger for specific user agents, cookie states, or geographic regions.
Common causes:
- A Shopify app redirects /products to /collections, while a theme redirect sends /collections back to /products
- Mixed HTTP/HTTPS configurations where each protocol redirects to the other
- CDN-level redirect rules conflicting with origin-level rules
Your monitoring sees the first 301 and considers that a valid response. Meanwhile, real browsers with real cookies hit an infinite loop.
4. The Ghost Server
After a DNS migration, CDN swap, or infrastructure change, your domain might resolve to the wrong server. The wrong server still responds — it's just not your server anymore.
Scenarios:
- You migrate from AWS to Vercel but a DNS record still points a subdomain to the old EC2 instance. The instance serves stale content from 6 months ago. 200 OK.
- A CDN failover activates and sends traffic to a backup origin that was never updated. It serves the version from the last time someone deployed to it. 200 OK.
- A load balancer health check passes because the server responds, but it's responding with a default page, not your application.
The response is valid HTTP. The status is 200. The content is just... wrong. It's from a different version, a different environment, or a different application entirely.
Catching this requires tracking the resolved IP and server identity over time — and comparing the content fingerprint against a known baseline. Not just "did it respond" but "did it respond with the right content from the right server."
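That kind of baseline can be sketched with stdlib DNS resolution and a content hash; the snapshot structure here is an illustration, not a fixed format:

```python
import hashlib
import socket

def content_fingerprint(body: bytes) -> str:
    """Stable short hash of a response body, for comparison against
    a known-good baseline rather than byte-for-byte storage."""
    return hashlib.sha256(body).hexdigest()[:16]

def origin_snapshot(host: str, body: bytes) -> dict:
    """Record which IP the hostname currently resolves to alongside the
    content fingerprint; diffing snapshots over time exposes a domain
    that has quietly started resolving to the wrong server."""
    return {
        "ip": socket.gethostbyname(host),
        "fingerprint": content_fingerprint(body),
    }
```

A ghost server shows up as a fingerprint that jumps back to an old value, or an IP that changes when no migration was planned, both invisible to a status-code check.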
5. The API Gateway Mask
API gateways, reverse proxies, and edge functions sit in front of your application. When the app behind them crashes, the gateway often catches the error and returns its own response — with a 200 status code.
Your landing page URL returns {"error":"unauthorized","statusCode":401} as a 200 OK with Content-Type: application/json. Or your edge function crashes and the platform serves a generic fallback page. Valid HTML. 200 OK. Not your application.
Common with AWS API Gateway fronting Lambda, Cloudflare Workers with fallback behavior, and Vercel edge middleware that wraps errors. If your monitoring doesn't verify that the response content type matches what you'd expect for that URL, these failures pass undetected.
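One way to catch the mask is to inspect a 200 response for the signature of a JSON error payload where HTML was expected. The field names below mirror the example above; your gateway's error shape may differ:

```python
import json

def looks_like_masked_error(content_type, body):
    """Flag a 200 response whose body is actually a JSON error payload,
    a common signature of a gateway swallowing an upstream failure."""
    base_type = content_type.split(";")[0].strip().lower()
    if base_type != "application/json":
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return isinstance(payload, dict) and (
        "error" in payload or "statusCode" in payload
    )
```

The same idea generalizes: for each monitored URL, assert the content type you expect (text/html for a landing page) and treat anything else as a failure, whatever the status code says.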
6. The Third-Party Collapse
Your site loads. Your HTML is correct. Your own assets are fine. But a critical external dependency is down — and your page breaks without your server knowing.
This matters most when the dependency is load-bearing: Stripe's JS SDK failing means your checkout form renders as an empty div. An auth provider's script timing out means users can't log in. A CDN-hosted app bundle 404ing means your SPA doesn't boot. A bot-protection script blocking form submission means your lead capture is dead.
Your server returns 200. Your monitoring checks your domain. The broken resource is on someone else's domain. That gap is where this failure hides.
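Closing that gap starts with knowing which of a page's resources live on domains you don't control. A small sketch, where own_host is whatever hostname your monitor treats as first-party:

```python
from urllib.parse import urlparse

def third_party_assets(asset_urls, own_host):
    """Return the subset of asset URLs hosted on other domains; these are
    the dependencies that can break the page without your server ever
    logging an error."""
    return [
        url for url in asset_urls
        if urlparse(url).netloc not in ("", own_host)
    ]
```

Each external URL then deserves its own availability check, separate from the first-party monitor, because Stripe being down is your outage even though it is not your infrastructure.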
The Common Thread
All six patterns share one characteristic: the HTTP status code says everything is fine while the user experience is broken.
Status codes tell you whether a request succeeded at the protocol level, not whether the page is actually usable. A 200 means "the server processed your request and returned a response." It doesn't mean the response contains what the user needs.
A lot of monitoring still relies on a false assumption: that a successful HTTP response is a good proxy for a working site. For simple, server-rendered pages with no external dependencies, that was close enough. For modern web applications with hashed assets, CDN layers, API gateways, SPAs, and third-party dependencies, it's a dangerous gap.
What Would Actually Catch These
All six failures live above the layer that status codes can see. Catching them means shifting the question from "did the server respond?" to "does the response actually constitute a working page?"
That means treating the HTML document as a manifest, not a destination — following up on the assets it references, verifying their types, and confirming they load. It means following redirect chains to completion rather than trusting the first hop. It means tracking what your site returns over time so you notice when the content silently changes. And it means checking from more than one location, because CDN failures don't happen everywhere at once.
This is the problem we built Sitewatch to solve. But regardless of what tool you use, the principle matters: stop trusting 200 OK as proof that your site works.
Have you been burned by a silent failure that hid behind 200 OK? I'd be curious to hear what the failure pattern was — these stories are how the industry gets better at catching them.
Top comments (1)
The phantom deploy scenario is real, but I'd argue the fix isn't monitoring — it's making deploys atomic at the CDN level. If edge nodes serve stale HTML referencing deleted assets, the deploy pipeline is the bug, not the status code.