Earlier this year I migrated phyfun.com from www to non-www. On paper it's a five-minute job. In practice it took the site down for several hours, generated thousands of Search Console errors, and taught me more about SiteGround's hosting stack than I ever wanted to know.
This is the war story. If you're on SiteGround, or any Nginx-in-front-of-Apache hybrid setup, and you've ever wondered why your .htaccess HTTPS redirect rules don't behave the way the docs say they should — this post is for you.
The plan
phyfun.com is a browser physics-games site that's been online for years. The domain was historically configured with www.phyfun.com as canonical, with the bare domain redirecting to www. I wanted to flip that — make the bare domain canonical, redirect www to bare.
The reasoning was simple. I'd been moving toward shorter, cleaner URLs across all my sites, and Search Console was showing weird canonical conflicts on a few pages. Cleaning up the canonical had been sitting on my "do this when I have an evening" list for a while.
So one evening I sat down to do it. The plan was:
- Add canonical tags pointing to non-www on all pages.
- Update sitemap to use non-www URLs.
- Add 301 redirects in .htaccess from www to non-www, and from HTTP to HTTPS.
- Update Search Console with the non-www property as canonical.
- Wait, watch, sleep.
Steps 1 and 2 were trivial. Step 3 is where everything went wrong.
The naive redirect rule
Here's the .htaccess I started with — the kind of rule you'll find in approximately every Stack Overflow answer about Apache HTTPS redirects:
RewriteEngine On
# Force HTTPS
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# Force non-www
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [L,R=301]
This is correct on a vanilla Apache setup. It is not correct on SiteGround. I deployed it. The site went down.
What I saw in the browser: ERR_TOO_MANY_REDIRECTS. What I saw in the Apache logs (after I figured out where SiteGround keeps them): every single request was being 301'd back to itself, in an infinite loop.
I rolled back. The site came up. I went to make coffee.
Why this fails on SiteGround
It took me an embarrassing amount of digging to understand what was happening. The short version:
SiteGround runs Nginx in front of Apache. Nginx terminates TLS. By the time the request reaches Apache, it's already been decrypted, and Apache sees a plain HTTP request — even when the user is browsing over HTTPS.
So when my .htaccess rule says:
RewriteCond %{HTTPS} off
Apache evaluates %{HTTPS} as off for every single request, including HTTPS ones, because that's what Apache sees. The rule then 301-redirects the request to HTTPS. The browser follows the redirect, hits Nginx, which terminates TLS again, hands the now-HTTP request to Apache, which sees %{HTTPS} = off, redirects again, and the cycle continues until the browser gives up.
This is a really common gotcha on hosts that use this kind of hybrid setup — SiteGround, certain WP Engine configurations, some Cloudways stacks, anywhere there's a TLS-terminating reverse proxy in front of Apache. If you've never run into it, count yourself lucky.
The fix: X-Forwarded-Proto
The fix is to stop trusting Apache's view of the protocol and instead read the header that Nginx adds when forwarding the request. That header is X-Forwarded-Proto, and it contains the actual protocol the client used (http or https).
Here's the corrected rule:
RewriteEngine On
# Force HTTPS — using X-Forwarded-Proto for Nginx-in-front-of-Apache hosts
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# Force non-www
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [L,R=301]
The key change is RewriteCond %{HTTP:X-Forwarded-Proto} !https instead of RewriteCond %{HTTPS} off. Apache's %{HTTPS} is wrong on this kind of host. The header is right.
A few things worth knowing about this syntax:
- %{HTTP:HeaderName} is how you access an arbitrary HTTP header in mod_rewrite. The HTTP: prefix tells Apache to look in the request headers, not its built-in environment variables.
- The ! negates the match. So !https means "if X-Forwarded-Proto is anything other than https, redirect."
- This works because Nginx faithfully sets X-Forwarded-Proto when forwarding. If you ever use this rule on a host where Nginx isn't doing that, it will silently misbehave. Confirm your host actually sets the header before relying on it — have the server echo the request headers it receives (curl -I by itself only shows response headers), for example with a tiny PHP file that dumps them.
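One way to do that header-dump check without PHP: a minimal header-echo server in stdlib Python (a hypothetical stand-in for the "tiny file" approach — it assumes you can run Python behind the proxy). Point a test location at it, request it over HTTPS, and every header Nginx forwarded, X-Forwarded-Proto included, comes back in the response body:

```python
import http.server

class HeaderEcho(http.server.BaseHTTPRequestHandler):
    """Echo every incoming request header back in the response body."""
    def do_GET(self):
        body = "".join(f"{k}: {v}\n" for k, v in self.headers.items()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep stderr quiet
        pass

# To run: http.server.HTTPServer(("127.0.0.1", 8080), HeaderEcho).serve_forever()
# then request it through the proxy and look for the X-Forwarded-Proto line.
```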
After deploying this, the site came back up. The redirect loop was gone. I made more coffee.
The bonus problem: .aspx zombie URLs
While I was already in .htaccess, I decided to deal with another issue I'd been ignoring.
phyfun.com had been on a different platform years ago, and that platform used .aspx URLs. When I rebuilt the site on a different stack, those URLs went away — but the wider web had no way to know that. Old links to .aspx pages were still being requested, by humans and bots, years later.
For a long time those requests had been hitting my generic 404 page, which returned a 200 OK status with "page not found" content. This is what's called a soft 404 — the response body says "not found" but the HTTP status code says "OK". Google really doesn't like this. Search Console had been flagging hundreds of these for ages, which I'd been politely ignoring.
The right answer for URLs that are gone and never coming back is HTTP 410 Gone, not 404. The semantic difference matters:
- 404 Not Found means "the resource isn't here right now; maybe try later."
- 410 Gone means "this resource is permanently gone; stop asking, deindex it."
Google treats them very differently. 410s drop out of the index much faster than 404s.
The .htaccess rule:
# Permanently mark old .aspx URLs as gone
RewriteRule \.aspx$ - [G,L]
The [G] flag returns 410 Gone. The - means "don't substitute the URL." The [L] stops further rewrite processing for these URLs.
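For completeness, mod_alias can do the same job without mod_rewrite — a sketch, assuming your plan allows RedirectMatch in .htaccess:

```apache
# mod_alias equivalent of the [G] rule: answer 410 Gone for any .aspx path
RedirectMatch gone "\.aspx$"
```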
Within a few weeks of deploying this, Search Console's "Soft 404" report stopped growing. Within about two months, the old .aspx URLs were essentially fully removed from Google's index. The Search Console error counts dropped from "hundreds, occasionally rising" to "zero."
If you have a site with a long history of URL changes, run a "soft 404" audit. Anything that's permanently gone should return 410, not 404, and definitely not 200.
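The audit itself is mostly a status-vs-body comparison. A minimal Python sketch of the classification — the phrase list and labels are my own, so tune them to whatever your 404 template actually says:

```python
# Classify a (status_code, body) pair the way a soft-404 audit would.
NOT_FOUND_PHRASES = ("page not found", "not found", "doesn't exist")  # heuristic

def classify(status: int, body: str) -> str:
    looks_missing = any(p in body.lower() for p in NOT_FOUND_PHRASES)
    if status == 200 and looks_missing:
        return "soft 404 -- fix this"  # body says "not found", status says OK
    if status == 404:
        return "hard 404 -- fine if the URL might return"
    if status == 410:
        return "gone -- correct for permanently removed URLs"
    return "ok"

print(classify(200, "<h1>Page Not Found</h1>"))  # soft 404 -- fix this
print(classify(410, ""))                         # gone -- correct for ...
```

Feed it the responses from a crawl of your old URL list and anything tagged "soft 404" is a candidate for a 410 rule.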
The full corrected .htaccess
For reference, here's roughly what I ended up with:
RewriteEngine On
# 1. Force HTTPS (Nginx-in-front-of-Apache aware)
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# 2. Force non-www
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [L,R=301]
# 3. Permanently mark old .aspx URLs as Gone
RewriteRule \.aspx$ - [G,L]
# 4. Standard rewrite rules for the rest of the site below…
That's the migration in three blocks. The first one is the trap I fell into. The second is straightforward. The third is the tidying-up that should have happened years earlier.
Lessons learned
Trust your host's actual stack, not the generic Apache docs. "Nginx-in-front-of-Apache" is a different deployment from "vanilla Apache," and a lot of standard .htaccess snippets are subtly wrong on it. Always check what your host actually runs before pasting Stack Overflow rules.
X-Forwarded-Proto is the canonical way to detect protocol behind a reverse proxy. Not just for Apache — Express, Flask, Django, every web framework has equivalent helpers (req.protocol with trust proxy, request.is_secure, request.scheme with SECURE_PROXY_SSL_HEADER). If you've ever wondered why your framework "thinks" HTTPS is HTTP, this is usually the answer.
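Those framework helpers all reduce to the same check. A stdlib-Python sketch of the idea (not any particular framework's code): honor the header only when you trust the proxy that set it, because any client can send X-Forwarded-Proto too:

```python
def effective_scheme(headers: dict[str, str], *, trust_proxy: bool) -> str:
    """Roughly what req.protocol / request.scheme do behind a proxy."""
    if trust_proxy:
        # Take the first value in case several proxies appended to the header.
        forwarded = headers.get("X-Forwarded-Proto", "").split(",")[0].strip()
        if forwarded in ("http", "https"):
            return forwarded
    return "http"  # what the proxy-to-app hop itself used

# Behind a trusted TLS-terminating proxy:
print(effective_scheme({"X-Forwarded-Proto": "https"}, trust_proxy=True))   # https
# The same header from an untrusted client is ignored (spoofing guard):
print(effective_scheme({"X-Forwarded-Proto": "https"}, trust_proxy=False))  # http
```

That trust flag is exactly what Express's "trust proxy" setting and Django's SECURE_PROXY_SSL_HEADER are configuring.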
Soft 404s are not the same as 404s. Google treats them differently. If a page is permanently gone, return 410. If a page is broken right now but might come back, return 503. If you're just genuinely unsure, 404. The status code matters.
Test redirect rules on a staging copy first. I didn't, because the rule was "obvious." It wasn't. Twenty minutes of staging would have saved me three hours of production hot-fixing. I keep saying I've learned this lesson and I keep proving I haven't.
Search Console mostly fixes itself if you fix the underlying signals. The soft 404 cleanup showed me that GSC reports are largely a function of what your server actually sends. Once the server stops sending bad signals, the reports clear, often without further intervention.
Wrapping up
If you're hosting on SiteGround or any similar Nginx-in-front-of-Apache stack and you've ever had a redirect rule that "just doesn't work," X-Forwarded-Proto is probably what you need.
If you have an old site with URLs that no longer exist, switching them from soft 404 to 410 Gone is one of the lowest-effort, highest-leverage hygiene moves available — especially if Google's been quietly downranking those URLs for years.
And if you've ever taken down your own site at 2am, welcome. The club has many members.