Escaping Managed Hosting: What Happened When We Migrated a WooCommerce Site to a VPS (And Got Attacked)
Managed WordPress hosting sounds like a great deal — until it isn't. This is the story of migrating a WooCommerce + WPML site off a major managed host, the chaos that followed immediately after, and the hard lessons learned about what managed hosting was silently doing for us that we didn't fully appreciate until it was gone.
Why We Left Managed Hosting
The decision wasn't dramatic. It came down to three compounding frustrations:
Cost vs. control. Managed WordPress hosting at the enterprise tier isn't cheap. As traffic and complexity grew, so did the invoice. But the control stayed locked down — no custom server config, limited cache tuning, no ability to see what was actually hitting the server at a low level.
Performance ceiling. A WooCommerce store with WPML (multilingual) generates a lot of unique URLs — filtered shop pages, language variants, paginated archives. The managed host's caching layer was a black box. When performance degraded, the answer was always "upgrade your plan." There was no way to diagnose what was actually happening underneath.
Visibility. When something went wrong, we couldn't see access logs in real time, couldn't inspect PHP worker counts, couldn't adjust server-level settings. Everything was abstracted. The host was the gatekeeper between us and the actual machine.
The move to a self-managed VPS with RunCloud and OpenLiteSpeed (OLS) promised full visibility and control. It delivered on that promise — but it also immediately exposed us to everything the managed host had been silently absorbing on our behalf.
The Migration: What We Were Walking Into
The stack:
- WordPress + WooCommerce with WPML (multilingual, two languages)
- Elementor as the page builder
- LSCache for caching, Redis for object caching
- Migrating from managed hosting to a VPS via RunCloud with OpenLiteSpeed
The migration itself was technically straightforward: export, import, update DNS. What wasn't straightforward was what we discovered in the database afterward.
Problem 1: Serialized Data URL Contamination
During migration, the site temporarily lived on a staging domain (something like mysite.staging.temphost.link). The standard search-replace after migration should catch all references to the old domain and replace them with the new one.
It didn't catch everything.
Several plugins store data in WordPress's wp_options table as serialized PHP arrays. A normal string search-replace on serialized data corrupts it — because serialized strings encode their own length, and changing the URL changes the string length without updating the length prefix.
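To see why, here's a minimal sketch of the corruption (the value and URL are illustrative, not from the actual database):

```shell
# A PHP-serialized string carries its own byte length (s:35:"...").
original='s:35:"http://mysite.staging.temphost.link";'

# A naive sed replacement swaps the URL but leaves the length prefix at 35:
broken=$(printf '%s' "$original" | sed 's|http://mysite.staging.temphost.link|https://mysite.com|')
printf '%s\n' "$broken"
# -> s:35:"https://mysite.com";
# PHP's unserialize() now expects 35 bytes where only 18 exist, and fails.

# A correct replacement updates the prefix too ("https://mysite.com" is 18 bytes):
printf '%s\n' 's:18:"https://mysite.com";'
```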
The plugins that caused contamination:
- A media carousel plugin — stored 400KB+ of serialized image data in wp_options, all with the temp domain baked in for image paths
- Elementor — had generated CSS files on disk at wp-content/uploads/elementor/google-fonts/css/ with http:// URLs hardcoded, not regenerated after migration
- An image optimization plugin — had a WebP cache directory full of references to the old domain
The fix required using WP-CLI's search-replace with the --precise flag, which handles serialized data correctly:
wp search-replace 'mysite.staging.temphost.link' 'mysite.com' --precise --all-tables
For the Elementor font CSS files on disk, a direct find + sed was needed since WP-CLI doesn't touch files:
find /var/www/mysite/wp-content/uploads/elementor/google-fonts/css/ -name "*.css" \
-exec sed -i 's|http://mysite.staging.temphost.link|https://mysite.com|g' {} \;
Problem 2: Missing SSL Config in wp-config.php
Despite the site running on HTTPS, Elementor kept generating http:// asset URLs. The reason was that wp-config.php was missing two lines that tell WordPress it's behind HTTPS:
$_SERVER['HTTPS'] = 'on';
define('FORCE_SSL_ADMIN', true);
Without these, WordPress doesn't know the request came in over SSL (especially behind a proxy or load balancer), so dynamic URLs default to http://. A subtle issue that caused a surprisingly large number of mixed content problems.
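A more robust variant, useful once a proxy or CDN such as Cloudflare sits in front of the server, checks the forwarded-protocol header before forcing HTTPS. This is a sketch assuming the proxy sets X-Forwarded-Proto:

```php
/* wp-config.php sketch: trust the proxy's forwarded-protocol header.
   Only do this when a trusted proxy terminates TLS for you, since the
   header is client-controllable on a directly exposed server. */
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
     && 'https' === $_SERVER['HTTP_X_FORWARDED_PROTO'] ) {
    $_SERVER['HTTPS'] = 'on';
}
define( 'FORCE_SSL_ADMIN', true );
```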
Problem 3: A Security Plugin Blocking Its Own Admin
One of the installed security plugins had added a .htaccess rule that was blocking all PHP file access:
RewriteRule ^.*\.php$ - [F,L,NC]
This rule was intended to block direct PHP file execution in upload directories. But it was placed at the wrong level — it blocked wp-admin, wp-login.php, and every other PHP file on the site. The admin panel was completely inaccessible. The fix was removing that rule from .htaccess manually via SSH before anything else could be done.
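A safer placement for that kind of rule is an .htaccess file scoped to the uploads directory only, so wp-admin and wp-login.php stay reachable. A sketch, assuming the default uploads location:

```apache
# wp-content/uploads/.htaccess
# Blocks direct PHP execution only inside uploads, not site-wide
RewriteEngine On
RewriteRule \.php$ - [F,L,NC]
```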
The Attack: Day One on the VPS
The migration was complete. The site was live. And then within hours, the server was at 700%+ CPU — that's 7 full cores pinned on a machine that should have been comfortably handling the traffic.
On managed hosting, this had never happened. Not because the traffic wasn't there — but because the managed host was absorbing it silently. Now it was hitting our VPS directly.
Attacker 1: Malicious Bots
Access log analysis revealed two suspicious IPs hammering the site:
grep "111.88.x.x" /var/log/ols/mysite.com_access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -30
Both IPs were sending dozens of requests per minute to wp-admin/admin-ajax.php, WooCommerce AJAX endpoints, and Contact Form 7 REST endpoints — the fingerprint of bots probing for vulnerabilities and scraping data.
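The same pipeline can be sanity-checked against a synthetic log. This sketch assumes a combined-log-style format where the request path is field 7; adjust the field number to your actual OLS log format:

```shell
# Build a tiny fake access log (IPs and paths are illustrative)
cat > /tmp/sample_access.log <<'EOF'
111.88.1.1 - - [01/Jan/2025:00:00:01 +0000] "POST /wp-admin/admin-ajax.php HTTP/1.1" 200 512
111.88.1.1 - - [01/Jan/2025:00:00:02 +0000] "POST /wp-admin/admin-ajax.php HTTP/1.1" 200 512
111.88.1.1 - - [01/Jan/2025:00:00:03 +0000] "GET /?rest_route=/contact-form-7/v1 HTTP/1.1" 200 128
EOF

# Field 7 is the request path; count hits per URL, busiest first
grep "111.88" /tmp/sample_access.log | awk '{print $7}' | sort | uniq -c | sort -rn
```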
Fix: blocked in .htaccess at the top of the file, before any WordPress rules:
# Block malicious IPs
<RequireAll>
Require all granted
Require not ip 111.88.x.x
Require not ip 45.77.x.x
</RequireAll>
Attacker 2: The Meta Crawler (The Real Problem)
Blocking the malicious IPs brought load down — but not to safe levels. Something else was still hammering the server. Back to the access logs:
grep "meta-externalagent" /var/log/ols/mysite.com_access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -30
Facebook's Meta crawler (meta-externalagent/1.1) was systematically crawling every combination of the WooCommerce shop's filter URLs:
/shop/?filter_color=red&filter_size=small
/shop/?filter_color=red&filter_size=medium
/shop/?filter_color=blue&filter_size=small
... hundreds of unique combinations
Here's why this is catastrophic for a WooCommerce + WPML site:
Every unique URL is a cache miss. LSCache serves cached pages instantly with zero PHP. But a cached page is keyed by URL. Each filter combination is a different URL — so each one bypasses the cache entirely, boots WordPress, boots WooCommerce, boots WPML, runs a database query, and renders a response. The crawler was generating thousands of cache misses per hour.
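The blow-up is combinatorial. A quick sketch with hypothetical filter values shows how fast the unique-URL (and therefore cache-key) count grows:

```shell
# Hypothetical attribute values; real stores often have far more
colors="red blue green"
sizes="small medium large"

count=0
for c in $colors; do
  for s in $sizes; do
    echo "/shop/?filter_color=$c&filter_size=$s"
    count=$((count + 1))
  done
done
echo "$count distinct URLs, each a separate cache key"
# 3 colors x 3 sizes = 9; add pagination and a second language
# via WPML and the URL space multiplies again
```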
The fix — immediate:
# Block Meta crawler
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} meta-externalagent [NC]
RewriteRule ^ - [F,L]
After adding this rule and restarting OLS:
load average: 0.94 — 4 lsphp processes
From 700%+ CPU to essentially idle. The Meta crawler was the primary load driver the entire time.
The longer-term fix: add the shop's filter URL pattern to robots.txt so crawlers stop attempting them:
User-agent: meta-externalagent
Disallow: /
User-agent: *
Disallow: /shop/?*
What Managed Hosting Was Silently Doing
This is the part that changed how we think about managed hosting.
The managed host had several layers we never thought about:
- Bot filtering at the edge — known bad actors and aggressive crawlers were blocked before they reached WordPress at all
- DDoS mitigation — volumetric attacks were absorbed by the host's network layer
- Rate limiting — aggressive crawlers were throttled automatically
None of this was documented prominently. It was just... happening. Moving to a raw VPS removed all of it at once. The site went from being behind a shield to being fully exposed to the internet with only .htaccess and OLS between it and every bot on the planet.
The visibility was exactly what we wanted — we could finally see everything. But we also now had to handle everything ourselves.
The Permanent Solution: Cloudflare
Fixing bots with .htaccess rules is whack-a-mole. Block one IP, another appears. Block one user agent, it rotates. The real fix is a layer in front of the server that handles this at scale before it ever reaches OLS.
Cloudflare's free plan provides:
- Bot Fight Mode — automatically identifies and blocks known bot fingerprints
- Rate limiting — caps any single IP's request rate before it can cause load spikes
- Edge caching — WooCommerce shop pages can be cached at Cloudflare's edge, so even cache misses on the origin become cache hits at the CDN level
- Real-time traffic analytics — finally, full visibility into what's hitting the site and from where
The architecture after adding Cloudflare:
Internet → Cloudflare edge (bot filtering, rate limiting, CDN cache)
→ VPS / OpenLiteSpeed (LSCache, Redis)
→ WordPress / WooCommerce / WPML
Bad traffic is rejected at Cloudflare before it touches the server. The PHP worker pool stays available for real users.
This is what managed hosting was providing implicitly. Cloudflare makes it explicit, configurable, and visible — and the free tier handles the vast majority of what a typical WooCommerce store needs.
Additional Server Hardening After Stabilization
With the immediate crisis resolved, we locked down the remaining attack surface:
Security headers via OLS:
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Strict-Transport-Security: max-age=31536000; includeSubDomains
PHP worker limits — OLS was configured with a hard cap on concurrent PHP workers:
maxConns 8
PHP_LSAPI_CHILDREN=8
Without this cap, a flood of requests spawns unlimited PHP workers and exhausts RAM. With the cap, excess requests queue rather than spawning new processes.
Snapshot immediately after stabilization — once the server was clean and stable, we took a VPS snapshot as a known-good baseline. If anything goes wrong in future, rollback is one click.
Lessons Learned
1. Managed hosting hides its value until it's gone.
The bot filtering, DDoS mitigation, and crawler management that managed hosts provide are rarely documented but deeply valuable. Budget for replacing that capability explicitly when moving to a VPS.
2. Always use --precise for search-replace on migrated WordPress sites.
Standard search-replace corrupts serialized data. The --precise flag in WP-CLI handles it correctly. Make it a default step in every migration checklist.
3. Cloudflare is not optional for a self-managed WooCommerce store.
Put it in front on day one. Not after you've been attacked. The free plan covers the essentials, and the visibility alone is worth it.
4. WooCommerce filter URLs are a crawler trap.
Any WooCommerce store with faceted filtering generates effectively infinite unique URLs. Configure LSCache to ignore query strings on shop pages, and disallow filter URL patterns in robots.txt before crawlers index them.
5. Access logs are your best friend on a VPS.
The moment something goes wrong, SSH in and read the access log. The answer is almost always there — which IPs, which user agents, which URLs, how many requests per minute. On managed hosting, you often can't do this. On a VPS, it's the first thing you reach for.
The Outcome
The site runs more reliably now on the VPS than it ever did on managed hosting — and at significantly lower cost. Load averages stay under 1.0 under normal traffic. The Cloudflare layer handles bot traffic before it reaches the server. LSCache and Redis handle the WordPress-level caching. And when something goes wrong, we can actually see it.
The migration pain was real. But it was a one-time cost that permanently increased visibility, control, and resilience. The managed host was comfortable — but comfort was masking problems we couldn't see or fix.