Last week I migrated a client's WordPress site off shared hosting onto a $6/month VPS. The before-and-after was genuinely embarrassing. We're talking TTFB dropping from 2.8 seconds to 180 milliseconds. Same code. Same database. Same content. The only difference was where it was running.
If you've ever stared at a slow site and thought "maybe I need to optimize my queries" when the real problem is your neighbor on the same box running a crypto miner — this one's for you.
Why Shared Hosting Is Killing Your Performance
Shared hosting means your site shares CPU, RAM, and disk I/O with dozens (sometimes hundreds) of other sites on the same physical server. The hosting provider oversells capacity because most sites are idle most of the time. That works fine until it doesn't.
Here's what's actually happening under the hood:
- CPU throttling: Your process gets timesliced with everyone else. During peak hours, your PHP workers are literally waiting in line.
- Disk I/O contention: One site doing heavy database writes tanks read performance for everyone. Shared disks are the bottleneck nobody talks about.
- Memory limits: You're typically capped at 256-512MB regardless of what the server actually has. OOM kills happen silently.
- Noisy neighbors: You have zero control over what other tenants are doing. One misconfigured cron job can spike load for the entire box.
The thing that tipped me off with this client was inconsistent response times. Sometimes the site loaded in 400ms, sometimes 4 seconds. That variance is the telltale sign of resource contention.
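You can quantify that variance yourself with a quick sampling loop. A sketch, assuming your site is at example.com (a placeholder):

```shell
# Sample TTFB ten times and eyeball the spread; wildly different numbers
# point to resource contention, not slow code.
for i in $(seq 1 10); do
  curl -o /dev/null -s -w '%{time_starttransfer}\n' https://example.com
  sleep 1
done | sort -n | awk '{t[NR]=$1} END {print "min:", t[1], "max:", t[NR]}'
```

If the max is 5-10x the min at the same time of day, contention is the likely culprit.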
Diagnosing the Problem Before You Migrate
Before ripping everything out, confirm that shared hosting is actually the bottleneck. SSH into your current host (if they allow it) and run some quick checks:
# Check current server load — anything above the CPU count is bad
uptime
# Output: load average: 24.31, 22.67, 21.89 (on a 4-core box... yikes)
# See how many sites are running on this box
ls /home/ | wc -l
# Output: 187
# Check disk I/O wait — high iowait means disk contention
iostat -x 1 3
# Look at %iowait and await columns
If your load average is consistently above the CPU core count and you see high I/O wait, no amount of code optimization will fix this. You need your own box.
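To make that load-versus-cores comparison concrete, here's a one-shot check (Linux-only; assumes `nproc` and `/proc/loadavg` are available, which they are on any mainstream distro):

```shell
# Compare the 1-minute load average against the core count.
cores=$(nproc)
load=$(awk '{print $1}' /proc/loadavg)
verdict=$(awk -v l="$load" -v c="$cores" 'BEGIN { if (l > c) print "overloaded"; else print "ok" }')
echo "cores=$cores load=$load -> $verdict"
```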
Step-by-Step VPS Migration
Here's the exact process I followed. The whole thing took about two hours including DNS propagation.
1. Provision and Secure the VPS
Spin up a VPS with your provider of choice. For most small-to-medium sites, 1 vCPU and 1GB RAM is more than enough. Seriously. That's more dedicated resources than you were getting on shared hosting.
# First things first — update and lock it down
apt update && apt upgrade -y
# Create a non-root user
adduser deploy
usermod -aG sudo deploy
# Set up SSH key auth and disable password login
mkdir -p /home/deploy/.ssh
cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh
chmod 600 /home/deploy/.ssh/authorized_keys
# Disable root login and password auth (the sed patterns below also match
# the commented-out defaults Ubuntu ships with)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd
# Basic firewall — only allow SSH, HTTP, HTTPS
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw enable
Don't skip the security steps. An unsecured VPS will get brute-forced within hours. I'm not exaggerating. One caveat: keep your current session open and confirm you can log in as deploy with your key from a second terminal before restarting sshd, or you'll lock yourself out.
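A cheap extra layer worth considering is fail2ban (`apt install -y fail2ban`), which bans IPs that repeatedly fail SSH auth. A minimal /etc/fail2ban/jail.local sketch; the thresholds here are just reasonable starting points, not gospel:

```ini
[sshd]
enabled = true
maxretry = 5
findtime = 10m
bantime = 1h
```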
2. Install Your Stack
For this particular migration, I went with Nginx, PHP-FPM, and MariaDB. If you're migrating a Node app or something else, adjust accordingly.
# Install the essentials
apt install -y nginx mariadb-server php8.3-fpm php8.3-mysql \
php8.3-curl php8.3-gd php8.3-mbstring php8.3-xml php8.3-zip
# Secure MariaDB
mysql_secure_installation
# Tune PHP-FPM for your available memory
# For 1GB RAM, these are reasonable starting values
sudo nano /etc/php/8.3/fpm/pool.d/www.conf
Here's the PHP-FPM config that made the biggest difference:
; Switch from dynamic to ondemand if memory is tight
pm = ondemand
pm.max_children = 10
pm.process_idle_timeout = 10s
pm.max_requests = 500
; Enable opcache — this alone cut response times in half
; (note: these opcache lines go in /etc/php/8.3/fpm/php.ini, not the pool file)
[opcache]
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.validate_timestamps=0 ; set to 1 during development
That opcache.validate_timestamps=0 line is important. It tells PHP to never check if files changed, which eliminates stat() calls on every request. Just remember to restart PHP-FPM after deployments.
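If you're unsure what pm.max_children should be, derive it from real worker memory usage rather than guessing. A sketch; the process name php-fpm8.3 (Debian/Ubuntu naming) and the 256MB headroom figure are assumptions for a 1GB box:

```shell
# Average the resident memory of running PHP-FPM workers, then see how many
# fit after leaving ~256MB headroom for MariaDB, Nginx, and the OS.
avg_kb=$(ps --no-headers -o rss -C php8.3-fpm | awk '{s+=$1; n++} END {print (n ? int(s/n) : 0)}')
if [ "$avg_kb" -gt 0 ]; then
  echo "avg worker: $((avg_kb / 1024)) MB"
  echo "suggested pm.max_children: $(( (1024 - 256) * 1024 / avg_kb ))"
else
  echo "no php-fpm workers running yet"
fi
```

Run it under realistic traffic; idle workers use far less memory than busy ones.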
3. Migrate the Data
# On the old server — dump just the site's database
# (replace site_db with your database name; avoid --all-databases, since
# importing the mysql system tables can clobber the new server's accounts)
mysqldump -u root -p --single-transaction --routines site_db > dump.sql
# Tar up the site files
tar czf site-backup.tar.gz /var/www/html/
# Transfer to new server
rsync -avz --progress dump.sql deploy@new-server:/tmp/
rsync -avz --progress site-backup.tar.gz deploy@new-server:/tmp/
# On the new server — create the database, then import
mysql -u root -p -e "CREATE DATABASE site_db"
mysql -u root -p site_db < /tmp/dump.sql
tar xzf /tmp/site-backup.tar.gz -C /
Use rsync instead of scp — it handles interruptions gracefully and shows progress. For large databases, pipe the dump through gzip to speed up the transfer.
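For a large database you can skip the intermediate file entirely and compress in transit. A sketch; site_db and deploy@new-server are placeholders:

```shell
# Stream the dump over SSH, gzipped on the fly: no intermediate file,
# and far fewer bytes on the wire than a raw .sql transfer.
mysqldump -u root -p --single-transaction site_db \
  | gzip \
  | ssh deploy@new-server 'cat > /tmp/site_db.sql.gz'
# Then, on the new server:
# gunzip < /tmp/site_db.sql.gz | mysql -u root -p site_db
```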
4. Configure Nginx
Replace Apache's .htaccess sprawl with a clean Nginx config:
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/html;
    index index.php;
    # Enable gzip — shared hosts often have this disabled
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 1000;
    # Static file caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_read_timeout 60;
    }
}
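Before pointing traffic at it, confirm the config parses and that the cache header actually comes back. The domain and asset path below are placeholders for your own:

```shell
# Validate the config, reload, then spot-check a static asset's headers.
sudo nginx -t && sudo systemctl reload nginx
curl -sI https://example.com/wp-content/themes/mytheme/style.css \
  | grep -i '^cache-control'
```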
5. Set Up TLS and Flip DNS
# Install certbot and grab a certificate
apt install -y certbot python3-certbot-nginx
certbot --nginx -d example.com -d www.example.com
# Verify auto-renewal works
certbot renew --dry-run
Then update your DNS A record to point to the new server's IP. Set a low TTL (300 seconds) a day before the migration so the switchover is fast.
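After flipping the record, you can watch propagation from a few public resolvers (dig ships with the dnsutils package on Debian/Ubuntu; example.com is a placeholder):

```shell
# Query several public resolvers until they all return the new IP.
for ns in 1.1.1.1 8.8.8.8 9.9.9.9; do
  printf '%s -> %s\n' "$ns" "$(dig +short example.com A @"$ns" | head -n1)"
done
```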
The Results
After the migration, I ran some benchmarks with curl:
# Measure TTFB
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://example.com
# Before (shared hosting):
# TTFB: 2.847s
# Total: 3.221s
# After (VPS):
# TTFB: 0.183s
# Total: 0.247s
That's a 15x improvement in TTFB. The site went from a PageSpeed score of 34 to 91 without touching a single line of application code.
Preventing Future Problems
Now that you own the server, you own the problems too. Set up monitoring so you're not flying blind:
- Set up unattended security updates: apt install unattended-upgrades and configure it. Seriously, do this day one.
- Monitor disk space: Logs and backups will fill your disk eventually. Set up a cron job or use a monitoring tool to alert you.
- Automate backups: A VPS without backups is a ticking time bomb. Schedule daily database dumps and weekly full snapshots.
- Watch your logs: Check /var/log/nginx/error.log and PHP-FPM logs periodically. Errors that were invisible on shared hosting will now show up clearly.
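The backup point above can be sketched as a small cron script. Assumptions here: the database is called site_db, credentials come from a ~/.my.cnf so mysqldump doesn't prompt, and two weeks of retention is enough:

```shell
#!/bin/sh
# Nightly DB backup. Install with: crontab -e, then add a line like
#   15 3 * * * /home/deploy/bin/db-backup.sh
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"
mkdir -p "$BACKUP_DIR"
# Dump, compress, and date-stamp (credentials come from ~/.my.cnf).
mysqldump --single-transaction site_db | gzip > "$BACKUP_DIR/site_db-$(date +%F).sql.gz"
# Keep 14 days of dumps; delete anything older.
find "$BACKUP_DIR" -name 'site_db-*.sql.gz' -mtime +14 -delete
```

Dumps that never leave the VPS aren't real backups; rsync the directory off-box or use your provider's snapshot feature as well.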
The one downside of a VPS is that you're responsible for everything. No more opening a support ticket when MySQL crashes at 3 AM. But honestly, for the performance difference, it's a tradeoff worth making every single time.
If you're still on shared hosting and wondering whether migration is worth the effort — it is. Two hours of work for a 15x performance improvement is about the best ROI you'll ever get in web development.