From Zero to Production: How I Deployed My App on a VPS Without Losing My Mind
A brutally honest, step-by-step guide to going from a blank server to a live, production-ready app using OVHcloud, UFW, Fail2ban, and Coolify v4
Every developer has that moment. You've built something you're proud of. The code works on your laptop. Your friends are waiting. Your users are ready. And then someone asks: "Okay, but how do people actually use it?"
That question sent me down a rabbit hole that lasted an entire day — from buying a VPS at midnight, to staring at a blinking cursor on a remote server, to watching my app finally load in a browser on a live domain with a proper https://. This is that story.
Chapter 1: Choosing a VPS — The Decision Nobody Tells You About
Before you can deploy anything, you need a place to deploy to. Think of a VPS (Virtual Private Server) like renting a flat instead of a hotel room. A shared hosting platform (like Heroku or Railway) is the hotel — comfortable, managed, but you're sharing walls with strangers and the rules aren't yours. A VPS is the flat — it's your space, you control everything, but you're responsible for your own plumbing.
I chose OVHcloud. Their entry-level VPS plans are competitive on price — especially compared to AWS or DigitalOcean at equivalent specs. I ordered a VPS with Ubuntu 24.04, which landed in my inbox with SSH credentials within minutes.
TIP: When picking a data center region, choose the one closest to where most of your users are. Latency is the silent killer of perceived performance. If your users are in Lagos, don't host in Oregon.
Once that email lands in your inbox with the VPS IP and root credentials, there's a very specific kind of excitement — like getting handed the keys to an empty apartment. You go in, look around, and realise there's nothing there yet. Nowhere to sit. No furniture. Just your tools and your ambition. So the first thing you do is log in.
Chapter 2: Your First Login — Meeting the Empty Room
SSH-ing into a fresh VPS for the first time is humbling. It's just you and a blinking cursor. No apps. No services. Nothing but a bare Ubuntu install.
ssh root@YOUR_VPS_IP
Before anything else, update the system. This is like dusting a new flat before you move furniture in:
apt update && apt upgrade -y
Now here's the thing nobody warns you about. The moment your server is provisioned, it's already being targeted by automated bots scanning for open ports. Your server exists on the public internet — and the public internet is not friendly. Before you install a single dependency or write a single config file, you need to lock down who can even talk to your machine. That means building a fence.
Chapter 3: UFW — Building the Fence Around Your House
Imagine moving into a new house and leaving every door and window wide open because "nobody knows your address yet." That's a fresh server with no firewall. Bots are constantly scanning the internet for open ports — and they don't care how new your server is.
UFW (Uncomplicated Firewall) is exactly what the name says. It's Ubuntu's friendly wrapper over the more complex iptables firewall rules.
The philosophy is: deny everything by default, then only open what you need.
# Install UFW
apt install ufw -y
# Deny ALL incoming connections by default
ufw default deny incoming
# Allow all outgoing (your server needs to reach the internet)
ufw default allow outgoing
# Always allow SSH first — if you forget this and enable the firewall,
# you will lock yourself out. Permanently. Don't be that person.
ufw allow 22/tcp
# Allow HTTP and HTTPS for the web
ufw allow 80/tcp
ufw allow 443/tcp
# Allow port 3000 for Coolify's dashboard
ufw allow 3000/tcp
# Enable it
ufw enable
Check what you've got:
ufw status verbose
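One habit worth keeping: before you close your SSH session, verify that the SSH allow rule actually made it in. A minimal sketch, assuming the rules above and run as root on the VPS:

```shell
#!/usr/bin/env bash
# Refuse to walk away unless SSH is explicitly allowed through the firewall.
# Assumes the UFW rules configured above; run as root on the VPS.
if ufw status | grep -q "22/tcp"; then
  echo "SSH allow rule found -- safe to close this session."
else
  echo "WARNING: no SSH rule in UFW. Add it now or risk locking yourself out." >&2
  exit 1
fi
```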
The analogy: UFW is the security guard at the entrance of your apartment building. He lets in residents (ports you've explicitly allowed) and turns away everyone else. Every port you don't open is a door that doesn't exist to the outside world.
But here's the gap UFW leaves open: port 22 — SSH — has to stay open. That's the door you use to manage your server. And since it has to stay open, anyone in the world can knock on it. UFW will let them knock. What you need is something that notices when someone is knocking too aggressively and shows them the door permanently.
Chapter 4: Fail2ban — The Bouncer Who Remembers Faces
UFW is great, but it's passive. It just blocks doors. What about the people actively trying to guess their way in?
SSH brute-force attacks are real. Someone, somewhere, is pointing a bot at every IP on the internet and trying admin/admin, root/password, root/123456 thousands of times per minute. Without protection, your server will just sit there and take it.
Fail2ban is the answer. It watches your server logs, and when it sees the same IP failing to log in multiple times, it temporarily bans that IP. It's like a bouncer who has a list — three failed attempts and you're cut from the queue.
apt install fail2ban -y
Create a local configuration (never edit the default directly — updates will overwrite it):
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
nano /etc/fail2ban/jail.local
Find the [sshd] section and configure it:
[sshd]
enabled = true
port = ssh
maxretry = 5
bantime = 3600
findtime = 600
This says: if any IP fails to authenticate 5 times within 10 minutes, ban it for 1 hour.
systemctl enable fail2ban
systemctl start fail2ban
# Check who's been banned already (you'd be surprised)
fail2ban-client status sshd
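Sooner or later the banned IP will be your own (a mistyped password on a flaky connection is enough). These commands, run on the server, inspect and reverse a ban; 203.0.113.7 is a placeholder address:

```shell
# List currently banned IPs for the SSH jail
fail2ban-client status sshd

# Unban a specific address (203.0.113.7 is a placeholder)
fail2ban-client set sshd unbanip 203.0.113.7
```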
The analogy: If UFW is the wall around your house, Fail2ban is the doorbell camera that automatically calls the police on anyone who jiggles your handle more than five times.
Your server is now hardened. It has a fence. It has a bouncer. Nobody who shouldn't be here is getting in. Now comes the fun part — actually putting something on this server. You could do it the hard way: manually install Nginx, write config files, manage Docker yourself, wrestle with SSL certs at 2am. Or you could install Coolify, which does all of that for you through a browser UI.
Chapter 5: Coolify v4 — Your Own Heroku, On Your Own Terms
This is where the magic happens.
Coolify is an open-source, self-hosted Platform-as-a-Service (PaaS). Think of it as Heroku or Railway — but you own the server, you own the data, and you pay the VPS provider instead of them. It handles deployments, SSL certificates, environment variables, databases, reverse-proxying — all through a clean web UI.
Installing it is genuinely one command:
wget -q https://get.coollabs.io/coolify/install.sh -O install.sh
sudo bash install.sh
The installer:
- Installs Docker (Coolify runs everything in containers)
- Sets up Coolify's own containers
- Starts the service on port 3000
Once it's done, open your browser and go to:
http://YOUR_VPS_IP:3000
You'll see the Coolify setup wizard. Create your admin account. That's it — you now have your own cloud platform.
The analogy: Coolify is like buying a plot of land and building your own shopping mall, where you can open any number of shops (apps). Heroku is renting a stall in someone else's mall — convenient, but they set the rules and the rent goes up.
With your platform ready, the first thing to deploy isn't your app — it's the thing your app depends on. Every application needs somewhere to store its data. And for production, that means a real database.
Chapter 6: PostgreSQL — The Database
In Coolify, navigate to Databases → PostgreSQL → New. Choose a version (PostgreSQL 15 or 16 is fine), give it a name, and deploy it.
Coolify provisions it inside Docker. The important thing to grab is the internal connection string:
postgresql://postgres:your_password@YOUR_VPS_IP:5432/postgres
Save this — your backend will need it.
Why PostgreSQL over SQLite? SQLite is a file. It works brilliantly for development, but a file on a server that gets redeployed gets wiped. PostgreSQL is a proper server — persistent, concurrent, production-grade.
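Before wiring the backend up, it's worth confirming the database actually accepts connections. A quick check from the VPS, assuming the psql client is installed (`apt install postgresql-client`) and using the connection string from above:

```shell
# Replace with the connection string Coolify gave you
export DATABASE_URL="postgresql://postgres:your_password@YOUR_VPS_IP:5432/postgres"

# If this prints a PostgreSQL version line, the database is reachable
psql "$DATABASE_URL" -c "SELECT version();"
```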
Database: running. Now you need the thing that actually uses it. The backend is the brain of your app — it handles authentication, validates requests, talks to the database, and enforces all the business logic. Without it, the frontend is just a pretty screen with nowhere to send data.
Chapter 7: Deploying the Backend
You've got a database. Now let's give it something to talk to.
In Coolify:
- Click Applications → New
- Connect your GitHub repository
- Select the `backend` directory
- Choose Nixpacks as the build pack (it auto-detects Node.js)
Then set your environment variables — these are the secrets your app needs to run:
NODE_ENV=production
DATABASE_URL=postgresql://postgres:your_password@YOUR_VPS_IP:5432/postgres
JWT_SECRET=a_very_long_random_string_you_generate_once
FRONTEND_URL=https://pos.yourdomain.com
PORT=3000
IMPORTANT: Your `JWT_SECRET` signs every login token. If someone gets it, they can impersonate any user. Generate it with `openssl rand -hex 32`, and never commit it to Git. Never.
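The openssl command produces 32 random bytes encoded as 64 hex characters. A quick way to generate the secret and sanity-check its length before pasting it into Coolify:

```shell
# 32 random bytes, hex-encoded -> a 64-character secret
JWT_SECRET=$(openssl rand -hex 32)
echo "${#JWT_SECRET}"   # prints 64
```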
Assign a domain — e.g. https://api.yourdomain.com — and hit Deploy.
Coolify will:
- Pull your code from GitHub
- Build it with Nixpacks
- Start it in a Docker container
- Expose it via Traefik (the reverse proxy it ships with)
- Request an SSL certificate from Let's Encrypt automatically
Once the backend is up and you can hit /health and get a response, the hardest part is done. The API exists. The database is talking to it. All that's left is giving users a face to put to it — the frontend.
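A quick smoke test from your laptop (api.yourdomain.com being whatever domain you assigned). The `-f` flag makes HTTP errors visible as a non-zero exit code:

```shell
# -f: fail on HTTP errors; -sS: quiet, but still show real errors
curl -fsS https://api.yourdomain.com/health && echo "backend is up"
```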
Chapter 8: Deploying the Frontend
Same process, different directory:
- Applications → New → same GitHub repo
- Select the `frontend` directory
- Build pack: Static / Vite
Environment variables:
VITE_API_URL=https://api.yourdomain.com
Assign domain: https://pos.yourdomain.com
One catch with React SPAs: when a user navigates to https://pos.yourdomain.com/dashboard, the server looks for a file called /dashboard/index.html — which doesn't exist. You need to tell the static server to always serve index.html and let React Router handle the routing.
In Coolify's static site config, add a rewrite rule:
/* → /index.html
Deploy it. Watch the build logs. When you see "Build successful" — exhale.
But here's the thing: your app is still living behind an IP address. https://51.195.xx.xx is not something you'd put on a business card. You need your domain. You need DNS.
Chapter 9: DNS — Pointing the World at Your Server
You've got a live backend and frontend. But they're still at IP addresses. You need domains.
In your DNS provider (we used Namecheap), add A Records:
| Host | Value | TTL |
|---|---|---|
| api | YOUR_VPS_IP | Automatic |
| pos | YOUR_VPS_IP | Automatic |
DNS propagation can take up to 48 hours, though it often completes within minutes.
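You don't have to keep refreshing the browser to know when propagation is done; `dig` (from the dnsutils package) will tell you, assuming the records above:

```shell
# +short prints only the resolved address -- compare it to YOUR_VPS_IP
dig +short api.yourdomain.com
dig +short pos.yourdomain.com
```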
Once DNS resolves, Coolify's Traefik reverse proxy intercepts the request, routes it to the right container, and serves SSL — the padlock in your browser — via Let's Encrypt. Completely free. Completely automated.
And then you wait. Maybe ten minutes. Maybe two hours if your registrar is slow. But eventually, you type the domain into your browser and — if everything went right — you see your app. Not on localhost. Not on a staging URL. On your domain. With a padlock.
Chapter 10: The Moment of Truth
Open a fresh browser tab — not incognito, not localhost:
https://pos.yourdomain.com
If you see your login page with a padlock in the address bar — you're live.
If you see a Coolify error or a 502, check:
- https://api.yourdomain.com/health — is the backend responding?
- Coolify logs (click the app → Logs) — what error is it throwing?
- Environment variables — did `DATABASE_URL` get set correctly?
Debugging a production deploy is a different discipline from debugging local code — you're reading logs instead of console.logs, and every change means a redeploy. But that's the job.
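Those checks can be rolled into one small script you run after every deploy. A sketch, with both domains as placeholders:

```shell
#!/usr/bin/env bash
# Post-deploy smoke test. Both domains are placeholders -- use your own.
set -e

# 1. Backend health endpoint should answer with a success status
curl -fsS https://api.yourdomain.com/health > /dev/null
echo "backend: ok"

# 2. A deep link should be rewritten to index.html (200, not 404)
code=$(curl -s -o /dev/null -w "%{http_code}" https://pos.yourdomain.com/dashboard)
if [ "$code" = "200" ]; then
  echo "frontend SPA rewrite: ok"
else
  echo "frontend returned $code" >&2
  exit 1
fi
```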
What I Learned
Fail2ban's name, by the way, comes from exactly what it does: it watches for repeated failures and turns them into bans. Simple.
The hardest part wasn't the setup — it was the order. Do UFW before Fail2ban. Get ports right before enabling the firewall. Understand what each service needs before restricting network access. Every "locked myself out" story from developers comes from doing things in the wrong order.
Coolify is worth it. The abstraction it provides over raw Docker and Nginx configuration saves hours of work. Its UI is clean, its logs are readable, and auto-SSL is genuinely magical the first time you see it work.
The Full Checklist
✅ Bought VPS (OVHcloud) — Ubuntu 24.04
✅ Updated packages (apt update && apt upgrade)
✅ Configured UFW — default deny, opened 22, 80, 443, 3000
✅ Installed Fail2ban — maxretry 5, bantime 1h
✅ Installed Coolify v4 — accessed at :3000
✅ Provisioned PostgreSQL via Coolify
✅ Deployed backend (Node.js/Nixpacks) with env vars
✅ Deployed frontend (React/Vite/Static) with SPA rewrite rule
✅ Configured DNS A records (Namecheap)
✅ Verified SSL via Let's Encrypt (automatic in Coolify)
✅ Tested live endpoints
From a blank server to a live app, start to finish, in one day. It's not magic — it's just knowing the order of operations.