FastAPI makes it super easy to build APIs — but deploying it to real production on a VPS is where many developers get stuck.
Over the past months, I’ve deployed multiple FastAPI services (APIs, webhooks, and automation backends) to VPS servers for real users. In this guide, I’ll walk through a clean, production-ready setup you can reuse for your own projects.
We’ll go from:
`main.py` on your laptop → a live HTTPS API on a VPS with gunicorn, uvicorn, systemd, and nginx.
## What We’ll Build
We’ll deploy a FastAPI app so that:
- It runs behind gunicorn + uvicorn workers
- It’s managed by systemd (auto-start on reboot, restart on crash)
- It’s served via nginx as a reverse proxy
- It’s protected with HTTPS using Let’s Encrypt
- It uses environment variables for secrets (no hardcoded tokens)
- It’s easy to update with a simple git pull + restart
I’ll assume:
- You’re using Ubuntu 20.04+ on your VPS
- You have SSH access
- You have a domain pointing to the VPS IP (optional but recommended)
## 1. Create a Minimal FastAPI App
On your local machine (or directly on the server), create a basic FastAPI app:
```bash
mkdir fastapiapp
cd fastapiapp
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
pip install fastapi "uvicorn[standard]"
```
Create app/main.py:
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def read_root():
    return {"message": "Hello from FastAPI on VPS!"}
```
Test locally:
```bash
uvicorn app.main:app --reload
```
If it works locally, we’re ready to move to the VPS.
## 2. Connect to Your VPS
From your local terminal:
```bash
ssh root@YOUR_SERVER_IP
```
(Replace root with your username if you’re using a non-root user.)
Update packages:
```bash
apt update && apt upgrade -y
```
Install some essentials:
```bash
apt install -y git python3 python3-venv python3-pip nginx
```
## 3. Clone Your FastAPI Project to the VPS
Inside your server:
```bash
cd /opt
git clone https://github.com/ibrahimpelumi6142/fastapi-production-guide
cd fastapi-production-guide
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install fastapi "uvicorn[standard]" gunicorn
```
(If your repo is private, clone over SSH or use GitHub deploy keys.)
You should now have:
```
fastapi-production-guide/
├── app/
│   └── main.py
├── nginx/
│   └── fastapi.conf
├── systemd/
│   └── fastapi.service
├── .env.example
├── requirements.txt
└── README.md
```
## 4. Run FastAPI with Gunicorn + Uvicorn Workers
Instead of running plain uvicorn, in production we use gunicorn to manage multiple uvicorn workers.
From `/opt/fastapi-production-guide`:
```bash
source venv/bin/activate
gunicorn -k uvicorn.workers.UvicornWorker app.main:app --bind 127.0.0.1:8000 --workers 3
```

We bind to `127.0.0.1` (not `0.0.0.0`) so gunicorn is only reachable locally; nginx will be the public entry point.
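The `--workers 3` above is just a starting point; gunicorn's documentation suggests roughly (2 × CPU cores) + 1 workers as a rule of thumb. A quick way to compute that for your server, using only the standard library:

```python
# Gunicorn's rule-of-thumb worker count: (2 * CPU cores) + 1.
import multiprocessing

workers = (2 * multiprocessing.cpu_count()) + 1
print(f"suggested: --workers {workers}")
```

Treat the result as a ceiling to tune downward, not a hard rule; async workloads often need fewer workers.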
Your app should be available on port 8000:
```bash
curl http://127.0.0.1:8000/
```
If you see:
```json
{"message":"Hello from FastAPI on VPS!"}
```
We’re good.
Now we’ll turn this into a service.
## 5. Create a systemd Service (So It Runs in the Background)
We don’t want to manually start gunicorn every time.
We’ll use systemd to manage it like a proper service.
Create a service file:
```bash
nano /etc/systemd/system/fastapi.service
```
Paste this (update paths/usernames where needed):
```ini
[Unit]
Description=Gunicorn instance to serve FastAPI app
After=network.target

[Service]
User=root
WorkingDirectory=/opt/fastapi-production-guide
Environment="PATH=/opt/fastapi-production-guide/venv/bin"
ExecStart=/opt/fastapi-production-guide/venv/bin/gunicorn -k uvicorn.workers.UvicornWorker app.main:app --bind 127.0.0.1:8000 --workers 3
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
Save and exit.
Reload systemd:
```bash
systemctl daemon-reload
systemctl start fastapi
systemctl enable fastapi
```
Check status:
```bash
systemctl status fastapi
```
If it’s active (running), your backend is now running in the background, even if you log out.
## 6. Configure Nginx as a Reverse Proxy
We don’t expose gunicorn directly to the internet. Instead, we let nginx:
- Handle HTTP(S)
- Proxy requests to `127.0.0.1:8000`
- Serve static files (if needed)
- Improve security & performance
Create a new Nginx config:
```bash
nano /etc/nginx/sites-available/fastapi
```
If you have a domain, use it. If not, you can temporarily use the server IP.
```nginx
server {
    listen 80;
    server_name YOUR_DOMAIN_NAME_HERE;  # or your server IP

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Enable this site:
```bash
ln -s /etc/nginx/sites-available/fastapi /etc/nginx/sites-enabled/
rm /etc/nginx/sites-enabled/default   # optional: remove the default site
nginx -t
systemctl restart nginx
```
Now visit `http://YOUR_DOMAIN_NAME/` or `http://YOUR_SERVER_IP/`.
You should see your FastAPI JSON response.
## 7. Add HTTPS with Let’s Encrypt (Certbot)
Never leave a public API on pure HTTP.
Install Certbot:
```bash
apt install -y certbot python3-certbot-nginx
```
Run:
```bash
certbot --nginx -d YOUR_DOMAIN_NAME_HERE
```
Follow the prompts:
- Provide email
- Accept terms
- Choose redirect option → Redirect all traffic to HTTPS
After this, your site is live at `https://YOUR_DOMAIN_NAME/`.
Certbot also sets up automatic certificate renewal; you can verify it with `certbot renew --dry-run`.
## 8. Using Environment Variables for Secrets
Never hardcode secrets like:
- API keys
- Database credentials
- JWT secrets
In your app, read them with `os.getenv` or pydantic’s `BaseSettings` (in Pydantic v2, `BaseSettings` lives in the separate `pydantic-settings` package).
Example: app/config.py
```python
from pydantic import BaseSettings  # Pydantic v2: from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    app_name: str = "My FastAPI App"
    secret_key: str
    debug: bool = False

    class Config:
        env_file = ".env"

settings = Settings()
```
In main.py:
```python
from fastapi import FastAPI
from .config import settings

app = FastAPI(title=settings.app_name)
```
On the server, create `.env` in `/opt/fastapi-production-guide`:

```bash
nano /opt/fastapi-production-guide/.env
```
Example:
```
SECRET_KEY=supersecretvalue
DEBUG=false
```
Restart the service:
```bash
systemctl restart fastapi
```
Now your secrets live outside the code.
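If you don’t want a pydantic dependency for a tiny service, plain `os.getenv` works too. A minimal sketch that fails loudly at startup when a secret is missing (the `setdefault` line only simulates the server environment for this demo):

```python
import os

# Demo only: on the VPS this value comes from the real environment or .env,
# not from code. setdefault keeps any value already set.
os.environ.setdefault("SECRET_KEY", "supersecretvalue")

SECRET_KEY = os.getenv("SECRET_KEY")
if SECRET_KEY is None:
    # Crash at startup instead of at the first request that needs the secret.
    raise RuntimeError("SECRET_KEY is not set")

# Environment variables are strings; normalize the boolean flag explicitly.
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
```

Failing at startup pairs well with systemd’s `Restart=always`: a misconfigured deploy shows up immediately in `systemctl status` rather than as scattered 500s.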
## 9. Updating Your App (Deploying New Versions)
When you change your code on GitHub, deploy updates like this:
```bash
ssh root@YOUR_SERVER_IP
cd /opt/fastapi-production-guide
git pull
source venv/bin/activate
pip install -r requirements.txt   # if you use one
systemctl restart fastapi
```
That’s it — instant deploy.
If you want smoother deploys, you can:
- Add a `deploy.sh` script
- Use `systemctl reload` for zero-downtime deploys (with more advanced gunicorn configs)
- Add logging to a file + log rotation
## 10. Basic Logging & Monitoring
You can see logs via systemd:
```bash
journalctl -u fastapi -f
```
Or Nginx logs:
```bash
tail -f /var/log/nginx/access.log
tail -f /var/log/nginx/error.log
```
From here, you can integrate:
- Uvicorn access logs
- Centralized logging (e.g. Loki, ELK, etc.)
- Health checks & uptime monitoring (e.g. UptimeRobot or Better Stack)
## 11. Common Mistakes to Avoid
Some things I’ve seen (and done myself) that cause pain in production:
❌ Running uvicorn directly in screen/tmux
If the terminal dies → app dies. Use systemd.
❌ Exposing port 8000 to the world
Always hide gunicorn/uvicorn behind Nginx.
❌ No HTTPS
APIs without HTTPS are insecure and sometimes blocked by clients.
❌ Hardcoding tokens in code
Always use .env + BaseSettings or environment variables.
❌ No restart policy
If your app crashes once and doesn’t restart, you’re down. systemd’s Restart=always saves you here.
## 12. When to Move Beyond a Single VPS
A single VPS with FastAPI + gunicorn + Nginx is great for:
- MVPs
- Side projects
- Small SaaS
- Internal tools
- Bots & automation services
When do you outgrow it?
- You need auto-scaling
- You want zero-downtime blue/green deployments
- You need container orchestration (Kubernetes, ECS, etc.)
- You have many microservices and need a service mesh
But for 80% of real-world use cases, a well-configured VPS is more than enough, especially when you’re just shipping and proving value.
## Conclusion
Deploying FastAPI to production doesn’t have to be scary.
With:
- gunicorn + uvicorn workers
- systemd to manage your process
- nginx as a reverse proxy
- Let’s Encrypt for HTTPS
- environment variables for secrets
…you already have a solid, production-ready setup that many teams use in real products.
If you’re building something like a WhatsApp bot backend, job assistant, automation service, or SaaS API, this stack will carry you very far.
## About the Author
I’m Lasisi Ibrahim Pelumi, a full-stack developer and automation engineer based in the UK. I build AI-powered products, automation tools, and APIs — including WhatsApp-based assistants and SaaS platforms deployed on VPS and cloud infrastructure.
You can find more of my work on:
- GitHub: @ibrahimpelumi6142
- LinkedIn: Lasisi Ibrahim Pelumi