1. Environment Setup (VPS Preparation)
Before deploying the application, I prepared a clean Ubuntu-based VPS environment.
Connect to VPS
ssh root@your_vps_ip
System Update
apt update && apt upgrade -y
During the upgrade process:
- Kept the existing SSH configuration
- Restarted services using the default options

This ensures system packages are up to date without breaking remote access.
Install Docker
To containerize the microservices, Docker and Docker Compose were installed:
apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify Installation
docker run hello-world
This confirms Docker is correctly installed and functioning.
2. Application Deployment & Infrastructure Setup
After preparing the VPS environment, I deployed the application using Docker and configured domain routing, reverse proxy, and HTTPS.
Deploy Application with Docker Compose
The project is structured as a microservices system and deployed via Docker Compose:
git clone <your-repo>
cd <repo>
docker compose up -d --build
docker ps
This builds the images and starts all services in the background; docker ps confirms that the containers are running.
Handling a Database Initialization Issue
During the first deployment, a common issue occurred:
The application attempted to run database migrations before MySQL was fully ready.
This caused startup failures. To address this, I enabled retry logic in the database configuration, allowing the service to wait and retry until the database becomes available.
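The retry logic lives in the application's database configuration. A complementary fix at the orchestration level is a Docker Compose healthcheck, so dependent services only start once MySQL is actually accepting connections. A minimal sketch (service names, image tag, and credentials here are assumptions, not the project's actual compose file):

```yaml
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10

  api:                               # hypothetical service name
    depends_on:
      mysql:
        condition: service_healthy   # wait until the healthcheck passes
```

With this in place, docker compose up holds back the dependent container until the healthcheck succeeds, which removes the race on first boot.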
Cloudflare Proxy & SSL Configuration
A custom domain was purchased and configured via Cloudflare.
DNS records:
chenlis.com → frontend
admin.chenlis.com → admin panel
api.chenlis.com → API gateway
Cloudflare proxy (orange cloud) was enabled to introduce an additional layer between users and the origin server:
User → Cloudflare → VPS
This provides several important benefits:
- CDN acceleration: Static assets can be cached at Cloudflare edge nodes, reducing latency and improving load times for users in different regions. It also reduces direct traffic to the VPS.
- Basic DDoS protection: Incoming traffic is filtered by Cloudflare before reaching the server. This helps mitigate simple flooding or abnormal request patterns that could otherwise overwhelm a small VPS.
- Hiding the origin server IP: The public IP of the VPS is no longer exposed. Requests are routed through Cloudflare, making it harder for attackers to directly target the server.
- TLS optimization: Cloudflare handles TLS negotiation at the edge, improving connection performance and reducing cryptographic overhead on the origin server.
SSL Mode: Full (Strict)
The SSL/TLS mode was configured to Full (Strict).
This ensures:
User → HTTPS → Cloudflare → HTTPS → Origin Server
with certificate validation on both sides.
- In Full mode, Cloudflare encrypts traffic to the server but does not validate the certificate.
- In Full (Strict) mode, Cloudflare requires a valid certificate on the origin (e.g., from Let’s Encrypt), ensuring both encryption and trust.

Since a valid certificate was already issued via Certbot, switching to Full (Strict) guarantees end-to-end secure communication.
Reverse Proxy with Nginx
Nginx was installed and configured as a reverse proxy:
sudo apt update
sudo apt install nginx -y
sudo systemctl start nginx
sudo systemctl enable nginx
Configuration file (/etc/nginx/sites-available/fintrack):
server {
    listen 80;
    server_name chenlis.com;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name admin.chenlis.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name api.chenlis.com;

    location / {
        proxy_pass http://localhost:5193;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Enable the configuration:
sudo ln -s /etc/nginx/sites-available/fintrack /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl reload nginx
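One caveat with this setup: because traffic arrives through Cloudflare's proxy, $remote_addr (and therefore the X-Real-IP header set above) holds a Cloudflare edge IP rather than the visitor's address. If real client IPs are needed in logs or in the application, nginx's real_ip module can recover them from Cloudflare's CF-Connecting-IP header. A minimal sketch (the file path is illustrative, and the IP ranges are examples only; use the current list published by Cloudflare):

```nginx
# /etc/nginx/conf.d/cloudflare-real-ip.conf (illustrative path)
# Trust Cloudflare edge ranges to report the original client IP.
set_real_ip_from 173.245.48.0/20;    # example range; fetch the full,
set_real_ip_from 103.21.244.0/22;    # up-to-date list from Cloudflare
real_ip_header CF-Connecting-IP;
```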
Enable HTTPS with Certbot
To secure all endpoints, HTTPS was configured using Let’s Encrypt:
sudo apt update
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx \
  -d chenlis.com \
  -d admin.chenlis.com \
  -d api.chenlis.com
This automatically:
- Issues SSL certificates
- Configures Nginx
- Sets up automatic renewal (Certbot installs a systemd timer or cron job that renews certificates before they expire)
3. CI/CD Pipeline with GitHub Actions
To automate deployment, I implemented a simple CI/CD pipeline using GitHub Actions.
This allows the application to be deployed automatically to the VPS whenever changes are merged into the main branch.
Deployment Strategy
The workflow follows a straightforward approach:
- Code is developed in feature branches
- Changes are merged into main via pull requests
- Deployment is triggered only on push to main

This avoids deploying unverified code and keeps the production environment stable.
GitHub Actions Workflow
The deployment is handled via SSH using a GitHub Actions workflow:
name: Deploy to VPS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USERNAME }}
          key: ${{ secrets.VPS_SSH_KEY }}
          port: ${{ secrets.VPS_PORT }}
          script: |
            set -e
            cd ~/FinTrack-Microservices
            git pull
            docker compose up -d --build
            docker image prune -f
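The workflow authenticates with an SSH key stored in repository secrets. A dedicated deploy key can be generated as follows (the file name and comment are illustrative):

```shell
# Generate a passphrase-less ed25519 key pair for CI deployment.
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ./deploy_key -N ""

# The private key (./deploy_key) goes into the VPS_SSH_KEY secret;
# the public key is appended to ~/.ssh/authorized_keys on the VPS:
cat ./deploy_key.pub
```

Using a dedicated key rather than a personal one means it can be revoked independently if the repository secrets are ever compromised.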
Why SSH-Based Deployment
Instead of using a full CI/CD platform, I chose a lightweight SSH-based approach:
- Simple to set up
- No additional infrastructure required
- Full control over deployment commands

This is sufficient for small-to-medium scale systems and personal projects.
Deployment Flow
Once configured, the deployment flow becomes:
git push → GitHub Actions → SSH to VPS → git pull → docker rebuild → services updated
This eliminates the need for manual deployment and ensures consistency between local and production environments.