Many people think you need an AWS or Azure account to learn deployment and CI/CD. That’s a misconception. If you have VirtualBox and GitHub Actions, you have everything you need to build a fully automated pipeline.
In this guide, I will show you how to deploy a MEAN stack application on a local Linux VM (the process is nearly identical for the MERN stack). This setup works on any Linux environment, whether it's a dedicated server or a VM running on your laptop.
🏗️ The Architecture
We are using a monorepo approach, meaning both the Angular frontend and Node.js backend live in the same repository. Here is how the flow works:
- Push code to the `production` branch.
- GitHub Actions builds the Docker images.
- Images are pushed to Docker Hub.
- A Self-Hosted Runner on your VM pulls the latest images and restarts the containers.
- Nginx acts as a reverse proxy to route traffic.
If you are curious about how this differs from cloud-specific hosting, check out my previous post on Hosting a Node.js Server in an EC2 Instance.
1. Setting Up the Server (VirtualBox)
I used a Debian VM for this setup.
- Network: Set your VM adapter to Bridged Mode. This allows the VM to get an IP from your router, making it a real node on your Local Area Network (LAN).
- Access: You should be able to SSH into it: `ssh user@your_vm_ip`.
For a detailed breakdown of how to handle LAN networking and port forwarding to make your server accessible from the internet, refer to my post: How Web Technology Works - Part 01.
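If you will be SSHing into the VM often, an optional `~/.ssh/config` entry saves typing. A minimal sketch (the `mean-vm` alias is a placeholder; substitute your own user and IP):

```
# ~/.ssh/config
Host mean-vm
    # Your VM's LAN IP from Bridged Mode
    HostName 10.131.44.201
    User user
```

With this in place, `ssh mean-vm` connects directly.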
2. Docker Hub & GitHub Secrets
To push images automatically, GitHub needs permission to talk to Docker Hub. Do not use your account password.
- Go to Docker Hub > Settings > Personal access tokens.
- Create a New Access Token with Read & Write access.
- In your GitHub repository, go to Settings > Secrets and variables > Actions.
- Add `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` (the token you just created).
3. The Self-Hosted Runner
Instead of using GitHub’s servers to deploy, we use our own VM. This is called a Self-Hosted Runner.
- In GitHub: Settings > Actions > Runners > New self-hosted runner.
- Select Linux and follow the commands to download and configure it on your VM.
- Once configured, install it as a service so it runs in the background:

```bash
sudo ./svc.sh install
sudo ./svc.sh start
```
4. Containerization (The Code)
Since we are in a monorepo, we need separate Dockerfiles and a single Compose file.
Backend Dockerfile (backend/Dockerfile)
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```
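One thing worth adding alongside this Dockerfile is a `.dockerignore`, so `COPY . .` doesn't drag the host's `node_modules` or Git history into the image. A minimal sketch:

```
node_modules
npm-debug.log
.git
.env
```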
Frontend Dockerfile (frontend/Dockerfile)
```dockerfile
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
# Note: Angular 17+ builds output to dist/your-app-name/browser; adjust this path to match your build
COPY --from=build /app/dist/your-app-name /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
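One caveat with the stock nginx image: refreshing an Angular client-side route (e.g. `/dashboard`) returns a 404, because there is no such file on disk. A minimal custom config fixes this with `try_files`; copy it into the image with a line like `COPY nginx.conf /etc/nginx/conf.d/default.conf` (the filename is an assumption; use whatever you name it):

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # Fall back to index.html so Angular's router handles deep links
        try_files $uri $uri/ /index.html;
    }
}
```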
Docker Compose (docker-compose.yml)
```yaml
version: '3.8'
services:
  backend:
    image: your-docker-username/mean-backend:latest
    extra_hosts:
      - "host.docker.internal:host-gateway"
    container_name: mean-backend
    restart: always
    ports:
      - "8080:8080"
  frontend:
    image: your-docker-username/mean-frontend:latest
    container_name: mean-frontend
    restart: always
    depends_on:
      - backend
    ports:
      - "81:80"
```
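This Compose file assumes MongoDB runs on the host VM, which is why the backend gets the `host.docker.internal:host-gateway` mapping. If you would rather run the database as a container too, a sketch of an extra service (the `mongo:7` tag and volume name are assumptions) could look like this; the backend would then connect to `mongodb://mongo:27017` over the Compose network instead:

```yaml
  mongo:
    image: mongo:7
    container_name: mean-mongo
    restart: always
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data:
```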
5. Nginx Reverse Proxy
Install Nginx on the host VM with `sudo apt install nginx`. We use it to route port 80 traffic to our containers.
Configuration (/etc/nginx/sites-available/default):
```nginx
server {
    listen 80;
    server_name 10.131.44.201; # Use your VM IP

    location /api/ {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://localhost:81;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
Check the new configuration for syntax errors with `sudo nginx -t`, then reload the service with `sudo systemctl reload nginx`.
6. The CI/CD Pipeline
Create .github/workflows/deploy.yml. This script automates the entire process.
```yaml
name: Build and Deploy

on:
  push:
    branches: [ production ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and Push
        run: |
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/mean-backend ./backend
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/mean-frontend ./frontend
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/mean-backend
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/mean-frontend

  deploy:
    needs: build
    runs-on: self-hosted
    steps:
      - name: Pull and Restart
        run: |
          cd ~/your-app-dir
          docker-compose pull
          docker-compose up -d
```
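One optional improvement to this workflow: pushing only `:latest` means you can't roll back to a previous image after a bad deploy. A common fix is to also tag each image with the short commit SHA. A sketch of the tag derivation (GitHub Actions sets `GITHUB_SHA` automatically; the fallback value here is only so the snippet runs outside a workflow):

```shell
# Derive a short, immutable image tag from the commit SHA.
GITHUB_SHA="${GITHUB_SHA:-0123456789abcdef0123456789abcdef01234567}"
TAG=$(printf '%s' "$GITHUB_SHA" | cut -c1-7)
echo "Image tag: $TAG"

# In the Build and Push step you would then build and push both tags, e.g.:
#   docker build -t user/mean-backend:latest -t "user/mean-backend:$TAG" ./backend
#   docker push user/mean-backend:latest
#   docker push "user/mean-backend:$TAG"
```

Rolling back then becomes a matter of pointing the Compose file at a known-good SHA tag and re-running `docker-compose up -d`.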
CI/CD Success
Once you push, you should see all green checkmarks in GitHub Actions.
Docker Hub
Your images will appear on Docker Hub with the `latest` tag.
Running Services
Check your VM to see the containers live.
Your application is now live on your local network at `http://VM_IP`!
Setting up a CI/CD pipeline on a local virtual machine proves that DevOps is about logic and architecture, not just the service provider you use. By utilizing VirtualBox in Bridged Mode, you can simulate a production-like environment and gain full control over your networking and deployment cycles without a cloud budget.
Key Takeaways
- Infrastructure Flexibility: This setup applies to any Linux environment, whether it is a VM, a Raspberry Pi, or a local server.
- Automation: Using a self-hosted runner allows you to keep your deployment logic local while leveraging GitHub for the build process.
- Monorepo Efficiency: Managing the Angular frontend and Node.js backend in a single repository simplifies the CI/CD workflow.
What challenges did you face setting up your local environment? Let me know in the comments.


