Hello, this post will show you how to deploy a NestJS and VueJS full stack project to a VPS with the power of Dokploy. First, I'll show you how to create production-ready Dockerfiles. These can be used to build images that can be pushed to your Docker Hub registry for future use. After that, we'll see how to automate the image building process with the help of GitHub Actions. Once that is working, you'll be able to use those images to create services in Dokploy.
This post assumes that you have a domain to attach to your web app, plus a secured VPS. Here is an article that shows how to harden a VPS: https://www.hostinger.com/tutorials/vps-security. If you'd rather not manage the nitty-gritty security details yourself, platforms such as Hostinger offer managed VPS hosting services.
Dockerfile
ARG NODE_VERSION=22.15.1
FROM node:${NODE_VERSION}-bookworm AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci
Here we are copying our `package.json` and `package-lock.json` files from the host machine into our base stage, which runs the clean install command from npm. `npm ci` installs exactly the dependencies pinned in `package-lock.json`. Unlike `npm install`, it never updates dependencies or rewrites the lockfile, and it fails outright if the lockfile and `package.json` are out of sync. This ensures a deterministic and reproducible install, making it perfect for our Dockerfile.
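To see the difference concretely (assuming a project with a committed lockfile):

# reproducible: removes any existing node_modules and installs exactly
# what package-lock.json pins; errors out if it disagrees with package.json
npm ci

# reconciles package.json with the lockfile and may rewrite
# package-lock.json as a side effect, which we don't want in a build
npm install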
FROM node:${NODE_VERSION}-bookworm AS builder
WORKDIR /app
COPY --from=base /app/node_modules ./node_modules
COPY . .
RUN npm run build
ENV NODE_ENV=production
RUN npm ci --omit=dev && npm cache clean --force
In this builder stage, we are copying over the dependencies from the previous stage and the project files from the host machine. This allows us to build the project, transpiling our code into JavaScript stored in the `./dist` directory. `npm ci --omit=dev` does pretty much the same thing as `npm ci`, except it excludes the `devDependencies`. `npm cache clean --force` removes all cached data. The force flag is necessary because npm normally refuses to do this to protect against accidental data loss, but that's fine in our Docker container, as we're running a stateless backend and will never install packages in it again.
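Since the builder stage's `COPY . .` pulls in the whole build context, it's also worth adding a `.dockerignore` so host artifacts never sneak into a layer. A minimal sketch (the entries are typical, adjust for your project):

# .dockerignore
node_modules
dist
.git
.env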
FROM node:${NODE_VERSION}-bookworm AS runner
WORKDIR /app
COPY --from=builder --chown=node:node /app/dist ./dist
COPY --from=builder --chown=node:node /app/node_modules ./node_modules
USER node
CMD ["node", "dist/apps/f1-model-backend/main.js"]
This last stage is where we actually run our NestJS backend. Here we are simply copying over our built project and the necessary dependencies. The `CMD` instruction defines the command that starts the project under Node.js.
The reason we have multiple stages in our Dockerfile is to reduce the size of the final image. Each new stage starts from a fresh base image, and we explicitly copy in files from the host machine and/or previous stages and run only the necessary commands, so build tooling and devDependencies never make it into the runner.
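Before automating anything, it's worth sanity-checking the image locally. Something along these lines should work (the image name and env values are illustrative):

# build the backend image and run it on port 3000
docker build -t f1-model-backend:local .
docker run --rm -p 3000:3000 \
  -e NODE_ENV=production \
  -e DATABASE_URL="postgres://user:password@host:5432/db" \
  f1-model-backend:local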
Moving on to the frontend Dockerfile, the base and builder stages are going to look pretty much identical. The only stage that will look different is the runner stage, as we are going to use NGINX to serve our static files.
FROM nginx:${NGINX_VERSION}-alpine${ALPINE_VERSION} AS runner
COPY --from=builder --chown=nginx:nginx /app/dist /usr/share/nginx/html
COPY --chown=nginx:nginx nginx/nginx.conf /etc/nginx/conf.d/default.conf
# the pid file and cache dirs must be writable when running as a non-root user
RUN touch /var/run/nginx.pid && \
    chown -R nginx:nginx /var/run/nginx.pid /var/cache/nginx
EXPOSE 8080
USER nginx
CMD ["nginx", "-g", "daemon off;"]
In this last stage, we are copying our built frontend into NGINX's html directory, along with the configuration itself. Note that the NGINX image ships with an `nginx` user rather than a `node` user, so that's who we hand file ownership to and run as. This is what the conf file should look like:
server {
    listen 8080;

    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }
}
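The `try_files` directive is what makes client-side routing work: any path that doesn't match a real file falls back to `index.html`, letting the Vue router take over. You can smoke-test the image locally with something like this (the image name is illustrative):

docker build -f Dockerfile.prod -t sports-predictions-vue:local .
docker run --rm -d -p 8080:8080 sports-predictions-vue:local
# a client-side route should still return 200 with index.html
curl -I http://localhost:8080/some/client-route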
That's all that's needed to build our production backend and frontend images. Next, we'll take a look at how to build and push these images to a container registry. I only really have experience using GitHub Actions to automate this procedure, so that's the route we'll take.
GitHub Actions
This is what our frontend workflow will look like:
name: Frontend Docker Image Build and Push

on: [workflow_dispatch]

jobs:
  publish_image:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./vue
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and Push Image
        run: |
          docker buildx build -f Dockerfile.prod -t ${{ secrets.DOCKER_USERNAME }}/sports-predictions-vue-prod:latest .
          docker push ${{ secrets.DOCKER_USERNAME }}/sports-predictions-vue-prod:latest
Focusing on the build and push step, `-f` lets you specify the Dockerfile you want to use, while `-t` sets the image tag. `${{ secrets.DOCKER_USERNAME }}` is pulled from your repository's Actions secrets. To set your secrets and variables, head to https://github.com/{username}/{repo_name}/settings/secrets/actions. You should see options for repository secrets and environment secrets. Environment secrets allow you to override your default repository secrets when you want different values for your staging and production builds. For example, your staging database URL is likely (and should be) different from your production DB URL; in that case, you can set those URLs in the environment secrets area.
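If you prefer the terminal, the GitHub CLI can set repository secrets as well. A quick sketch (the values are placeholders, and a Docker Hub access token is safer than your account password):

gh secret set DOCKER_USERNAME --body "your-dockerhub-username"
gh secret set DOCKER_PASSWORD --body "your-dockerhub-access-token"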
At the beginning of the workflow, you should see `[workflow_dispatch]`. With this trigger, you need to kick off the action manually from the Actions tab. There are other options, such as dispatching automatically on pushes to the main branch or when a Git tag is pushed.
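For instance, swapping the `on:` block for something like this (the branch and tag patterns are illustrative) would build on every push to main and on version tags:

on:
  push:
    branches: [main]
    tags: ['v*']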
Dokploy
For setting up Dokploy, I'm assuming you have a VPS with all the necessary security in place. To get started, SSH into your server and run `curl -sSL https://dokploy.com/install.sh | sh`. This installs Dokploy on the server with a Docker Swarm cluster, and you'll receive a link to your new Dokploy admin panel. Once you navigate to the panel in your browser, you'll be met with a page that allows you to register. After registering, you'll be redirected to the dashboard.
If you own a domain, then attaching it to the admin panel is easy. Navigate to Web Server under settings. There, you can apply your desired domain and have an SSL certificate provisioned through Let's Encrypt.
In your chosen DNS service (Cloudflare, Namecheap, etc.), make sure to set up an A record that points to the IP address of your server.
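Once the record is in place, you can confirm it resolves before expecting the certificate to be issued (the domain is a placeholder):

dig +short dokploy.yourdomain.com A
# should print your VPS's IP address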
To begin the process of deploying your application, head back to the projects tab and press the Create Project button. Enter a name for the project and hit create. Now you can start creating applications and your production DB. To create a database, simply choose the database option in the create service dropdown.
Choose from the selection of DB services, enter the name of the service, your DB name, username, password, and lastly the Docker image you want the container to be built from.
The page you'll see once the service is created contains a bunch of information about your DB. Keep note of the External Host; that's the URL your backend will be communicating with once it's up and running.
Creating a backend service is similar, but this time, the only thing you need to define is the service name. Once you create the service, the page you'll be redirected to will be somewhat similar to the database configuration page.
The method I'm using to deploy is through the Docker tab: I just provide my credentials and the backend image we built earlier. Before hitting deploy, we'll need to input any necessary environment variables, such as the database URL; this is where the External Host from the DB service comes into play. We'll also want to attach a domain to the application. In the domain tab, you'll see a page that's similar to the web server settings page. Here you can set your domain, maybe something like api.{your_domain}.com, set up an SSL certificate, and add an A record for the api subdomain in your DNS service just like before. The only difference here is the port the API will be accessible on; given that this is a Node.js application, it's likely port 3000. Once we hit deploy, Dokploy pulls the image down from the registry and creates a container.
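The environment variables for the backend might look something like this (the names and values are illustrative, with the host being the External Host from your database service):

DATABASE_URL=postgresql://db_user:db_password@{external-host}:5432/db_name
PORT=3000
NODE_ENV=production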
When it comes to the frontend application, it's going to be very similar. Of course, your image will be the frontend one, and the subdomain may differ from your API subdomain. Since our frontend image serves traffic through NGINX on port 8080, that's the port to set on the domain.
Those are the basics: once all of those components are deployed and running, your web app should be accessible over the internet. There's more to explore with Dokploy, so be sure to take a look around the admin panel. One cool thing to check out is templates, found under any project you have. Templates let you easily spin up services such as Supabase, DragonflyDB, and Elasticsearch.
I hope all of this helps and that you end up enjoying Dokploy; so far, it has been a great and easy alternative to other services.