Big Mazzy

Posted on • Originally published at serverrental.store

Docker on a VPS: From Zero to Production Deployment

Thinking about moving your application from a local development environment to a live server? You're probably wondering how to make that transition smooth and reliable. This guide will walk you through deploying your application to a Virtual Private Server (VPS) using Docker, transforming your development setup into a production-ready environment.

Why Docker on a VPS?

Running your application directly on a VPS can lead to "dependency hell" – where different applications on the same server require conflicting versions of libraries or software. Docker solves this by packaging your application and its dependencies into a self-contained unit called a container. This container, built from a Docker image (a read-only template), ensures your application runs consistently across different environments, from your laptop to the production VPS.

A Virtual Private Server (VPS) is a virtual machine rented from a hosting provider. It offers dedicated resources (CPU, RAM, storage) and full root access, giving you the control of a physical server at a lower cost. Combining Docker with a VPS provides a powerful and flexible way to host your applications.

Setting Up Your VPS

Before we can deploy anything, we need a VPS. When choosing a provider, consider factors like performance, cost, and ease of management. I've had positive experiences with providers like PowerVPS and Immers Cloud for their reliable performance and competitive pricing. For a comprehensive overview of server rental options, the Server Rental Guide is an excellent resource.

Once you've selected a provider and provisioned your VPS, you'll typically receive SSH (Secure Shell) access. This command-line interface allows you to connect to and manage your server remotely.

Connecting to Your VPS

You'll use an SSH client for this. On Linux and macOS, it's built-in. On Windows, you can use PuTTY or the built-in OpenSSH client in PowerShell or Command Prompt.

ssh your_username@your_vps_ip_address

Replace your_username with the username provided by your hosting provider and your_vps_ip_address with the IP address of your VPS.
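While a password login works, SSH keys are both more convenient and far more resistant to brute-force attacks. A minimal setup sketch, run on your local machine and using the same placeholder username and IP address as above:

```shell
# On your LOCAL machine (not the VPS): generate a key pair if you
# don't already have one. ed25519 is the currently recommended type.
ssh-keygen -t ed25519 -C "your_email@example.com"

# Copy the public key to the VPS (you'll be prompted for your password once).
ssh-copy-id your_username@your_vps_ip_address

# Subsequent logins now use the key instead of a password.
ssh your_username@your_vps_ip_address
```

Once key-based login works, consider disabling password authentication (`PasswordAuthentication no` in `/etc/ssh/sshd_config`, then restart the SSH service) to harden the server further.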

Updating Your VPS

It's crucial to start with an up-to-date system. Run these commands to update your package lists and installed packages.

sudo apt update
sudo apt upgrade -y

sudo (superuser do) allows you to run commands with administrative privileges. apt is the package manager for Debian-based Linux distributions (like Ubuntu, which is common for VPSs). update refreshes the list of available packages, and upgrade installs the latest versions of all installed packages. The -y flag automatically answers "yes" to any prompts.

Installing Docker

Now, let's get Docker installed on your VPS. The official Docker installation guide is the best place for the most up-to-date instructions, but here’s a common method for Debian/Ubuntu.
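If you'd rather not add the repository by hand, Docker also publishes a convenience script that detects your distribution and installs the latest Docker Engine. It's fine for test servers; for production, the repository method below gives you more control over versions:

```shell
# Download Docker's official convenience script, review it, then run it.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```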

Add Docker's Official GPG Key

This ensures you're downloading software from a trusted source.

sudo apt install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Add Docker’s Repository

This tells your system where to find Docker packages.

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

Now, install Docker itself.

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

Verify Installation

Check that Docker is running correctly.

sudo docker run hello-world

If you see a message indicating Docker is working, congratulations! You've successfully installed Docker.

Add Your User to the Docker Group

By default, you need sudo to run Docker commands. To avoid this, add your user to the docker group. Be aware that membership in the docker group is effectively root-equivalent access to the host, so only do this for accounts you trust.

sudo usermod -aG docker $USER

You'll need to log out and log back in for this change to take effect (or run newgrp docker to pick up the group in your current shell). You should then be able to run docker commands without sudo.

Preparing Your Application for Docker

To run your application in Docker, you need a Dockerfile. This is a text file that contains instructions for building a Docker image. It's like a recipe for creating your application's environment.

The Dockerfile

Let's consider a simple Node.js application as an example. Your Dockerfile might look like this:

# Use an official Node.js runtime as a parent image
FROM node:20-alpine

# Set the working directory in the container
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install application dependencies
RUN npm install

# Bundle app source
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run your app
CMD [ "node", "server.js" ]

Explanation:

  • FROM node:20-alpine: This line specifies the base image for your container. We're using a lightweight Node.js image (version 20, based on Alpine Linux for smaller size).
  • WORKDIR /app: Sets the default directory inside the container where subsequent commands will be executed.
  • COPY package*.json ./: Copies your package.json and package-lock.json files into the /app directory in the container.
  • RUN npm install: Executes the command to install your Node.js dependencies.
  • COPY . .: Copies the rest of your application's source code into the container.
  • EXPOSE 3000: Informs Docker that the container will listen on port 3000 at runtime. This is documentation; it doesn't actually publish the port.
  • CMD [ "node", "server.js" ]: Specifies the command to run when the container starts.
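Because COPY . . copies everything in the build context into the image, it's worth adding a .dockerignore file next to your Dockerfile. It keeps bulky or sensitive files out of the image and speeds up builds. A typical sketch for a Node.js project:

```
node_modules
npm-debug.log
.git
.env
Dockerfile
.dockerignore
```

Excluding node_modules is especially important: the dependencies are installed inside the container by RUN npm install, so copying a host copy in would be redundant and could even break platform-specific packages.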

Building Your Docker Image

Once you have your Dockerfile in your project's root directory, you can build the image on your local machine or directly on the VPS. Building on your local machine and then pushing to a registry is common for CI/CD, but for simpler deployments, building on the VPS is fine.

Navigate to your project directory on the VPS and run:

docker build -t your-app-name .
  • docker build: The command to build an image.
  • -t your-app-name: Tags the image with a name (e.g., my-web-app).
  • .: Indicates that the Dockerfile is in the current directory.

This command will create your Docker image. You can see it by running docker images.

Deploying Your Application

Now that you have your image, you can run it as a container on your VPS.

Running Your Container

The docker run command starts a container from an image.

docker run -d -p 80:3000 your-app-name
  • -d: Runs the container in detached mode (in the background).
  • -p 80:3000: Publishes port 80 on the host machine to port 3000 inside the container. This means requests to your VPS's IP address on port 80 will be forwarded to your application running on port 3000 inside the container.
  • your-app-name: The name of the Docker image you built.

Now, if you access your VPS's IP address in a web browser, you should see your application!
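In production you usually also want the container to come back up after a crash or a VPS reboot, and a fixed name makes it easier to manage. A variant of the run command above (the container name my-web-app is just an example):

```shell
# Named container with a restart policy: Docker restarts it after
# crashes and on daemon/VPS restart, unless you stopped it yourself.
docker run -d \
  --name my-web-app \
  --restart unless-stopped \
  -p 80:3000 \
  your-app-name
```

With a name assigned, the management commands below can use my-web-app instead of a container ID.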

Managing Your Containers

  • List running containers: docker ps
  • List all containers (including stopped): docker ps -a
  • Stop a container: docker stop <container_id_or_name>
  • Start a stopped container: docker start <container_id_or_name>
  • Remove a stopped container: docker rm <container_id_or_name>
  • View container logs: docker logs <container_id_or_name>

Keeping Your Application Up-to-Date

When you update your application code, you'll need to build a new Docker image and restart your container.

  1. SSH into your VPS.
  2. Navigate to your application's directory.
  3. Pull the latest code changes (e.g., git pull origin main).
  4. Rebuild the Docker image: docker build -t your-app-name .
  5. Stop and remove the old container: docker stop <container_id> and docker rm <container_id>
  6. Run the new container: docker run -d -p 80:3000 your-app-name

This manual process can become tedious. This is where tools like Docker Compose and orchestration platforms come in, but for a single application, this workflow is manageable.
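The six steps above can be collected into a small script you run on the VPS after each code change. A sketch, assuming the image is tagged your-app-name and the container is named my-web-app (both placeholder names):

```shell
#!/bin/sh
# deploy.sh - rebuild the image and restart the app container.
# Run from the project directory on the VPS.
set -e

git pull origin main
docker build -t your-app-name .

# Stop and remove the old container if it exists
# (|| true keeps the script going on the first deploy).
docker stop my-web-app || true
docker rm my-web-app || true

docker run -d --name my-web-app -p 80:3000 your-app-name
```

Note that this approach has a brief moment of downtime between stopping the old container and starting the new one; zero-downtime deploys need a reverse proxy or an orchestrator.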

Beyond Basic Deployment: Docker Compose

For applications with multiple services (e.g., a web app, a database, a caching layer), managing them with individual docker run commands becomes complex. Docker Compose is a tool for defining and running multi-container Docker applications. You define your services, networks, and volumes in a docker-compose.yml file.

Here's a simplified example for a Node.js app with a Redis cache:

version: '3.8'  # Optional: newer Compose versions ignore this attribute

services:
  web:
    build: .
    ports:
      - "80:3000"
    depends_on:
      - redis
    environment:
      REDIS_HOST: redis # Service name acts as hostname

  redis:
    image: "redis:alpine"
    # No ports mapping needed: the web service reaches Redis over the
    # Compose network. Publishing 6379 would expose Redis to the internet.

With this file, you can start all your services with a single command: docker compose up -d (the Compose plugin installed earlier; older standalone installs use docker-compose instead). This simplifies management significantly.
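Assuming the Compose plugin from the installation step above, the common lifecycle commands look like this:

```shell
docker compose up -d           # build (if needed) and start all services
docker compose ps              # list the services and their status
docker compose logs -f web     # follow the logs of one service
docker compose up -d --build   # rebuild images and recreate changed services
docker compose down            # stop and remove containers and networks
```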

Security Considerations

  • Firewall: Always configure a firewall (like ufw on Ubuntu) to only allow necessary ports (e.g., 22 for SSH, 80 for HTTP, 443 for HTTPS).
  • Regular Updates: Keep your VPS operating system and Docker itself updated to patch security vulnerabilities.
  • Least Privilege: Run containers with the least privileges necessary. Avoid running containers as root if possible.
  • Sensitive Data: Never bake sensitive information (like API keys or database passwords) directly into your Docker image. Use environment variables or Docker secrets.
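The firewall bullet above translates into a few ufw commands on Ubuntu. A sketch for the ports discussed in this article:

```shell
# Allow SSH FIRST so you don't lock yourself out, then web traffic.
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
```

One caveat worth knowing: Docker publishes container ports by writing iptables rules directly, so ports published with -p can be reachable even when ufw would otherwise block them. For services that should stay internal (a database, for example), bind the published port to localhost, e.g. -p 127.0.0.1:6379:6379, or skip publishing it entirely.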

Conclusion

Deploying your application to a VPS with Docker provides a robust, consistent, and manageable solution. You've learned how to set up a VPS, install Docker, create a Dockerfile, build an image, and run your application as a container. This foundational knowledge allows you to move your projects from development to production with confidence, ensuring they run reliably wherever they are deployed. As your needs grow, exploring Docker Compose and container orchestration tools will further enhance your deployment capabilities.


Frequently Asked Questions

Q: What is a container?
A: A container is a lightweight, standalone, executable package of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings. It isolates applications from their environment, ensuring consistency.

Q: What's the difference between a Docker image and a Docker container?
A: An image is a read-only template used to create containers. A container is a runnable instance of an image. Think of an image as a blueprint and a container as the actual building constructed from that blueprint.

Q: How do I expose my application to the internet?
A: You use the -p flag in docker run (e.g., -p 80:3000) to map a port on your VPS (the host) to a port inside the container where your application is listening. Ensure your VPS firewall allows traffic on the host port.

Q: Can I run multiple applications on one VPS using Docker?
A: Yes, you can run multiple applications. Each application can have its own Docker image and run in its own container. For managing multiple applications or services, Docker Compose is highly recommended.


Disclosure: This article contains affiliate links for PowerVPS and Immers Cloud. If you sign up through these links, I may receive a commission at no extra cost to you.
