DEV Community

Ankit Kumar


From a Single Container to a Secure Application Stack: A Practical Guide to Docker and Server Hardening

Hello, fellow tech enthusiasts and DevOps practitioners!

The world of DevOps is vast, but its foundation is built on a few core principles: containerization, orchestration, and security. Today, I want to walk you through a practical journey I took to solidify my understanding of these pillars. We'll start with a single container and the challenge of data persistence, then scale up to a multi-service application, and finally, zoom out to secure the very server our containers run on.

This isn't just a list of commands; it's a step-by-step guide with explanations of why we do what we do. Let's dive in!


LAB 1: Solving the Problem — Why Docker Volumes are Essential

By their nature, Docker containers are ephemeral. Think of them as temporary workspaces. If you remove a container, all the files and data created inside it vanish forever. This is fine for stateless applications, but what about a database or a web server that hosts user content? We need that data to stick around.

This is the problem that Docker Volumes solve. A volume is like an external hard drive that you plug into your container. It's managed by Docker but exists outside of any single container's lifecycle.

My Hands-On Scenario: Proving Data Persistence

My goal was to see this in action: could I make data survive even after its original container was deleted?

Step 1: Creating the "Digital Safe" (Our Volume)
First, I told Docker to create a managed, persistent storage space called test.

docker volume create test

You can see all your volumes with docker volume ls. At this point, test is just an empty but safe place for data.

Step 2: Launching a Web Server and connecting the Volume

Next, I launched an Nginx web server. The key part of this command is the -v flag, which mounts our test volume at /usr/share/nginx/html inside the container. This is the directory where Nginx looks for its website files.

sudo docker container run --name mywebserver -d -p 80:80 -v test:/usr/share/nginx/html nginx

What happened here is subtle but important: because our test volume was empty, Docker automatically copied the container's default index.html (along with the rest of /usr/share/nginx/html) into the volume on first mount. The volume is now the source of truth for the website's content.

Step 3: Changing the Data from Inside
To prove we were now working with the volume, I entered the running container and changed the index.html file.

# Open a shell inside the 'mywebserver' container
docker exec -it mywebserver sh

# Once inside, overwrite the content of the main web page
echo '<h1>Data Persists Beyond the Container!</h1>' > /usr/share/nginx/html/index.html

# Exit the container shell
exit

When I refreshed my browser at http://localhost:80, I saw my new message. This confirmed I had successfully modified the data inside the persistent volume.

Step 4: The Ultimate Test — Destroying the Container
Now for the moment of truth. I completely stopped and removed the mywebserver container.

docker rm -f mywebserver

The container is gone. A traditional workspace would have been wiped clean.

Step 5: Resurrecting the Data with a New Container
I launched a brand new, completely separate Nginx container called mywebserver3. However, I connected it to the exact same test volume.

sudo docker run --name mywebserver3 -d -p 80:80 -v test:/usr/share/nginx/html nginx

When I visited http://localhost:80 again... Success! The message "Data Persists Beyond the Container!" was there. We proved that the volume keeps our data safe, completely independent of the container's lifecycle. This concept is critical for running databases, CMSs, or any stateful application in Docker.


LAB 2: Building a Real Application with Docker Compose

Running one container is useful, but modern applications are rarely that simple. They are usually composed of multiple services working together—a web server, a database, a caching service, etc. Managing the lifecycle and networking of all these containers with individual docker commands would be a nightmare.

Enter Docker Compose. It's a tool for defining and running multi-container applications using a single, simple configuration file: docker-compose.yml.

My Hands-On Scenario: Deploying a WordPress Site
I set up a classic web application stack: a WordPress frontend that depends on a MySQL database backend.

The Blueprint: docker-compose.yml
This file is the single source of truth for my entire application.

version: "3.8"

services:
  # The first service: our MySQL Database
  mydatabase:
    image: mysql:5.7
    restart: always
    volumes: 
      - mydata:/var/lib/mysql # Attach a volume to the DB's data directory!
    environment: 
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - mynet

  # The second service: our WordPress frontend
  mywordpress:
    image: wordpress:latest
    depends_on: 
      - mydatabase # Start the database container before WordPress (controls start order only)
    restart: always
    ports:
      - "80:80" # Expose the web server on port 80 of my computer
    environment: 
      WORDPRESS_DB_HOST: mydatabase:3306 # The magic of Docker networking!
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    networks:
      - mynet

# Global definitions for resources used by the services
volumes:
  mydata: {} # Formally create the volume for our database

networks:
  mynet: # Formally create a private network for our containers
    driver: bridge

With this file in place, launching the entire stack was as simple as running docker-compose up -d.

My Key Learnings:

  • Declarative Infrastructure: I didn't have to tell Docker how to do things. I just declared the desired state in the YAML file, and Compose figured it out.

  • Service Discovery: The most powerful feature here is networking. The WordPress container found the database using its service name, mydatabase, as its hostname (WORDPRESS_DB_HOST: mydatabase:3306). Docker Compose created a private virtual network (mynet) where containers can find each other by name. No more hard-coding IP addresses!

  • Dependency Management: The depends_on key is a lifesaver for ordering. It ensures the mydatabase container is started before mywordpress begins to start. One caveat: on its own, depends_on only controls start order. It does not wait for MySQL to actually be ready to accept connections, so a service can still race ahead of a slow-starting database.
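To make Compose wait for actual database readiness rather than just container start, you can pair depends_on with a healthcheck. A minimal sketch of how the two services above could be amended (the mysqladmin ping check and the timing values are illustrative; the long-form depends_on condition syntax requires a reasonably recent Docker Compose):

```yaml
services:
  mydatabase:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-psomewordpress"]
      interval: 10s
      timeout: 5s
      retries: 5

  mywordpress:
    image: wordpress:latest
    depends_on:
      mydatabase:
        condition: service_healthy # wait until the healthcheck passes, not just until the container starts
```

With this in place, Compose holds back mywordpress until MySQL reports healthy, which eliminates the "can't connect to database" errors you sometimes see on first boot.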


LAB 3: Securing the Foundation — A Core SysAdmin Skill

Containers are great, but they still run on a host operating system—usually Linux. Securing this host is just as important as securing the application. One of the most fundamental security-hardening steps is disabling direct root login via SSH.

Why? Because it enforces accountability. It forces every administrator to log in with their own personal user account first, and then use sudo to perform administrative tasks. This creates a clear audit trail (user 'bob' ran command 'xyz') instead of a mysterious log of actions performed by the all-powerful root.

My Hands-On Scenario: KodeKloud Security Challenge

The task was to apply this security measure on three different app servers.

The Universal 5-Step Process:

1. SSH as a Standard User: First, I connected to the server using a non-root account.

ssh tony@stapp01

2. Elevate Privileges Correctly: I became the root user using the sudo command, which required my personal password and logged the action.

sudo su -

3. Edit the SSH Configuration File: Using a text editor like vi, I opened the SSH daemon's configuration file.

vi /etc/ssh/sshd_config

4. Change the Security Directive: Inside the file, I found the line PermitRootLogin and changed its value to no. (It's often commented out with a #, which must be removed.)

PermitRootLogin no
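For context, PermitRootLogin is usually just the first of several directives teams tighten in sshd_config. A hedged example of what a hardened section might look like (these extra values are illustrative and were not part of this lab; in particular, only disable password authentication once key-based login is confirmed working):

```
PermitRootLogin no
PasswordAuthentication no   # only after SSH keys are set up and tested!
MaxAuthTries 3
```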

5. Apply the Changes: A configuration change is meaningless until the service is restarted.

systemctl restart sshd
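Editing the file by hand in vi works fine, but when you repeat the same change across three servers, a non-interactive edit is less error-prone. A minimal sketch using sed, demonstrated here on a scratch copy (on a real server you would point it at /etc/ssh/sshd_config, and it's wise to validate with sshd -t before restarting):

```shell
# Work on a scratch copy -- the file contents here are illustrative
conf=$(mktemp)
printf '#PermitRootLogin yes\nPasswordAuthentication yes\n' > "$conf"

# Uncomment the directive if needed and force its value to "no"
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$conf"

# Confirm the result
grep '^PermitRootLogin' "$conf"   # -> PermitRootLogin no
```

The same one-liner, run against /etc/ssh/sshd_config on each app server, would make the change idempotent and identical everywhere.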

I repeated this simple but critical process on all required servers, ensuring a consistent and secure baseline across the infrastructure.

Thanks for following along! I hope this detailed walkthrough was helpful.
