I recently completed a hands-on Docker and DevSecOps lab as part of my learning track, and I wanted to write up the full walkthrough — including the parts where things broke, because that's honestly where the real learning happened.
By the end of this lab, I had:
- Pulled and inspected a public container image from Docker Hub
- Built a Python Flask web app and packaged it into a secure Docker image
- Pushed it to Docker Hub and shared it
- Automated the entire workflow with a GitHub Actions pipeline that includes Trivy vulnerability scanning as a hard security gate
This article walks you through each phase exactly as I did it, including the three issues I ran into and how I resolved them. If you're replicating this yourself, this should save you some time.
What You'll Need
Before you start, make sure you have:
- Docker Desktop installed and running
- A GitHub account
- A Docker Hub account
- A code editor (VS Code recommended)
You'll also need a Docker Hub Personal Access Token — not your password. Generate one at:
hub.docker.com → Avatar → Account Settings → Security → New Access Token
Give it Read & Write permissions and copy it straight away. You only see it once.
Phase 0 — Pull and Inspect a Public Container
The first phase is simple but important. Before building anything yourself, you act as a consumer — you pull a public image, run it, and observe what happens.
# Pull the lightweight Nginx image
docker pull nginx:alpine
# Run it in detached mode, mapping host port 8080 to container port 80
docker run -d -p 8080:80 --name my-nginx nginx:alpine
# Check it's running
docker ps
# View the logs
docker logs my-nginx
# Clean up
docker rm -f my-nginx
Open http://localhost:8080 in your browser and you'll see the Nginx welcome page.
What just happened? You downloaded a pre-built filesystem (the image) from Docker Hub, and Docker spun up an isolated process (the container) from it. The -p 8080:80 flag mapped port 80 inside the container to port 8080 on your machine, making it reachable from your browser.
This pull-and-run flow is the foundation of everything else in the lab.
Pulling the nginx:alpine image from Docker Hub in the terminal
Phase 1 — Build, Push, Pull & Share Your Own Image
Now you become the image creator. This phase has you build a Python Flask web application, write a Dockerfile, push the image to Docker Hub, and share it.
The Project Structure
your-project/
├── app.py
├── templates/
│ └── index.html
├── requirements.txt
├── Dockerfile
└── .dockerignore
Project folder structure in VS Code showing all the required files
The Application
app.py — A minimal Flask app with a home page and a /health endpoint.
from flask import Flask, render_template, jsonify
import os, datetime

app = Flask(__name__)
APP_NODE = os.environ.get("APP_NODE", "developer")

@app.route("/")
def home():
    return render_template("index.html", node=APP_NODE)

@app.route("/health")
def health():
    return jsonify({
        "status": "healthy",
        "node": APP_NODE,
        "time": datetime.datetime.utcnow().isoformat() + "Z"
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
The APP_NODE variable is read from the environment. This externalises configuration from code — you can inject any identifier at runtime without rebuilding the image.
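You can see the fallback behaviour in isolation without Docker at all. This is a hypothetical sketch (get_app_node is my name, not part of the lab code) that makes the environment lookup injectable so it can be exercised directly:

```python
import os

def get_app_node(environ=None):
    """Return the node identifier, defaulting to 'developer' when unset."""
    env = os.environ if environ is None else environ
    return env.get("APP_NODE", "developer")

print(get_app_node({}))                         # nothing injected: falls back to the default
print(get_app_node({"APP_NODE": "ci-runner"}))  # value injected at runtime
```

At container run time the same injection is just `docker run -e APP_NODE=ci-runner ...`, with no rebuild needed.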
templates/index.html
<!DOCTYPE html>
<html>
<head><title>Welcome</title></head>
<body style="font-family: sans-serif; text-align: center; padding-top: 5rem;">
<h1>Hello from <span style="color: #2b6cb0;">{{ node }}</span></h1>
<p>This container is alive and running!</p>
</body>
</html>
requirements.txt
flask==3.0.0
Pinning the version ensures reproducible builds.
The Dockerfile
FROM python:3.11-slim
LABEL maintainer="your-name@example.com"
LABEL version="1.0.0"
LABEL description="Simple Flask web app for Docker/GitHub Actions lab"
# Create a non-root user — running as root in a container is a security risk
RUN groupadd -r appuser && useradd -r -g appuser appuser
WORKDIR /app
# Copy requirements first to leverage Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chown -R appuser:appuser /app
USER appuser
EXPOSE 5000
# Note: python:3.11-slim ships without curl, so probe /health with the stdlib instead
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')" || exit 1
CMD ["python", "app.py"]
Two things worth calling out here:
Non-root user: If the application is ever compromised, an attacker running as a non-root user has far less access than one running as root. It's a foundational security hardening step.
Layer caching order: Copying requirements.txt before the rest of the application code means Docker's build cache is preserved for the pip install layer as long as dependencies haven't changed. If you only modified app.py, Docker doesn't reinstall packages — it reuses the cached layer. This significantly speeds up builds, especially in CI pipelines.
.dockerignore
__pycache__
*.pyc
*.pyo
.git
.env
*.md
Dockerfile
.dockerignore
Without .dockerignore, your entire .git directory and any local .env files would be copied into the image — inflating its size and potentially exposing secrets.
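As a rough illustration of what the ignore list buys you, here is a Python sketch that only approximates Docker's real matching (the actual implementation is based on Go's filepath.Match and handles directory prefixes and `!` negations), but it shows which paths never reach the build context:

```python
import fnmatch

# Mirrors the .dockerignore entries above
IGNORE_PATTERNS = ["__pycache__", "*.pyc", "*.pyo", ".git",
                   ".env", "*.md", "Dockerfile", ".dockerignore"]

def is_ignored(path):
    """Approximate .dockerignore matching: a path is excluded when the path
    itself, or its top-level directory, matches any pattern."""
    top = path.split("/")[0]
    return any(fnmatch.fnmatch(path, pat) or fnmatch.fnmatch(top, pat)
               for pat in IGNORE_PATTERNS)

for p in [".env", ".git/config", "app.py", "README.md"]:
    print(p, "ignored" if is_ignored(p) else "kept")
```

Both `.env` and everything under `.git/` are excluded, while the application files are kept.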
Build and Test
docker build -t my-web-app .
docker run -d -p 5000:5000 --name webapp-test my-web-app
Visit http://localhost:5000 for the home page and http://localhost:5000/health for the health check.
docker build output showing all layers building successfully
Flask app home page loading at localhost:5000
The /health endpoint returning a JSON response in the browser
Push to Docker Hub
docker login -u your-dockerhub-username
docker tag my-web-app your-dockerhub-username/my-web-app:v1.0.0-yourname
docker push your-dockerhub-username/my-web-app:v1.0.0-yourname
The tag format <version>-<yourname> makes your image personally traceable — useful when you're sharing a registry with other learners.
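If you want to build that tag programmatically, a tiny helper (make_tag is hypothetical, not part of the lab) can assemble and sanity-check it. Docker requires the portion after the colon to match `[A-Za-z0-9_][A-Za-z0-9._-]{0,127}`:

```python
import re

# Valid Docker tag: starts with alphanumeric or underscore, then up to 127
# characters from [A-Za-z0-9._-]
TAG_RE = re.compile(r"[A-Za-z0-9_][A-Za-z0-9._-]{0,127}$")

def make_tag(user, image, version, name):
    """Assemble the <version>-<yourname> tag and validate the tag portion."""
    tag = f"{version}-{name}"
    if not TAG_RE.match(tag):
        raise ValueError(f"invalid tag: {tag!r}")
    return f"{user}/{image}:{tag}"

print(make_tag("your-dockerhub-username", "my-web-app", "v1.0.0", "yourname"))
# -> your-dockerhub-username/my-web-app:v1.0.0-yourname
```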
Pushing the tagged image to Docker Hub in the terminal
Docker Hub showing the repository set to public
Pull Your Own Image
To simulate being a new user, delete your local copy and pull it from Docker Hub:
docker rmi your-dockerhub-username/my-web-app:v1.0.0-yourname
docker pull your-dockerhub-username/my-web-app:v1.0.0-yourname
docker run -d -p 5000:5000 --name pulled-app your-dockerhub-username/my-web-app:v1.0.0-yourname
Pulling the image back from Docker Hub after deleting the local copy
Running a container from the freshly pulled image
Home page working correctly from the pulled image
Health endpoint responding correctly from the pulled image
The image should behave identically to your local build — which is the point. Portability is a core Docker guarantee.
Phase 2 — Automate with a GitHub Actions DevSecOps Pipeline
This is where it gets interesting. Phase 2 takes everything manual from Phase 1 and automates it in a CI/CD pipeline that runs on every push to main.
The pipeline:
- Builds the Docker image
- Scans it for vulnerabilities with Trivy
- Only pushes to Docker Hub if the scan passes
- Runs a smoke test to verify the pushed image works
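Before looking at the YAML, the ordering can be sketched in plain Python (run_pipeline and the stubs are illustrative, not lab code). The point is that push is simply unreachable when the scan reports findings:

```python
def run_pipeline(build, scan, push, smoke_test):
    """Sketch of the security gate ordering: the scan sits between build and push."""
    image = build()
    findings = scan(image)   # stands in for the Trivy step
    if findings:             # any HIGH/CRITICAL finding blocks the push
        raise RuntimeError(f"scan blocked push of {image}: {findings}")
    push(image)
    smoke_test(image)
    return image

pushed = []
run_pipeline(lambda: "app:v1", lambda i: [], pushed.append, lambda i: None)
print(pushed)  # ['app:v1'] -- a clean image gets pushed
```

Swap the scan stub for `lambda i: ["CVE-XXXX"]` and the push step never runs, which is exactly the behaviour the real pipeline enforces.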
Set Up GitHub Secrets
Go to your repo → Settings → Secrets and variables → Actions → New repository secret
Add two secrets:
- DOCKERHUB_USERNAME — your Docker Hub username
- DOCKERHUB_TOKEN — your access token
These must be two separate secrets. More on why this matters in the issues section below.
The Workflow File
Create .github/workflows/docker-build-push.yml:
name: Build, Scan, Push and Verify Docker Image

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  DOCKER_HUB_USER: ${{ secrets.DOCKERHUB_USERNAME }}
  IMAGE_NAME: my-web-app
  TAG_SUFFIX: yourname

jobs:
  build-scan-push-verify:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build Docker image (load locally, do not push yet)
        uses: docker/build-push-action@v5
        with:
          context: .
          load: true
          push: false
          tags: |
            ${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:v1.0.0-${{ env.TAG_SUFFIX }}
            ${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:latest

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@v0.20.0
        with:
          image-ref: "${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:v1.0.0-${{ env.TAG_SUFFIX }}"
          format: 'table'
          exit-code: '1'
          ignore-unfixed: true
          severity: 'HIGH,CRITICAL'

      - name: Push Docker image to Hub
        if: success()
        run: |
          docker push ${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:v1.0.0-${{ env.TAG_SUFFIX }}
          docker push ${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:latest

      - name: Pull image and run smoke tests
        if: success()
        run: |
          docker pull ${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:v1.0.0-${{ env.TAG_SUFFIX }}
          docker run -d -p 5000:5000 --name verify-container \
            ${{ env.DOCKER_HUB_USER }}/${{ env.IMAGE_NAME }}:v1.0.0-${{ env.TAG_SUFFIX }}
          sleep 5
          curl -f http://localhost:5000/health || (echo "Health check failed!" && exit 1)
          curl -s http://localhost:5000/ | grep -i "hello" || (echo "Home page missing greeting!" && exit 1)
          echo "All smoke tests passed."

      - name: Remove test container
        if: always()
        run: docker rm -f verify-container || true
Two conditional flags worth understanding:
- if: success() on the push step — this ensures we never push an image that hasn't passed all previous steps, especially the vulnerability scan.
- if: always() on the cleanup step — this guarantees the test container is removed even if earlier steps failed, preventing orphaned processes on the runner.
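One further note: the fixed `sleep 5` in the smoke-test step works, but it is timing-sensitive. If the container ever takes six seconds to boot, the job fails spuriously. Polling is more robust. Here is a hypothetical Python alternative (wait_for_health is my name, not part of the lab), written with an injectable fetcher so the retry logic itself can be tested without a running server:

```python
import time
import urllib.request
import urllib.error

def wait_for_health(url, attempts=10, delay=1.0, fetch=None):
    """Poll url until it returns HTTP 200 or attempts are exhausted."""
    if fetch is None:
        # Default fetcher: real HTTP request via the stdlib
        fetch = lambda u: urllib.request.urlopen(u, timeout=3).status
    for _ in range(attempts):
        try:
            if fetch(url) == 200:
                return True
        except (urllib.error.URLError, ConnectionError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(delay)
    return False
```

In the workflow you could call this with `python -c`, or keep curl and wrap it in a small shell retry loop; either way the test waits only as long as it needs to.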
Push and Trigger
git add .github/workflows/docker-build-push.yml
git commit -m "Add DevSecOps CI pipeline"
git push
Then go to the Actions tab in your repo and watch it run.
The Issues I Ran Into
This is the section I actually want people to read. The happy path above is clean, but this is what happened in practice.
Issue 1 — Docker Hub Credentials Not Set Up as Separate Secrets
The first pipeline run failed at login. The runner couldn't authenticate with Docker Hub.
The problem was that I hadn't properly separated the credentials into two distinct GitHub secrets. The docker/login-action needs username and password as completely separate values:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
Once I added them as two separate secrets — DOCKERHUB_USERNAME and DOCKERHUB_TOKEN — the login step worked.
Key takeaway: Never combine credentials into a single secret, and never use your Docker Hub password here — use the Personal Access Token.
Issue 2 — Trivy Action Version Tag Missing the v Prefix
The pipeline got past login and build, then failed at the Trivy step with an "Unable to resolve action" error.
The cause was a missing v in the action version tag:
# Broken
uses: aquasecurity/trivy-action@0.20.0
# Correct
uses: aquasecurity/trivy-action@v0.20.0
GitHub Actions resolves action versions against the actual Git tags in the action's repository. The official release tag is v0.20.0 — without the v, it can't find it.
Key takeaway: Always check the official action repository for the exact tag format. Most actions use v prefixes — a missing v is an easy mistake and produces an unhelpful error message.
Issue 3 — Trivy Flagging HIGH and CRITICAL Vulnerabilities, Blocking the Push
With the version tag corrected, Trivy finally ran — and immediately failed the pipeline. It found HIGH and CRITICAL severity vulnerabilities in transitive dependencies (wheel and jaraco.context) coming from the python:3.11-slim base image.
Trivy scan output listing HIGH and CRITICAL vulnerabilities that blocked the push
This was the pipeline working exactly as intended. The exit-code: '1' setting means any HIGH or CRITICAL finding blocks the push — that's the security gate in action.
The question was how to fix it. The wrong approach would have been to pin patched versions of wheel and jaraco.context directly in requirements.txt: the application never imports them, and bloating the requirements file with transitive dependencies is bad practice.
The correct approach was a base image upgrade. By moving to a more recent patch version of python:3.11-slim, the image inherits upstream security fixes that the Python Docker team had already applied. The vulnerabilities were resolved without touching the application's dependencies at all.
Final successful end-to-end pipeline run with all steps passing
Key takeaway: When Trivy flags base image vulnerabilities, upgrade the base image — don't suppress the finding or patch transitive dependencies directly. Keeping base images current is a core DevSecOps practice, and this is exactly the kind of thing an automated pipeline should surface and enforce.
What This Lab Actually Teaches
Looking back, the manual steps in Phase 1 exist to give you a clear mental model before Phase 2 automates all of it. By the time you write the workflow file, you understand exactly what each step is doing because you've done it yourself.
The pipeline is also a good introduction to what a real supply chain security gate looks like:
- The image is built but not pushed
- It's scanned before any push happens
- A vulnerable image is blocked, not just warned about
- The push only happens if everything passes
That ordering matters. It's the difference between security as an afterthought and security baked into the delivery process.
Clean Up
docker rm -f $(docker ps -aq) 2>/dev/null
docker rmi my-web-app your-dockerhub-username/my-web-app:v1.0.0-yourname nginx:alpine 2>/dev/null
Resources
- Docker Documentation
- GitHub Actions Documentation
- Trivy by Aqua Security
- Original lab by Samuel Nartey
- My GitHub repo
If you're working through a similar lab and hit any of the same issues, I hope this saved you some debugging time. Feel free to drop a comment if anything needs more explanation.