From Docker Containers to Kubernetes Pods: Deploying My First Microservices Platform

Last week, I built three microservices in three days. This weekend, I deployed them all to Kubernetes. Here's what I learned about container orchestration by doing it myself.

The Gap Between Docker and Kubernetes

After building my API service, auth service, and worker service with Docker, I had three working containers. I could run them individually, test them locally, and they worked fine.

But running containers individually doesn't scale. In production, you need:

  • Multiple replicas for redundancy
  • Automatic restarts when containers fail
  • Load balancing across instances
  • Service discovery so containers can find each other
  • Rolling updates without downtime

That's what Kubernetes provides. It's the difference between "I have containers" and "I have a platform."

Day 1: Deploying the First Service

I started with Minikube, a local Kubernetes cluster that runs on your machine. The installation was straightforward, but the conceptual shift from Docker to Kubernetes took time to understand.
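
For anyone following along, getting a local cluster up is a couple of commands once Minikube is installed (driver and resource flags will vary by machine, so I'm leaving them out):

minikube start            # spin up a single-node Kubernetes cluster locally
kubectl get nodes         # confirm kubectl can reach the cluster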

Kubernetes introduces new abstractions:

Pods - The smallest deployable unit in Kubernetes. Think of a pod as a wrapper around one or more containers. My API service container becomes an API service pod.

Deployments - Manage pods. Instead of saying "run this container," you say "maintain 2 replicas of this service with these resources." Kubernetes handles the rest.

Services - Provide a stable network endpoint. Pods can die and restart (with new IP addresses); Services give them a consistent name and handle load balancing.

Writing my first Kubernetes manifests felt like learning a new language. Here's what a basic deployment looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
      - name: api
        image: api-service:v1
        ports:
        - containerPort: 3000

This declares what I want (2 replicas of my API service), and Kubernetes maintains that state. If a pod crashes, Kubernetes automatically creates a new one. If I scale to 3 replicas, Kubernetes creates another pod. It's declarative infrastructure.
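
For reference, turning that manifest into running pods is two commands (the file name here is just how I happened to save it):

kubectl apply -f api-deployment.yaml    # create or update the Deployment
kubectl get pods -l app=api-service     # confirm both replicas reach Running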

The first deployment took several attempts. I hit the classic error: ErrImageNeverPull. Minikube has its own Docker environment, so I had to rebuild my image in Minikube's Docker daemon. A small detail, but it cost me 30 minutes of debugging.
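
That error appears when imagePullPolicy is Never and the image doesn't exist in the node's local cache, so the fix is to build inside Minikube's Docker environment rather than your host's:

eval $(minikube docker-env)         # point the docker CLI at Minikube's daemon
docker build -t api-service:v1 .    # now the image lands where Kubernetes can see it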

Once the pods were running, accessing them was another learning curve. Kubernetes doesn't expose pods directly to your host machine. You need a Service with a NodePort, or you use Minikube's tunnel feature.
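
For completeness, here's roughly what the NodePort Service for the API looks like — the nodePort value below is just an example, and you can omit it to let Kubernetes pick one from the 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: NodePort
  selector:
    app: api-service
  ports:
  - port: 3000         # port other services use inside the cluster
    targetPort: 3000   # containerPort on the pods
    nodePort: 30080    # example external port on the node

With that in place, Minikube can hand you a URL that reaches the service from your host: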

minikube service api-service --url

This gave me a localhost URL to test my API. When I hit the health endpoint and got back JSON, it felt like a breakthrough. My container was now a pod, managed by Kubernetes.

Day 2: Deploying All Three Services

With the pattern established, deploying the auth and worker services went faster: build the image in Minikube's Docker environment, write the Deployment and Service manifests, apply them with kubectl, and verify the pods are running.

By the end of Saturday, I had:

  • 6 pods running (2 replicas each of API, auth, worker)
  • 3 Deployments managing them
  • 3 Services exposing them
  • All orchestrated by Kubernetes

Running kubectl get all and seeing everything marked as "Running" was satisfying in a way that's hard to explain. This was no longer just containers. This was a platform.

Day 3: Service Discovery and Communication

The real power of Kubernetes became apparent when I tested service-to-service communication.

In Kubernetes, services can reach each other by name. No IP addresses, no configuration files with hardcoded endpoints. Kubernetes DNS handles service discovery automatically.

From any pod in the cluster, I could reach other services:

  • http://api-service:3000
  • http://auth-service:3001
  • http://worker-service:3002
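
Those short names work because everything lives in the same namespace (default, in my case). Behind the scenes, Kubernetes DNS resolves the fully qualified form, which — assuming the default cluster.local domain — looks like this:

http://api-service.default.svc.cluster.local:3000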

To test this, I ran a temporary pod with curl installed:

kubectl run curl-test --image=curlimages/curl -it --rm -- sh

Inside that pod, I tested the full authentication flow (a rough sketch of the curl calls follows the list):

  1. Called auth-service to login and get a JWT token
  2. Verified the token by calling auth-service again
  3. Created a background job by calling worker-service
  4. Checked worker statistics
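
Roughly what that looked like from inside the curl pod — the endpoint paths and credentials below are illustrative, not the exact routes my services expose:

# 1. Log in and get a JWT (example path and credentials)
curl -s -X POST http://auth-service:3001/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"demo","password":"demo"}'

# 2. Verify the token (paste the token from step 1)
curl -s http://auth-service:3001/verify -H "Authorization: Bearer <token>"

# 3. Create a background job
curl -s -X POST http://worker-service:3002/jobs -H "Authorization: Bearer <token>"

# 4. Check worker statistics
curl -s http://worker-service:3002/stats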

Everything worked. Services could discover and communicate with each other automatically. The auth service didn't need to know the worker service's IP address. Kubernetes DNS resolved the service names to the correct pods and load-balanced requests across replicas.

This is what makes microservices viable at scale. Services are decoupled. Pods can restart, move to different nodes, or scale up and down, and the network layer just works.

What I Learned About Kubernetes

Kubernetes is declarative, not imperative

With Docker, you run commands: "start this container with these flags." With Kubernetes, you declare what you want: "maintain 2 replicas of this service." Kubernetes continuously reconciles the actual state with your desired state.

If a pod crashes, Kubernetes automatically creates a new one. You don't write scripts to handle failures. The system is self-healing by design.
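
The contrast in one line each — an imperative docker run versus declaring the desired state and letting Kubernetes reconcile it (the file name is the same example manifest as above):

# Imperative: start one container and manage it yourself
docker run -d -p 3000:3000 api-service:v1

# Declarative: describe what you want; Kubernetes keeps it true
kubectl apply -f api-deployment.yaml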

Resource limits matter

Every container in my deployments has CPU and memory limits:

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"

Requests tell Kubernetes how much the container needs. Limits prevent a single container from consuming all node resources. Without these, a memory leak in one service could crash the entire cluster.

Health checks are essential

I implemented liveness and readiness probes for every service:

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5

Liveness probes tell Kubernetes if the container is healthy (if not, restart it). Readiness probes tell Kubernetes if the container is ready to receive traffic (if not, remove it from the service endpoints).

These aren't optional features to add later. They're fundamental to how Kubernetes manages your workloads.

kubectl is your interface to everything

Learning kubectl commands was like learning a new shell. The most useful ones:

kubectl get pods              # See all pods
kubectl get pods -w           # Watch pods update
kubectl logs <pod-name>       # View pod logs
kubectl describe pod <name>   # Debug pod issues
kubectl exec -it <pod> -- sh  # Shell into a pod
kubectl scale deployment api-service --replicas=3  # Scale up

When things go wrong (and they will), these commands are how you debug. Looking at pod events with kubectl describe saved me multiple times when pods wouldn't start.

Challenges I Hit

Image pull errors were frustrating until I understood Minikube's Docker environment. Building images locally and expecting Kubernetes to find them doesn't work. You have to build in the right context.

Networking confusion took time to sort out. NodePort services, Minikube tunnels, internal vs external access - it's more complex than Docker's port mapping. Understanding that services handle internal routing while NodePort handles external access clarified things.

YAML syntax is unforgiving. Missing indentation, typos in field names, wrong API versions - all caused cryptic errors. A linter would have saved me time.
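
One habit I've picked up since: let kubectl validate a manifest before it ever touches the cluster. A dry run catches syntax errors and many field typos:

kubectl apply --dry-run=client -f api-deployment.yaml   # parse and validate locally
kubectl apply --dry-run=server -f api-deployment.yaml   # full API-server validation, nothing persisted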

Why This Matters for DevOps

Container orchestration is fundamental to modern DevOps practices. You can't practice continuous deployment at scale without it. You can't implement blue-green deployments, canary releases, or auto-scaling without an orchestrator.

Kubernetes provides the platform for everything else: CI/CD pipelines, infrastructure as code, observability, security policies. Learning Kubernetes isn't just about containers. It's about building production systems.

What's Next

This weekend taught me Kubernetes fundamentals, but I'm just scratching the surface. Coming next:

ConfigMaps and Secrets - Right now, configuration is hardcoded in my manifests. I need to externalize configuration and manage secrets properly.

Ingress - Instead of NodePort services, use an Ingress controller for proper HTTP routing and load balancing.

Network Policies - Implement security controls that restrict which services can talk to which.

Terraform for EKS - Deploy this platform to real AWS infrastructure using Infrastructure as Code.

CI/CD Pipeline - Automate building images, scanning for vulnerabilities, and deploying to Kubernetes.

After that: monitoring with Prometheus, logging aggregation, service mesh, and all the other pieces of a production platform.

The Power of Hands-On Learning

I could have watched 10 hours of Kubernetes tutorials and still not understood it as deeply as deploying my own services and debugging the issues.

Reading about Deployments is one thing. Writing one, applying it, watching pods fail, reading error messages, fixing the manifest, and seeing it finally work - that's how you actually learn.

If you're learning Kubernetes, I recommend this approach: build something simple, deploy it, break it, fix it. The concepts that seemed abstract in documentation become concrete when you're running actual workloads.

Want to Follow Along?

The complete project is on GitHub: secure-cloud-platform

All the Kubernetes manifests, documentation, and setup instructions are there. Each service has detailed deployment guides.

I'm documenting this journey on Dev.to and LinkedIn. If you're also learning Kubernetes or have feedback on my setup, I'd love to hear from you.

Next post: Infrastructure as Code with Terraform - provisioning AWS resources and deploying this platform to real cloud infrastructure.


About my journey: Former Air Force officer and software engineer/solutions architect, now teaching middle school computer science while transitioning back into tech with a focus on DevSecOps. Building elite expertise in Infrastructure as Code, Kubernetes security, and cloud-native platforms. AWS certified (SA Pro, Security Specialty, Developer, SysOps). Learning in public, one commit at a time.
