zaina ahmed
How I Cut Infrastructure Costs by 60% by developing a Fast Feedback Development Platform using containerization technologies.

When I joined Ericsson's R&D team as an intern, our development environment had a problem that every engineer on the team silently accepted as normal: spinning up a full local environment took over 30 minutes, consumed enormous cloud resources, and cost the team 40–60% more in infrastructure than it needed to.

Six months later, I had cut that cost by 60%, reduced the memory footprint per machine by 60%, and brought validation time down from over 30 minutes to under 10 by smartly optimising the resources we requested.

The tools that made it possible? Docker and Kubernetes.

This is exactly how I did it.


🐳 What Is Docker — And Why Should You Care?

Before Docker, deploying software meant:

  • "It works on my machine" — classic
  • Different OS versions breaking builds
  • Manual dependency installation on every server
  • Hours wasted on environment setup

Docker solves this by packaging your application and everything it needs into a single portable unit called a container.

Think of it like this:

| Without Docker | With Docker |
| --- | --- |
| "Install Java 17, then Maven, then..." | `docker run my-app` |
| Works on my machine, breaks on server | Runs identically everywhere |
| Manual environment setup per developer | One command, same environment |
| Dependency conflicts between services | Each container is isolated |

📦 Docker Basics — The Essential Commands

Your first Dockerfile:

# Start from official Java 17 base image
FROM eclipse-temurin:17-jre-alpine

# Set working directory inside container
WORKDIR /app

# Copy the built jar file
COPY target/notification-service.jar app.jar

# Expose the port your app runs on
EXPOSE "port-number"

# Command to run when container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
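The Dockerfile above assumes you've already built the jar on your host. If you'd rather compile inside Docker too, a multi-stage build keeps the final image slim. This is a sketch assuming a standard Maven project layout; the image tags are illustrative:

```dockerfile
# Stage 1: build the jar with Maven inside the container
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
# Fetch dependencies first so this layer caches between builds
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: copy only the jar into a small runtime image
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=build /build/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The build tools and source never make it into the final image, which keeps it small and reduces the attack surface.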

Build your image:

# Build the Docker image and tag it
docker build -t notification-service:1.0 .

# Verify it was created
docker images

Run your container:

# Run container on port "port_number"
docker run -p "port_number":"port_number" notification-service:1.0

# Run in background (detached mode)
docker run -d -p "port_number":"port_number" --name notification notification-service:1.0

# Check running containers
docker ps

# View logs
docker logs notification

# Stop container
docker stop notification

🔗 Docker Compose — Running Multiple Services Together

A real application has multiple services: your app, a database, Kafka, Redis. Docker Compose lets you define and run them all together.

Our notification service stack:

# docker-compose.yml
version: '3.8'

services:
  # Spring Boot notification service
  notification-service:
    build: .
    ports:
      - "port_number:port_number"
    environment:
      - SPRING_PROFILES_ACTIVE=dev
    depends_on:
      - postgres
      - kafka
      - redis
    networks:
      - notification-network

  # PostgreSQL database
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: notifications
      POSTGRES_USER: "user_name"
      POSTGRES_PASSWORD: "password"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - notification-network

  # Apache Kafka
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper
    ports:
      - "port_number:port_number"
    networks:
      - notification-network

  # Zookeeper (required by Kafka)
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    networks:
      - notification-network

  # Redis cache
  redis:
    image: redis:7-alpine
    ports:
      - "port_number:port_number"
    networks:
      - notification-network

networks:
  notification-network:
    driver: bridge

volumes:
  postgres-data:

Start everything:

# Start all services
docker-compose up -d

# Check all running
docker-compose ps

# View logs for specific service
docker-compose logs -f notification-service

# Stop everything
docker-compose down

# Stop and remove volumes (fresh start)
docker-compose down -v

☸️ What Is Kubernetes — And When Do You Need It?

Docker Compose is great for local development. But in production you need:

  • Auto-scaling — handle traffic spikes automatically
  • Self-healing — restart crashed containers automatically
  • Load balancing — distribute traffic across instances
  • Rolling updates — deploy new versions with zero downtime

This is what Kubernetes does.

Kubernetes (K8s) is a container orchestration platform that manages your containers in production, deciding where they run, scaling them up and down, and keeping them healthy.


🏗️ Core Kubernetes Concepts

Pod

The smallest deployable unit in Kubernetes: one or more containers running together.

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: notification-pod
spec:
  containers:
    - name: notification-service
      image: notification-service:1.0
      ports:
        - containerPort: "port_number"

Deployment

Manages multiple pod replicas and handles rolling updates.

Service

Exposes your pods to network traffic — internal or external.
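Putting those two concepts into YAML might look like this. This is a sketch, using the same "port_number" placeholder convention as the rest of this post; the names and replica count are illustrative:

```yaml
# deployment.yaml: keeps 3 replicas of the pod running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: notification-service
  template:
    metadata:
      labels:
        app: notification-service
    spec:
      containers:
        - name: notification-service
          image: notification-service:1.0
          ports:
            - containerPort: "port_number"
---
# service.yaml: routes traffic to any pod carrying the label
apiVersion: v1
kind: Service
metadata:
  name: notification-service
spec:
  type: ClusterIP           # internal only; use LoadBalancer/NodePort for external
  selector:
    app: notification-service
  ports:
    - port: "port_number"
      targetPort: "port_number"
```

The Service finds its pods through the label selector, so scaling the Deployment up or down needs no change to the Service at all.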


🛠️ How We Used Kubernetes-in-Docker (Kind) at Ericsson

Running full Kubernetes clusters in the cloud for every developer is expensive. We solved this using Kind (Kubernetes in Docker) — a tool that runs a full Kubernetes cluster inside Docker containers on your local machine.
Note: Kind runs disposable single- or multi-node clusters, which makes it ideal for fast, lightweight Kubernetes testing and development directly on a local machine.

Install Kind:

# On Linux/WSL
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Verify
kind version

Create a local cluster:

# Create cluster
kind create cluster --name kind

# Verify it's running
kubectl cluster-info --context kind-kind

# See nodes
kubectl get nodes
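By default that gives you a single-node cluster. A small config file (file name illustrative) turns it into a multi-node one, which is closer to what production scheduling looks like:

```yaml
# kind-config.yaml: one control plane plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then pass it at creation time: `kind create cluster --name kind --config kind-config.yaml`. Each "node" is just another Docker container, so this still costs far less than three cloud VMs.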

Deploy our notification stack locally:

# Apply all manifests
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Check deployments
kubectl get deployments

# Check pods
kubectl get pods

# Check logs
kubectl logs -f deployment/notification-deployment

# Scale up to 5 replicas
kubectl scale deployment notification-deployment --replicas=5

🎯 Helm — Kubernetes Package Manager

Managing dozens of YAML files manually gets messy fast. Helm is the package manager for Kubernetes: it templates and packages all your Kubernetes manifests into reusable charts.

Install Helm:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Create a Helm chart:

helm create notification-chart

Chart structure:

notification-chart/
├── Chart.yaml          ← chart metadata
├── values.yaml         ← default configuration values
└── templates/
    ├── deployment.yaml ← deployment template
    ├── service.yaml    ← service template
    └── configmap.yaml  ← config template

values.yaml — configure everything in one place:

replicaCount: 3

image:
  repository: notification-service
  tag: "1.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: "port_number"
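Those values flow into the manifests under templates/ through Go templating. A trimmed sketch of a templates/deployment.yaml that consumes the values.yaml above (the real file `helm create` generates has more helpers and labels):

```yaml
# templates/deployment.yaml (trimmed sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}
```

Change values.yaml (or pass `--set`) and every manifest that references those values updates consistently on the next install or upgrade.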

Deploy with Helm:

# Install chart
helm install notification ./notification-chart

# Upgrade with new values
helm upgrade notification ./notification-chart \
  --set replicaCount=5 \
  --set image.tag=2.0

# Check releases
helm list

# Rollback to previous version
helm rollback notification 1

This is exactly the workflow we used across 10+ microservices at Ericsson, reducing the memory footprint by 60% by optimising the resources.requests values in each chart's values.yaml.
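The tuning itself is only a few lines per chart. A sketch of the relevant values.yaml block, with illustrative numbers (not Ericsson's actual values):

```yaml
# values.yaml (numbers are illustrative)
resources:
  requests:
    memory: "256Mi"   # what the scheduler reserves for each pod
    cpu: "250m"       # a quarter of a CPU core
  limits:
    memory: "512Mi"   # hard cap; exceeding it gets the pod OOM-killed
    cpu: "500m"
```

Lowering over-generous requests is what lets more pods pack onto fewer nodes, which is where the memory and cost savings actually come from.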


📊 The Results at Ericsson

Here's what we achieved after moving to a fully containerised, Kubernetes-orchestrated setup:

| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| Environment setup time | 30+ mins | Under 10 mins | 3x faster |
| Cloud infrastructure cost | Baseline | Reduced | 40–60% saving |
| Memory per developer machine | Baseline | Optimised | 60% reduction |
| Manual configuration steps | ~20 steps | 1 command | 95% reduction |
| Onboarding time for new developers | 2 days | 2 hours | 8x faster |

🐛 Mistakes I Made (So You Don't Have To)

1. Not setting resource limits

# ❌ Wrong — no limits, one service can eat all memory
containers:
  - name: my-service
    image: my-service:1.0

# ✅ Correct — always set requests and limits
containers:
  - name: my-service
    image: my-service:1.0
    resources:
      requests:
        memory: "mem"
        cpu: "cpu"
      limits:
        memory: "mem"
        cpu: "cpu"

2. Storing secrets in environment variables directly

# ❌ Wrong — secret visible in plain text
env:
  - name: DB_PASSWORD
    value: "mysecretpassword"

# ✅ Correct — use Kubernetes Secrets
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password

Create the secret:

kubectl create secret generic db-credentials \
  --from-literal=password=mysecretpassword
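The same secret can also be declared as a manifest, which is how you'd manage it alongside your other YAML. Note that the data field holds base64-encoded values, not the key itself:

```yaml
# db-credentials.yaml: equivalent to the kubectl command above
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: bXlzZWNyZXRwYXNzd29yZA==   # base64 of "mysecretpassword"
```

Remember that base64 is encoding, not encryption: don't commit real secret manifests to git without a tool that encrypts them first.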

3. No health checks

# ✅ Always add readiness and liveness probes
readinessProbe:
  httpGet:
    path: /actuator/health
    port: "port_number"
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /actuator/health
    port: "port_number"
  initialDelaySeconds: 60
  periodSeconds: 30

🚀 Getting Started Checklist

  • [ ] Install Docker Desktop
  • [ ] Write your first Dockerfile
  • [ ] Build and run a container locally
  • [ ] Write a docker-compose.yml for your full stack
  • [ ] Install Kind and create a local cluster
  • [ ] Write your first Deployment and Service YAML
  • [ ] Install Helm and create a chart
  • [ ] Deploy with Helm and test scaling

🔮 What's Next

Once you're comfortable with the basics:

  • Kubernetes Ingress — expose services externally with routing rules
  • Horizontal Pod Autoscaler — auto-scale based on CPU/memory
  • Kubernetes Operators — automate complex stateful applications
  • ArgoCD — GitOps continuous deployment for Kubernetes
  • Istio — service mesh for advanced traffic management

Container orchestration is the backbone of modern cloud-native engineering. Every major cloud provider builds its managed container service around Kubernetes: AWS (EKS), Azure (AKS), and Google Cloud (GKE). Learning it now puts you ahead of the curve.


Thanks for reading! I'm Zaina, a Software Engineer based in Perth, Australia, working with Java microservices, Apache Kafka, Docker and Kubernetes at Ericsson. Connect with me on LinkedIn or check out my portfolio.

Found this useful? Drop a ❤️ and share it with a fellow engineer just getting started with containers!
