Jeff Graham
3 Microservices, 3 Days: What I Learned About DevOps Architecture

A few days ago, I published my first technical post defining DevOps. Then I immediately started building - three microservices in three days. Here's what I learned about DevOps architecture by actually doing the work.

Why Build Three Services?

When I started my transition from teaching to DevSecOps engineering, I knew I needed more than certifications and theory. I needed to build real systems that demonstrate the principles I'm learning.

The goal: create a production-ready microservices platform that showcases Infrastructure as Code, container orchestration, security automation, and CI/CD practices. But before I could deploy to Kubernetes or write Terraform configurations, I needed actual services to orchestrate.

So I committed to building three microservices in three days, documenting everything publicly on GitHub.

Day 1: API Service - Learning Docker Fundamentals

The first service was a Node.js REST API - simple, familiar territory that let me focus on containerization rather than application logic.

What I built:

  • Express.js API with health check and basic endpoints
  • Dockerfile with multi-stage considerations
  • Proper health checks for container orchestration
  • Clean project structure

Key learnings:

The Docker build process taught me about image layers and caching. Every line in a Dockerfile creates a layer, and the order matters. Copying package.json before the application code means dependency installation only runs when dependencies actually change, not on every code change.
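The caching behavior described above can be sketched in a Dockerfile like this (base image, paths, and entrypoint are illustrative, not the actual service's file):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Layers are cached top-down; a change invalidates that layer
# and every layer after it.

# Copy only the dependency manifests first...
COPY package*.json ./
# ...so this expensive step is reused unless dependencies change.
RUN npm ci --omit=dev

# Application code changes most often, so it comes last.
COPY . .
CMD ["node", "server.js"]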

Health checks aren't optional in production systems. They're how orchestrators like Kubernetes know if a container is actually ready to receive traffic. I learned to implement them from day one rather than retrofitting later.
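A minimal health endpoint looks something like this, sketched in Flask (the worker's stack, since the pattern is the same across services; route name and payload are illustrative):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Orchestrators probe this endpoint; a 200 means "ready for traffic".
    # A fuller readiness check could also verify downstream dependencies here.
    return jsonify(status="ok")
```

Kubernetes would point a readiness probe at this route, so a container that starts but isn't healthy never receives requests.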

DevOps principle in action: Automation starts with containerization. If you can't package your application consistently, you can't automate its deployment.

You can see the API service code here.

Day 2: Auth Service - Security from the Start

The second service added JWT-based authentication. This taught me about service isolation and security thinking in a microservices architecture.

What I built:

  • JWT token generation and validation
  • Token refresh mechanism
  • Proper error handling for auth failures
  • Environment-based configuration

Key learnings:

Microservices force you to think about service boundaries. The auth service doesn't know about users, orders, or business logic. It has one job: authenticate and validate tokens. This separation is powerful because it means authentication logic lives in exactly one place.

Security considerations change in distributed systems. In a monolith, you might store sessions in memory. In microservices, you need stateless authentication that works across service boundaries. JWTs solve this, but they come with their own challenges around token invalidation and secret management.

I also learned that .env.example files are critical. They document what configuration a service needs without exposing actual secrets - a small detail that matters when multiple people (or your future self) need to run the service.
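For the auth service, an .env.example might look like this (variable names here are hypothetical, not the project's actual config):

```
# .env.example -- copy to .env and fill in real values; never commit .env
JWT_SECRET=replace-me
TOKEN_TTL_SECONDS=900
PORT=3001
```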

DevOps principle in action: Security must be built in, not bolted on. DevSecOps means considering security at every layer, from the container base image to how secrets are managed.

Check out the auth service implementation here.

Day 3: Worker Service - Diversifying the Stack

The third service was a Python background worker for asynchronous job processing. This diversified my tech stack and taught me about async patterns in distributed systems.

What I built:

  • Flask API for job submission
  • Threading-based job queue
  • Job status tracking
  • Multiple job types (email, image processing, data sync, reports)

Key learnings:

Background workers solve a fundamental problem in web applications: some operations take too long to run synchronously. Users don't want to wait 30 seconds for an image to process or a report to generate. Workers let you respond immediately with "job queued" and process it asynchronously.

In production, this worker would connect to Redis or RabbitMQ. For learning purposes, I used Python's Queue class. The important lesson was understanding the pattern: decouple request handling from work execution.
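A stripped-down version of that pattern with the stdlib Queue, assuming simplified job fields for illustration:

```python
import queue
import threading

jobs = queue.Queue()          # decouples request handling from execution
status: dict[str, str] = {}   # simple in-memory job status tracking

def worker() -> None:
    # Runs in a background thread, pulling jobs off the queue.
    while True:
        job = jobs.get()
        # ...do the slow work here (email, image processing, etc.)...
        status[job["id"]] = "done"
        jobs.task_done()

def submit(job_id: str) -> str:
    # The API calls this and responds immediately with "queued".
    status[job_id] = "queued"
    jobs.put({"id": job_id})
    return "queued"

threading.Thread(target=worker, daemon=True).start()
```

Swapping `queue.Queue` for Redis or RabbitMQ changes the transport, not the shape of the pattern.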

Python's threading model is different from Node's event loop. Understanding these differences matters when you're building polyglot microservices. The right tool for the job means choosing languages based on what each service needs to do.

DevOps principle in action: Observability matters from the start. The worker exposes statistics endpoints showing queue depth, completed jobs, and processing status. You can't operate what you can't observe.

See the worker service here.

What This Taught Me About DevOps Architecture

Building these services clarified several DevOps principles I'd only understood theoretically:

Microservices Enable Independent Deployment

Each service has its own Dockerfile, dependencies, and release cycle. The API service can deploy without touching auth or worker. This independence is what enables continuous deployment at scale.

Containers Are the Unit of Deployment

With Docker, "it works on my machine" becomes "it works in this container." The container is the deployment artifact. This consistency across development, staging, and production is foundational to DevOps practices.

Documentation Is Infrastructure

Good README files, clear environment variable documentation, and example configurations aren't nice-to-haves. They're essential infrastructure. When you're running dozens of services, clear documentation is what prevents deployment failures at 2am.

Service Communication Requires Planning

Right now these services run independently. But when I deploy them to Kubernetes, I'll need to think about:

  • Service discovery (how does the API find the auth service?)
  • Network policies (which services can talk to which?)
  • Failure handling (what happens when auth is down?)
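One way these concerns show up in code: calling the auth service by a configurable name and failing closed when it's unreachable. This is a hypothetical sketch (the service name, port, and endpoint are placeholders; in Kubernetes, a Service named `auth-service` would be reachable via cluster DNS):

```python
import os
import urllib.error
import urllib.request

# Env-based config keeps the code portable across dev, Minikube, and prod.
AUTH_URL = os.environ.get("AUTH_URL", "http://auth-service:3001")

def validate_token(token: str) -> bool:
    req = urllib.request.Request(
        f"{AUTH_URL}/validate",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False  # fail closed when the auth service is down
```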

These aren't problems that exist in monoliths. Microservices trade local complexity for distributed systems complexity.

Security Thinking Changes

In a monolith, you secure the perimeter. In microservices, you need defense in depth:

  • Container security scanning
  • Network policies between services
  • Secret management per service
  • Token-based authentication across service boundaries

This is why it's DevSecOps, not just DevOps. Security has to be woven into every architectural decision.

Architecture Decisions and Tradeoffs

Why three services specifically?

Three is the minimum to demonstrate microservices patterns without overwhelming complexity. You have:

  • A stateless API (scales horizontally easily)
  • A security service (single responsibility)
  • A background worker (async patterns)

This mirrors real production architectures at smaller scale.

Why Node and Python?

Polyglot microservices are common in production. Different services have different needs. Node's async model is great for I/O-bound APIs. Python's ecosystem is strong for data processing and background jobs. Learning to containerize both prepares me for real-world heterogeneous environments.

Why no database yet?

I'm building in layers. First, get services containerized and running. Next, orchestrate them with Kubernetes. Then add databases, message queues, and observability tooling. Trying to do everything at once is how projects stall.

What about production concerns?

Each service's README documents production considerations I'm aware of but haven't implemented yet:

  • Database persistence
  • Proper message queues
  • Horizontal scaling
  • Monitoring and alerting
  • Log aggregation

Acknowledging what you don't know is as important as demonstrating what you've learned.

What's Next: Kubernetes

These three services are ready for orchestration. Next up, I'm deploying them to a local Kubernetes cluster using Minikube. That means:

  • Writing Kubernetes manifests (Deployments, Services)
  • Understanding pods, replicas, and service discovery
  • Deploying all three services to K8s
  • Implementing service-to-service communication
  • Learning kubectl and cluster management
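As a preview of what those manifests involve, a minimal Deployment plus Service for one of these containers might look like this (names, image tags, and ports are placeholders, not the project's actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api
          image: api-service:0.1.0
          ports:
            - containerPort: 3000
          readinessProbe:          # this is where those health checks pay off
            httpGet:
              path: /health
              port: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-service
  ports:
    - port: 80
      targetPort: 3000
```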

After that: Terraform for infrastructure as code, GitHub Actions for CI/CD, and security scanning throughout the pipeline.

The Power of Building in Public

Three days ago, I had an empty GitHub repository. Today, I have three working microservices, comprehensive documentation, and a deeper understanding of DevOps architecture than any tutorial could provide.

Building in public creates accountability. Documenting as I learn forces clarity. Sharing code means writing it well enough that others could run it.

If you're learning DevOps, I recommend this approach: pick a project, commit to building it publicly, and document everything. The act of explaining what you're learning solidifies the knowledge.

Want to Follow Along?

The complete project is on GitHub: secure-cloud-platform

Each service has its own README with setup instructions, API documentation, and production considerations. The project is a work in progress - I'm adding to it as I learn.

I'm documenting this journey on Dev.to and LinkedIn. If you're also transitioning into DevOps or have feedback on the architecture, I'd love to hear from you.

Next post: Deploying these microservices to Kubernetes - what I learn when three containers become three pods.


About my journey: Former Air Force officer and software engineer/solutions architect, now teaching middle school computer science while transitioning back into tech with a focus on DevSecOps. Building elite expertise in Infrastructure as Code, Kubernetes security, and cloud-native platforms. AWS certified (SA Pro, Security Specialty, Developer, SysOps). Learning in public, one commit at a time.
