<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jeff Graham</title>
    <description>The latest articles on DEV Community by Jeff Graham (@jeffgrahamcodes).</description>
    <link>https://dev.to/jeffgrahamcodes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2525846%2Fa51a2ccc-88f2-4ff8-8182-737bcef75ec3.png</url>
      <title>DEV Community: Jeff Graham</title>
      <link>https://dev.to/jeffgrahamcodes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jeffgrahamcodes"/>
    <language>en</language>
    <item>
      <title>From Minikube to EKS: When "exec format error" Taught Me About Platform</title>
      <dc:creator>Jeff Graham</dc:creator>
      <pubDate>Fri, 17 Oct 2025 01:42:05 +0000</pubDate>
      <link>https://dev.to/jeffgrahamcodes/from-minikube-to-eks-when-exec-format-error-taught-me-about-platform-57f3</link>
      <guid>https://dev.to/jeffgrahamcodes/from-minikube-to-eks-when-exec-format-error-taught-me-about-platform-57f3</guid>
      <description>&lt;p&gt;Last week, I deployed three microservices to Kubernetes locally. This week, I deployed them to AWS EKS. What I thought would be a straightforward migration turned into a deep lesson about container platforms, cross-compilation, and production deployment patterns.&lt;/p&gt;

&lt;p&gt;Here's what actually happened when theory met reality.&lt;/p&gt;

&lt;h2&gt;The Confidence Before the Crash&lt;/h2&gt;

&lt;p&gt;After successfully deploying to Minikube, I felt ready. I had working Kubernetes manifests, healthy pods, and services communicating properly. Moving to EKS seemed like the natural next step - just point kubectl at a different cluster, right?&lt;/p&gt;

&lt;p&gt;I provisioned an EKS cluster, updated my kubeconfig, and confidently deployed all three services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s-eks/api-service/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s-eks/auth-service/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s-eks/worker-service/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I watched the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-w&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything showed "Running" status initially. Then "CrashLoopBackOff." All six pods. Every single one failing.&lt;/p&gt;

&lt;p&gt;This was not the smooth migration I'd envisioned.&lt;/p&gt;

&lt;h2&gt;When Everything Fails: The Debugging Process&lt;/h2&gt;

&lt;p&gt;The first instinct when all your pods crash is panic. The second is to check the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs api-service-6bdd859969-bsssz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output was cryptic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exec /usr/local/bin/docker-entrypoint.sh: exec format error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'd never seen this error before. Google taught me what it meant: &lt;strong&gt;platform architecture mismatch&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;My development machine is an M2 Mac (ARM64 architecture). I'd been building Docker images locally, which defaulted to ARM64. Minikube on my Mac runs ARM64, so everything worked fine.&lt;/p&gt;

&lt;p&gt;But EKS nodes run on AMD64 architecture. When Kubernetes tried to run my ARM64 containers on AMD64 nodes, the kernel couldn't execute the binaries. Hence: "exec format error."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lesson: Local development and production infrastructure need platform alignment, or you need multi-platform builds.&lt;/strong&gt;&lt;/p&gt;
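
&lt;p&gt;A quick way to catch this mismatch before deploying is to compare architectures explicitly; a minimal sketch (the image name is just an illustration):&lt;/p&gt;

```shell
# Architecture of the machine you're building on:
# prints arm64 on Apple Silicon, x86_64 on typical EKS nodes.
uname -m

# With Docker running, you can also check the architecture baked into
# a local image before pushing it (image name is an assumption here):
#   docker image inspect api-service:v1 --format '{{.Os}}/{{.Architecture}}'
```

If the two don't match, the image will hit exactly this "exec format error" at runtime.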

&lt;h2&gt;Solution Attempt #1: Just Add --platform (Failed)&lt;/h2&gt;

&lt;p&gt;Armed with this knowledge, I tried the obvious fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; 023231074087.dkr.ecr.us-east-1.amazonaws.com/secure-cloud-platform/api-service:v1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I rebuilt all three services with the &lt;code&gt;--platform linux/amd64&lt;/code&gt; flag and pushed to ECR. Then I restarted the deployments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout restart deployment api-service
kubectl rollout restart deployment worker-service
kubectl rollout restart deployment auth-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Surely this would work. The new pods spun up. I watched them start...and crash again.&lt;/p&gt;

&lt;p&gt;Same error. Still ARM64 images somehow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lesson: The default Docker builder on Mac doesn't reliably cross-compile. You need a proper buildx builder.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Solution Attempt #2: Multi-Platform Builder (Success)&lt;/h2&gt;

&lt;p&gt;The fix required creating a dedicated buildx builder that properly supports cross-platform builds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx create &lt;span class="nt"&gt;--name&lt;/span&gt; multiplatform &lt;span class="nt"&gt;--driver&lt;/span&gt; docker-container &lt;span class="nt"&gt;--use&lt;/span&gt;
docker buildx inspect &lt;span class="nt"&gt;--bootstrap&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a container-based builder that can properly build for different platforms. Then I rebuilt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;apps/api-service
docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; 023231074087.dkr.ecr.us-east-1.amazonaws.com/secure-cloud-platform/api-service:v1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same for worker and auth services.&lt;/p&gt;

&lt;p&gt;But there was still a problem. When I checked which platforms were actually in ECR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx imagetools inspect &lt;span class="se"&gt;\&lt;/span&gt;
  023231074087.dkr.ecr.us-east-1.amazonaws.com/secure-cloud-platform/api-service:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output showed multiple manifests - including both AMD64 AND an "unknown/unknown" platform (build attestations). Kubernetes was apparently pulling the wrong one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lesson: Multi-platform images are great, but you need to ensure Kubernetes pulls the right architecture.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The Final Solution: Explicit SHA Digests&lt;/h2&gt;

&lt;p&gt;The breakthrough came from understanding that the &lt;code&gt;:v1&lt;/code&gt; tag pointed to a manifest list with multiple platform variants. Kubernetes was making the wrong choice.&lt;/p&gt;

&lt;p&gt;The fix was to reference the specific AMD64 digest directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image deployment/api-service &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;api&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;023231074087.dkr.ecr.us-east-1.amazonaws.com/secure-cloud-platform/api-service:v1@sha256:6b8903f38db383b2732565a4022ad916b10009c962c761986a76841c2c354834
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I got the SHA256 digest from the &lt;code&gt;imagetools inspect&lt;/code&gt; output - it showed exactly which manifest was the AMD64 variant.&lt;/p&gt;
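
&lt;p&gt;The same pinning can also live declaratively in the Deployment manifest instead of being applied with &lt;code&gt;kubectl set image&lt;/code&gt;; a sketch using the digest from above:&lt;/p&gt;

```yaml
# Pinning the image by digest makes the pulled architecture
# explicit and reproducible across rollouts.
containers:
  - name: api
    image: 023231074087.dkr.ecr.us-east-1.amazonaws.com/secure-cloud-platform/api-service:v1@sha256:6b8903f38db383b2732565a4022ad916b10009c962c761986a76841c2c354834
```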

&lt;p&gt;After updating all three deployments with their specific AMD64 digests, I watched the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                              READY   STATUS    RESTARTS   AGE
api-service-65f8f5f745-6mhd7      1/1     Running   0          39s
api-service-65f8f5f745-sv2xx      1/1     Running   0          29s
auth-service-7867d7bc58-nlkj9     1/1     Running   0          6s
worker-service-7dd95b78bc-rtslr   1/1     Running   0          20s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally. All pods running. All services healthy.&lt;/p&gt;

&lt;p&gt;Testing the public API LoadBalancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://aec0488a8aa514bafbe7b0ad77647334-1658914460.us-east-1.elb.amazonaws.com/health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"healthy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"api-service"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"2025-10-16T23:51:26.385Z"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Success.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;What I Learned About Production Deployments&lt;/h2&gt;

&lt;h3&gt;Platform Architecture Matters&lt;/h3&gt;

&lt;p&gt;In local development, you can ignore platform differences. In production, you can't. Your build process needs to account for where your code will actually run.&lt;/p&gt;

&lt;p&gt;For cloud deployments, that usually means AMD64/x86_64, even if you develop on an ARM Mac - unless your cluster deliberately runs ARM nodes (such as AWS Graviton instances).&lt;/p&gt;

&lt;h3&gt;Multi-Platform Builds Are the Standard&lt;/h3&gt;

&lt;p&gt;The proper solution isn't to avoid building on ARM machines. It's to build multi-platform images correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64,linux/arm64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; myimage:latest &lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a manifest list that works on both architectures. Kubernetes will automatically pull the right variant for each node.&lt;/p&gt;

&lt;p&gt;But you need the right builder setup, or it silently fails.&lt;/p&gt;

&lt;h3&gt;Debugging Requires Multiple Tools&lt;/h3&gt;

&lt;p&gt;Solving this required understanding several layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl logs&lt;/code&gt; to see the error&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl describe pod&lt;/code&gt; to check events&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker buildx imagetools inspect&lt;/code&gt; to verify what platforms were actually built&lt;/li&gt;
&lt;li&gt;ECR console to confirm what was pushed&lt;/li&gt;
&lt;li&gt;Understanding of Docker manifest lists and multi-arch images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No single tool showed the full picture. Production debugging means combining multiple perspectives.&lt;/p&gt;

&lt;h3&gt;Local-to-Production Gaps Are Real&lt;/h3&gt;

&lt;p&gt;Minikube worked because it matched my local architecture. Moving to EKS exposed the platform mismatch.&lt;/p&gt;

&lt;p&gt;This is a small example of a larger truth: &lt;strong&gt;local development environments don't fully replicate production, no matter how hard you try.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The solution isn't to make local identical to production. It's to catch these differences early with proper CI/CD pipelines that build and test in production-like environments.&lt;/p&gt;
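
&lt;p&gt;As one hedged sketch of what "catch it early" can look like: a hypothetical GitHub Actions job (workflow name and paths are assumptions) that builds for the platform production actually runs on, so an architecture mismatch fails in the pipeline instead of in the cluster:&lt;/p&gt;

```yaml
name: build
on: [push]
jobs:
  image:
    runs-on: ubuntu-latest # amd64 runner, matching the EKS nodes
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: apps/api-service
          platforms: linux/amd64
          push: false # build-only check; pushing happens on release
```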

&lt;h2&gt;The Architecture That Emerged&lt;/h2&gt;

&lt;p&gt;After all the debugging, here's what's running in EKS:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three Deployments:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Service: 2 replicas on port 3000&lt;/li&gt;
&lt;li&gt;Auth Service: 2 replicas on port 3001&lt;/li&gt;
&lt;li&gt;Worker Service: 2 replicas on port 3002&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Three Services:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Service: LoadBalancer (public internet access)&lt;/li&gt;
&lt;li&gt;Auth Service: ClusterIP (internal only)&lt;/li&gt;
&lt;li&gt;Worker Service: ClusterIP (internal only)&lt;/li&gt;
&lt;/ul&gt;
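
&lt;p&gt;As a sketch, the public-facing Service differs from the internal ones only in its &lt;code&gt;type&lt;/code&gt; (the external port mapping here is an assumption):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: LoadBalancer   # provisions an AWS ELB on EKS; omit for ClusterIP-only services
  selector:
    app: api-service
  ports:
    - port: 80         # external port on the load balancer
      targetPort: 3000 # container port of the API service
```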

&lt;p&gt;&lt;strong&gt;Resource Configuration:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Health Checks:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every service has proper resource limits to prevent one container from consuming all node resources. Every service implements health checks so Kubernetes knows when to restart or remove pods from load balancing.&lt;/p&gt;

&lt;p&gt;This isn't just "getting it to work." This is production-ready configuration.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;This deployment taught me about the gap between local development and production infrastructure. But it also revealed the next challenges:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Immediate Needs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ConfigMaps for application configuration&lt;/li&gt;
&lt;li&gt;Secrets for sensitive data (JWT keys, DB credentials)&lt;/li&gt;
&lt;li&gt;Proper logging and monitoring&lt;/li&gt;
&lt;li&gt;CI/CD pipeline for automated deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Future Infrastructure:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingress controller for better HTTP routing&lt;/li&gt;
&lt;li&gt;Network policies for service-to-service security&lt;/li&gt;
&lt;li&gt;Horizontal pod autoscaling based on load&lt;/li&gt;
&lt;li&gt;Complete observability stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The platform is running, but it's not yet production-ready. There's still work to do.&lt;/p&gt;

&lt;h2&gt;The Value of Building in Public&lt;/h2&gt;

&lt;p&gt;When you document your learning publicly, you can't hide mistakes. That's uncomfortable but valuable.&lt;/p&gt;

&lt;p&gt;I could have written a clean tutorial: "Here's how to deploy to EKS in 5 easy steps." That would have been simpler.&lt;/p&gt;

&lt;p&gt;But it wouldn't have been honest. And it wouldn't have taught the real lesson: &lt;strong&gt;production deployments are rarely smooth, and the debugging process is where you actually learn.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every error message is a lesson. Every wrong assumption gets corrected. Every "why doesn't this work?" forces you to understand the system more deeply.&lt;/p&gt;

&lt;p&gt;That's how you build real expertise.&lt;/p&gt;

&lt;h2&gt;Want to Follow Along?&lt;/h2&gt;

&lt;p&gt;The complete project is on GitHub: &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform" rel="noopener noreferrer"&gt;secure-cloud-platform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the Kubernetes manifests, Dockerfiles, and deployment documentation are there. You can see exactly what I built and how it's configured.&lt;/p&gt;

&lt;p&gt;I'm documenting this journey on &lt;a href="https://dev.to/jeffgrahamcodes"&gt;Dev.to&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/jeffgrahamcodes/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. If you're also learning Kubernetes, dealing with platform architecture issues, or just want to follow along, connect with me.&lt;/p&gt;

&lt;p&gt;Next post: Setting up proper configuration management with ConfigMaps and Secrets, and why hardcoded values in manifests are technical debt.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About my journey:&lt;/strong&gt; Former Air Force officer and software engineer/solutions architect, now teaching middle school computer science while transitioning back into tech with a focus on DevSecOps. Building elite expertise in Infrastructure as Code, Kubernetes security, and cloud-native platforms. AWS certified (SA Pro, Security Specialty, Developer, SysOps). Learning in public, one commit at a time.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>From Docker Containers to Kubernetes Pods: Deploying My First Microservices Platform</title>
      <dc:creator>Jeff Graham</dc:creator>
      <pubDate>Sun, 12 Oct 2025 00:12:56 +0000</pubDate>
      <link>https://dev.to/jeffgrahamcodes/from-docker-containers-to-kubernetes-pods-deploying-my-first-microservices-platform-30a7</link>
      <guid>https://dev.to/jeffgrahamcodes/from-docker-containers-to-kubernetes-pods-deploying-my-first-microservices-platform-30a7</guid>
      <description>&lt;p&gt;Last week, I built three microservices in three days. This weekend, I deployed them all to Kubernetes. Here's what I learned about container orchestration by doing it myself.&lt;/p&gt;

&lt;h2&gt;The Gap Between Docker and Kubernetes&lt;/h2&gt;

&lt;p&gt;After building my API service, auth service, and worker service with Docker, I had three working containers. I could run them individually, test them locally, and they worked fine.&lt;/p&gt;

&lt;p&gt;But running containers individually doesn't scale. In production, you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple replicas for redundancy&lt;/li&gt;
&lt;li&gt;Automatic restarts when containers fail&lt;/li&gt;
&lt;li&gt;Load balancing across instances&lt;/li&gt;
&lt;li&gt;Service discovery so containers can find each other&lt;/li&gt;
&lt;li&gt;Rolling updates without downtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's what Kubernetes provides. It's the difference between "I have containers" and "I have a platform."&lt;/p&gt;

&lt;h2&gt;Day 1: Deploying the First Service&lt;/h2&gt;

&lt;p&gt;I started with Minikube, a local Kubernetes cluster that runs on your machine. The installation was straightforward, but the conceptual shift from Docker to Kubernetes took time to understand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes introduces new abstractions:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pods&lt;/strong&gt; - The smallest unit in Kubernetes. Think of a pod as a wrapper around one or more containers. My API service container becomes an API service pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployments&lt;/strong&gt; - Manage pods. Instead of saying "run this container," you say "maintain 2 replicas of this service with these resources." Kubernetes handles the rest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services&lt;/strong&gt; - Provide a stable network endpoint. Pods can die and restart (with new IP addresses). Services give them a consistent name and handle load balancing.&lt;/p&gt;

&lt;p&gt;Writing my first Kubernetes manifests felt like learning a new language. Here's what a basic deployment looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-service&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-service&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api-service:v1&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This declares what I want (2 replicas of my API service), and Kubernetes maintains that state. If a pod crashes, Kubernetes automatically creates a new one. If I scale to 3 replicas, Kubernetes creates another pod. It's declarative infrastructure.&lt;/p&gt;
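
&lt;p&gt;The Deployment covers the pods themselves; the matching Service, using the same label and port, is a short manifest of its own. A minimal sketch:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-service   # routes traffic to pods carrying this label
  ports:
    - port: 3000
      targetPort: 3000
```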

&lt;p&gt;The first deployment took several attempts. I hit the classic error: &lt;code&gt;ErrImageNeverPull&lt;/code&gt;. Minikube has its own Docker environment, so I had to rebuild my image against Minikube's Docker daemon (typically by running &lt;code&gt;eval $(minikube docker-env)&lt;/code&gt; first). A small detail, but it cost me 30 minutes of debugging.&lt;/p&gt;

&lt;p&gt;Once the pods were running, accessing them was another learning curve. Kubernetes doesn't expose pods directly to your host machine. You need a Service with a NodePort, or you use Minikube's tunnel feature.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service api-service &lt;span class="nt"&gt;--url&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gave me a localhost URL to test my API. When I hit the health endpoint and got back JSON, it felt like a breakthrough. My container was now a pod, managed by Kubernetes.&lt;/p&gt;

&lt;h2&gt;Day 2: Deploying All Three Services&lt;/h2&gt;

&lt;p&gt;With the pattern established, deploying the auth and worker services went faster. Build the image in Minikube's Docker, write the Deployment and Service manifests, apply them with kubectl, verify the pods are running.&lt;/p&gt;

&lt;p&gt;By the end of Saturday, I had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;6 pods running (2 replicas each of API, auth, worker)&lt;/li&gt;
&lt;li&gt;3 Deployments managing them&lt;/li&gt;
&lt;li&gt;3 Services exposing them&lt;/li&gt;
&lt;li&gt;All orchestrated by Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Running &lt;code&gt;kubectl get all&lt;/code&gt; and seeing everything marked as "Running" was satisfying in a way that's hard to explain. This was no longer just containers. This was a platform.&lt;/p&gt;

&lt;h2&gt;Day 3: Service Discovery and Communication&lt;/h2&gt;

&lt;p&gt;The real power of Kubernetes became apparent when I tested service-to-service communication.&lt;/p&gt;

&lt;p&gt;In Kubernetes, services can reach each other by name. No IP addresses, no configuration files with hardcoded endpoints. Kubernetes DNS handles service discovery automatically.&lt;/p&gt;

&lt;p&gt;From any pod in the cluster, I could reach other services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;http://api-service:3000&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;http://auth-service:3001&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;http://worker-service:3002&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To test this, I ran a temporary pod with curl installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl run curl-test &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;curlimages/curl &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside that pod, I tested the full authentication flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Called auth-service to login and get a JWT token&lt;/li&gt;
&lt;li&gt;Verified the token by calling auth-service again&lt;/li&gt;
&lt;li&gt;Created a background job by calling worker-service&lt;/li&gt;
&lt;li&gt;Checked worker statistics&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Everything worked. Services could discover and communicate with each other automatically. The auth service didn't need to know the worker service's IP address. Kubernetes DNS resolved the service names to the correct pods and load-balanced requests across replicas.&lt;/p&gt;

&lt;p&gt;This is what makes microservices viable at scale. Services are decoupled. Pods can restart, move to different nodes, or scale up and down, and the network layer just works.&lt;/p&gt;

&lt;h2&gt;What I Learned About Kubernetes&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes is declarative, not imperative&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Docker, you run commands: "start this container with these flags." With Kubernetes, you declare what you want: "maintain 2 replicas of this service." Kubernetes continuously reconciles the actual state with your desired state.&lt;/p&gt;

&lt;p&gt;If a pod crashes, Kubernetes automatically creates a new one. You don't write scripts to handle failures. The system is self-healing by design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource limits matter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every container in my deployments has CPU and memory limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Requests tell Kubernetes how much the container needs. Limits prevent a single container from consuming all node resources. Without these, a memory leak in one service could crash the entire cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Health checks are essential&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I implemented liveness and readiness probes for every service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/health&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Liveness probes tell Kubernetes if the container is healthy (if not, restart it). Readiness probes tell Kubernetes if the container is ready to receive traffic (if not, remove it from the service endpoints).&lt;/p&gt;

&lt;p&gt;These aren't optional features to add later. They're fundamental to how Kubernetes manages your workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kubectl is your interface to everything&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Learning kubectl commands was like learning a new shell. The most useful ones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods              &lt;span class="c"&gt;# See all pods&lt;/span&gt;
kubectl get pods &lt;span class="nt"&gt;-w&lt;/span&gt;           &lt;span class="c"&gt;# Watch pods update&lt;/span&gt;
kubectl logs &amp;lt;pod-name&amp;gt;       &lt;span class="c"&gt;# View pod logs&lt;/span&gt;
kubectl describe pod &amp;lt;name&amp;gt;   &lt;span class="c"&gt;# Debug pod issues&lt;/span&gt;
kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;pod&amp;gt; &lt;span class="nt"&gt;--&lt;/span&gt; sh  &lt;span class="c"&gt;# Shell into a pod&lt;/span&gt;
kubectl scale deployment api-service &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3  &lt;span class="c"&gt;# Scale up&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When things go wrong (and they will), these commands are how you debug. Looking at pod events with &lt;code&gt;kubectl describe&lt;/code&gt; saved me multiple times when pods wouldn't start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges I Hit
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Image pull errors&lt;/strong&gt; were frustrating until I understood Minikube's Docker environment. Building images on your host and expecting the cluster to find them doesn't work - Minikube runs its own Docker daemon. Pointing your shell at it with &lt;code&gt;eval $(minikube docker-env)&lt;/code&gt; before building puts images where Kubernetes can actually find them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking confusion&lt;/strong&gt; took time to sort out. NodePort services, Minikube tunnels, internal vs external access - it's more complex than Docker's port mapping. Understanding that services handle internal routing while NodePort handles external access clarified things.&lt;/p&gt;
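&lt;p&gt;As a concrete sketch of that split, here is roughly what a NodePort Service manifest looks like - the names and port numbers are illustrative, not my actual manifests. The &lt;code&gt;port&lt;/code&gt; handles in-cluster routing; &lt;code&gt;nodePort&lt;/code&gt; is the external entry point:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: NodePort            # reachable from outside the cluster
  selector:
    app: api-service        # routes to pods carrying this label
  ports:
    - port: 3000            # in-cluster port other services use
      targetPort: 3000      # container port the pod listens on
      nodePort: 30080       # external port on each node (30000-32767 range)
```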

&lt;p&gt;&lt;strong&gt;YAML syntax&lt;/strong&gt; is unforgiving. Missing indentation, typos in field names, wrong API versions - all caused cryptic errors. A linter like &lt;code&gt;yamllint&lt;/code&gt;, or a quick &lt;code&gt;kubectl apply --dry-run=client&lt;/code&gt;, would have saved me time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for DevOps
&lt;/h2&gt;

&lt;p&gt;Container orchestration is fundamental to modern DevOps practices. You can't practice continuous deployment at scale without it. You can't implement blue-green deployments, canary releases, or auto-scaling without an orchestrator.&lt;/p&gt;

&lt;p&gt;Kubernetes provides the platform for everything else: CI/CD pipelines, infrastructure as code, observability, security policies. Learning Kubernetes isn't just about containers. It's about building production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This weekend taught me Kubernetes fundamentals, but I'm just scratching the surface. Coming next:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConfigMaps and Secrets&lt;/strong&gt; - Right now, configuration is hardcoded in my manifests. I need to externalize configuration and manage secrets properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt; - Instead of NodePort services, use an Ingress controller for proper HTTP routing and load balancing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Policies&lt;/strong&gt; - Implement security controls that restrict which services can talk to which.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform for EKS&lt;/strong&gt; - Deploy this platform to real AWS infrastructure using Infrastructure as Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt; - Automate building images, scanning for vulnerabilities, and deploying to Kubernetes.&lt;/p&gt;

&lt;p&gt;After that: monitoring with Prometheus, logging aggregation, service mesh, and all the other pieces of a production platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of Hands-On Learning
&lt;/h2&gt;

&lt;p&gt;I could have watched 10 hours of Kubernetes tutorials and still not understood it as deeply as deploying my own services and debugging the issues.&lt;/p&gt;

&lt;p&gt;Reading about Deployments is one thing. Writing one, applying it, watching pods fail, reading error messages, fixing the manifest, and seeing it finally work - that's how you actually learn.&lt;/p&gt;

&lt;p&gt;If you're learning Kubernetes, I recommend this approach: build something simple, deploy it, break it, fix it. The concepts that seemed abstract in documentation become concrete when you're running actual workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want to Follow Along?
&lt;/h2&gt;

&lt;p&gt;The complete project is on GitHub: &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform" rel="noopener noreferrer"&gt;secure-cloud-platform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the Kubernetes manifests, documentation, and setup instructions are there. Each service has detailed deployment guides.&lt;/p&gt;

&lt;p&gt;I'm documenting this journey on &lt;a href="https://dev.to/jeffgrahamcodes"&gt;Dev.to&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/jeffgrahamcodes/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. If you're also learning Kubernetes or have feedback on my setup, I'd love to hear from you.&lt;/p&gt;

&lt;p&gt;Next post: Infrastructure as Code with Terraform - provisioning AWS resources and deploying this platform to real cloud infrastructure.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About my journey:&lt;/strong&gt; Former Air Force officer and software engineer/solutions architect, now teaching middle school computer science while transitioning back into tech with a focus on DevSecOps. Building elite expertise in Infrastructure as Code, Kubernetes security, and cloud-native platforms. AWS certified (SA Pro, Security Specialty, Developer, SysOps). Learning in public, one commit at a time.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>microservices</category>
      <category>docker</category>
    </item>
    <item>
      <title>3 Microservices, 3 Days: What I Learned About DevOps Architecture</title>
      <dc:creator>Jeff Graham</dc:creator>
      <pubDate>Fri, 10 Oct 2025 02:05:19 +0000</pubDate>
      <link>https://dev.to/jeffgrahamcodes/3-microservices-3-days-what-i-learned-about-devops-architecture-23n0</link>
      <guid>https://dev.to/jeffgrahamcodes/3-microservices-3-days-what-i-learned-about-devops-architecture-23n0</guid>
      <description>&lt;p&gt;A few days ago, I published my first technical post defining DevOps. Then I immediately started building - three microservices in three days. Here's what I learned about DevOps architecture by actually doing the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Build Three Services?
&lt;/h2&gt;

&lt;p&gt;When I started my transition from teaching to DevSecOps engineering, I knew I needed more than certifications and theory. I needed to build real systems that demonstrate the principles I'm learning.&lt;/p&gt;

&lt;p&gt;The goal: create a production-ready microservices platform that showcases Infrastructure as Code, container orchestration, security automation, and CI/CD practices. But before I could deploy to Kubernetes or write Terraform configurations, I needed actual services to orchestrate.&lt;/p&gt;

&lt;p&gt;So I committed to building three microservices in three days, documenting everything publicly on GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 1: API Service - Learning Docker Fundamentals
&lt;/h2&gt;

&lt;p&gt;The first service was a Node.js REST API - simple, familiar territory that let me focus on containerization rather than application logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I built:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Express.js API with health check and basic endpoints&lt;/li&gt;
&lt;li&gt;Dockerfile with multi-stage considerations&lt;/li&gt;
&lt;li&gt;Proper health checks for container orchestration&lt;/li&gt;
&lt;li&gt;Clean project structure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key learnings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Docker build process taught me about image layers and caching. Every line in a Dockerfile creates a layer, and the order matters. Copying &lt;code&gt;package.json&lt;/code&gt; before the application code means dependency installation only runs when dependencies actually change, not on every code change.&lt;/p&gt;
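&lt;p&gt;A minimal sketch of that ordering for a typical Node.js service (the base image, paths, and entry point here are illustrative, not my exact Dockerfile):&lt;/p&gt;

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Copy only the dependency manifests first: this layer, and the
# install step below, are rebuilt only when dependencies change
COPY package*.json ./
RUN npm ci --omit=dev

# Application code changes on every commit, so it comes last
COPY . .

EXPOSE 3000
CMD ["node", "index.js"]
```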

&lt;p&gt;Health checks aren't optional in production systems. They're how orchestrators like Kubernetes know if a container is actually ready to receive traffic. I learned to implement them from day one rather than retrofitting later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps principle in action:&lt;/strong&gt; Automation starts with containerization. If you can't package your application consistently, you can't automate its deployment.&lt;/p&gt;

&lt;p&gt;You can see the API service code &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform/tree/main/apps/api-service" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 2: Auth Service - Security from the Start
&lt;/h2&gt;

&lt;p&gt;The second service added JWT-based authentication. This taught me about service isolation and security thinking in a microservices architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I built:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JWT token generation and validation&lt;/li&gt;
&lt;li&gt;Token refresh mechanism&lt;/li&gt;
&lt;li&gt;Proper error handling for auth failures&lt;/li&gt;
&lt;li&gt;Environment-based configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key learnings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Microservices force you to think about service boundaries. The auth service doesn't know about users, orders, or business logic. It has one job: authenticate and validate tokens. This separation is powerful because it means authentication logic lives in exactly one place.&lt;/p&gt;

&lt;p&gt;Security considerations change in distributed systems. In a monolith, you might store sessions in memory. In microservices, you need stateless authentication that works across service boundaries. JWTs solve this, but they come with their own challenges around token invalidation and secret management.&lt;/p&gt;
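&lt;p&gt;To make the stateless idea concrete, here is a minimal HS256 JWT sketch using only the Python standard library. This is not the auth service's actual code - in production you would use a vetted library such as PyJWT rather than rolling your own:&lt;/p&gt;

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_jwt(payload: dict, secret: str) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def decode_jwt(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if time.time() > payload.get("exp", float("inf")):
        raise ValueError("token expired")
    return payload
```

&lt;p&gt;Any service that knows the shared secret can validate a token without calling the auth service or storing session state - which is exactly what makes horizontal scaling easy and token revocation hard.&lt;/p&gt;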

&lt;p&gt;I also learned that &lt;code&gt;.env.example&lt;/code&gt; files are critical. They document what configuration a service needs without exposing actual secrets - a small detail that matters when multiple people (or your future self) need to run the service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps principle in action:&lt;/strong&gt; Security must be built in, not bolted on. DevSecOps means considering security at every layer, from the container base image to how secrets are managed.&lt;/p&gt;

&lt;p&gt;Check out the auth service implementation &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform/tree/main/apps/auth-service" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 3: Worker Service - Diversifying the Stack
&lt;/h2&gt;

&lt;p&gt;The third service was a Python background worker for asynchronous job processing. This diversified my tech stack and taught me about async patterns in distributed systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I built:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flask API for job submission&lt;/li&gt;
&lt;li&gt;Threading-based job queue&lt;/li&gt;
&lt;li&gt;Job status tracking&lt;/li&gt;
&lt;li&gt;Multiple job types (email, image processing, data sync, reports)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key learnings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Background workers solve a fundamental problem in web applications: some operations take too long to run synchronously. Users don't want to wait 30 seconds for an image to process or a report to generate. Workers let you respond immediately with "job queued" and process it asynchronously.&lt;/p&gt;

&lt;p&gt;In production, this worker would connect to Redis or RabbitMQ. For learning purposes, I used Python's Queue class. The important lesson was understanding the pattern: decouple request handling from work execution.&lt;/p&gt;
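&lt;p&gt;The pattern itself boils down to a few lines. This is a stripped-down sketch using Python's standard library, not the worker service itself - the Flask layer, the four job types, and status tracking are omitted:&lt;/p&gt;

```python
import queue
import threading

jobs = queue.Queue()
results = {}

def worker():
    # Runs in the background, pulling jobs off the queue
    while True:
        job = jobs.get()
        if job is None:  # sentinel value shuts the worker down
            break
        # A real worker would dispatch on job["type"] (email, report, ...)
        results[job["id"]] = f"done:{job['type']}"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def submit(job_id: int, job_type: str) -> dict:
    # The request handler responds immediately; the work happens later
    jobs.put({"id": job_id, "type": job_type})
    return {"status": "queued", "id": job_id}
```

&lt;p&gt;Swapping the in-process &lt;code&gt;Queue&lt;/code&gt; for Redis or RabbitMQ changes the transport, not the shape of the pattern.&lt;/p&gt;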

&lt;p&gt;Python's threading model (OS threads coordinated by the Global Interpreter Lock) is different from Node's single-threaded event loop. Understanding these differences matters when you're building polyglot microservices. The right tool for the job means choosing languages based on what each service needs to do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps principle in action:&lt;/strong&gt; Observability matters from the start. The worker exposes statistics endpoints showing queue depth, completed jobs, and processing status. You can't operate what you can't observe.&lt;/p&gt;

&lt;p&gt;See the worker service &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform/tree/main/apps/worker-service" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Taught Me About DevOps Architecture
&lt;/h2&gt;

&lt;p&gt;Building these services clarified several DevOps principles I'd only understood theoretically:&lt;/p&gt;

&lt;h3&gt;
  
  
  Microservices Enable Independent Deployment
&lt;/h3&gt;

&lt;p&gt;Each service has its own Dockerfile, dependencies, and release cycle. The API service can deploy without touching auth or worker. This independence is what enables continuous deployment at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  Containers Are the Unit of Deployment
&lt;/h3&gt;

&lt;p&gt;With Docker, "it works on my machine" becomes "it works in this container." The container is the deployment artifact. This consistency across development, staging, and production is foundational to DevOps practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation Is Infrastructure
&lt;/h3&gt;

&lt;p&gt;Good README files, clear environment variable documentation, and example configurations aren't nice-to-haves. They're essential infrastructure. When you're running dozens of services, clear documentation is what prevents deployment failures at 2am.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Communication Requires Planning
&lt;/h3&gt;

&lt;p&gt;Right now these services run independently. But when I deploy them to Kubernetes, I'll need to think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service discovery (how does the API find the auth service?)&lt;/li&gt;
&lt;li&gt;Network policies (which services can talk to which?)&lt;/li&gt;
&lt;li&gt;Failure handling (what happens when auth is down?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't problems that exist in monoliths. Microservices trade local complexity for distributed systems complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Thinking Changes
&lt;/h3&gt;

&lt;p&gt;In a monolith, you secure the perimeter. In microservices, you need defense in depth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Container security scanning&lt;/li&gt;
&lt;li&gt;Network policies between services&lt;/li&gt;
&lt;li&gt;Secret management per service&lt;/li&gt;
&lt;li&gt;Token-based authentication across service boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why it's DevSecOps, not just DevOps. Security has to be woven into every architectural decision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Decisions and Tradeoffs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why three services specifically?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Three is the minimum to demonstrate microservices patterns without overwhelming complexity. You have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A stateless API (scales horizontally easily)&lt;/li&gt;
&lt;li&gt;A security service (single responsibility)&lt;/li&gt;
&lt;li&gt;A background worker (async patterns)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mirrors real production architectures at a smaller scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Node and Python?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Polyglot microservices are common in production. Different services have different needs. Node's async model is great for I/O-bound APIs. Python's ecosystem is strong for data processing and background jobs. Learning to containerize both prepares me for real-world heterogeneous environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why no database yet?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm building in layers. First, get services containerized and running. Next, orchestrate them with Kubernetes. Then add databases, message queues, and observability tooling. Trying to do everything at once is how projects stall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What about production concerns?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each service's README documents production considerations I'm aware of but haven't implemented yet:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database persistence&lt;/li&gt;
&lt;li&gt;Proper message queues&lt;/li&gt;
&lt;li&gt;Horizontal scaling&lt;/li&gt;
&lt;li&gt;Monitoring and alerting&lt;/li&gt;
&lt;li&gt;Log aggregation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Acknowledging what you don't know is as important as demonstrating what you've learned.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next: Kubernetes
&lt;/h2&gt;

&lt;p&gt;These three services are ready for orchestration. Next up, I'm deploying them to a local Kubernetes cluster using Minikube. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing Kubernetes manifests (Deployments, Services)&lt;/li&gt;
&lt;li&gt;Understanding pods, replicas, and service discovery&lt;/li&gt;
&lt;li&gt;Deploying all three services to K8s&lt;/li&gt;
&lt;li&gt;Implementing service-to-service communication&lt;/li&gt;
&lt;li&gt;Learning kubectl and cluster management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After that: Terraform for infrastructure as code, GitHub Actions for CI/CD, and security scanning throughout the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of Building in Public
&lt;/h2&gt;

&lt;p&gt;Three days ago, I had an empty GitHub repository. Today, I have three working microservices, comprehensive documentation, and a deeper understanding of DevOps architecture than any tutorial could provide.&lt;/p&gt;

&lt;p&gt;Building in public creates accountability. Documenting as I learn forces clarity. Sharing code means writing it well enough that others could run it.&lt;/p&gt;

&lt;p&gt;If you're learning DevOps, I recommend this approach: pick a project, commit to building it publicly, and document everything. The act of explaining what you're learning solidifies the knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want to Follow Along?
&lt;/h2&gt;

&lt;p&gt;The complete project is on GitHub: &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform" rel="noopener noreferrer"&gt;secure-cloud-platform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each service has its own README with setup instructions, API documentation, and production considerations. The project is a work in progress - I'm adding to it as I learn.&lt;/p&gt;

&lt;p&gt;I'm documenting this journey on &lt;a href="https://dev.to/jeffgrahamcodes"&gt;Dev.to&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/jeffgrahamcodes/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. If you're also transitioning into DevOps or have feedback on the architecture, I'd love to hear from you.&lt;/p&gt;

&lt;p&gt;Next post: Deploying these microservices to Kubernetes - what I learn when three containers become three pods.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About my journey:&lt;/strong&gt; Former Air Force officer and software engineer/solutions architect, now teaching middle school computer science while transitioning back into tech with a focus on DevSecOps. Building elite expertise in Infrastructure as Code, Kubernetes security, and cloud-native platforms. AWS certified (SA Pro, Security Specialty, Developer, SysOps). Learning in public, one commit at a time.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>microservices</category>
      <category>docker</category>
      <category>learning</category>
    </item>
    <item>
      <title>What is DevOps? A Definition from a Teacher Transitioning to DevSecOps</title>
      <dc:creator>Jeff Graham</dc:creator>
      <pubDate>Tue, 07 Oct 2025 00:04:03 +0000</pubDate>
      <link>https://dev.to/jeffgrahamcodes/what-is-devops-a-definition-from-a-teacher-transitioning-to-devsecops-3b6n</link>
      <guid>https://dev.to/jeffgrahamcodes/what-is-devops-a-definition-from-a-teacher-transitioning-to-devsecops-3b6n</guid>
      <description>&lt;h2&gt;
  
  
  The Problem DevOps Solves
&lt;/h2&gt;

&lt;p&gt;For decades, software development operated like a relay race with a broken baton pass. Developers would "throw code over the wall" to operations teams, who then struggled to deploy and maintain it in production. When things inevitably broke, the blame game began. This friction created slow release cycles, brittle systems, and frustrated teams on both sides.&lt;/p&gt;

&lt;p&gt;I'm witnessing this firsthand as I transition from teaching middle school computer science to pursuing a career in DevSecOps. As I study the landscape, I'm realizing that DevOps emerged as a fundamental rethinking of how we build and run software.&lt;/p&gt;

&lt;h2&gt;
  
  
  What DevOps Actually Is
&lt;/h2&gt;

&lt;p&gt;DevOps isn't just a set of tools or a job title—it's a cultural philosophy that tears down the wall between development and operations.&lt;/p&gt;

&lt;p&gt;At its core, DevOps means &lt;strong&gt;shared ownership of the entire software lifecycle&lt;/strong&gt;. Developers don't get to say "it worked on my machine" and walk away. Operations teams don't just keep the lights on without understanding what they're running. Everyone is responsible for delivering value to users—from the first line of code to the last byte served in production.&lt;/p&gt;

&lt;p&gt;This shared responsibility manifests through three key practices:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Automation Everywhere
&lt;/h3&gt;

&lt;p&gt;Every code commit triggers automated testing, security scanning, and quality checks. Deployments happen through repeatable, auditable pipelines—not manual runbooks prone to human error. Infrastructure itself becomes code that can be versioned, tested, and deployed like any application.&lt;/p&gt;
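&lt;p&gt;In practice, that pipeline is just a file in the repository. A hypothetical GitHub Actions workflow might look like this - the job names, steps, and tools are illustrative, not from a real project:&lt;/p&gt;

```yaml
# .github/workflows/ci.yml
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: |
          npm ci
          npm test
      - name: Build image
        run: docker build -t api-service:${{ github.sha }} .
      - name: Scan image for vulnerabilities
        run: trivy image api-service:${{ github.sha }}
```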

&lt;p&gt;As someone with a software engineering background, this resonates deeply. The idea that infrastructure can be treated with the same rigor as application code is powerful.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Continuous Feedback Loops
&lt;/h3&gt;

&lt;p&gt;Monitoring, logging, and observability aren't afterthoughts—they're built in from day one. When issues arise, teams practice blameless post-mortems focused on improving systems, not finding scapegoats.&lt;/p&gt;

&lt;p&gt;This reminds me of my work as a Solutions Architect, where understanding system behavior was critical to making good architectural decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Iterative Improvement
&lt;/h3&gt;

&lt;p&gt;Instead of massive quarterly releases that terrify everyone, teams ship small changes frequently. This reduces risk, accelerates learning, and keeps the feedback cycle tight between what we build and how users experience it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Business Impact
&lt;/h2&gt;

&lt;p&gt;Companies practicing DevOps deploy code dramatically more frequently while maintaining stability and security. They recover from incidents faster and spend less time firefighting, freeing teams to build new capabilities instead of just keeping existing ones alive.&lt;/p&gt;

&lt;p&gt;But here's what's often missed: DevOps isn't about developers doing operations work or ops learning to code (though both help). It's about creating a culture where &lt;strong&gt;the success of the system is everyone's job&lt;/strong&gt;. Where "it's not my problem" becomes "how can I help?" Where deployment day stops being feared and starts being routine.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;As I transition from teaching to DevSecOps, I'm building a foundation in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure as Code (Terraform)&lt;/li&gt;
&lt;li&gt;Container orchestration (Kubernetes)&lt;/li&gt;
&lt;li&gt;Cloud platforms (AWS—I hold SA Pro, Security Specialty, Developer Associate, and SysOps Admin certs)&lt;/li&gt;
&lt;li&gt;Security automation and scanning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm documenting my journey by building a production-ready microservices platform that demonstrates DevSecOps principles in action. You can follow along at &lt;a href="https://github.com/jeffgrahamcodes/secure-cloud-platform" rel="noopener noreferrer"&gt;github.com/jeffgrahamcodes/secure-cloud-platform&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters to Me
&lt;/h2&gt;

&lt;p&gt;Coming from education, I've spent years breaking down complex concepts for middle schoolers. Now I'm realizing that DevOps represents the maturation of our industry—moving from individual craftspeople working in isolation to true engineering discipline with collaboration, measurement, and continuous improvement at its heart.&lt;/p&gt;

&lt;p&gt;The principles I taught in the classroom—collaboration, accountability, continuous learning—are the same principles that make DevOps teams successful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm on a mission to become a top 1% DevSecOps engineer, and part of that journey is solidifying my understanding of these fundamental concepts. This post is my current understanding of DevOps, but I know it will evolve as I gain hands-on experience.&lt;/p&gt;

&lt;p&gt;If you're further along in your DevOps journey, I'd love to hear your thoughts. What did I miss? What nuances become apparent only after you've lived it?&lt;/p&gt;

&lt;p&gt;And if you're also transitioning into DevOps, let's connect. We're in this together.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About me:&lt;/strong&gt; Former Air Force officer, software engineer, and Solutions Architect now teaching middle school math and computer science. Currently making the leap back into tech with a focus on DevSecOps, infrastructure as code, and Kubernetes security. Follow my journey as I build in public.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Connect with me on &lt;a href="https://www.linkedin.com/in/jeffgrahamcodes/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | Follow my projects on &lt;a href="https://github.com/jeffgrahamcodes" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>career</category>
      <category>beginners</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
