<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Precious Okpor</title>
    <description>The latest articles on DEV Community by Precious Okpor (@devpops).</description>
    <link>https://dev.to/devpops</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3845312%2Fb62122dd-31f7-47b9-b31b-233c9dde6a7a.png</url>
      <title>DEV Community: Precious Okpor</title>
      <link>https://dev.to/devpops</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devpops"/>
    <language>en</language>
    <item>
      <title>How I Containerised 5 Monoliths and Deployed Them to EKS</title>
      <dc:creator>Precious Okpor</dc:creator>
      <pubDate>Wed, 08 Apr 2026 15:18:55 +0000</pubDate>
      <link>https://dev.to/devpops/how-i-containerised-5-monoliths-and-deployed-them-to-eks-3p2</link>
      <guid>https://dev.to/devpops/how-i-containerised-5-monoliths-and-deployed-them-to-eks-3p2</guid>
      <description>&lt;p&gt;There's a blog post I've always found frustrating: the kind that shows you a perfect Dockerfile, a clean &lt;code&gt;terraform apply&lt;/code&gt;, and a screenshot of everything working on the first try. No errors. No wrong turns.&lt;/p&gt;

&lt;p&gt;This isn't that post.&lt;/p&gt;

&lt;p&gt;I'm a DevOps and Cloud Engineer — in practice, I do DevOps and SRE work: Kubernetes clusters, CI/CD pipelines, AWS infrastructure. I containerise things regularly. But I'd never sat down and worked through five different stacks back to back, treating each one as a distinct challenge.&lt;/p&gt;

&lt;p&gt;So I did. Here's what I built, what broke, and what I learned.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Five demo apps. Each represents a different monolith archetype. Each has a &lt;code&gt;/health&lt;/code&gt; endpoint and one meaningful route — simple enough that the app isn't the distraction. AI helped with creating the apps so I could focus on the DevOps aspects of the project.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;App&lt;/th&gt;
&lt;th&gt;Stack&lt;/th&gt;
&lt;th&gt;What it teaches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;app-node-api&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Node.js + Express&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;node_modules&lt;/code&gt;, &lt;code&gt;.dockerignore&lt;/code&gt; discipline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;app-python-api&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Python + Flask&lt;/td&gt;
&lt;td&gt;Slim vs Alpine tradeoffs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;app-nestjs-api&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;NestJS (TypeScript)&lt;/td&gt;
&lt;td&gt;Three-stage builds: deps → compile → runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;app-react-spa&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;React + Nginx&lt;/td&gt;
&lt;td&gt;Static asset serving, SPA routing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;app-go-service&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;Distroless images, static binaries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The full repo: &lt;code&gt;eks-monolith-migration&lt;/code&gt; — IaC, Dockerfiles, k8s manifests, CI/CD, all of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 1: The Dockerfiles
&lt;/h2&gt;

&lt;h3&gt;
  
  
  App 1 — Node.js: The &lt;code&gt;.dockerignore&lt;/code&gt; Wake-Up Call
&lt;/h3&gt;

&lt;p&gt;Node.js is where most people make the biggest beginner mistake: not using &lt;code&gt;.dockerignore&lt;/code&gt;, so Docker sends your entire &lt;code&gt;node_modules&lt;/code&gt; as build context on every build.&lt;/p&gt;

&lt;p&gt;My &lt;code&gt;.dockerignore&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
.env
.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My Dockerfile: two-stage. Install deps in stage one, copy only what runs in stage two.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:24-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:24-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=deps /app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src/ ./src/&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV=production&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3001&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "src/index.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;USER node&lt;/code&gt; is easy to forget and almost never shown in tutorials. Running as root inside a container means a container escape can hand an attacker root on the host. One line fixes it.&lt;/p&gt;
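&lt;p&gt;A quick sanity check that the container really runs as non-root (the image tag here is just what I'd name the local build of &lt;code&gt;app-node-api&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm app-node-api whoami
# prints "node" — if you see "root", the USER line is missing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;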

&lt;p&gt;&lt;strong&gt;Image size: 235MB&lt;/strong&gt;, down from 1.1GB for the naive single-stage build.&lt;/p&gt;




&lt;h3&gt;
  
  
  App 2 — Python/Flask: The Alpine Trap
&lt;/h3&gt;

&lt;p&gt;My first instinct: &lt;code&gt;python:3.12-alpine&lt;/code&gt;. Alpine is tiny. Tiny equals good.&lt;/p&gt;

&lt;p&gt;The problem: Alpine uses &lt;code&gt;musl libc&lt;/code&gt;. Many Python packages with C extensions (numpy, psycopg2, cryptography) either have no Alpine-compatible wheels or compile from source — turning a 30-second build into a 5-minute build. Sometimes it just breaks.&lt;/p&gt;

&lt;p&gt;Use &lt;code&gt;python:3.12-slim&lt;/code&gt; instead. Debian-based, glibc, pre-compiled wheels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;python:3.12-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;--prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/install &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;python:3.12-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /install /usr/local&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; app.py .&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONDONTWRITEBYTECODE=1 \&lt;/span&gt;
    PYTHONUNBUFFERED=1
&lt;span class="k"&gt;RUN &lt;/span&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; appuser &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; USER appuser
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3002&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["gunicorn", "--bind", "0.0.0.0:3002", "app:app"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two things worth noting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PYTHONUNBUFFERED=1&lt;/code&gt; — without this, stdout is buffered. Your logs go missing in Kubernetes until the buffer flushes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gunicorn&lt;/code&gt; instead of Flask's dev server — the dev server is single-threaded and literally prints "do not use in production" on startup. Take it at its word.&lt;/li&gt;
&lt;/ul&gt;
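&lt;p&gt;As written, the &lt;code&gt;CMD&lt;/code&gt; above runs gunicorn with a single sync worker. In production you'd usually size workers and threads to the container's CPU limit — a sketch, with counts that are illustrative rather than from the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 2 workers x 4 threads is a reasonable starting point for a small pod;
# tune against your resource limits and measured latency
CMD ["gunicorn", "--bind", "0.0.0.0:3002", "--workers", "2", "--threads", "4", "app:app"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;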

&lt;p&gt;&lt;strong&gt;Image size: 186MB.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  App 3 — NestJS: The Three-Stage Build
&lt;/h3&gt;

&lt;p&gt;NestJS is TypeScript. TypeScript compiles to JavaScript. That means your build process is: install deps → compile → run. Three distinct stages.&lt;/p&gt;

&lt;p&gt;The trap: carrying your TypeScript compiler, devDependencies, and &lt;code&gt;.ts&lt;/code&gt; source files into the production image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:24-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:24-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=deps /app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:24-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV=production&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3003&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "dist/main.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The runtime stage does its own &lt;code&gt;npm ci --only=production&lt;/code&gt;. It doesn't copy &lt;code&gt;node_modules&lt;/code&gt; from the build stage — those include devDependencies. Fresh install, production only, then just the compiled &lt;code&gt;dist/&lt;/code&gt;.&lt;/p&gt;
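&lt;p&gt;One aside: newer npm versions deprecate &lt;code&gt;--only=production&lt;/code&gt; in favour of &lt;code&gt;--omit=dev&lt;/code&gt;. Both produce the same production-only install, so the modern spelling of that runtime-stage line would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN npm ci --omit=dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;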

&lt;p&gt;Your TypeScript source never touches the production image. Your test runner isn't there. Your type definitions aren't there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image size: 295MB.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  App 4 — React + Nginx: The SPA Routing Gotcha
&lt;/h3&gt;

&lt;p&gt;React apps are static files. After &lt;code&gt;npm run build&lt;/code&gt;, you have an &lt;code&gt;index.html&lt;/code&gt; and some JS bundles. You don't need Node at runtime — you need a web server.&lt;/p&gt;

&lt;p&gt;Everyone knows this in theory. Fewer people get the Nginx config right.&lt;/p&gt;

&lt;p&gt;The issue: React Router. If a user navigates directly to &lt;code&gt;/about&lt;/code&gt; or refreshes on &lt;code&gt;/dashboard&lt;/code&gt;, Nginx looks for a file at that path. There isn't one. 404.&lt;/p&gt;

&lt;p&gt;The fix is one directive: &lt;code&gt;try_files $uri $uri/ /index.html;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;3004&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/usr/share/nginx/html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;index&lt;/span&gt; &lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;try_files&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt;&lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="n"&gt;/index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
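&lt;p&gt;Since modern bundlers fingerprint the JS and CSS filenames, you can also let browsers cache them aggressively while keeping &lt;code&gt;index.html&lt;/code&gt; fresh. A hedged addition to the config above — the &lt;code&gt;/assets/&lt;/code&gt; path assumes Vite's default output layout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location /assets/ {
    # hashed bundles never change, so cache them for a year
    try_files $uri =404;
    add_header Cache-Control "public, max-age=31536000, immutable";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;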





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:24-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;nginx:alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist /usr/share/nginx/html&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; nginx/nginx.conf /etc/nginx/conf.d/default.conf&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3004&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Image size: 74.4MB.&lt;/strong&gt; Smallest of the five — Nginx Alpine is tiny and we're just serving static files.&lt;/p&gt;




&lt;h3&gt;
  
  
  App 5 — Go: The Satisfying One
&lt;/h3&gt;

&lt;p&gt;Go compiles to a single static binary. No runtime, no interpreter, no VM. Just a binary.&lt;/p&gt;

&lt;p&gt;This means you can build in one container and copy the binary into a container that has almost nothing — &lt;code&gt;gcr.io/distroless/static&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;golang:1.22-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; go.mod go.sum ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go mod download
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux go build &lt;span class="nt"&gt;-o&lt;/span&gt; server .

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;gcr.io/distroless/static-debian12&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;runtime&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/server .&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3005&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/app/server"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;CGO_ENABLED=0&lt;/code&gt; disables cgo, so the binary doesn't dynamically link against libc. &lt;code&gt;GOOS=linux&lt;/code&gt; targets Linux explicitly. Together they ensure the binary is truly static and will run in distroless.&lt;/p&gt;
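&lt;p&gt;The Dockerfile above doesn't do this, but you can shave the binary a bit further by stripping the symbol table and DWARF debug info at build time — an optional extra:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# -s strips the symbol table, -w strips DWARF debug info
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o server .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;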

&lt;p&gt;No shell in that container. No &lt;code&gt;ls&lt;/code&gt;, no &lt;code&gt;curl&lt;/code&gt;, no package manager. &lt;code&gt;kubectl exec&lt;/code&gt; into it and try to run &lt;code&gt;bash&lt;/code&gt; — nothing. This is the point. Near-zero attack surface.&lt;/p&gt;
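&lt;p&gt;When you genuinely need a shell next to a distroless pod, ephemeral debug containers fill the gap without bloating the image. A sketch — pod and container names here are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# attach a busybox sidecar sharing the target container's process namespace
kubectl debug -it app-go-service-&amp;lt;pod-id&amp;gt; --image=busybox --target=&amp;lt;container-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;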

&lt;p&gt;&lt;strong&gt;Image size: 8.88MB.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Before vs. After Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;App&lt;/th&gt;
&lt;th&gt;Naive&lt;/th&gt;
&lt;th&gt;Optimized&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Node.js API&lt;/td&gt;
&lt;td&gt;~1.1GB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;235MB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python API&lt;/td&gt;
&lt;td&gt;~920MB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;186MB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NestJS API&lt;/td&gt;
&lt;td&gt;~1.3GB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;295MB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;React SPA&lt;/td&gt;
&lt;td&gt;~400MB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;74.4MB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go Service&lt;/td&gt;
&lt;td&gt;~800MB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8.88MB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Go number is not a typo. I was surprised myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 2: Infrastructure with Terraform
&lt;/h2&gt;

&lt;p&gt;Three modules: &lt;code&gt;vpc&lt;/code&gt;, &lt;code&gt;eks&lt;/code&gt;, &lt;code&gt;ecr&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The VPC creates public subnets for the load balancer, private subnets for worker nodes, and a NAT gateway so private nodes can pull images. Standard layout.&lt;/p&gt;

&lt;p&gt;The EKS module provisions a managed node group (&lt;code&gt;t3.medium&lt;/code&gt; — the minimum that comfortably runs EKS system pods alongside workloads) and enables OIDC, which is what makes IRSA work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IRSA — IAM Roles for Service Accounts&lt;/strong&gt; — is how you give pods AWS permissions without credentials anywhere. An IAM role attaches to a Kubernetes service account. Pods get temporary credentials via OIDC token exchange. No &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; in your manifests.&lt;/p&gt;
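&lt;p&gt;The whole IRSA wiring on the Kubernetes side is a single annotation on the service account — a minimal sketch with placeholder names and ARN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-node-api
  annotations:
    # the role's trust policy must allow this namespace/name via the cluster OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::&amp;lt;ACCOUNT_ID&amp;gt;:role/app-node-api-irsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;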

&lt;p&gt;The ECR module creates five repos with lifecycle policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;policy&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;rules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="nx"&gt;rulePriority&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Keep last 10 images"&lt;/span&gt;
    &lt;span class="nx"&gt;selection&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;tagStatus&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"any"&lt;/span&gt;
      &lt;span class="nx"&gt;countType&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"imageCountMoreThan"&lt;/span&gt;
      &lt;span class="nx"&gt;countNumber&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"expire"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without lifecycle policies, ECR accumulates images indefinitely. Ten images is enough for a comfortable rollback window. More than that is sentiment, not operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What broke:&lt;/strong&gt; The AWS Load Balancer Controller IAM policy. The controller needs a specific IAM policy to provision ALBs. If the IRSA annotation on the controller's service account doesn't match the IAM role ARN exactly, the controller runs but your Ingress resources never get an &lt;code&gt;ADDRESS&lt;/code&gt;. I spent an hour on this.&lt;/p&gt;
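&lt;p&gt;If you hit the same wall, the first check worth doing is comparing the controller's service account annotation against the role Terraform actually created — names here assume the standard Helm install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the eks.amazonaws.com/role-arn annotation must match the IAM role ARN exactly
kubectl -n kube-system get sa aws-load-balancer-controller -o jsonpath='{.metadata.annotations}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;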




&lt;h2&gt;
  
  
  Phase 3: Kubernetes Manifests
&lt;/h2&gt;

&lt;p&gt;Two things I enforced on every deployment that most tutorials skip:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource requests and limits:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without requests, the scheduler can't place your pod intelligently. Without limits, a misbehaving pod starves everything else on the node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minimum two replicas via HPA:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;minReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;maxReplicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One replica is a single point of failure — a node rotation takes down your service. Two replicas mean you survive pod restarts and node drains gracefully.&lt;/p&gt;
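&lt;p&gt;For completeness, those replica bounds hang off a scaling metric in the full HPA spec — a minimal sketch, with the 70% CPU target being illustrative rather than from the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-node-api
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;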




&lt;h2&gt;
  
  
  Phase 4: GitOps with ArgoCD
&lt;/h2&gt;

&lt;p&gt;ArgoCD watches your Git repository for manifest changes and syncs them to the cluster. The cluster pulls its desired state from Git — Git is the source of truth. A rogue &lt;code&gt;kubectl apply&lt;/code&gt; directly on the cluster? ArgoCD reverts it on the next sync cycle.&lt;/p&gt;

&lt;p&gt;I used a single &lt;code&gt;ApplicationSet&lt;/code&gt; instead of five separate &lt;code&gt;Application&lt;/code&gt; resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApplicationSet&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;monolith-apps&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generators&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;elements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-node-api&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-python-api&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-nestjs-api&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-react-spa&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-go-service&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{app}}'&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;k8s/apps/{{app}}'&lt;/span&gt;
      &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One manifest. Five apps. Adding a sixth is one line in the &lt;code&gt;elements&lt;/code&gt; list.&lt;/p&gt;

&lt;p&gt;Seeing all five apps show &lt;code&gt;Synced&lt;/code&gt; and &lt;code&gt;Healthy&lt;/code&gt; simultaneously in the ArgoCD dashboard was genuinely satisfying.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 5: GitHub Actions CI/CD
&lt;/h2&gt;

&lt;p&gt;Two workflows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;ci-build-push.yml&lt;/code&gt;&lt;/strong&gt; — triggers on push to &lt;code&gt;main&lt;/code&gt;. Matrix build across all five apps, tags each image with the git commit SHA, pushes to ECR. Authenticates with AWS via OIDC — no static credentials stored anywhere.&lt;/p&gt;
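&lt;p&gt;The matrix is what keeps this to one workflow for five apps — roughly like this, though the exact paths and env names are assumptions (the real job config lives in the repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;strategy:
  matrix:
    app: [app-node-api, app-python-api, app-nestjs-api, app-react-spa, app-go-service]
steps:
  # one build+tag per matrix entry, tagged with the commit SHA
  - run: docker build -t $ECR_REGISTRY/${{ matrix.app }}:${{ github.sha }} apps/${{ matrix.app }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;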

&lt;p&gt;&lt;strong&gt;&lt;code&gt;cd-update-manifests.yml&lt;/code&gt;&lt;/strong&gt; — runs after the build. Updates the image tag in &lt;code&gt;k8s/apps/&amp;lt;app&amp;gt;/deployment.yaml&lt;/code&gt;, commits back to the repo. ArgoCD picks up the new commit and syncs within 30 seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Authenticate to AWS&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;role-to-assume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/github-actions-ecr&lt;/span&gt;
    &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The commit SHA as the image tag matters. &lt;code&gt;latest&lt;/code&gt; tells you nothing about what's actually running. A commit SHA is immutable and traceable — you can answer "what code is in production?" with a single &lt;code&gt;kubectl get deployment&lt;/code&gt;.&lt;/p&gt;
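&lt;p&gt;In practice that one-liner looks like this (deployment name taken from the earlier table; registry and SHA are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployment app-node-api -o jsonpath='{.spec.template.spec.containers[0].image}'
# → &amp;lt;registry&amp;gt;/app-node-api:&amp;lt;commit-sha&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;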




&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Separate the manifests repo.&lt;/strong&gt; When app code and k8s manifests live together, the CI commit that updates image tags triggers another CI run on the same repo. With path filtering you avoid a loop, but a dedicated &lt;code&gt;*-k8s&lt;/code&gt; repo is cleaner.&lt;/p&gt;
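&lt;p&gt;If you do keep a single repo, the loop guard is a paths filter on the build workflow — a sketch, assuming the manifests live under &lt;code&gt;k8s/&lt;/code&gt; as in this project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches: [main]
    # manifest-only commits (the CD tag bump) don't retrigger the image build
    paths-ignore:
      - 'k8s/**'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;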

&lt;p&gt;&lt;strong&gt;Secrets management from the start.&lt;/strong&gt; I used hardcoded values for the demo. Retrofitting the External Secrets Operator + AWS Secrets Manager later is painful. Wire it up day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distroless everywhere, not just Go.&lt;/strong&gt; Distroless images exist for Node.js and Python, too. I used Alpine variants for easier debugging — in production, I'd push toward distroless across the board.&lt;/p&gt;

&lt;p&gt;
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd27r3i5kuk8fyjbtmgzl.png" alt="ArgoCD applications all healthy and synced" width="800" height="420"&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;p&gt;Five stacks. Five Dockerfiles. One EKS cluster. One ApplicationSet. One pipeline.&lt;/p&gt;

&lt;p&gt;The image optimisation alone — from multi-gigabyte naive images down to 300MB or less across the board — is something concrete you can demonstrate. The Go service at 8.88MB in a distroless container with near-zero attack surface is something worth building just to see it work.&lt;/p&gt;

&lt;p&gt;The deeper lesson is GitOps. Self-healing deployments, Git as the source of truth, no manual &lt;code&gt;kubectl apply&lt;/code&gt; in production — these are what make Kubernetes manageable at scale, not just powerful on a laptop.&lt;/p&gt;

&lt;p&gt;Full repo: &lt;a href="https://github.com/poppyszn/eks-monolith-migration" rel="noopener noreferrer"&gt;github.com/poppyszn/eks-monolith-migration&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Questions on any specific part — the OIDC setup, the ApplicationSet pattern, the ALB Controller IAM issue, the Nginx SPA config — drop them in the comments.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
