<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vivian Chiamaka Okose</title>
    <description>The latest articles on DEV Community by Vivian Chiamaka Okose (@vivian_okose).</description>
    <link>https://dev.to/vivian_okose</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1544809%2Fa6ff14f1-d604-4864-9aef-1dd849205106.png</url>
      <title>DEV Community: Vivian Chiamaka Okose</title>
      <link>https://dev.to/vivian_okose</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vivian_okose"/>
    <language>en</language>
    <item>
      <title>Multi-Stage Docker Builds: How I Cut a React Image from 760MB to 94MB</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Fri, 17 Apr 2026 15:07:31 +0000</pubDate>
      <link>https://dev.to/vivian_okose/multi-stage-docker-builds-how-i-cut-a-react-image-from-760mb-to-94mb-29c0</link>
      <guid>https://dev.to/vivian_okose/multi-stage-docker-builds-how-i-cut-a-react-image-from-760mb-to-94mb-29c0</guid>
      <description>

&lt;p&gt;I built two Docker images for the same React app this week.&lt;/p&gt;

&lt;p&gt;One was 760MB. The other was 94MB. Both loaded the exact same website in the browser.&lt;/p&gt;

&lt;p&gt;That 87.6% difference is the story of this post.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;This is Week 14 of my DevOps Micro Internship. The project: containerize a React app two ways, compare the results, and explain what changed and why it matters.&lt;/p&gt;

&lt;p&gt;I am running everything on an Azure VM (Ubuntu 24.04 LTS) with Docker auto-installed via cloud-init.&lt;/p&gt;

&lt;p&gt;The React app: &lt;a href="https://github.com/pravinmishraaws/my-react-app" rel="noopener noreferrer"&gt;https://github.com/pravinmishraaws/my-react-app&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  First: The .dockerignore
&lt;/h2&gt;

&lt;p&gt;Before writing a single Dockerfile, I created a .dockerignore to keep things that should never be in an image out of the build context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
build
.dockerignore
.git
.gitignore
*.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is especially important for node_modules. If you do not exclude it, Docker copies your entire local node_modules into the build context, slowing every build, and the later COPY . . overwrites the clean install from npm ci with whatever happens to be on your machine.&lt;/p&gt;
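&lt;p&gt;You can get a feel for what the ignore rules keep out without running Docker at all. This is a throwaway sketch with made-up files, not my project tree; find's -prune plays the role of the ignore rules:&lt;/p&gt;

```shell
# Fabricate a tiny project tree (hypothetical files, for illustration only)
d=$(mktemp -d)
mkdir -p "$d/node_modules" "$d/src"
touch "$d/node_modules/left-pad.js" "$d/src/App.js" "$d/README.md"

# Emulate the .dockerignore: skip node_modules and *.md,
# then list what would actually reach the Docker daemon
find "$d" -name node_modules -prune -o -name '*.md' -prune -o -type f -print
```

&lt;p&gt;Only src/App.js survives, which is exactly what you want the daemon to receive.&lt;/p&gt;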




&lt;h2&gt;
  
  
  Approach 1: Single-Stage Baseline (Dockerfile.single)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:18-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; serve
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["serve", "-s", "build", "-l", "3000"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This image does everything in one go. Install dependencies, build the app, serve it. Simple.&lt;/p&gt;

&lt;p&gt;The problem is that everything stays. Node.js, npm, all 1,342 packages, build tools. None of that is needed to serve a built React app. But it is all sitting there in the image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result: 760MB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Build command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile.single &lt;span class="nt"&gt;-t&lt;/span&gt; react-single:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; react-single &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &lt;span class="nt"&gt;--restart&lt;/span&gt; unless-stopped react-single:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Approach 2: Multi-Stage Build (Dockerfile)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1 - build React app&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:18-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Stage 2 - serve with nginx&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/build /usr/share/nginx/html&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["nginx", "-g", "daemon off;"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two stages. Stage 1 builds the app. Stage 2 starts completely fresh with nginx:alpine and picks up only the finished build/ folder from Stage 1.&lt;/p&gt;

&lt;p&gt;Node.js never makes it into the final image. Neither do npm or any of those 1,342 packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result: 94MB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Build command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; react-multi:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; react-multi &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;--restart&lt;/span&gt; unless-stopped react-multi:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
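&lt;p&gt;One gap worth knowing about: the default nginx config serves files by exact path, so a React app that uses client-side routing will 404 on a deep link like /about. A small override fixes it. This is a hedged sketch of a default.conf you could COPY into the image in Stage 2 (as far as I know, /etc/nginx/conf.d/default.conf is where nginx:alpine expects it):&lt;/p&gt;

```nginx
# default.conf - serve the SPA, falling back to index.html for client routes
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # Try the exact file, then a directory, then hand off to React Router
        try_files $uri $uri/ /index.html;
    }
}
```

&lt;p&gt;A line like COPY default.conf /etc/nginx/conf.d/default.conf in the nginx stage would wire it in.&lt;/p&gt;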






&lt;h2&gt;
  
  
  The Comparison
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;react-single:latest&lt;/td&gt;
&lt;td&gt;760 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;react-multi:latest&lt;/td&gt;
&lt;td&gt;94 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reduction&lt;/td&gt;
&lt;td&gt;87.6%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both containers ran simultaneously. Both loaded the same React app. The only difference was what was inside each image.&lt;/p&gt;
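&lt;p&gt;The headline number is easy to sanity-check with one line of awk:&lt;/p&gt;

```shell
# (760 - 94) / 760, expressed as a percentage
awk 'BEGIN { printf "%.1f%%\n", (760 - 94) / 760 * 100 }'
# prints 87.6%
```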




&lt;h2&gt;
  
  
  Why This Matters Beyond the Numbers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: A package that never makes it into the final image is a package that cannot be exploited. The multi-stage image has no Node.js, no npm, no build tools. An attacker who somehow gets into that container finds a bare Nginx server. Nothing else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Speed&lt;/strong&gt;: Smaller images push and pull faster. If your pipeline deploys 10 times a day and each deployment pulls a 760MB image instead of a 94MB one, you are moving roughly 6.7GB of unnecessary data every single day.&lt;/p&gt;
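&lt;p&gt;A rough back-of-envelope makes the cost concrete. Assuming a 100 Mbps link between registry and deploy target (swap in your own number) and no layer reuse:&lt;/p&gt;

```shell
# size_in_MB * 8 bits / link_speed_in_Mbps = seconds per pull
awk 'BEGIN {
  mbps = 100
  printf "760MB image: %.1fs per pull\n", 760 * 8 / mbps
  printf " 94MB image: %.1fs per pull\n",  94 * 8 / mbps
}'
# prints 60.8s vs 7.5s
```

&lt;p&gt;Close to a minute versus under eight seconds, on every single deploy.&lt;/p&gt;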

&lt;p&gt;&lt;strong&gt;Layer Caching&lt;/strong&gt;: Notice that both Dockerfiles copy package.json before the rest of the source code. This is intentional. Docker caches each layer. If your dependencies have not changed, Docker skips the npm ci step entirely on the next build and jumps straight to copying your source. This alone can shave minutes off build times.&lt;/p&gt;
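&lt;p&gt;The layer cache is invalidated the moment package.json changes, though. If you are on a recent Docker with BuildKit, a cache mount can keep npm's download cache warm even across dependency changes. A hedged sketch, not something I ran in this project; the target path assumes npm's default cache location:&lt;/p&gt;

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# The cache mount persists /root/.npm across builds, so even when
# dependencies change, npm mostly reads from the local cache
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build
```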




&lt;h2&gt;
  
  
  Running Both Simultaneously
&lt;/h2&gt;

&lt;p&gt;One of the most satisfying parts of this project was running both containers at the same time on the same VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;CONTAINER ID   IMAGE                 PORTS                    NAMES
&lt;/span&gt;&lt;span class="gp"&gt;837fb6d5def2   react-multi:latest    0.0.0.0:80-&amp;gt;&lt;/span&gt;80/tcp       react-multi
&lt;span class="gp"&gt;66efa6b350bf   react-single:latest   0.0.0.0:3000-&amp;gt;&lt;/span&gt;3000/tcp   react-single
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Opening both in the browser showed the same React app on two different ports, proving the multi-stage approach produces an identical result in a fraction of the space.&lt;/p&gt;




&lt;h2&gt;
  
  
  Full Project
&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/vivianokose/cloud-vm-docker-deploy" rel="noopener noreferrer"&gt;https://github.com/vivianokose/cloud-vm-docker-deploy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;See you in the next one.&lt;/p&gt;







&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft28qcjlb3w8wuvm8tu2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft28qcjlb3w8wuvm8tu2g.png" alt="1" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gd9u7pf492jtzzgsvnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gd9u7pf492jtzzgsvnr.png" alt="2" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm56qbq3u7h4dxorjsk1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm56qbq3u7h4dxorjsk1r.png" alt="3" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcm3kreood2m33r4bzw96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcm3kreood2m33r4bzw96.png" alt="4" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktu3lasv9voz77nhwwiz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktu3lasv9voz77nhwwiz.png" alt="5" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zhfyfllzg1ked64ks21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zhfyfllzg1ked64ks21.png" alt="6" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvvewnbogykl87m6lkhs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvvewnbogykl87m6lkhs.png" alt="7" width="692" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfn7zn9h123apuzotpzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfn7zn9h123apuzotpzh.png" alt="8" width="746" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0vd0d0g2o3152yb7z6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0vd0d0g2o3152yb7z6i.png" alt="9" width="742" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbi4unefjcnw25t3clvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbi4unefjcnw25t3clvb.png" alt="10" width="692" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzelsd98l8g0ouei5ou9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzelsd98l8g0ouei5ou9.png" alt="11" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhy9zxrvcq54h40dzp64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhy9zxrvcq54h40dzp64.png" alt="12" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2jl11ehyf8o1scjh2jq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2jl11ehyf8o1scjh2jq.png" alt="13" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnveelfq91fakbdkwaw6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnveelfq91fakbdkwaw6j.png" alt="14" width="800" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf88st4kw1o11ewizdu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf88st4kw1o11ewizdu4.png" alt="15" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12sceynxp7w1ly7bimw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12sceynxp7w1ly7bimw8.png" alt="16" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gsu8k687vzumlqk5hd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9gsu8k687vzumlqk5hd9.png" alt="17" width="731" height="166"&gt;&lt;/a&gt;&lt;/p&gt;




</description>
      <category>docker</category>
      <category>react</category>
      <category>devops</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How I Deployed a Live Website Using Docker on Azure (And Let Cloud-Init Do the Heavy Lifting)</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Thu, 16 Apr 2026 20:39:44 +0000</pubDate>
      <link>https://dev.to/vivian_okose/how-i-deployed-a-live-website-using-docker-on-azure-and-let-cloud-init-do-the-heavy-lifting-184m</link>
      <guid>https://dev.to/vivian_okose/how-i-deployed-a-live-website-using-docker-on-azure-and-let-cloud-init-do-the-heavy-lifting-184m</guid>
      <description>&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;1 Linux VM on Azure (Ubuntu 24.04 LTS, Standard D2lds v6)&lt;/li&gt;
&lt;li&gt;A cloud-init script that installed Docker automatically on first boot&lt;/li&gt;
&lt;li&gt;A Dockerized static website served by Nginx&lt;/li&gt;
&lt;li&gt;A live URL accessible from anywhere in the world&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Provisioning the Azure VM
&lt;/h2&gt;

&lt;p&gt;I created the VM through the Azure Portal with these settings:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Resource Group&lt;/td&gt;
&lt;td&gt;docker-project-rg&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VM Name&lt;/td&gt;
&lt;td&gt;docker-vm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Region&lt;/td&gt;
&lt;td&gt;UK South&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image&lt;/td&gt;
&lt;td&gt;Ubuntu 24.04 LTS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;Standard D2lds v6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Authentication&lt;/td&gt;
&lt;td&gt;SSH public key&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Under the Networking tab, I opened port 22 for SSH and port 80 for HTTP traffic by adding inbound rules to the Network Security Group.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: The Cloud-Init Script (This Is the Magic Part)
&lt;/h2&gt;

&lt;p&gt;Before launching the VM, I pasted this script under the Advanced tab in the Custom Data field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#cloud-config&lt;/span&gt;
&lt;span class="na"&gt;package_update&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;package_upgrade&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;packages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;apt-transport-https&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ca-certificates&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;curl&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;gnupg&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;lsb-release&lt;/span&gt;

&lt;span class="na"&gt;runcmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;apt-get update -y&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;apt-get install -y docker.io&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;systemctl enable docker&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;systemctl start docker&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;usermod -aG docker azureuser&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script runs automatically the moment the VM boots. Docker installs itself, starts itself, and adds my user to the docker group. I had not even SSH'd in yet.&lt;/p&gt;

&lt;p&gt;When I later ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cat&lt;/span&gt; /var/log/cloud-init-output.log | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The log confirmed everything. Docker was installed by the startup script, not by me. That is infrastructure automation doing exactly what it is supposed to do.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: SSH Into the VM
&lt;/h2&gt;

&lt;p&gt;Once deployment completed, I grabbed my public IP from the Azure Portal and connected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;400 docker-vm_key.pem
ssh &lt;span class="nt"&gt;-i&lt;/span&gt; docker-vm_key.pem azureuser@4.234.163.212
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
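&lt;p&gt;The chmod 400 step is not ceremony: ssh refuses to use a private key that is readable by group or world. 400 leaves only the owner-read bit set, which you can confirm on a throwaway file:&lt;/p&gt;

```shell
# Demonstrate on a temp file (stand-in for the real .pem)
k=$(mktemp)
chmod 400 "$k"
stat -c '%a' "$k"
# prints 400 (owner read-only)
```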



&lt;p&gt;Then I verified Docker was running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Docker version 29.1.3&lt;/span&gt;

docker ps
&lt;span class="c"&gt;# Empty, no containers yet. But Docker is alive.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Clone the Static Website
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/pravinmishraaws/Azure-Static-Website.git
&lt;span class="nb"&gt;cd &lt;/span&gt;Azure-Static-Website
&lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;span class="c"&gt;# README.md  index.html&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple. Just an index.html. That is all we need.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Write the Dockerfile
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:alpine&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /usr/share/nginx/html/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /usr/share/nginx/html&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me break this down:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FROM nginx:alpine&lt;/strong&gt; — we are using a lightweight version of Nginx as our base image. Alpine Linux is tiny, which keeps our container small and fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RUN rm -rf /usr/share/nginx/html/*&lt;/strong&gt; — wipes out the default Nginx welcome page so our site shows instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;COPY . /usr/share/nginx/html&lt;/strong&gt; — copies everything in our current folder (including index.html) into the Nginx web root.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EXPOSE 80&lt;/strong&gt; — documents that the container listens on port 80. It does not publish the port by itself; that is what the -p flag does at run time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 6: Build the Image
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; static-site:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker pulls nginx:alpine, runs each step in the Dockerfile, and produces an image called static-site:latest. The whole process takes about a minute.&lt;/p&gt;

&lt;p&gt;After building, I checked the image sizes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;nginx:alpine&lt;/td&gt;
&lt;td&gt;93.5 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;static-site:latest&lt;/td&gt;
&lt;td&gt;92.9 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Interestingly, our image measured marginally smaller than the base tag here. In principle a derived image only adds layers on top of its base, so the two should come out near-identical, and they do: the takeaway is that adding a static site to nginx:alpine costs almost nothing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 7: Run the Container
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; static-site &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt; unless-stopped &lt;span class="se"&gt;\&lt;/span&gt;
  static-site:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Breaking down the flags:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-d&lt;/code&gt; runs the container in the background&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--name static-site&lt;/code&gt; gives it a friendly name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p 80:80&lt;/code&gt; maps port 80 on the VM to port 80 inside the container&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--restart unless-stopped&lt;/code&gt; means it comes back automatically after a VM reboot&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 8: Verify and Open in Browser
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;CONTAINER ID   IMAGE                PORTS                    NAMES
&lt;/span&gt;&lt;span class="gp"&gt;83cb16a6cb44   static-site:latest   0.0.0.0:80-&amp;gt;&lt;/span&gt;80/tcp       static-site
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I opened &lt;code&gt;http://4.234.163.212&lt;/code&gt; in my browser.&lt;/p&gt;

&lt;p&gt;The site loaded.&lt;/p&gt;

&lt;p&gt;I cannot fully explain how good that felt. Three months into learning DevOps, and I just served a live website from inside a Docker container running on a cloud VM I spun up myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cloud-init is a game changer.&lt;/strong&gt; The idea that a VM can configure itself on boot, without any human touching it, is what separates manual setups from real infrastructure automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Containers are not complicated.&lt;/strong&gt; A Dockerfile is just a recipe. Build the recipe, get an image. Run the image, get a container. That is it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nginx:alpine is perfect for static sites.&lt;/strong&gt; Small, fast, and it just works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The --restart flag matters.&lt;/strong&gt; In production, you want your containers to survive reboots. Always add &lt;code&gt;--restart unless-stopped&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Full Project
&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/vivianokose/cloud-vm-docker-deploy" rel="noopener noreferrer"&gt;https://github.com/vivianokose/cloud-vm-docker-deploy&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;See you in the next one.&lt;/p&gt;




</description>
      <category>docker</category>
      <category>azure</category>
      <category>devops</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>I Automated a Full Application Deployment with Azure DevOps, Terraform, and Ansible</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Fri, 10 Apr 2026 21:17:26 +0000</pubDate>
      <link>https://dev.to/vivian_okose/i-automated-a-full-application-deployment-with-azure-devops-terraform-and-ansible-1gd1</link>
      <guid>https://dev.to/vivian_okose/i-automated-a-full-application-deployment-with-azure-devops-terraform-and-ansible-1gd1</guid>
      <description>&lt;p&gt;Four projects. Several weeks. One capstone.&lt;/p&gt;

&lt;p&gt;This is the project where everything came together. Not just conceptually, but in practice, in a real pipeline, on real infrastructure, with real application code running live on Azure.&lt;/p&gt;

&lt;p&gt;We deployed the EpicBook application using two repositories and two pipelines. One pipeline provisioned the infrastructure with Terraform. The other deployed and configured the application with Ansible. Both ran on the self-hosted agent I built back in Project 1. By the end, the application was live at a public IP, deployed entirely through automation with zero manual steps on the server.&lt;/p&gt;

&lt;p&gt;Here is exactly how I built it, what went wrong, what I fixed, and what I would do differently next time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Two Repositories?
&lt;/h2&gt;

&lt;p&gt;This was the design decision I thought about the most before starting, and I think it is worth explaining properly because it is easy to miss why this matters.&lt;/p&gt;

&lt;p&gt;When infrastructure and application code live in the same repository, every code change, even a small frontend tweak, can potentially trigger an infrastructure rebuild. Different people on a team cannot manage infrastructure and application changes independently without stepping on each other. Rolling back a bad deployment becomes complicated because the infrastructure and application histories are tangled together.&lt;/p&gt;

&lt;p&gt;Separating them means each repository has one clear responsibility. The infra repo manages what exists on Azure. The app repo manages what runs on that infrastructure. Two repos, two concerns, two pipelines. The separation is intentional, and it reflects how real engineering teams actually work.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Infrastructure Pipeline Provisioned
&lt;/h2&gt;

&lt;p&gt;The infra pipeline ran Terraform and created everything the application needed to run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource Group:&lt;/strong&gt; EpicBookRG&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Network:&lt;/strong&gt; epicbook-vnet (10.2.0.0/16)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Security Group&lt;/strong&gt; with inbound rules allowing ports 22, 80, and 3306&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend VM:&lt;/strong&gt; EpicBookFrontendVM (Standard_D2s_v3)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend VM:&lt;/strong&gt; EpicBookBackendVM (Standard_D2s_v3)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static Public IPs&lt;/strong&gt; for both VMs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After &lt;code&gt;terraform apply&lt;/code&gt; completed, the outputs were:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;frontend_public_ip = "20.25.48.139"
backend_public_ip  = "20.124.125.115"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These IPs are what the Ansible inventory would later reference to configure each VM. Using static IPs here is deliberate. A dynamic IP changes every time the VM restarts, which would break the connection between the Terraform output and the Ansible inventory without any warning.&lt;/p&gt;
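&lt;p&gt;For context, a minimal Ansible inventory referencing those outputs might look like this (the &lt;code&gt;azureuser&lt;/code&gt; login name is an assumption for illustration, not taken from the pipeline):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[frontend]
20.25.48.139 ansible_user=azureuser

[backend]
20.124.125.115 ansible_user=azureuser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;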




&lt;h2&gt;
  
  
  Setting Up Authentication: The Service Principal
&lt;/h2&gt;

&lt;p&gt;This tripped me up the first time I tried to run Terraform inside a pipeline, so I want to explain it clearly.&lt;/p&gt;

&lt;p&gt;Terraform cannot use your personal Azure login credentials inside an Azure DevOps pipeline. It needs a Service Principal, which is essentially a dedicated identity with specific permissions scoped to your subscription. You create it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az ad sp create-for-rbac &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"EpicBookTerraformSP"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role&lt;/span&gt; Contributor &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--scopes&lt;/span&gt; /subscriptions/YOUR_SUBSCRIPTION_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That command returns the first three of the four values you need to hold onto; the fourth, your Subscription ID, comes from your own account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;appId&lt;/code&gt; (this is your Client ID)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;password&lt;/code&gt; (this is your Client Secret)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tenant&lt;/code&gt; (your Tenant ID)&lt;/li&gt;
&lt;li&gt;your Subscription ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These four values go into an &lt;strong&gt;Azure Resource Manager service connection&lt;/strong&gt; in Azure DevOps. Then in the pipeline, the &lt;code&gt;AzureCLI@2&lt;/code&gt; task with &lt;code&gt;addSpnToEnvironment: true&lt;/code&gt; exposes them as environment variables that Terraform reads automatically. You do not hardcode credentials anywhere. They flow in securely at runtime.&lt;/p&gt;

&lt;p&gt;If you skip the Service Principal setup and just try to run Terraform in a pipeline with your personal credentials, it will fail. This step is not optional.&lt;/p&gt;
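&lt;p&gt;To make that flow concrete, here is a hedged sketch of the Terraform step (the service connection name is a placeholder; &lt;code&gt;addSpnToEnvironment&lt;/code&gt; exposes &lt;code&gt;$servicePrincipalId&lt;/code&gt;, &lt;code&gt;$servicePrincipalKey&lt;/code&gt;, and &lt;code&gt;$tenantId&lt;/code&gt;, which map onto the &lt;code&gt;ARM_*&lt;/code&gt; environment variables Terraform reads):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- task: AzureCLI@2
  inputs:
    azureSubscription: 'EpicBookServiceConnection'  # placeholder name
    scriptType: bash
    addSpnToEnvironment: true
    inlineScript: |
      export ARM_CLIENT_ID=$servicePrincipalId
      export ARM_CLIENT_SECRET=$servicePrincipalKey
      export ARM_TENANT_ID=$tenantId
      export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
      terraform init
      terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;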




&lt;h2&gt;
  
  
  SSH Keys and Secure Files
&lt;/h2&gt;

&lt;p&gt;The private SSH key for VM access was generated locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; rsa &lt;span class="nt"&gt;-b&lt;/span&gt; 4096 &lt;span class="nt"&gt;-f&lt;/span&gt; ~/.ssh/epicbook_key &lt;span class="nt"&gt;-N&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The public key went into the Terraform VM configuration so the VMs were provisioned with it already authorized. The private key was uploaded to Azure DevOps Secure Files as &lt;code&gt;epicbook_key&lt;/code&gt;. It never touched the repository at any point.&lt;/p&gt;

&lt;p&gt;This is an important security habit to build early. Credentials and private keys should never live in a repository, even a private one. Secure Files in Azure DevOps is exactly what it sounds like: a safe place to store sensitive files that your pipeline can access at runtime without exposing them anywhere else.&lt;/p&gt;

&lt;p&gt;In the App Pipeline, the key is downloaded and permissions are set like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DownloadSecureFile@1&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secureFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;epicbook_key'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSHKey&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;chmod 400 $(SSHKey.secureFilePath)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;chmod 400&lt;/code&gt; is required. SSH will refuse to use a private key that has loose permissions. If you skip this step, the Ansible connection to your VMs will fail with a permissions error.&lt;/p&gt;
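&lt;p&gt;With the key downloaded and locked down, the playbook run in the pipeline can be sketched like this (the inventory and playbook filenames are assumptions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# $(SSHKey.secureFilePath) is expanded by the pipeline before the shell runs
ansible-playbook -i inventory.ini site.yml \
  --private-key $(SSHKey.secureFilePath)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;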




&lt;h2&gt;
  
  
  The Ansible Playbook
&lt;/h2&gt;

&lt;p&gt;With the VMs provisioned and the keys in place, Ansible handled everything on the servers. Four roles were applied across two VMs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure Frontend VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;common&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;epicbook_frontend&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure Backend VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;common&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;epicbook_backend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;common&lt;/code&gt; role ran on both VMs and handled shared setup tasks like package updates and base configuration. The &lt;code&gt;nginx&lt;/code&gt; role installed and configured Nginx on the frontend only. The &lt;code&gt;epicbook_frontend&lt;/code&gt; and &lt;code&gt;epicbook_backend&lt;/code&gt; roles deployed the actual application code to each server.&lt;/p&gt;
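&lt;p&gt;As an illustration of what a role like &lt;code&gt;common&lt;/code&gt; might contain (this is a sketch, not the project's actual task file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# roles/common/tasks/main.yml (illustrative)
- name: Update apt cache and apply safe upgrades
  apt:
    update_cache: yes
    upgrade: safe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;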

&lt;p&gt;Here is the actual PLAY RECAP from the pipeline run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;20.124.125.115  : ok=9    changed=7    unreachable=0    failed=0
20.25.48.139    : ok=11   changed=9    unreachable=0    failed=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zero failures on both VMs. Both fully configured in a single pipeline run.&lt;/p&gt;

&lt;p&gt;Reading a clean Ansible recap with &lt;code&gt;failed=0&lt;/code&gt; after building all of this from scratch is genuinely satisfying.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Live Application
&lt;/h2&gt;

&lt;p&gt;After both pipelines completed, I opened the browser at &lt;code&gt;http://20.25.48.139&lt;/code&gt; and the EpicBook application was running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📚 EpicBook Application
Deployed by: Vivian Chiamaka Okose
Infrastructure: Terraform + Ansible + Azure DevOps
Server: Nginx on Ubuntu 22.04
Live on Azure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No manual SSH. No copying files by hand. No touching the server after the initial key setup. The pipelines handled everything from infrastructure creation to application deployment.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Would Do Differently
&lt;/h2&gt;

&lt;p&gt;Two things stand out as areas I would improve in a production setup:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform remote state.&lt;/strong&gt; For this project, I ran Terraform locally to get stable outputs, then manually copied the VM IPs into the Ansible inventory. That worked here, but it does not scale and it introduces a manual step into what should be a fully automated process. The proper setup uses Azure Blob Storage as a Terraform backend so the state persists between pipeline runs. The IP outputs would then flow automatically into the App Pipeline without any human intervention. This is something I plan to implement in the next project.&lt;/p&gt;
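&lt;p&gt;For reference, the backend block that enables remote state is small (the resource group, storage account, and container names below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "epicbooktfstate"
    container_name       = "tfstate"
    key                  = "epicbook.terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;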

&lt;p&gt;&lt;strong&gt;Agent download workaround.&lt;/strong&gt; When setting up the self-hosted agent in Project 1, the standard CDN domain for the Azure Pipelines agent was unreachable from my network. The fix was using the Azure DevOps API directly to get the exact download URL for the agent package. It was a bit of digging to figure out, but it is actually the more reliable approach and I think more onboarding guides should document it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Looking Back at the Full Series
&lt;/h2&gt;

&lt;p&gt;It is worth stepping back and seeing the whole picture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project 1&lt;/strong&gt; built the self-hosted agent that every subsequent pipeline ran on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project 2&lt;/strong&gt; deployed a static website to Nginx over SSH, which introduced the CopyFilesOverSSH task and the permissions fix for &lt;code&gt;/var/www/html&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project 3&lt;/strong&gt; built a four-stage React pipeline with build, test, publish, and deploy stages, and introduced pipeline artifacts and the &lt;code&gt;dependsOn&lt;/code&gt; pattern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project 4&lt;/strong&gt; tied all of it together with real infrastructure automation, a Service Principal, Secure Files, Ansible roles, and two pipelines working in coordination.&lt;/p&gt;

&lt;p&gt;Each project was harder than the one before it. Each one built directly on what came before. That layered progression is something I would recommend to anyone learning DevOps. You do not have to understand everything at once. You just have to understand the next piece well enough to build on it.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9okm6yw7q44foap4igy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9okm6yw7q44foap4igy.png" alt="1" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkct0332pagut66dv49w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkct0332pagut66dv49w.png" alt="1" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figtdhkg0yj5eykkctwqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figtdhkg0yj5eykkctwqe.png" alt="1" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd56ulroo93cthov0jg20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd56ulroo93cthov0jg20.png" alt="2" width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcvu69o3k7l0bba4jh7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcvu69o3k7l0bba4jh7q.png" alt="2" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0g04pssquuh4e4hmuvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0g04pssquuh4e4hmuvn.png" alt="3" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw4djl0h0peivobi0slj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw4djl0h0peivobi0slj.png" alt="4" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwr3uixlksnlp0fvyug3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwr3uixlksnlp0fvyug3.png" alt="5" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81ywqnvx3w8hqpifdw9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81ywqnvx3w8hqpifdw9z.png" alt="6" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5h0rg4glnplwhvfu3tyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5h0rg4glnplwhvfu3tyr.png" alt="7" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74wcc1ozjln3ynhrk8lo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74wcc1ozjln3ynhrk8lo.png" alt="7" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49wjzr9w0scra55ny5xk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49wjzr9w0scra55ny5xk.png" alt="8" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrxv8w3ip15u86x5c1y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrxv8w3ip15u86x5c1y1.png" alt="9" width="745" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33baa2a7jmnvb925wwxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33baa2a7jmnvb925wwxi.png" alt="10" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi851mgjxkdtblxcgb7tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi851mgjxkdtblxcgb7tl.png" alt="11" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi65eazvwc210jz1olnq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi65eazvwc210jz1olnq6.png" alt="11" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa380xz5g75kk9fsyi0x4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa380xz5g75kk9fsyi0x4.png" alt="12" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8w1elao8k2w3jrlrzqkh.png" alt="13" width="800" height="542"&gt;&lt;/p&gt;




&lt;p&gt;If any part of this was useful to you, share it with someone who is learning. It might save them a few hours.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Vivian Chiamaka Okose is a DevOps Engineer who documents her learning in public.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>cloudengineering</category>
      <category>azuredevops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Building a 4-Stage CI/CD Pipeline for a React App with Azure DevOps</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Fri, 10 Apr 2026 17:44:38 +0000</pubDate>
      <link>https://dev.to/vivian_okose/building-a-4-stage-cicd-pipeline-for-a-react-app-with-azure-devops-m23</link>
      <guid>https://dev.to/vivian_okose/building-a-4-stage-cicd-pipeline-for-a-react-app-with-azure-devops-m23</guid>
      <description>&lt;p&gt;There is a big difference between deploying static HTML files and deploying a React application. I learned that difference firsthand in this project, and it changed how I think about pipelines.&lt;/p&gt;

&lt;p&gt;This is Project 3 in my Azure DevOps series. In the previous project, I deployed a static finance website by copying HTML files directly to an Nginx server using SSH. Clean, simple, effective. But React is a different story. You cannot just copy the source files and call it done. The code has to be compiled first. And that compilation step is what makes a proper multi-stage pipeline not just useful, but necessary.&lt;/p&gt;

&lt;p&gt;Four stages. Build, Test, Publish, Deploy. Each one depends on the previous. If any stage fails, the rest do not run. That is CI/CD working exactly the way it is supposed to.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why React Cannot Be Deployed Like Static HTML
&lt;/h2&gt;

&lt;p&gt;This is worth understanding before we get into the pipeline itself.&lt;/p&gt;

&lt;p&gt;When you write a React app, you write in JSX. JSX is not something a browser can read directly. Before it can go live, it has to go through a build process that compiles everything into plain HTML, CSS, and JavaScript. Running &lt;code&gt;npm run build&lt;/code&gt; produces a &lt;code&gt;/build&lt;/code&gt; directory containing those optimized, browser-compatible files. That &lt;code&gt;/build&lt;/code&gt; folder is what actually gets deployed to the server, not the source code.&lt;/p&gt;

&lt;p&gt;So unlike Project 2 where I pushed the site files straight to Nginx, here the pipeline has to build the app first, verify the output, and only then copy it to the server. That extra step is what the first three stages of this pipeline are doing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Four Stages Explained
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stage 1: Build
&lt;/h3&gt;

&lt;p&gt;This is where everything starts. The pipeline installs Node.js 18, runs &lt;code&gt;npm install&lt;/code&gt; to pull in dependencies, then runs &lt;code&gt;npm run build&lt;/code&gt; to compile the React app. The resulting &lt;code&gt;/build&lt;/code&gt; directory is published as a pipeline artifact called &lt;code&gt;react_build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Publishing it as an artifact is important. It means the compiled output is saved and can be downloaded by later stages, even if those stages run on a different agent. You are not recompiling the app at every stage. You build once, save the result, and pass it forward.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage 2: Test
&lt;/h3&gt;

&lt;p&gt;After the build succeeds, the pipeline runs the unit tests. There is one flag here that is easy to miss and very important to include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm test -- --watchAll=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without &lt;code&gt;--watchAll=false&lt;/code&gt;, the test runner goes into watch mode. It sits there waiting for file changes that will never come because this is a CI environment, not a local dev machine. The pipeline just hangs. Adding this flag tells the test runner to run once, report the results, and exit. Always include this in any CI pipeline running React tests.&lt;/p&gt;
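&lt;p&gt;If the app is built with react-scripts, setting the &lt;code&gt;CI&lt;/code&gt; environment variable achieves the same thing, which some teams prefer because &lt;code&gt;CI=true&lt;/code&gt; also makes the build treat warnings as failures:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Explicit flag, as used in this pipeline
npm test -- --watchAll=false

# Equivalent for react-scripts: CI=true disables watch mode
CI=true npm test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;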

&lt;h3&gt;
  
  
  Stage 3: Publish
&lt;/h3&gt;

&lt;p&gt;This stage downloads the &lt;code&gt;react_build&lt;/code&gt; artifact and lists its contents. It might look like a minor step, but it is doing something important: acting as a verification gate. Before anything touches the live server, you confirm that the right files are actually in the artifact. It is a sanity check that catches issues early, before they reach production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage 4: Deploy
&lt;/h3&gt;

&lt;p&gt;The final stage downloads the artifact, copies the compiled React files to &lt;code&gt;/var/www/html&lt;/code&gt; on the Nginx VM over SSH, and then restarts Nginx to serve the updated content. The SSH service connection here (&lt;code&gt;ubuntu-nginx-ssh-react&lt;/code&gt;) points to the same VM provisioned in the earlier project using Terraform.&lt;/p&gt;
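&lt;p&gt;A hedged sketch of what that deploy step can look like with the built-in SSH tasks (the exact inputs may differ from the project's pipeline):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'ubuntu-nginx-ssh-react'
    sourceFolder: '$(Pipeline.Workspace)/react_build'
    targetFolder: '/var/www/html'

- task: SSH@0
  inputs:
    sshEndpoint: 'ubuntu-nginx-ssh-react'
    runOptions: 'commands'
    commands: 'sudo systemctl restart nginx'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;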




&lt;h2&gt;
  
  
  Something Important I Learned About Server Files
&lt;/h2&gt;

&lt;p&gt;Midway through this project, I thought about editing &lt;code&gt;index.html&lt;/code&gt; directly on the server to update the content being displayed. It seemed like the fastest way to make a small change.&lt;/p&gt;

&lt;p&gt;I am glad I paused and thought it through, because that would have been entirely wrong in a CI/CD setup.&lt;/p&gt;

&lt;p&gt;The next pipeline run would overwrite that change completely. The server is not the source of truth. The repository is. Any change that needs to go live should happen in the source code, in Azure Repos, through a commit. That commit triggers the pipeline, which rebuilds the app, runs the tests, and deploys a fresh version of the site. Your change goes through the full process before it ever hits the server.&lt;/p&gt;

&lt;p&gt;That is not just a best practice. It is the entire point of having a pipeline. CI/CD is designed to make the deployment process consistent and repeatable, and editing files directly on the server breaks that completely. Once you understand this, a lot of other DevOps concepts start to make more sense.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Full Pipeline YAML
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SelfHostedPool&lt;/span&gt;

&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BuildJob&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;self&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodeTool@0&lt;/span&gt;
            &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;versionSpec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;18.x'&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;npm install&lt;/span&gt;
              &lt;span class="s"&gt;npm run build&lt;/span&gt;
            &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build React App&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;publish&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
            &lt;span class="na"&gt;artifact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;react_build&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Test&lt;/span&gt;
    &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TestJob&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodeTool@0&lt;/span&gt;
            &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;versionSpec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;18.x'&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;npm install&lt;/span&gt;
              &lt;span class="s"&gt;npm test -- --watchAll=false&lt;/span&gt;
            &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Tests&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish&lt;/span&gt;
    &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Test&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PublishJob&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;download&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;current&lt;/span&gt;
            &lt;span class="na"&gt;artifact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;react_build&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ls $(Pipeline.Workspace)/react_build&lt;/span&gt;
            &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Verify Artifact Contents&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;
    &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DeployJob&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;download&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;current&lt;/span&gt;
            &lt;span class="na"&gt;artifact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;react_build&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CopyFilesOverSSH@0&lt;/span&gt;
            &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;sshEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-nginx-ssh-react'&lt;/span&gt;
              &lt;span class="na"&gt;sourceFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Pipeline.Workspace)/react_build'&lt;/span&gt;
              &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**'&lt;/span&gt;
              &lt;span class="na"&gt;targetFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/var/www/html'&lt;/span&gt;
              &lt;span class="na"&gt;cleanTargetFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSH@0&lt;/span&gt;
            &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;sshEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-nginx-ssh-react'&lt;/span&gt;
              &lt;span class="na"&gt;runOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inline'&lt;/span&gt;
              &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sudo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;systemctl&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;restart&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;nginx'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things worth noting in this YAML:&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;dependsOn&lt;/code&gt; keyword is what creates the sequential chain. Each stage explicitly depends on the previous one, so if the build fails, the test stage never starts. If tests fail, nothing gets deployed. This is intentional. You do not want broken code reaching your server.&lt;/p&gt;
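&lt;p&gt;Stripped of its tasks, the dependency chain reduces to this skeleton (stage names taken from the pipeline above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - stage: Build
  - stage: Test
    dependsOn: Build
  - stage: Publish
    dependsOn: Test
  - stage: Deploy
    dependsOn: Publish
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;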

&lt;p&gt;The &lt;code&gt;cleanTargetFolder: true&lt;/code&gt; setting wipes &lt;code&gt;/var/www/html&lt;/code&gt; before copying new files. This ensures you are always deploying a clean build with no leftover files from previous runs. If you ran into permission errors on this in an earlier project (I did), make sure the &lt;code&gt;azureuser&lt;/code&gt; account owns &lt;code&gt;/var/www/html&lt;/code&gt; before the pipeline runs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;p&gt;All four stages went green, and the React app was live on Nginx. The custom content I added to verify the deployment displayed correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Welcome to My React App (Finance APP)
Deployed by: Vivian Chiamaka Okose | Date: 08/04/2026
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Total pipeline time from commit to live: 2 minutes and 51 seconds.&lt;/p&gt;

&lt;p&gt;That is a fully compiled, tested, and deployed React application, triggered by a single push to main. No manual steps anywhere in the process.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;A few things I would want anyone following this to keep in mind:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Never edit files directly on the server in a CI/CD setup.&lt;/strong&gt; The pipeline will overwrite them on the next run. Your repo is the source of truth. Always.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Always use &lt;code&gt;--watchAll=false&lt;/code&gt; for React tests in CI.&lt;/strong&gt; Without it, your pipeline will hang indefinitely.&lt;/p&gt;
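&lt;p&gt;The non-interactive invocation from the Test stage, as a one-liner you can verify locally before committing. Setting &lt;code&gt;CI=true&lt;/code&gt; also disables watch mode in Create React App, but Azure DevOps does not set that variable for you, so the explicit flag is the safer choice:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Runs the test suite once and exits, instead of entering watch mode
npm test -- --watchAll=false

# Alternative: Create React App treats CI=true the same way
CI=true npm test
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;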

&lt;p&gt;&lt;strong&gt;Use pipeline artifacts to pass build output between stages.&lt;/strong&gt; Build once, share the result. Do not recompile at every stage.&lt;/p&gt;
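&lt;p&gt;In Azure Pipelines YAML this handoff is two shorthand steps, the same ones used in the pipeline above: &lt;code&gt;publish&lt;/code&gt; in the Build stage, &lt;code&gt;download&lt;/code&gt; in any later stage:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage: upload the compiled output once
- publish: build
  artifact: react_build

# Any later stage: fetch it into $(Pipeline.Workspace)/react_build
- download: current
  artifact: react_build
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;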

&lt;p&gt;&lt;strong&gt;The &lt;code&gt;dependsOn&lt;/code&gt; keyword is how you enforce order.&lt;/strong&gt; Stages without an explicit &lt;code&gt;dependsOn&lt;/code&gt; run sequentially in the order they are defined, but that implicit chain is fragile: give any stage &lt;code&gt;dependsOn: []&lt;/code&gt; and it runs in parallel with the first stage. Declaring each dependency explicitly makes the order deliberate, and it is what gates later stages on earlier failures.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa16lpa2wkmkbnlibv819.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa16lpa2wkmkbnlibv819.png" alt="1" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbagoykzdfjicvk3dteu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbagoykzdfjicvk3dteu.png" alt="2" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwv1644c0qlbzy712w5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwv1644c0qlbzy712w5k.png" alt="3" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpglmc7c1hkv9ufarzdas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpglmc7c1hkv9ufarzdas.png" alt="3" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h9mx14mn3avmqvjochn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h9mx14mn3avmqvjochn.png" alt="4" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug9wibxzbkz27bwmnysg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug9wibxzbkz27bwmnysg.png" alt="5" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hxgoxv31y430b2evwzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hxgoxv31y430b2evwzl.png" alt="6" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zfaw9zq7izewyvbvfyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zfaw9zq7izewyvbvfyp.png" alt="7" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t6hfxhvardq27o28or7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t6hfxhvardq27o28or7.png" alt="8" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;







&lt;p&gt;Three projects down in the Azure DevOps series. One more to go.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Vivian Chiamaka Okose is a DevOps Engineer building real pipelines and writing about what she learns.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>automation</category>
      <category>terraform</category>
    </item>
    <item>
      <title>How I Deployed a Static Website with Azure DevOps, Terraform, Ansible, and Nginx</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Fri, 10 Apr 2026 16:12:25 +0000</pubDate>
      <link>https://dev.to/vivian_okose/how-i-deployed-a-static-website-with-azure-devops-terraform-ansible-and-nginx-331c</link>
      <guid>https://dev.to/vivian_okose/how-i-deployed-a-static-website-with-azure-devops-terraform-ansible-and-nginx-331c</guid>
      <description>&lt;p&gt;There is something about pushing code and watching a website update itself on a live server that never gets old. No manual uploads. No FTP. No SSH copying. Just a commit, and the pipeline takes care of everything else.&lt;/p&gt;

&lt;p&gt;This is Project 2 in my Azure DevOps series. In this one, I deployed a static finance website to an Ubuntu VM running Nginx, fully automated with a CI/CD pipeline that triggers on every push to main. Terraform handled the infrastructure. Ansible handled the server configuration. Azure DevOps handled the deployment. And one permissions error tried very hard to ruin my day.&lt;/p&gt;

&lt;p&gt;Here is exactly how I built it, what broke, and how I fixed it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;Before we dive in, here is everything I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Azure DevOps&lt;/strong&gt; for the pipeline and code repository&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; to provision the Ubuntu VM on Azure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ansible&lt;/strong&gt; to install and configure Nginx on the server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Service Connection&lt;/strong&gt; in Azure DevOps for secure file transfer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YAML pipeline&lt;/strong&gt; to automate the full deployment flow&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Import the Repository into Azure Repos
&lt;/h2&gt;

&lt;p&gt;The first thing I did was import the finance website codebase into Azure Repos directly from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/pravinmishraaws/Azure-Static-Website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once imported, I confirmed that &lt;code&gt;index.html&lt;/code&gt; was visible in the repo. Simple step, but worth verifying before anything else.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Provision the VM with Terraform
&lt;/h2&gt;

&lt;p&gt;Next, I wrote a Terraform configuration to spin up the infrastructure on Azure. The config created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Resource Group&lt;/li&gt;
&lt;li&gt;A Virtual Network and Subnet&lt;/li&gt;
&lt;li&gt;A Network Security Group (NSG) with ports 22 and 80 open&lt;/li&gt;
&lt;li&gt;A static Public IP address&lt;/li&gt;
&lt;li&gt;An Ubuntu 22.04 LTS VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After running &lt;code&gt;terraform apply&lt;/code&gt;, I got the VM's public IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vm_public_ip = "20.124.184.86"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That IP is what the pipeline would later deploy to, and what the final site would be accessible from.&lt;/p&gt;

&lt;p&gt;One thing worth noting: always use a &lt;strong&gt;static&lt;/strong&gt; public IP when setting up a deployment target. If you use a dynamic IP, it changes every time the VM restarts and your pipeline will lose its connection.&lt;/p&gt;
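&lt;p&gt;With the azurerm Terraform provider, the difference is a single attribute on the public IP resource. This is a minimal sketch; the resource names here are illustrative, not from my actual config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_public_ip" "web" {
  name                = "web-public-ip"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location

  # "Static" keeps the address across VM restarts;
  # "Dynamic" would release it whenever the VM is deallocated
  allocation_method   = "Static"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;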




&lt;h2&gt;
  
  
  Step 3: Configure Nginx with Ansible
&lt;/h2&gt;

&lt;p&gt;With the VM up, I used Ansible to install and configure Nginx. The playbook was straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Nginx&lt;/span&gt;
  &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
    &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Nginx&lt;/span&gt;
  &lt;span class="na"&gt;systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But there is a step most tutorials skip, and it is the one that will save you a headache later. After installing Nginx, I immediately ran these two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; azureuser:azureuser /var/www/html
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 755 /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is why this matters: Nginx creates the &lt;code&gt;/var/www/html&lt;/code&gt; directory owned by &lt;strong&gt;root&lt;/strong&gt;. When the Azure DevOps pipeline tries to copy files into that folder, it runs as &lt;code&gt;azureuser&lt;/code&gt;, not root. That user does not have write access. The pipeline fails. I will come back to this in a moment because even with this step in Ansible, I still hit the error the first time around.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Create the SSH Service Connection in Azure DevOps
&lt;/h2&gt;

&lt;p&gt;This is the connection that allows the pipeline to communicate with the VM over SSH.&lt;/p&gt;

&lt;p&gt;In Azure DevOps, go to &lt;strong&gt;Project Settings&lt;/strong&gt;, then &lt;strong&gt;Service Connections&lt;/strong&gt;, then click &lt;strong&gt;New Service Connection&lt;/strong&gt;. Choose &lt;strong&gt;SSH&lt;/strong&gt; and fill in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host&lt;/strong&gt;: your VM's public IP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 22&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Username&lt;/strong&gt;: azureuser&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password&lt;/strong&gt; (or private key)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: &lt;code&gt;ubuntu-nginx-ssh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One thing that confused me at first: SSH service connections do not show a green verified badge the way other service connections do. That is completely normal. The only real test is whether the pipeline runs successfully. Do not let the missing badge throw you off.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Write the Pipeline
&lt;/h2&gt;

&lt;p&gt;With the service connection in place, I wrote the YAML pipeline. It has two tasks: copy the website files to the server, then verify the deployment ran correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SelfHostedPool&lt;/span&gt;

&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DeployJob&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;self&lt;/span&gt;

          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CopyFilesOverSSH@0&lt;/span&gt;
            &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;sshEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-nginx-ssh'&lt;/span&gt;
              &lt;span class="na"&gt;sourceFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Build.SourcesDirectory)'&lt;/span&gt;
              &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**'&lt;/span&gt;
              &lt;span class="na"&gt;targetFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/var/www/html'&lt;/span&gt;
              &lt;span class="na"&gt;cleanTargetFolder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SSH@0&lt;/span&gt;
            &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;sshEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-nginx-ssh'&lt;/span&gt;
              &lt;span class="na"&gt;runOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inline'&lt;/span&gt;
              &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ls&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;/var/www/html'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;cleanTargetFolder: true&lt;/code&gt; setting tells the pipeline to wipe the target directory before copying new files. This is what caused the permission error on the first run, because it tries to run &lt;code&gt;rm -rf&lt;/code&gt; on &lt;code&gt;/var/www/html&lt;/code&gt;, and if root still owns that folder, the pipeline user gets blocked.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Permission Error (And How to Fix It)
&lt;/h2&gt;

&lt;p&gt;The first pipeline run failed with this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm: cannot remove '/var/www/html/index.nginx-debian.html': Permission denied
Failed to clean the target folder. Command rm -rf '/var/www/html'/* exited with code 1.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even though I had added the &lt;code&gt;chown&lt;/code&gt; command to the Ansible playbook, the error still appeared on this run because the VM had been provisioned before I added that step. The fix was simple: SSH into the VM manually and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; azureuser:azureuser /var/www/html
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 755 /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I reran the pipeline. It went fully green.&lt;/p&gt;

&lt;p&gt;If you are following this setup from scratch and you run the Ansible playbook before the pipeline, you should not hit this error at all. But if you do, this is exactly what to check first.&lt;/p&gt;
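&lt;p&gt;A quick way to confirm the ownership fix took effect before rerunning the pipeline (run this on the VM; the owner should match the service connection user):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Show the owner and permissions of the deployment target
ls -ld /var/www/html

# Healthy output starts with something like:
# drwxr-xr-x ... azureuser azureuser ... /var/www/html
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;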




&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;p&gt;The site went live at &lt;code&gt;http://20.124.184.86&lt;/code&gt;, with the finance dashboard loading in the browser. The pipeline completed in 46 seconds from push to live.&lt;/p&gt;

&lt;p&gt;That moment of opening the browser and seeing the site running on my own VM, deployed by a pipeline I built myself, is one of those moments where the learning stops feeling abstract.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Took Away From This
&lt;/h2&gt;

&lt;p&gt;A few things I want to flag for anyone building something similar:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Permissions matter more than the pipeline.&lt;/strong&gt; You can write perfect YAML and still fail because the server will not allow the pipeline user to write to a folder. Always check file ownership on your deployment path before you wire anything up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Static IPs are not optional.&lt;/strong&gt; A dynamic IP will break your service connection silently. Fix the IP before you configure anything that references it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSH service connections without a verified badge are fine.&lt;/strong&gt; The badge not turning green does not mean the connection is broken. The pipeline run is the real test.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run your Ansible playbook before your first pipeline run.&lt;/strong&gt; The order matters. The server should be fully configured before the pipeline ever tries to touch it.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5noknzyxwww72ddmsc0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5noknzyxwww72ddmsc0d.png" alt="1" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1lfb0p8bahvj9e0c1o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1lfb0p8bahvj9e0c1o0.png" alt="1" width="800" height="579"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk2f6atr4vdt2tkqvva4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk2f6atr4vdt2tkqvva4.png" alt="2" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgv64m67bz6ylx1ikpy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgv64m67bz6ylx1ikpy2.png" alt="3" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferd11mqd6u3oxtkhlknu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferd11mqd6u3oxtkhlknu.png" alt="3" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9dwtatsix104tc4tx8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9dwtatsix104tc4tx8a.png" alt="4" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlx57ugng3tue21jui9a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlx57ugng3tue21jui9a.png" alt="5" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kdmhxinv10vmztk44ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kdmhxinv10vmztk44ei.png" alt="6" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hkbjbf46docawec9l05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hkbjbf46docawec9l05.png" alt="7" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstbnk2cpls4cd9nc74cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstbnk2cpls4cd9nc74cx.png" alt="8" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkciw2273hi847tv93ro8.png" alt="8" width="800" height="430"&gt;&lt;/p&gt;




&lt;p&gt;This is Project 2 of 4 in the Azure DevOps series. Two down, two more to go.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Vivian Chiamaka Okose is a DevOps Engineer documenting her hands-on learning journey through real projects.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>azure</category>
      <category>automation</category>
      <category>terraform</category>
    </item>
    <item>
      <title>How I Set Up a Self-Hosted Azure DevOps Agent on Ubuntu (And What I Learned the Hard Way)</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Fri, 10 Apr 2026 12:56:49 +0000</pubDate>
      <link>https://dev.to/vivian_okose/how-i-set-up-a-self-hosted-azure-devops-agent-on-ubuntu-and-what-i-learned-the-hard-way-i30</link>
      <guid>https://dev.to/vivian_okose/how-i-set-up-a-self-hosted-azure-devops-agent-on-ubuntu-and-what-i-learned-the-hard-way-i30</guid>
      <description>&lt;p&gt;When you are just starting out with Azure DevOps, the managed Microsoft-hosted agents seem like the easy choice. Clean environment, no setup, just works.&lt;/p&gt;

&lt;p&gt;Then you hit the parallelism limit on a free plan and everything queues.&lt;/p&gt;

&lt;p&gt;That is what sent me down the path of setting up my own self-hosted agent. Here is exactly what I built and what tripped me up along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Self-Hosted Agent Actually Is
&lt;/h2&gt;

&lt;p&gt;An agent is the thing that executes your pipeline jobs. When Azure DevOps runs a pipeline, it needs a machine to do the work.&lt;/p&gt;

&lt;p&gt;Microsoft-hosted agents are temporary VMs that spin up, do the job, and disappear. Convenient but limited.&lt;/p&gt;

&lt;p&gt;A self-hosted agent is one you run yourself, on infrastructure you control. It stays running between jobs, keeps its installed software, and has no parallelism limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu 22.04 VM on Azure (Standard_D2s_v3)&lt;/li&gt;
&lt;li&gt;A custom agent pool called SelfHostedPool&lt;/li&gt;
&lt;li&gt;Azure Pipelines agent v4.271.0 installed as a system service&lt;/li&gt;
&lt;li&gt;A test pipeline that confirmed everything was working&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create a Personal Access Token
&lt;/h2&gt;

&lt;p&gt;In Azure DevOps, click your profile picture at the top right, then go to Personal Access Tokens. Create a new one with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent Pools: Read and Manage&lt;/li&gt;
&lt;li&gt;Build: Read and Execute&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Copy it immediately. You will not see it again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create the Agent Pool
&lt;/h2&gt;

&lt;p&gt;Go to Organization Settings, then Pipelines, then Agent Pools. Click Add Pool.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Type: Self-hosted&lt;/li&gt;
&lt;li&gt;Name: SelfHostedPool&lt;/li&gt;
&lt;li&gt;Grant access to all pipelines: checked&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: Provision the VM
&lt;/h2&gt;

&lt;p&gt;I created a Standard_D2s_v3 Ubuntu 22.04 VM on Azure via the portal. The key thing is opening port 443 for outbound communication. The agent talks to Azure DevOps over HTTPS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Get the Agent Download URL
&lt;/h2&gt;

&lt;p&gt;This is where I ran into my first real obstacle. The standard download domain was not resolving. The fix was to get the download URL directly from the Azure DevOps API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"https://dev.azure.com/YOUR_ORG/_apis/distributedtask/packages/agent/linux-x64?top=1&amp;amp;api-version=3.0"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-u&lt;/span&gt; :YOUR_PAT | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"downloadUrl"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you the exact URL for the latest agent version registered for your organization. Use that URL to download.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Install and Configure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/azagent &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ~/azagent
curl &lt;span class="nt"&gt;-fSL&lt;/span&gt; &lt;span class="s2"&gt;"DOWNLOAD_URL_FROM_ABOVE"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; vsts-agent-linux-x64-4.271.0.tar.gz
&lt;span class="nb"&gt;tar &lt;/span&gt;zxvf vsts-agent-linux-x64-4.271.0.tar.gz
./config.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When prompted, enter your organization URL, PAT, and pool name. Then start it as a service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./svc.sh &lt;span class="nb"&gt;install
sudo&lt;/span&gt; ./svc.sh start
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./svc.sh status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Active and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Verify in Azure DevOps
&lt;/h2&gt;

&lt;p&gt;Go to Organization Settings, then Agent Pools, then SelfHostedPool. Your agent should appear as Online.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Test Pipeline
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;

&lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SelfHostedPool&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uname -a&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;System Info&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;whoami&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Current User&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;df -h&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Disk Usage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it. Watch all three steps execute on your VM. That green checkmark means your pipeline is running on infrastructure you built.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Running as a system service is not optional for real use.&lt;/strong&gt; If you run the agent interactively with &lt;code&gt;./run.sh&lt;/code&gt;, it stops the moment your SSH session ends. Install it as a service so it survives disconnects and restarts with the VM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Port 443 must be open outbound, not inbound.&lt;/strong&gt; The agent calls Azure DevOps, not the other way around.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The API download method is more reliable than guessing version numbers.&lt;/strong&gt; Always use it.&lt;/p&gt;

&lt;p&gt;This agent pool powered every pipeline in my Azure DevOps series. Projects 2, 3, and 4 all ran on it. Building it first was the right call.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Vivian Chiamaka Okose is a DevOps Engineer documenting her learning journey in public.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz05htgo511edro3p8yew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz05htgo511edro3p8yew.png" alt="1" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh5u265qu1f0yk0pitu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh5u265qu1f0yk0pitu3.png" alt="2" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhw6id58xg4emukl11dcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhw6id58xg4emukl11dcf.png" alt="3" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj2dbf0n5f7zl09a6bd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj2dbf0n5f7zl09a6bd1.png" alt="4" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7gx7u2dhke7szlxoy02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7gx7u2dhke7szlxoy02.png" alt="5" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7hawjsfed3rq1p77fmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7hawjsfed3rq1p77fmz.png" alt="6" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0sj88fziieo9xqgovke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0sj88fziieo9xqgovke.png" alt="7" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sratl0eblkb3aw1s2h2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sratl0eblkb3aw1s2h2.png" alt="8" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiv3rqmau78h0odvgwd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiv3rqmau78h0odvgwd3.png" alt="9" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngygv9o6m8qlkcxssltv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngygv9o6m8qlkcxssltv.png" alt="10" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkol2d8rktb8jua8gdac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkol2d8rktb8jua8gdac.png" alt="11" width="800" height="101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak4ftbh7vpzql2ny48xw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak4ftbh7vpzql2ny48xw.png" alt="12" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hh02dp2u6ssr45nns6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hh02dp2u6ssr45nns6b.png" alt="13" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3dvsy7bg7ovhhvc1j4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3dvsy7bg7ovhhvc1j4u.png" alt="14" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbwvhpaf2l8ebj9ztjft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbwvhpaf2l8ebj9ztjft.png" alt="15" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn0dzl2sjsbiejvekgu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn0dzl2sjsbiejvekgu3.png" alt="16" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvww6llveruzxprlxlk71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvww6llveruzxprlxlk71.png" alt="17" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb0myu7ejisgq7t4v1qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb0myu7ejisgq7t4v1qw.png" alt="18" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>azure</category>
      <category>cicd</category>
      <category>linux</category>
    </item>
    <item>
      <title>From Zero to Production: Deploying Applications on Azure with Ansible and Terraform</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Wed, 01 Apr 2026 07:32:43 +0000</pubDate>
      <link>https://dev.to/vivian_okose/from-zero-to-production-deploying-applications-on-azure-with-ansible-and-terraform-1l61</link>
      <guid>https://dev.to/vivian_okose/from-zero-to-production-deploying-applications-on-azure-with-ansible-and-terraform-1l61</guid>
      <description>&lt;p&gt;There's a point in every DevOps engineer's journey where things start to click. This week was that moment for me. Five assignments, one Azure subscription, and a whole lot of terminal output later — I now understand why teams reach for Ansible the moment they need to configure more than one server.&lt;/p&gt;

&lt;p&gt;This post walks through everything I built this week: setting up a production-ready Ansible workstation, automating a fleet of 4 Azure VMs with ad-hoc commands, deploying a static website with a multi-play playbook, and finally deploying two applications using Terraform + Ansible together — including a production-grade role-based setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assignment 1: Building a Production-Ready Ansible Workstation
&lt;/h2&gt;

&lt;p&gt;Before touching a single server, real teams standardise their local environment. That means isolated dependencies, consistent editor settings, and automated quality checks that run before every commit.&lt;/p&gt;

&lt;p&gt;The first thing I did was create an isolated Python virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv .venv &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate
pip &lt;span class="nb"&gt;install &lt;/span&gt;ansible ansible-lint yamllint pre-commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why a venv? Because installing Ansible globally with &lt;code&gt;sudo pip&lt;/code&gt; is a trap. Different projects need different Ansible versions, and a global install means one project can silently break another. The venv keeps everything contained and reproducible — anyone can clone the repo, run &lt;code&gt;pip install -r requirements.txt&lt;/code&gt;, and get the exact same setup.&lt;/p&gt;
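&lt;p&gt;For reference, a pinned &lt;code&gt;requirements.txt&lt;/code&gt; for this kind of setup might look like the following (the exact versions here are illustrative, not the ones from my repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible==9.4.0
ansible-lint==24.2.0
yamllint==1.35.1
pre-commit==3.6.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;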

&lt;p&gt;I then configured VS Code with the Red Hat Ansible extension, set up an &lt;code&gt;ansible.cfg&lt;/code&gt; with team-standard defaults, generated an ED25519 SSH key, and wired up pre-commit hooks to run yamllint automatically before every commit. From that point on, badly formatted YAML couldn't even make it into the repo.&lt;/p&gt;
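&lt;p&gt;The pre-commit wiring itself is only a few lines of YAML. A minimal &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; that runs yamllint on every commit looks something like this (the &lt;code&gt;rev&lt;/code&gt; pin is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;repos:
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1
    hooks:
      - id: yamllint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After &lt;code&gt;pre-commit install&lt;/code&gt;, the hook runs automatically on every &lt;code&gt;git commit&lt;/code&gt;.&lt;/p&gt;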

&lt;h2&gt;
  
  
  Assignment 2: Fleet Automation with Ad-Hoc Commands
&lt;/h2&gt;

&lt;p&gt;With the workstation ready, it was time to actually talk to some servers. I provisioned 4 Azure Ubuntu VMs with Terraform — all in a single &lt;code&gt;main.tf&lt;/code&gt; using a count loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"vm_roles"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"web1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"web2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"app1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"db1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each VM got its own public IP, network interface, and NSG association. The SSH key was injected at provisioning time via the &lt;code&gt;admin_ssh_key&lt;/code&gt; block — no passwords, ever.&lt;/p&gt;
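&lt;p&gt;The VM resource consumed that variable with a count loop. Roughly, as a sketch rather than my exact config (the username and key path are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;resource "azurerm_linux_virtual_machine" "vm" {
  count          = length(var.vm_roles)
  name           = "vm-${var.vm_roles[count.index]}"
  admin_username = "azureuser"

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_ed25519.pub")
  }

  # size, os_disk, source_image_reference and
  # network_interface_ids omitted for brevity
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;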

&lt;p&gt;After provisioning, I created a custom inventory with proper groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[web]&lt;/span&gt;
&lt;span class="err"&gt;40.85.254.41&lt;/span&gt;
&lt;span class="err"&gt;20.104.112.32&lt;/span&gt;

&lt;span class="nn"&gt;[app]&lt;/span&gt;
&lt;span class="err"&gt;20.48.180.237&lt;/span&gt;

&lt;span class="nn"&gt;[db]&lt;/span&gt;
&lt;span class="err"&gt;20.48.183.157&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then came the ad-hoc commands. This is where Ansible really shines for quick fleet operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Ping all hosts&lt;/span&gt;
ansible all &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.ini &lt;span class="nt"&gt;-m&lt;/span&gt; ping

&lt;span class="c"&gt;# Check uptime across the fleet&lt;/span&gt;
ansible all &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.ini &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"uptime"&lt;/span&gt;

&lt;span class="c"&gt;# Install nginx on web servers only&lt;/span&gt;
ansible web &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.ini &lt;span class="nt"&gt;-m&lt;/span&gt; apt &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"update_cache=yes name=nginx state=present"&lt;/span&gt; &lt;span class="nt"&gt;--become&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--become&lt;/code&gt; flag is how Ansible escalates to root for privileged operations. Without it, package installs would fail with permission errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assignment 3: Multi-Play Playbook for Web Deployment
&lt;/h2&gt;

&lt;p&gt;Ad-hoc commands are great for quick tasks, but anything repeatable belongs in a playbook. This assignment introduced multi-play structure — separating install, deploy, and verify into distinct plays.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install and Configure Web Server&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install nginx&lt;/span&gt;
      &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy Static Website Content&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;handlers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reload nginx&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reloaded&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy index.html&lt;/span&gt;
      &lt;span class="na"&gt;copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;files/index.html&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/www/html/index.html&lt;/span&gt;
        &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;www-data&lt;/span&gt;
        &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0644'&lt;/span&gt;
      &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reload nginx&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Verify Deployment&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
  &lt;span class="na"&gt;connection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check HTTP &lt;/span&gt;&lt;span class="m"&gt;200&lt;/span&gt;
      &lt;span class="na"&gt;uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;web_ip&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;status_code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why split into three plays? Because each play has a different responsibility. Play 1 handles infrastructure-level concerns (is the web server installed?). Play 2 handles application concerns (is the right content deployed?). Play 3 handles verification from the outside, the way a user would actually experience it.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;copy&lt;/code&gt; module pushes files from the controller to the remote hosts. The file lives on your machine, and Ansible handles the transfer; no Git clones are needed on the target servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assignment 4: Mini Finance Site with Terraform + Ansible
&lt;/h2&gt;

&lt;p&gt;This assignment introduced the clean separation that production teams live by: Terraform provisions infrastructure, Ansible configures it.&lt;/p&gt;

&lt;p&gt;Terraform created the full Azure stack — resource group, VNet, subnet, NSG with ports 22 and 80, public IP, and a single Ubuntu VM. The output gave me the IP address which fed directly into the Ansible inventory.&lt;/p&gt;
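&lt;p&gt;That handoff is just a Terraform output feeding a shell command. Something along these lines, with the output name as a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;output "public_ip" {
  value = azurerm_public_ip.web.ip_address
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;terraform output -raw public_ip&lt;/code&gt; then gives you the bare address to drop into &lt;code&gt;inventory.ini&lt;/code&gt;.&lt;/p&gt;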

&lt;p&gt;One thing I learned here — the git module in Ansible keeps the SSH connection open during the clone, which can time out on slow connections. The workaround is using the shell module with a shallow clone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Clone repo&lt;/span&gt;
  &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git clone --depth 1 https://github.com/repo /var/www/html&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;creates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/www/html/index.html&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;creates&lt;/code&gt; argument makes this idempotent — if the file already exists, the task is skipped.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assignment 5: Production-Grade EpicBook with Ansible Roles
&lt;/h2&gt;

&lt;p&gt;This was the most complex assignment — and the most realistic. Instead of tasks in a single playbook, everything was organised into roles:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible/
├── roles/
│   ├── common/       # system updates, baseline packages, SSH hardening
│   ├── nginx/        # install, Jinja2 config template, site management
│   └── epicbook/     # app directory, repo clone, ownership, reload handler
├── group_vars/
│   └── web.yml       # shared variables across roles
└── site.yml          # role orchestration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;site.yml&lt;/code&gt; becomes beautifully simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prepare system&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;common&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Nginx&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy EpicBook&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;epicbook&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The nginx role used a Jinja2 template for the server block — meaning the document root path comes from a variable, not hardcoded into the config. Change the variable, re-run the playbook, and the config updates automatically.&lt;/p&gt;
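
&lt;p&gt;As a rough sketch of what that template can look like (the &lt;code&gt;server_name&lt;/code&gt; and &lt;code&gt;doc_root&lt;/code&gt; variable names here are illustrative, not taken from the repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# roles/nginx/templates/site.conf.j2
server {
    listen 80;
    server_name {{ server_name }};
    root {{ doc_root }};
    index index.html;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ansible's &lt;code&gt;template&lt;/code&gt; module renders the file and only reports &lt;code&gt;changed&lt;/code&gt; when the rendered output differs from what is already on disk, which keeps re-runs idempotent.&lt;/p&gt;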

&lt;p&gt;The real proof of quality was the idempotency check. Running the playbook a second time returned mostly &lt;code&gt;ok&lt;/code&gt; with zero failures and one intentional &lt;code&gt;skipped&lt;/code&gt; for the clone task. That's the standard. A playbook that changes things on every run is a liability.&lt;/p&gt;
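
&lt;p&gt;A clean second run ends with a recap along these lines (host name and task counts are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PLAY RECAP *********************************************************
web-vm : ok=12  changed=0  unreachable=0  failed=0  skipped=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;changed=0&lt;/code&gt; is the number to watch: anything else means some task is not truly idempotent.&lt;/p&gt;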

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Ansible's strength isn't just automation — it's automation you can reason about. Each task either changes something or it doesn't, and you can see exactly which at a glance. Roles take that further by making your automation reusable across projects. The common role I wrote this week could drop into any future project and just work.&lt;/p&gt;

&lt;p&gt;The Terraform + Ansible combination is genuinely powerful. Terraform gives you consistent, reproducible infrastructure. Ansible gives you consistent, reproducible configuration. Together they cover the full lifecycle from "cloud resources don't exist" to "application is running and verified."&lt;/p&gt;

&lt;p&gt;All code is available on GitHub: &lt;a href="https://github.com/vivianokose/ansible-devops-week12" rel="noopener noreferrer"&gt;ansible-devops-week12&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building Production-Grade Infrastructure on Azure with Terraform: A Complete Three-Tier Architecture</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Sat, 21 Mar 2026 07:15:37 +0000</pubDate>
      <link>https://dev.to/vivian_okose/building-production-grade-infrastructure-on-azure-with-terraform-a-complete-three-tier-4ie0</link>
      <guid>https://dev.to/vivian_okose/building-production-grade-infrastructure-on-azure-with-terraform-a-complete-three-tier-4ie0</guid>
      <description>&lt;h2&gt;
  
  
  Building Production-Grade Infrastructure on Azure with Terraform: A Complete Three-Tier Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;By Vivian Chiamaka Okose&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Tags: #terraform #azure #devops #threetier #applicationgateway #mysql #nextjs #nodejs #iac #cloud #security&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;This is the project where everything clicked.&lt;/p&gt;

&lt;p&gt;Not because it went smoothly -- it absolutely did not. A VM got compromised by a cryptominer midway through. Regional capacity restrictions blocked MySQL provisioning twice. An Application Gateway TLS policy error required a specific fix. An IP address changed mid-deployment and locked me out of my own server.&lt;/p&gt;

&lt;p&gt;But every one of those problems had a solution. And working through each one taught me something that no tutorial could replicate.&lt;/p&gt;

&lt;p&gt;This is the story of Assignment 5: deploying the Book Review App on Azure using a production-grade three-tier Terraform architecture.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;This project uses a proper three-tier design with strict security boundaries between each layer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Layer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One VNet with CIDR &lt;code&gt;10.0.0.0/16&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Seven subnets across the address space: two for the web tier, two for the app tier, two for the database tier, plus a dedicated Application Gateway subnet&lt;/li&gt;
&lt;li&gt;NSGs per tier with explicit inbound rules&lt;/li&gt;
&lt;li&gt;Private DNS Zone for MySQL VNet integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Compute Layer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web Tier VM running Next.js frontend via Nginx reverse proxy&lt;/li&gt;
&lt;li&gt;App Tier VM running Node.js backend on port 5000 via PM2&lt;/li&gt;
&lt;li&gt;Public Application Gateway fronting the web tier&lt;/li&gt;
&lt;li&gt;Internal Load Balancer fronting the app tier&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Database Layer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure MySQL Flexible Server with private VNet integration&lt;/li&gt;
&lt;li&gt;Not publicly accessible -- reachable only through the VNet&lt;/li&gt;
&lt;li&gt;Delegated subnets for MySQL service&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Why Multiple Terraform Files Matter
&lt;/h2&gt;

&lt;p&gt;Previous projects used a single &lt;code&gt;main.tf&lt;/code&gt;. This project used seven:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform-bookreview-azure/
  main.tf           # Provider and resource group
  networking.tf     # VNet, subnets, NSGs, DNS
  compute.tf        # VMs, NICs, public IPs
  loadbalancer.tf   # App Gateway and Internal LB
  database.tf       # MySQL Flexible Server
  variables.tf      # All configurable values
  outputs.tf        # Endpoints and IPs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not organisational preference -- it is a practical necessity at this scale. When the Application Gateway takes 9 minutes to provision and the MySQL server takes 7 minutes, you need to be able to read the plan output and understand which file to look at when something fails. A 500-line &lt;code&gt;main.tf&lt;/code&gt; becomes unreadable. Separated files make the dependency chain visible.&lt;/p&gt;

&lt;p&gt;The variables file also reduces a security risk. Sensitive values like database passwords are defined once with &lt;code&gt;sensitive = true&lt;/code&gt;, which redacts them in plan output and logs (they are still written to the state file, so the state itself must be protected).&lt;/p&gt;
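
&lt;p&gt;A minimal sketch of such a variable (the name is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "db_admin_password" {
  description = "MySQL admin password"
  type        = string
  sensitive   = true   # shown as (sensitive value) in plan output
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;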




&lt;h2&gt;
  
  
  The NSG Design
&lt;/h2&gt;

&lt;p&gt;Each tier has its own Network Security Group with rules specific to its role.&lt;/p&gt;

&lt;p&gt;Web tier NSG allows HTTP (80) and HTTPS (443), and restricts SSH (22) to a specific IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;security_rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-SSH"&lt;/span&gt;
  &lt;span class="nx"&gt;priority&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;120&lt;/span&gt;
  &lt;span class="nx"&gt;direction&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Inbound"&lt;/span&gt;
  &lt;span class="nx"&gt;access&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Tcp"&lt;/span&gt;
  &lt;span class="nx"&gt;source_port_range&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
  &lt;span class="nx"&gt;destination_port_range&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"22"&lt;/span&gt;
  &lt;span class="nx"&gt;source_address_prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my_ip&lt;/span&gt;
  &lt;span class="nx"&gt;destination_address_prefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;App tier NSG allows the backend port (5000) only from the web subnet CIDR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;security_rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-App-Port"&lt;/span&gt;
  &lt;span class="nx"&gt;priority&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
  &lt;span class="nx"&gt;direction&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Inbound"&lt;/span&gt;
  &lt;span class="nx"&gt;access&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Tcp"&lt;/span&gt;
  &lt;span class="nx"&gt;source_port_range&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
  &lt;span class="nx"&gt;destination_port_range&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5000"&lt;/span&gt;
  &lt;span class="nx"&gt;source_address_prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
  &lt;span class="nx"&gt;destination_address_prefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The database tier has no NSG rule for public access at all. MySQL is accessible only through the VNet integration. There is no port 3306 open to any external IP.&lt;/p&gt;
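
&lt;p&gt;The subnet delegation is what makes this work. A sketch of the delegated subnet (resource names and address prefix illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_subnet" "db" {
  name                 = "db-subnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.5.0/24"]

  delegation {
    name = "mysql"
    service_delegation {
      name = "Microsoft.DBforMySQL/flexibleServers"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Delegating the subnet hands it over to the MySQL service: only MySQL Flexible Servers can be placed there, and the server never receives a public endpoint.&lt;/p&gt;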




&lt;h2&gt;
  
  
  The Errors That Taught Me the Most
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Error 1: MySQL Provisioning Disabled in UK South
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;Status:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ProvisioningDisabled"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Message:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Provisioning is restricted in this region."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Azure restricts MySQL Flexible Server provisioning in certain regions for free tier subscriptions. The fix was switching to West Europe. This is the kind of constraint you only discover by trying -- no documentation lists it clearly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error 2: Application Gateway Deprecated TLS Policy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ApplicationGatewayDeprecatedTlsVersionUsedInSslPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Azure now requires an explicit modern TLS policy on Application Gateways. Adding this block resolved it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;ssl_policy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;policy_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Predefined"&lt;/span&gt;
  &lt;span class="nx"&gt;policy_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AppGwSslPolicy20220101"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Error 3: Zone Not Available
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;Status:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ZoneNotAvailableForRegion"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specifying &lt;code&gt;zone = "1"&lt;/code&gt; for the MySQL server failed because that zone was not available. Removing the zone specification entirely let Azure pick an available zone automatically.&lt;/p&gt;
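
&lt;p&gt;In Terraform terms the fix was deleting a single argument (sketch; other attributes of the &lt;code&gt;azurerm_mysql_flexible_server&lt;/code&gt; resource omitted):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_mysql_flexible_server" "db" {
  # zone = "1"   # removed: let Azure pick an available zone
  sku_name = "B_Standard_B1ms"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;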

&lt;h3&gt;
  
  
  Error 4: Read Replica Not Supported on Burstable Tier
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ReplicationNotSupportedForBurstableEdition
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;B_Standard_B1ms&lt;/code&gt; burstable tier does not support read replicas. This requires the General Purpose tier which costs significantly more. For a learning environment, the replica was removed. In production, you would use &lt;code&gt;GP_Standard_D2ds_v4&lt;/code&gt; or similar.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Security Incident
&lt;/h2&gt;

&lt;p&gt;Midway through the first deployment, the PM2 logs showed something alarming:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xmrig-6.21.0/xmrig &lt;span class="nt"&gt;-o&lt;/span&gt; pool.supportxmr.com:443
scanner_linux &lt;span class="nt"&gt;-t&lt;/span&gt; 1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The VM had been compromised by a cryptominer within hours of deployment. The attack vector was password authentication on port 22 open to &lt;code&gt;0.0.0.0/0&lt;/code&gt;. Automated bots scan the entire internet for open SSH ports and attempt common passwords. The password &lt;code&gt;BookReview@1234!&lt;/code&gt; was not weak by typical standards, but brute force tools eventually found it.&lt;/p&gt;

&lt;p&gt;The response was immediate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Exit the compromised VM&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform destroy&lt;/code&gt; to wipe all infrastructure&lt;/li&gt;
&lt;li&gt;Rebuild with SSH key authentication and IP-restricted NSG rules&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The rebuilt configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_linux_virtual_machine"&lt;/span&gt; &lt;span class="s2"&gt;"web_vm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;disable_password_authentication&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;admin_ssh_key&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;username&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;admin_username&lt;/span&gt;
    &lt;span class="nx"&gt;public_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;admin_ssh_public_key&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the NSG SSH rule restricted to a specific IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;source_address_prefix&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my_ip&lt;/span&gt;  &lt;span class="c1"&gt;# "102.90.126.10/32"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SSH keys are computationally infeasible to brute force. A 4096-bit RSA key has more possible values than there are atoms in the observable universe. Combined with the IP restriction, the practical attack surface drops to essentially zero.&lt;/p&gt;

&lt;p&gt;This was not a lab exercise in security. It was a real attack on a real VM that I responded to in real time. That experience is worth more than any security tutorial.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Application Deployment
&lt;/h2&gt;

&lt;p&gt;After the secure rebuild, the deployment went cleanly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend startup confirmation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Database 'book_review_db' connected successfully with SSL!
✅ Database schema updated successfully!
👤 Sample users added!
📚 Sample books added!
✍️ Sample reviews added!
🚀 Server running on port 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Nginx configuration proxying port 80 to Next.js on 3000 and API calls to backend on 5000:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/api/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:5000/api/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;"upgrade"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The app loaded with books, reviews, user registration and login all working end to end.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Now Understand About Production Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tier isolation is not optional.&lt;/strong&gt; Each layer having its own NSG with explicit inbound rules means a compromise at the web tier cannot automatically reach the database. Defence in depth is built into the network topology itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform file organisation is architecture documentation.&lt;/strong&gt; When someone new joins a team, &lt;code&gt;networking.tf&lt;/code&gt; tells them the network design. &lt;code&gt;database.tf&lt;/code&gt; tells them the data tier. The files are the architecture diagram expressed in executable code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IP addresses change.&lt;/strong&gt; Dynamic IPs on home connections mean NSG rules need updating whenever your IP changes. In production this is handled with VPN gateways or jump boxes. In a learning environment, it means keeping &lt;code&gt;terraform apply&lt;/code&gt; fast and knowing how to update a single variable without destroying everything.&lt;/p&gt;
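
&lt;p&gt;Updating the allow-listed IP is a one-variable change, not a rebuild. Assuming the &lt;code&gt;my_ip&lt;/code&gt; variable used in the NSG rule (the address below is a documentation example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Re-apply with the new address; only the NSG rule changes
terraform apply -var 'my_ip=203.0.113.7/32'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The plan should show one resource changed and nothing destroyed -- the NSG rule is updated in place.&lt;/p&gt;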

&lt;p&gt;&lt;strong&gt;The rebuild is not failure.&lt;/strong&gt; Destroying compromised infrastructure and rebuilding from scratch with security fixes applied is exactly what the immutable infrastructure pattern is designed for. Terraform made that rebuild take 25 minutes instead of days.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Next
&lt;/h2&gt;

&lt;p&gt;This was the fifth and final project in my Terraform series. The full series covered Azure VMs, AWS EC2 with custom VPC, React app deployment, full-stack EpicBook with RDS, and this production three-tier architecture.&lt;/p&gt;

&lt;p&gt;Each project built on the last. The networking patterns from Assignment 2 informed the VPC design in Assignment 4. The Nginx SPA configuration from Assignment 3 carried directly into Assignment 5. The RDS security group design from Assignment 4 shaped the MySQL VNet integration approach here.&lt;/p&gt;

&lt;p&gt;That is what a curriculum looks like when it is designed properly. And that is what I am taking forward into the next chapter of this DevOps journey.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I document cloud infrastructure projects in public. Follow along for Terraform, AWS, Azure, and real-world DevOps engineering.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitHub: &lt;a href="https://github.com/vivianokose" rel="noopener noreferrer"&gt;vivianokose&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building a Three-Tier Bookstore App on AWS from Scratch: Infrastructure, Deployment, and Every Debug Along the Way</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Fri, 20 Mar 2026 17:28:00 +0000</pubDate>
      <link>https://dev.to/vivian_okose/building-a-three-tier-bookstore-app-on-aws-from-scratch-infrastructure-deployment-and-every-41en</link>
      <guid>https://dev.to/vivian_okose/building-a-three-tier-bookstore-app-on-aws-from-scratch-infrastructure-deployment-and-every-41en</guid>
      <description>&lt;h2&gt;
  
  
  Building a Three-Tier Bookstore App on AWS from Scratch: Infrastructure, Deployment, and Every Debug Along the Way
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;By Vivian Chiamaka Okose&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Tags: #aws #terraform #rds #nodejs #devops #threetier #mysql #nginx #cloud #iac&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;I recently completed a project that I would describe as the most satisfying and the most frustrating thing I have built so far in my DevOps journey -- in equal measure.&lt;/p&gt;

&lt;p&gt;The goal was to deploy The EpicBook, a full-stack bookstore application, on AWS using Terraform. Not just spin up a VM and call it done. A proper three-tier architecture: network layer, compute layer, and a managed database layer, all defined as infrastructure as code, all connected through deliberate security boundaries.&lt;/p&gt;

&lt;p&gt;This is the story of how I built it, what broke along the way, and what I now understand about cloud architecture that I did not before.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Before writing a single line of code, I mapped out what needed to exist:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network tier:&lt;/strong&gt; A custom VPC with a public subnet for the EC2 instance and a private subnet for the RDS database. Internet Gateway, Route Table, and Route Table Association for public internet access. Two security groups -- one for EC2 and one for RDS -- with a deliberate trust boundary between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compute tier:&lt;/strong&gt; An EC2 t3.micro instance running Ubuntu 22.04 in the public subnet. Node.js 18 via nvm, Nginx as a reverse proxy, and the EpicBook Node.js application running on port 8080.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database tier:&lt;/strong&gt; Amazon RDS MySQL 8.0 in the private subnet. Not publicly accessible. Reachable only from the EC2 security group on port 3306.&lt;/p&gt;

&lt;p&gt;Eleven Terraform resources total. One configuration, three files.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why Three Files Instead of One?
&lt;/h2&gt;

&lt;p&gt;For the first three projects in this series, I kept everything in a single &lt;code&gt;main.tf&lt;/code&gt;. That works fine for small deployments. For this project, I split the configuration into three files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt; for all configurable values (region, CIDR blocks, instance types, credentials)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main.tf&lt;/code&gt; for all resource definitions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; for what gets printed after deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reason this matters: &lt;code&gt;variables.tf&lt;/code&gt; eliminates hardcoded values scattered across resources. If I need to change the AWS region or the database password, I change one line in &lt;code&gt;variables.tf&lt;/code&gt; and it propagates everywhere automatically. The &lt;code&gt;sensitive = true&lt;/code&gt; flag on the database password means Terraform redacts it in plan output and logs (it is still stored in state, so the state file must be protected):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"db_password"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Master password for RDS"&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"EpicBook123!"&lt;/span&gt;
  &lt;span class="nx"&gt;sensitive&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not just organisation. It is the foundation of maintainable infrastructure code.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Security Design That Matters Most
&lt;/h2&gt;

&lt;p&gt;This is the part of the project I am most proud of from an engineering perspective.&lt;/p&gt;

&lt;p&gt;The RDS security group does not allow access from a CIDR block. It allows access specifically from the EC2 security group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"rds_sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"epicbook-rds-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow MySQL from EC2 only"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;epicbook_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MySQL from EC2"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3306&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3306&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;security_groups&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ec2_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The difference is significant. A CIDR-based rule allows any resource in that IP range to reach the database. A security group reference allows only resources that belong to that specific security group. Even if another VM appeared in the same subnet with the same IP range, it could not reach the database unless it was explicitly assigned the EC2 security group.&lt;/p&gt;

&lt;p&gt;This is zero-trust networking applied at the infrastructure layer. The database is unreachable from the internet, unreachable from other VPC resources, and reachable only from the specific compute layer we defined.&lt;/p&gt;




&lt;h2&gt;
  
  
  Deployment: What Went Smoothly
&lt;/h2&gt;

&lt;p&gt;The Terraform apply ran cleanly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;Apply&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;Resources&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;changed&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;destroyed&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;

&lt;span class="nx"&gt;Outputs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
&lt;span class="nx"&gt;ec2_public_ip&lt;/span&gt;  &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"16.28.105.178"&lt;/span&gt;
&lt;span class="nx"&gt;rds_endpoint&lt;/span&gt;   &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com:3306"&lt;/span&gt;
&lt;span class="nx"&gt;rds_port&lt;/span&gt;       &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3306&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The RDS instance took about 5 minutes to provision, which is normal. AWS is provisioning a managed database with storage, parameter groups, and subnet associations. That takes time.&lt;/p&gt;

&lt;p&gt;The RDS connection test from EC2 worked immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mysql &lt;span class="nt"&gt;-h&lt;/span&gt; epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-u&lt;/span&gt; admin &lt;span class="nt"&gt;-pEpicBook123&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-P&lt;/span&gt; 3306 epicbook

Welcome to the MySQL monitor.
Server version: 8.0.44
mysql&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That connection working on the first try confirmed the security group design was correct.&lt;/p&gt;




&lt;h2&gt;
  
  
  Deployment: What Did Not Go Smoothly
&lt;/h2&gt;

&lt;p&gt;Here is where the real learning happened.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 1: The apt Lock
&lt;/h3&gt;

&lt;p&gt;Ubuntu's default apt repository ships Node.js v12, which is too old for the EpicBook application. When I tried to upgrade via NodeSource, the setup script ran &lt;code&gt;apt install&lt;/code&gt; internally, which conflicted with a background system update process that had locked the package manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;E: Could not get lock /var/lib/dpkg/lock-frontend.
It is held by process 24847 (apt)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix was to bypass apt entirely using nvm, the Node Version Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-o-&lt;/span&gt; https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
nvm &lt;span class="nb"&gt;install &lt;/span&gt;18
nvm use 18
node &lt;span class="nt"&gt;-v&lt;/span&gt;  &lt;span class="c"&gt;# v18.20.8&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;nvm downloads Node.js directly from the official distribution and installs it in the user's home directory. No package manager. No lock conflicts. No sudo required. This is the right approach for application servers that need a specific Node.js version the default repositories do not ship, or when the package manager itself is unavailable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 2: Hardcoded Database Names in SQL Files
&lt;/h3&gt;

&lt;p&gt;The EpicBook repository includes SQL files for creating tables and seeding data. Those files were written for a database named &lt;code&gt;bookstore&lt;/code&gt;, but the RDS database I provisioned is named &lt;code&gt;epicbook&lt;/code&gt;. Running the files directly produced errors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;ERROR&lt;/span&gt; &lt;span class="mi"&gt;1049&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42000&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;Unknown&lt;/span&gt; &lt;span class="k"&gt;database&lt;/span&gt; &lt;span class="s1"&gt;'bookstore'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix was &lt;code&gt;sed&lt;/code&gt;, the command-line stream editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/bookstore/epicbook/g'&lt;/span&gt; db/BuyTheBook_Schema.sql &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; db/epicbook_schema.sql
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/bookstore/epicbook/g'&lt;/span&gt; db/author_seed.sql &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; db/epicbook_author_seed.sql
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/bookstore/epicbook/g'&lt;/span&gt; db/books_seed.sql &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; db/epicbook_books_seed.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This replaced every occurrence of &lt;code&gt;bookstore&lt;/code&gt; with &lt;code&gt;epicbook&lt;/code&gt; across all three files in seconds. The result: 53 authors and 54 books loaded cleanly into RDS.&lt;/p&gt;
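&lt;p&gt;For files you are free to modify, sed can also edit in place while keeping a backup copy. A minimal sketch on a throwaway file (the path and contents here are illustrative, not the project's actual SQL):&lt;br&gt;
&lt;/p&gt;

```shell
# Throwaway SQL file that references the old database name
printf 'USE bookstore;\n' > /tmp/demo.sql

# -i.bak rewrites the file in place and keeps the original as demo.sql.bak
sed -i.bak 's/bookstore/epicbook/g' /tmp/demo.sql

cat /tmp/demo.sql      # USE epicbook;
cat /tmp/demo.sql.bak  # USE bookstore;
```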

&lt;h3&gt;
  
  
  Problem 3: Application Configuration
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;config/config.json&lt;/code&gt; file shipped with the repository pointed to &lt;code&gt;127.0.0.1&lt;/code&gt; (localhost). Running the app without updating this would have produced a connection refused error because there is no local MySQL server -- the database is RDS.&lt;/p&gt;

&lt;p&gt;The updated configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"development"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"admin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"password"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EpicBook123!"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"epicbook"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dialect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mysql"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3306&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Always verify application configuration against infrastructure outputs before starting a service. The outputs.tf file exists precisely for this reason.&lt;/p&gt;
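&lt;p&gt;A minimal &lt;code&gt;outputs.tf&lt;/code&gt; along these lines would produce the values shown earlier (the resource names here are assumptions, not copied from the project):&lt;br&gt;
&lt;/p&gt;

```terraform
output "ec2_public_ip" {
  description = "Public IP of the application server"
  value       = aws_instance.app.public_ip
}

output "rds_endpoint" {
  description = "Host:port the application config must point at"
  value       = aws_db_instance.epicbook.endpoint
}
```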




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;going to html route
App listening on PORT 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visiting &lt;code&gt;http://16.28.105.178&lt;/code&gt; loaded The EpicBook homepage with the full book catalogue from RDS. The cart accepted items, calculated totals, and processed orders end to end.&lt;/p&gt;

&lt;p&gt;The Nginx reverse proxy configuration that made this work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:8080&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;"upgrade"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_bypass&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Port 80 receives public traffic. Nginx forwards it to port 8080 where Node.js is running. The application never needs to run as root or bind to a privileged port.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Now Understand Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Security groups as identity, not location.&lt;/strong&gt; Referencing a security group in an ingress rule is fundamentally different from specifying a CIDR block. It grants access based on what a resource is, not where it is. This is more secure and more maintainable.&lt;/p&gt;
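&lt;p&gt;In HCL the pattern looks roughly like this (a sketch; the resource names are assumptions):&lt;br&gt;
&lt;/p&gt;

```terraform
# The RDS security group admits MySQL traffic only from members of the
# EC2 security group -- access by identity, not by IP range
resource "aws_security_group_rule" "rds_from_ec2" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.rds.id
  source_security_group_id = aws_security_group.ec2.id
}
```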

&lt;p&gt;&lt;strong&gt;nvm over apt for Node.js.&lt;/strong&gt; On production servers, package manager locks, version staleness, and permission issues make apt a poor choice for managing language runtimes. nvm, pyenv, rbenv -- language-specific version managers exist for exactly these reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;sed for operational configuration fixes.&lt;/strong&gt; When application configuration files have hardcoded values that do not match your infrastructure, sed is the fast, scriptable, auditable fix. It is a tool every DevOps engineer should be comfortable with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outputs.tf is a first-class deliverable.&lt;/strong&gt; The RDS endpoint, EC2 IP, and port values printed after apply are not just convenient -- they are the interface between your infrastructure layer and your application configuration layer. Design them deliberately.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Next
&lt;/h2&gt;

&lt;p&gt;The final project in this series is the production-grade version: a six-subnet three-tier architecture across two availability zones, public and internal load balancers, RDS with multi-AZ and read replicas. Everything from this project applies -- the security group design, the variable structure, the application deployment patterns -- just at production scale.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I build and document cloud infrastructure projects in public. Follow along for Terraform, AWS, Azure, and real-world DevOps from someone who started in biochemistry and is building toward cloud engineering one deployment at a time.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitHub: &lt;a href="https://github.com/vivianokose" rel="noopener noreferrer"&gt;vivianokose&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>node</category>
      <category>devops</category>
    </item>
    <item>
      <title>Nginx + React: How to Serve a Single Page Application Correctly (And Why Most Tutorials Get It Wrong)</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Thu, 19 Mar 2026 19:20:16 +0000</pubDate>
      <link>https://dev.to/vivian_okose/nginx-react-how-to-serve-a-single-page-application-correctly-and-why-most-tutorials-get-it-4iio</link>
      <guid>https://dev.to/vivian_okose/nginx-react-how-to-serve-a-single-page-application-correctly-and-why-most-tutorials-get-it-4iio</guid>
      <description>&lt;h2&gt;
  
  
  Nginx + React: How to Serve a Single Page Application Correctly (And Why Most Tutorials Get It Wrong)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;By Vivian Chiamaka Okose&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Tags: #nginx #react #terraform #azure #devops #spa #nodejs #cloud #beginners&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;There is one Nginx configuration line that every React developer deploying to a Linux server needs to know. Most tutorials skip past it without explanation. I learned it the hard way -- by understanding exactly what breaks without it.&lt;/p&gt;

&lt;p&gt;This is the story of Assignment 3 of my Terraform series: provisioning an Azure VM with Terraform, deploying a React application on it, and configuring Nginx to serve it correctly. Along the way I hit a Node.js version incompatibility and learned why &lt;code&gt;try_files $uri /index.html&lt;/code&gt; is not optional for SPAs.&lt;/p&gt;


&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Using Terraform, I provisioned the following on Microsoft Azure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Resource Group, Virtual Network, and Subnet&lt;/li&gt;
&lt;li&gt;A Network Security Group with explicit SSH and HTTP rules&lt;/li&gt;
&lt;li&gt;An NSG Association linking the security group to the subnet&lt;/li&gt;
&lt;li&gt;A Static Public IP and Network Interface&lt;/li&gt;
&lt;li&gt;An Ubuntu 20.04 LTS VM (Standard_D2ads_v7) at &lt;code&gt;20.90.152.5&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then on the VM itself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upgraded Node.js from v10 to v18.20.8 LTS&lt;/li&gt;
&lt;li&gt;Installed and configured Nginx&lt;/li&gt;
&lt;li&gt;Cloned, personalised, and built a React application&lt;/li&gt;
&lt;li&gt;Deployed it to Nginx and configured SPA routing&lt;/li&gt;
&lt;li&gt;Verified it live in the browser with my name on the page&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eight Terraform resources. One live React application. One important Nginx lesson.&lt;/p&gt;


&lt;h2&gt;
  
  
  The New Infrastructure Piece: Network Security Groups
&lt;/h2&gt;

&lt;p&gt;Assignment 3 introduced something Assignment 1 did not have -- an explicit firewall.&lt;/p&gt;

&lt;p&gt;In Azure, a Network Security Group (NSG) is the resource that controls inbound and outbound traffic rules at the subnet or NIC level. Without one, your VM has no defined firewall rules. This is a security risk in production.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_network_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"nsg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-nsg"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

  &lt;span class="nx"&gt;security_rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-SSH"&lt;/span&gt;
    &lt;span class="nx"&gt;priority&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
    &lt;span class="nx"&gt;direction&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Inbound"&lt;/span&gt;
    &lt;span class="nx"&gt;access&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;source_port_range&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_port_range&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"22"&lt;/span&gt;
    &lt;span class="nx"&gt;source_address_prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_address_prefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;security_rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-HTTP"&lt;/span&gt;
    &lt;span class="nx"&gt;priority&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;110&lt;/span&gt;
    &lt;span class="nx"&gt;direction&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Inbound"&lt;/span&gt;
    &lt;span class="nx"&gt;access&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;source_port_range&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_port_range&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"80"&lt;/span&gt;
    &lt;span class="nx"&gt;source_address_prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_address_prefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_subnet_network_security_group_association"&lt;/span&gt; &lt;span class="s2"&gt;"nsg_assoc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;network_security_group_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_network_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nsg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The critical detail here is the association resource. Creating the NSG alone does nothing -- you have to explicitly link it to the subnet. This is the same pattern as AWS Route Table Associations from Assignment 2. Both clouds require an explicit linking step that is easy to forget and produces silent failures when missing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Problem 1: Node.js Version Incompatibility
&lt;/h2&gt;

&lt;p&gt;After SSH-ing into the VM, I installed Node.js and npm using the default Ubuntu apt package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nodejs npm git &lt;span class="nt"&gt;-y&lt;/span&gt;
node &lt;span class="nt"&gt;-v&lt;/span&gt;  &lt;span class="c"&gt;# v10.19.0&lt;/span&gt;
npm &lt;span class="nt"&gt;-v&lt;/span&gt;   &lt;span class="c"&gt;# 6.14.4&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running &lt;code&gt;npm install&lt;/code&gt; produced a wall of warnings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm WARN notsup Unsupported engine for react-scripts@5.0.1:
wanted: {"node":"&amp;gt;=14.0.0"} (current: {"node":"10.19.0"})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The React app requires Node.js 14 or higher. Ubuntu 20.04's default repository ships Node.js 10 because it prioritises stability. This is a real-world operational gap that catches many developers who assume the default package manager always ships current versions.&lt;/p&gt;
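&lt;p&gt;The engine check npm performs amounts to a version comparison, which you can script yourself with &lt;code&gt;sort -V&lt;/code&gt; to fail fast before installing anything (the version numbers are the ones from this deployment):&lt;br&gt;
&lt;/p&gt;

```shell
required=14.0.0
current=10.19.0  # what node -v reported on the stock image, minus the "v"

# sort -V orders version strings numerically; if the required version
# sorts first, the current one satisfies it
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "node is new enough"
else
  echo "node $current is too old, need $required or later"
fi
# prints: node 10.19.0 is too old, need 14.0.0 or later
```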

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; install from the NodeSource repository which ships maintained Node.js versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://deb.nodesource.com/setup_18.x | &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; bash -
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nodejs
node &lt;span class="nt"&gt;-v&lt;/span&gt;  &lt;span class="c"&gt;# v18.20.8&lt;/span&gt;
npm &lt;span class="nt"&gt;-v&lt;/span&gt;   &lt;span class="c"&gt;# 10.8.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then clean and reinstall dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; node_modules
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clean install, no compatibility errors.&lt;/p&gt;




&lt;h2&gt;
  
  
  Problem 2 (That I Avoided): The SPA Routing Problem
&lt;/h2&gt;

&lt;p&gt;This is the lesson I want to spend time on because it is the one most tutorials gloss over.&lt;/p&gt;

&lt;p&gt;A React application is a Single Page Application. There is exactly one HTML file -- &lt;code&gt;index.html&lt;/code&gt; -- and React's JavaScript router intercepts navigation events in the browser to simulate different pages without ever making a new server request.&lt;/p&gt;

&lt;p&gt;When you type &lt;code&gt;http://yourserver.com/about&lt;/code&gt; in the browser, here is what happens:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without the correct Nginx config:&lt;/strong&gt; Nginx receives a request for &lt;code&gt;/about&lt;/code&gt;. It looks for a file called &lt;code&gt;about&lt;/code&gt; or &lt;code&gt;about/index.html&lt;/code&gt; in &lt;code&gt;/var/www/html&lt;/code&gt;. That file does not exist. Nginx returns a 404. Your React app never loads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With the correct Nginx config:&lt;/strong&gt; Nginx receives a request for &lt;code&gt;/about&lt;/code&gt;. It checks for the file, does not find it, falls back to &lt;code&gt;index.html&lt;/code&gt;, and returns that. React loads, reads the URL, and renders the About component. Everything works.&lt;/p&gt;

&lt;p&gt;The configuration that makes this happen is one directive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/var/www/html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;index&lt;/span&gt; &lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;try_files&lt;/span&gt; &lt;span class="nv"&gt;$uri&lt;/span&gt; &lt;span class="n"&gt;/index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kn"&gt;error_page&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt; &lt;span class="n"&gt;/index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;try_files $uri /index.html&lt;/code&gt; means: "try to find a file matching the URI, and if you cannot find one, serve index.html instead."&lt;/p&gt;

&lt;p&gt;This is not optional. Without it, your React app works fine when users enter from the homepage, but any direct link, browser refresh, or bookmark to an inner page returns a 404. In production this is a user experience failure that can be hard to diagnose if you do not know what to look for.&lt;/p&gt;

&lt;p&gt;This pattern is identical for Vue.js, Angular, and any other SPA framework. Learn it once, use it everywhere.&lt;/p&gt;
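&lt;p&gt;The decision Nginx makes here can be mimicked in a few lines of shell -- purely illustrative, this is the logic rather than anything Nginx actually runs:&lt;br&gt;
&lt;/p&gt;

```shell
webroot=/tmp/spa-demo
mkdir -p "$webroot/static"
echo 'SPA shell page' > "$webroot/index.html"
echo 'console.log(1)' > "$webroot/static/main.js"

# Mimic: try_files $uri /index.html
serve() {
  if [ -f "$webroot$1" ]; then
    cat "$webroot$1"           # the URI matches a real file: serve it
  else
    cat "$webroot/index.html"  # anything else falls back to the SPA shell
  fi
}

serve /static/main.js  # prints the JS bundle
serve /about           # no such file -- prints index.html instead
```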




&lt;h2&gt;
  
  
  The Build and Deploy Flow
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Personalise the app&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ~/my-react-app/src
nano App.js
&lt;span class="c"&gt;# Updated: Deployed by: Vivian Chiamaka Okose | Date: 19/03/2026&lt;/span&gt;

&lt;span class="c"&gt;# Install dependencies and build&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; ..
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm run build

&lt;span class="c"&gt;# Output:&lt;/span&gt;
&lt;span class="c"&gt;# Compiled successfully.&lt;/span&gt;
&lt;span class="c"&gt;# 61.25 kB  build/static/js/main.d869c525.js&lt;/span&gt;

&lt;span class="c"&gt;# Deploy to Nginx web root&lt;/span&gt;
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/www/html/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; build/&lt;span class="k"&gt;*&lt;/span&gt; /var/www/html/
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; www-data:www-data /var/www/html
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 755 /var/www/html
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then visiting &lt;code&gt;http://20.90.152.5&lt;/code&gt; in the browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Welcome to My React App
This app is running on Nginx!
Deployed by: Vivian Chiamaka Okose
Date: 19/03/2026
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Infrastructure I built with Terraform. Application I deployed manually. My name on the page. Live on the internet.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Complete main.tf
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;azurerm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/azurerm"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 3.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;resource_group&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;prevent_deletion_if_contains_resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"rg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-app-rg"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"UK South"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_virtual_network"&lt;/span&gt; &lt;span class="s2"&gt;"vnet"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-vnet"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;address_space&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"subnet"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-subnet"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;virtual_network_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_virtual_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;address_prefixes&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_network_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"nsg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-nsg"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

  &lt;span class="nx"&gt;security_rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-SSH"&lt;/span&gt;
    &lt;span class="nx"&gt;priority&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
    &lt;span class="nx"&gt;direction&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Inbound"&lt;/span&gt;
    &lt;span class="nx"&gt;access&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;source_port_range&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_port_range&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"22"&lt;/span&gt;
    &lt;span class="nx"&gt;source_address_prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_address_prefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;security_rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow-HTTP"&lt;/span&gt;
    &lt;span class="nx"&gt;priority&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;110&lt;/span&gt;
    &lt;span class="nx"&gt;direction&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Inbound"&lt;/span&gt;
    &lt;span class="nx"&gt;access&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;source_port_range&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_port_range&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"80"&lt;/span&gt;
    &lt;span class="nx"&gt;source_address_prefix&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
    &lt;span class="nx"&gt;destination_address_prefix&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_subnet_network_security_group_association"&lt;/span&gt; &lt;span class="s2"&gt;"nsg_assoc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;network_security_group_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_network_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nsg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_public_ip"&lt;/span&gt; &lt;span class="s2"&gt;"public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-public-ip"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;allocation_method&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Static"&lt;/span&gt;
  &lt;span class="nx"&gt;sku&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_network_interface"&lt;/span&gt; &lt;span class="s2"&gt;"nic"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-nic"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

  &lt;span class="nx"&gt;ip_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"internal"&lt;/span&gt;
    &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;private_ip_address_allocation&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Dynamic"&lt;/span&gt;
    &lt;span class="nx"&gt;public_ip_address_id&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_virtual_machine"&lt;/span&gt; &lt;span class="s2"&gt;"vm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-vm"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;network_interface_ids&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;azurerm_network_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;vm_size&lt;/span&gt;                          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_D2ads_v7"&lt;/span&gt;
  &lt;span class="nx"&gt;delete_os_disk_on_termination&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;delete_data_disks_on_termination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;os_profile&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;computer_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-vm"&lt;/span&gt;
    &lt;span class="nx"&gt;admin_username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azureuser"&lt;/span&gt;
    &lt;span class="nx"&gt;admin_password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"P@ssw0rd1234!"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;os_profile_linux_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;disable_password_authentication&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;storage_image_reference&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;publisher&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Canonical"&lt;/span&gt;
    &lt;span class="nx"&gt;offer&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0001-com-ubuntu-server-focal"&lt;/span&gt;
    &lt;span class="nx"&gt;sku&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"20_04-lts-gen2"&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"latest"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;storage_os_disk&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"react-os-disk"&lt;/span&gt;
    &lt;span class="nx"&gt;caching&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ReadWrite"&lt;/span&gt;
    &lt;span class="nx"&gt;create_option&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FromImage"&lt;/span&gt;
    &lt;span class="nx"&gt;managed_disk_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_LRS"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"public_ip_address"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The public IP address of the VM"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ip_address&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;NSG + Association is a two-step pattern.&lt;/strong&gt; Creating the NSG defines the rules. The association activates them on the subnet. Both steps are required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ubuntu's default Node.js is outdated.&lt;/strong&gt; The version in Ubuntu's apt repositories lags well behind current releases. Install from NodeSource or via nvm for any modern JavaScript application.&lt;/p&gt;
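&lt;p&gt;For example, the NodeSource setup script installs a current release (shown here for Node 20.x -- pick the line your app needs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add the NodeSource apt repository and install Node.js 20.x
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v   # verify the installed version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;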

&lt;p&gt;&lt;strong&gt;&lt;code&gt;try_files $uri /index.html&lt;/code&gt; is mandatory for React SPAs.&lt;/strong&gt; Without it, direct navigation to a route or a browser refresh returns a 404, because Nginx looks for a file that only exists as a client-side route. The same applies to any framework that uses client-side routing.&lt;/p&gt;
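&lt;p&gt;A minimal Nginx server block with that directive might look like this (the root path and port are illustrative, not from my actual config):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    root /var/www/my-react-app/build;
    index index.html;

    location / {
        # Serve the file if it exists; otherwise fall back to index.html
        # so the React router can handle the route client-side.
        try_files $uri /index.html;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;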

&lt;p&gt;&lt;strong&gt;npm install warnings are not errors.&lt;/strong&gt; The &lt;code&gt;deprecated&lt;/code&gt; and &lt;code&gt;warn&lt;/code&gt; messages are informational. Watch for &lt;code&gt;notsup&lt;/code&gt; messages -- those indicate genuine compatibility issues that need fixing before the build.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlviohkeqinojbfqtgsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlviohkeqinojbfqtgsv.png" alt="Azure" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqcriespve6nsomx9nyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqcriespve6nsomx9nyj.png" alt="Azure" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7xzpkpnmf1432tvxn6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7xzpkpnmf1432tvxn6r.png" alt="Azure" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqmikurtrolducuvx0sj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqmikurtrolducuvx0sj.png" alt="Azure" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqts6v9owatcj73hqb3h8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqts6v9owatcj73hqb3h8.png" alt="Azure" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap18awm7vd7yi9a12om3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap18awm7vd7yi9a12om3.png" alt="Azure" width="797" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfe4tt4nv3kslfwcchpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfe4tt4nv3kslfwcchpi.png" alt="Azure" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t7pe9brl5r9q7teczw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t7pe9brl5r9q7teczw6.png" alt="Azure" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbilwmlwkq0d13aizcmn.png" alt="Azure" width="800" height="406"&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I am documenting my complete DevOps learning journey in public -- one deployment at a time.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitHub: &lt;a href="https://github.com/vivianokose" rel="noopener noreferrer"&gt;vivianokose&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>My Second Terraform Deployment: Building a Complete AWS Network from Scratch (And What I Learned That Surprised Me)</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Tue, 17 Mar 2026 15:50:33 +0000</pubDate>
      <link>https://dev.to/vivian_okose/my-second-terraform-deployment-building-a-complete-aws-network-from-scratch-and-what-i-learned-2nc4</link>
      <guid>https://dev.to/vivian_okose/my-second-terraform-deployment-building-a-complete-aws-network-from-scratch-and-what-i-learned-2nc4</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;By Vivian Chiamaka Okose&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In a previous post, I walked through provisioning a virtual machine on Azure. This task taught me how networking actually works.&lt;/p&gt;

&lt;p&gt;That might sound like an exaggeration, but it is not. When you have to build every single layer of a cloud network by hand -- in code -- and watch each piece slot into place before the next one can exist, something clicks that no amount of reading diagrams ever produces.&lt;/p&gt;

&lt;p&gt;This is the story of my second Terraform deployment: an AWS EC2 instance inside a custom VPC, accessible via SSH, running Nginx. Eight resources, one configuration file, and a few lessons I will carry into every cloud project I work on.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A custom VPC with CIDR &lt;code&gt;10.0.0.0/16&lt;/code&gt; and DNS support enabled&lt;/li&gt;
&lt;li&gt;A public subnet (&lt;code&gt;10.0.1.0/24&lt;/code&gt;) in &lt;code&gt;af-south-1a&lt;/code&gt; with auto-assign public IP&lt;/li&gt;
&lt;li&gt;A private subnet (&lt;code&gt;10.0.2.0/24&lt;/code&gt;) in &lt;code&gt;af-south-1b&lt;/code&gt; for future backend resources&lt;/li&gt;
&lt;li&gt;An Internet Gateway attached to the VPC&lt;/li&gt;
&lt;li&gt;A Route Table routing all outbound traffic through the IGW&lt;/li&gt;
&lt;li&gt;A Route Table Association linking the public subnet to that route table&lt;/li&gt;
&lt;li&gt;A Security Group allowing SSH (port 22) and HTTP (port 80)&lt;/li&gt;
&lt;li&gt;An EC2 instance running Ubuntu 22.04, accessible via SSH with Nginx installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eight resources. All defined in Terraform. All deployed in under two minutes.&lt;/p&gt;
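&lt;p&gt;The deployment itself was the standard Terraform workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init      # download the AWS provider
terraform plan      # preview the eight resources to be created
terraform apply     # create them
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;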




&lt;h2&gt;
  
  
  The Four-Piece AWS Networking Puzzle
&lt;/h2&gt;

&lt;p&gt;This is the single most important thing I learned in this assignment.&lt;/p&gt;

&lt;p&gt;In AWS, internet connectivity for a resource requires four things working together:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. A VPC&lt;/strong&gt; -- your isolated private network. Think of it as your building.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. An Internet Gateway (IGW)&lt;/strong&gt; -- the physical door connecting your building to the outside world. Without this, nothing gets in or out, regardless of what IP addresses you assign.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. A Route Table with an internet route&lt;/strong&gt; -- the signpost that says "all traffic going outside? Use the front door (IGW)." The route &lt;code&gt;0.0.0.0/0 -&amp;gt; igw&lt;/code&gt; is that signpost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. A Route Table Association&lt;/strong&gt; -- the act of putting that signpost specifically in your public subnet. Without this, the signpost exists but your subnet is not reading it.&lt;/p&gt;

&lt;p&gt;Miss any one of these four, and your EC2 instance has no internet access even with a public IP assigned. The public IP is just a label -- the routing infrastructure is what makes it actually reachable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The door&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_internet_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"igw"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# The signpost&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table"&lt;/span&gt; &lt;span class="s2"&gt;"public_rt"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="nx"&gt;gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_internet_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;igw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Putting the signpost in the right subnet&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table_association"&lt;/span&gt; &lt;span class="s2"&gt;"public_assoc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route_table_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_rt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this pattern clicked, I could see it everywhere in cloud architecture diagrams. It is one of those fundamentals that everything else builds on.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Complete main.tf
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"af-south-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_hostnames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_support&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-vpc"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"public"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;
  &lt;span class="nx"&gt;map_public_ip_on_launch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zone&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"af-south-1a"&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-public-subnet"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.2.0/24"&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zone&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"af-south-1b"&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-private-subnet"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_internet_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"igw"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-igw"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table"&lt;/span&gt; &lt;span class="s2"&gt;"public_rt"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
    &lt;span class="nx"&gt;gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_internet_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;igw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-public-rt"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table_association"&lt;/span&gt; &lt;span class="s2"&gt;"public_assoc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;route_table_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_rt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"ec2_sg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-ec2-sg"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow SSH and HTTP"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SSH"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"-1"&lt;/span&gt;
    &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-ec2-sg"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"ec2"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-0f256846cac23da94"&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.micro"&lt;/span&gt;
  &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ec2_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;associate_public_ip_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;key_name&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-ec2-key"&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-ec2"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"ec2_public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Public IP of the EC2 instance"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ec2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  SSH Key Pairs and Immutable Infrastructure
&lt;/h2&gt;

&lt;p&gt;AWS EC2 uses SSH key pairs for authentication rather than passwords. I created the key pair before applying the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/.ssh
aws ec2 create-key-pair &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--key-name&lt;/span&gt; terraform-ec2-key &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; af-south-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"KeyMaterial"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; text &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ~/.ssh/terraform-ec2-key.pem
&lt;span class="nb"&gt;chmod &lt;/span&gt;400 ~/.ssh/terraform-ec2-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Something interesting: &lt;code&gt;key_name&lt;/code&gt; is an immutable attribute on EC2 instances. You cannot attach a key pair to an already-running instance -- Terraform has to destroy and recreate it. This is immutable infrastructure in action. Some changes are too fundamental to apply in place. In production, this means a planned maintenance window. Understanding the difference between mutable and immutable attributes is an essential Terraform operations skill.&lt;/p&gt;
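&lt;p&gt;When a forced replacement is unavoidable, Terraform's &lt;code&gt;lifecycle&lt;/code&gt; block can at least shrink the downtime. Here is a minimal sketch of the idea, reusing the instance resource above with &lt;code&gt;create_before_destroy&lt;/code&gt; added (the other arguments are trimmed for brevity):&lt;/p&gt;

```hcl
# Sketch only: create_before_destroy tells Terraform to bring up the
# replacement instance before destroying the old one, narrowing the window
# where a forced replacement (such as a key_name change) leaves no running VM.
resource "aws_instance" "ec2" {
  ami           = "ami-0f256846cac23da94"
  instance_type = "t3.micro"
  key_name      = "terraform-ec2-key"

  lifecycle {
    create_before_destroy = true
  }
}
```

&lt;p&gt;Note this briefly doubles your instance count during the swap, so it is a cost/availability trade-off rather than a free fix.&lt;/p&gt;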




&lt;h2&gt;
  
  
  Deployment and Verification
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
Outputs:
ec2_public_ip = "13.246.221.152"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SSH into the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/terraform-ec2-key.pem ubuntu@13.246.221.152
&lt;span class="c"&gt;# Welcome to Ubuntu 22.04.4 LTS&lt;/span&gt;
&lt;span class="c"&gt;# ubuntu@ip-10-0-1-134:~$&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The private IP &lt;code&gt;10.0.1.134&lt;/code&gt; confirmed the instance was sitting exactly where I placed it -- inside the &lt;code&gt;10.0.1.0/24&lt;/code&gt; public subnet.&lt;/p&gt;

&lt;p&gt;After installing Nginx and visiting &lt;code&gt;http://13.246.221.152&lt;/code&gt;, the welcome page loaded. Infrastructure I built with code, serving traffic over the public internet.&lt;/p&gt;
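&lt;p&gt;Installing Nginx by hand over SSH works, but the same step could be baked into the configuration itself. A hedged sketch of how that might look, using &lt;code&gt;user_data&lt;/code&gt; (which EC2 runs once at first boot); &lt;code&gt;install_nginx.sh&lt;/code&gt; is a hypothetical local script, not part of this project:&lt;/p&gt;

```hcl
# Sketch: user_data runs at first boot, so Nginx would be installed
# without a manual SSH step. install_nginx.sh is an assumed local file
# containing something like:
#   #!/bin/bash
#   apt-get update -y
#   apt-get install -y nginx
resource "aws_instance" "ec2" {
  ami           = "ami-0f256846cac23da94"
  instance_type = "t3.micro"
  key_name      = "terraform-ec2-key"

  user_data = file("install_nginx.sh")
}
```

&lt;p&gt;This keeps the server's software in code alongside its infrastructure, which is the same reproducibility argument applied one layer up.&lt;/p&gt;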




&lt;h2&gt;
  
  
  AWS vs Azure: A Quick Comparison
&lt;/h2&gt;

&lt;p&gt;Having completed both assignments back to back, the contrast is clear. Azure handles some connectivity implicitly -- every subnet gets a default system route to the internet, so outbound traffic works without any extra routing resources. AWS makes you declare every layer yourself: the internet gateway, the route table, and the association that ties the table to the subnet. This is more verbose, but it builds genuine understanding because nothing is hidden from you.&lt;/p&gt;

&lt;p&gt;Neither is better in absolute terms. But the AWS approach forces you to understand networking deeply, which makes you a better cloud engineer on any platform.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kl972ojf5ci7p17zg17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kl972ojf5ci7p17zg17.png" alt="init" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqlw22xq3e3n356en194.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqlw22xq3e3n356en194.png" alt="terraform plan" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ktc3napttuqkmwpgvsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ktc3napttuqkmwpgvsk.png" alt="Terraform apply" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzk8tiqn6236rkav5d8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzk8tiqn6236rkav5d8e.png" alt="EC2 running" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtzinh8bd17tdc0idwfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtzinh8bd17tdc0idwfv.png" alt="aws vpc" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpu9h3c4ahk4x086y6b3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpu9h3c4ahk4x086y6b3.png" alt="SSH connection" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrta1ptobq3t903ctk9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrta1ptobq3t903ctk9f.png" alt="Nginx status" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1rhov8dh00urxhydi7x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1rhov8dh00urxhydi7x.png" alt="nginx on browser" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqk01pusyb2oct6zhbt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqk01pusyb2oct6zhbt7.png" alt="terraform destroy" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ybfbk2q4fkn8chkgrxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ybfbk2q4fkn8chkgrxv.png" alt="EC2 terminated" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv5b1rmcy9yljjn0d5ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv5b1rmcy9yljjn0d5ic.png" alt="main.tf code snippet" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I am documenting my full DevOps learning journey in public. Follow along for Terraform, AWS, Azure, and real-world infrastructure lessons.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitHub: &lt;a href="https://github.com/vivianokose" rel="noopener noreferrer"&gt;vivianokose&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>devops</category>
      <category>networking</category>
    </item>
    <item>
      <title>How I Provisioned My First Azure VM with Terraform (And the 5 Errors That Taught Me More Than Any Tutorial)</title>
      <dc:creator>Vivian Chiamaka Okose</dc:creator>
      <pubDate>Mon, 16 Mar 2026 20:54:27 +0000</pubDate>
      <link>https://dev.to/vivian_okose/how-i-provisioned-my-first-azure-vm-with-terraform-and-the-5-errors-that-taught-me-more-than-any-32fe</link>
      <guid>https://dev.to/vivian_okose/how-i-provisioned-my-first-azure-vm-with-terraform-and-the-5-errors-that-taught-me-more-than-any-32fe</guid>
      <description>&lt;p&gt;&lt;strong&gt;By Vivian Chiamaka Okose&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Published on dev.to | Hashnode | Medium&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Tags: #terraform #azure #devops #iac #beginners #cloud&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;I come from a background in biochemistry and biotechnology. A year ago, "infrastructure" to me meant lab equipment and sample storage. Today, I just provisioned a fully networked Azure virtual machine using nothing but code -- and destroyed it just as cleanly when I was done.&lt;/p&gt;

&lt;p&gt;This is the story of how that happened, including every error I hit along the way.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Is Terraform and Why Does It Matter?
&lt;/h2&gt;

&lt;p&gt;Before I get into the how, let me explain the what.&lt;/p&gt;

&lt;p&gt;Terraform is an Infrastructure as Code (IaC) tool built by HashiCorp. Instead of clicking around in the Azure portal to create resources, you write a configuration file that &lt;em&gt;describes&lt;/em&gt; what you want your infrastructure to look like, and Terraform figures out how to make it happen. Every resource, every network setting, every dependency -- all defined in code, all version-controllable, all reproducible.&lt;/p&gt;

&lt;p&gt;This matters because clicking around in a cloud console is not scalable. If you need to spin up the same environment ten times across ten different projects, recreating it by hand each time inevitably introduces inconsistencies. With Terraform, you write it once and deploy it as many times as you need.&lt;/p&gt;

&lt;p&gt;That is the power of Infrastructure as Code.&lt;/p&gt;
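&lt;p&gt;To make the "write once, deploy many times" idea concrete, here is a minimal sketch using input variables -- the names and defaults are illustrative, not from my actual config:&lt;/p&gt;

```hcl
# Sketch: input variables parameterize the config so identical code
# can be applied per project or environment.
variable "project" {
  type    = string
  default = "demo"
}

variable "location" {
  type    = string
  default = "UK South"
}

resource "azurerm_resource_group" "rg" {
  name     = "${var.project}-rg"
  location = var.location
}
```

&lt;p&gt;Running &lt;code&gt;terraform apply -var="project=staging"&lt;/code&gt; then stamps out a second, identically structured environment from the same file.&lt;/p&gt;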


&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;For this assignment, I provisioned a complete virtual machine setup on Microsoft Azure using Terraform. Here is what the final infrastructure looked like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource Group&lt;/strong&gt; -- a logical container for all related resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Network (VNet)&lt;/strong&gt; -- the private network space (10.0.0.0/16)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet&lt;/strong&gt; -- a segment carved out of the VNet (10.0.1.0/24)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public IP&lt;/strong&gt; -- a static, externally reachable IP address&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Interface Card (NIC)&lt;/strong&gt; -- the bridge connecting the VM to the network&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Machine&lt;/strong&gt; -- Ubuntu 20.04 LTS Gen2 running on Standard_D2ads_v7&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Six resources, all provisioned from a single &lt;code&gt;main.tf&lt;/code&gt; file.&lt;/p&gt;


&lt;h2&gt;
  
  
  Setting Up the Environment
&lt;/h2&gt;

&lt;p&gt;I run WSL2 Ubuntu on Windows, so the first step was installing Terraform and the Azure CLI directly in my WSL terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing Terraform:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; gnupg software-properties-common
wget &lt;span class="nt"&gt;-O-&lt;/span&gt; https://apt.releases.hashicorp.com/gpg | gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /usr/share/keyrings/hashicorp-archive-keyring.gpg &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/hashicorp.list
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;terraform &lt;span class="nt"&gt;-y&lt;/span&gt;
terraform &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;span class="c"&gt;# Terraform v1.14.7&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Installing and authenticating Azure CLI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://aka.ms/InstallAzureCLIDeb | &lt;span class="nb"&gt;sudo &lt;/span&gt;bash
az login
az account show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After &lt;code&gt;az login&lt;/code&gt;, I authenticated through the browser device flow and confirmed my subscription was active. With that done, Terraform had everything it needed to talk to Azure.&lt;/p&gt;




&lt;h2&gt;
  
  
  The main.tf File
&lt;/h2&gt;

&lt;p&gt;Here is the complete configuration I ended up with after troubleshooting (more on that shortly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;azurerm&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/azurerm"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 3.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;features&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;resource_group&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;prevent_deletion_if_contains_resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"rg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-azure-vm-rg"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"UK South"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_virtual_network"&lt;/span&gt; &lt;span class="s2"&gt;"vnet"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-vnet"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;address_space&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"subnet"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-subnet"&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;virtual_network_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_virtual_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;address_prefixes&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"10.0.1.0/24"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_public_ip"&lt;/span&gt; &lt;span class="s2"&gt;"public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-public-ip"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;allocation_method&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Static"&lt;/span&gt;
  &lt;span class="nx"&gt;sku&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_network_interface"&lt;/span&gt; &lt;span class="s2"&gt;"nic"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-nic"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

  &lt;span class="nx"&gt;ip_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;                          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"internal"&lt;/span&gt;
    &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
    &lt;span class="nx"&gt;private_ip_address_allocation&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Dynamic"&lt;/span&gt;
    &lt;span class="nx"&gt;public_ip_address_id&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"azurerm_virtual_machine"&lt;/span&gt; &lt;span class="s2"&gt;"vm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-vm"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;
  &lt;span class="nx"&gt;resource_group_name&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;network_interface_ids&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;azurerm_network_interface&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;vm_size&lt;/span&gt;                          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_D2ads_v7"&lt;/span&gt;
  &lt;span class="nx"&gt;delete_os_disk_on_termination&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;delete_data_disks_on_termination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;os_profile&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;computer_name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-vm"&lt;/span&gt;
    &lt;span class="nx"&gt;admin_username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"azureuser"&lt;/span&gt;
    &lt;span class="nx"&gt;admin_password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"P@ssw0rd1234!"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;os_profile_linux_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;disable_password_authentication&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;storage_image_reference&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;publisher&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Canonical"&lt;/span&gt;
    &lt;span class="nx"&gt;offer&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0001-com-ubuntu-server-focal"&lt;/span&gt;
    &lt;span class="nx"&gt;sku&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"20_04-lts-gen2"&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"latest"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;storage_os_disk&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-os-disk"&lt;/span&gt;
    &lt;span class="nx"&gt;caching&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ReadWrite"&lt;/span&gt;
    &lt;span class="nx"&gt;create_option&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FromImage"&lt;/span&gt;
    &lt;span class="nx"&gt;managed_disk_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_LRS"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"public_ip_address"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The public IP address of the VM"&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;azurerm_public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ip_address&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how resources reference each other using dot notation -- &lt;code&gt;azurerm_resource_group.rg.location&lt;/code&gt; instead of hardcoding "UK South" everywhere. This is not just clean code; it means if you change the location in one place, it updates throughout the entire configuration automatically.&lt;/p&gt;
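&lt;p&gt;A minimal before/after sketch (the &lt;code&gt;location&lt;/code&gt; attribute is just one example of the pattern):&lt;/p&gt;

```hcl
# Reference (preferred): one source of truth for the region.
location = azurerm_resource_group.rg.location

# Hardcoded (brittle): must be edited in every resource if the region changes.
location = "UK South"
```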




&lt;h2&gt;
  
  
  The Deployment Flow
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init    &lt;span class="c"&gt;# Download the AzureRM provider plugin&lt;/span&gt;
terraform plan    &lt;span class="c"&gt;# Preview what will be created (dry run)&lt;/span&gt;
terraform apply   &lt;span class="c"&gt;# Actually deploy to Azure&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;terraform plan&lt;/code&gt; output is one of my favourite things about this tool. Before touching a single resource, it shows you exactly what it intends to create, change, or destroy -- marked with &lt;code&gt;+&lt;/code&gt;, &lt;code&gt;~&lt;/code&gt;, or &lt;code&gt;-&lt;/code&gt;. You can review and catch mistakes before they cost you money or cause an outage.&lt;/p&gt;
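&lt;p&gt;As a tiny illustration of those symbols (the plan text below is made up, not real Terraform output), you can even save a plan and count the actions before approving anything:&lt;/p&gt;

```shell
# Illustrative only: a fake saved plan standing in for
# `terraform plan -no-color > /tmp/plan.txt`.
cat > /tmp/plan.txt <<'EOF'
  # azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
  # azurerm_public_ip.public_ip will be created
  + resource "azurerm_public_ip" "public_ip" {
  # azurerm_virtual_machine.vm will be destroyed
  - resource "azurerm_virtual_machine" "vm" {
EOF

# Count planned creates (+) and destroys (-) before typing "yes".
echo "to create:  $(grep -c '^  + ' /tmp/plan.txt)"
echo "to destroy: $(grep -c '^  - ' /tmp/plan.txt)"
```

&lt;p&gt;A non-zero destroy count on a plan you expected to be purely additive is exactly the kind of mistake this review step catches.&lt;/p&gt;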




&lt;h2&gt;
  
  
  The 5 Errors That Actually Taught Me DevOps
&lt;/h2&gt;

&lt;p&gt;Here is where things got real. I did not get a clean deployment on the first try. I got five errors, and each one taught me something important.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error 1: Basic SKU Public IP Quota
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IPv4BasicSkuPublicIpCountLimitReached: Cannot create more than 0 IPv4
Basic SKU public IP addresses for this subscription in this region.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened:&lt;/strong&gt; Azure free-tier subscriptions have a quota of zero Basic SKU public IPs. The fix was adding &lt;code&gt;sku = "Standard"&lt;/code&gt; to the public IP resource. One line. Basic SKU public IPs are also on Azure's retirement path, so Standard is the right default anyway. Lesson: always check your subscription quotas before deploying.&lt;/p&gt;
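&lt;p&gt;For reference, the fixed resource looked roughly like this (the resource &lt;em&gt;name&lt;/em&gt; here is illustrative; note that a Standard SKU public IP also requires static allocation):&lt;/p&gt;

```hcl
resource "azurerm_public_ip" "public_ip" {
  name                = "terraform-public-ip"   # illustrative name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"     # Standard SKU requires Static
  sku                 = "Standard"   # the one-line fix
}
```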

&lt;h3&gt;
  
  
  Error 2: VM Size Capacity Restriction
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SkuNotAvailable: The requested VM size Standard_B1s is currently not
available in location 'eastus'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened:&lt;/strong&gt; The B-series VMs are restricted on free-tier subscriptions. Rather than guessing another size, I queried Azure directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az vm list-skus &lt;span class="nt"&gt;--location&lt;/span&gt; uksouth &lt;span class="nt"&gt;--resource-type&lt;/span&gt; virtualMachines &lt;span class="nt"&gt;--output&lt;/span&gt; table | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"None"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returned every VM size with no restrictions on my subscription, and I picked &lt;code&gt;Standard_D2ads_v7&lt;/code&gt; -- a small, affordable AMD-based D-series size. Always let the platform tell you what is available rather than guessing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error 3: Hypervisor Generation Mismatch
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BadRequest: The selected VM size 'Standard_D2ads_v7' cannot boot
Hypervisor Generation '1'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened:&lt;/strong&gt; Modern VM sizes like D2ads_v7 require Generation 2 images, but Ubuntu 18.04 LTS is a Generation 1 image. Mixing them causes a boot failure at the hypervisor level. The fix was switching to Ubuntu 20.04 LTS Gen2 -- a newer, more secure image that is Gen2 compatible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error 4: Platform Image Not Found
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PlatformImageNotFound: The platform image
'Canonical:UbuntuServer:20_04-lts-gen2:latest' is not available.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened:&lt;/strong&gt; Azure's image naming is inconsistent across regions. The offer name &lt;code&gt;UbuntuServer&lt;/code&gt; is a legacy name that does not include Gen2 images in UK South. I queried the available images directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az vm image list &lt;span class="nt"&gt;--location&lt;/span&gt; uksouth &lt;span class="nt"&gt;--publisher&lt;/span&gt; Canonical &lt;span class="nt"&gt;--offer&lt;/span&gt; 0001-com-ubuntu-server-focal &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; table | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"gen2"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The correct offer was &lt;code&gt;0001-com-ubuntu-server-focal&lt;/code&gt; with SKU &lt;code&gt;20_04-lts-gen2&lt;/code&gt;. Never assume image names -- always verify for your region.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error 5: OS Disk Blocking Destroy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: deleting Resource Group "terraform-azure-vm-rg": the Resource Group
still contains Resources.
/Microsoft.Compute/disks/terraform-os-disk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened:&lt;/strong&gt; When Azure creates a VM, it automatically provisions an OS disk as a child resource. Since my Terraform configuration did not explicitly manage that disk, it was never tracked in Terraform state -- so when Terraform tried to delete the resource group, Azure rejected the request because the group still contained a resource Terraform knew nothing about. The fix was adding two flags directly to the VM resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;delete_os_disk_on_termination&lt;/span&gt;    &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="nx"&gt;delete_data_disks_on_termination&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Successful Deployment
&lt;/h2&gt;

&lt;p&gt;After all five fixes, the final &lt;code&gt;terraform apply&lt;/code&gt; ran cleanly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:
public_ip_address = "51.11.128.165"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verification via Azure CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az vm list &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"[].{Name:name, Status:powerState}"&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; table

Name          Status
&lt;span class="nt"&gt;------------&lt;/span&gt;  &lt;span class="nt"&gt;----------&lt;/span&gt;
terraform-vm  VM running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then a clean destroy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy
&lt;span class="c"&gt;# Destroy complete! Resources: 1 destroyed.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Key Concepts I Now Understand Deeply
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Declarative vs Imperative:&lt;/strong&gt; Terraform is declarative -- you describe the desired end state, not the steps to get there. Terraform computes the steps automatically based on resource dependencies.&lt;/p&gt;
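&lt;p&gt;To make the contrast concrete, here is the same resource group both ways (a sketch, reusing the names from my configuration):&lt;/p&gt;

```hcl
# Declarative (Terraform): describe the end state once; Terraform
# diffs it against state and works out create/update/destroy itself.
resource "azurerm_resource_group" "rg" {
  name     = "terraform-azure-vm-rg"
  location = "UK South"
}

# Imperative equivalent (Azure CLI): you issue each step yourself,
# and making it safe to re-run is your problem, not the tool's:
#   az group create --name terraform-azure-vm-rg --location uksouth
```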

&lt;p&gt;&lt;strong&gt;Providers:&lt;/strong&gt; Plugins that teach Terraform how to communicate with a specific cloud platform. The &lt;code&gt;azurerm&lt;/code&gt; provider is what lets Terraform understand Azure-specific resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State:&lt;/strong&gt; Terraform maintains a state file that maps your configuration to real-world resources. This is how it knows what exists, what needs to change, and what to destroy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource References:&lt;/strong&gt; Using &lt;code&gt;azurerm_resource_group.rg.location&lt;/code&gt; instead of hardcoded values keeps configurations flexible and consistent across every resource.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr67iyxrxig61mfo9ei93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr67iyxrxig61mfo9ei93.png" alt="Azure" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a4kd5g2cgotvm9ryp8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a4kd5g2cgotvm9ryp8y.png" alt="Azure" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybi45y7n5d6wdmj0u9eu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybi45y7n5d6wdmj0u9eu.png" alt="Azure" width="513" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4852m3hb2k39ryymbkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4852m3hb2k39ryymbkb.png" alt="Azure" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp9vbuqf0odzwxkadhq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp9vbuqf0odzwxkadhq8.png" alt="Azure" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbricrx43y8rogsuvr3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbricrx43y8rogsuvr3c.png" alt="Azure" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqtedcr6n7nbjnp6nw9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqtedcr6n7nbjnp6nw9y.png" alt="Azure" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucs27gk7g5yjbyy9lud8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucs27gk7g5yjbyy9lud8.png" alt="Azure" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9vijirwyfovh2t2o2we.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9vijirwyfovh2t2o2we.png" alt="Azure" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlqtmcqmlg7z10w1p0u4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlqtmcqmlg7z10w1p0u4.png" alt="Azure" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53m0h6b2yopvaekhdcek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53m0h6b2yopvaekhdcek.png" alt="Azure" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qtr808vgimw2ikdz5tn.png" alt="Azure" width="800" height="368"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Next
&lt;/h2&gt;

&lt;p&gt;This was Assignment 1 of a five-assignment Terraform series. Next up: deploying an EC2 instance on AWS inside a custom VPC with public and private subnets. The networking complexity goes up significantly, and I am here for it.&lt;/p&gt;

&lt;p&gt;If you are just starting your DevOps journey, my biggest takeaway from this exercise is this: do not fear the errors. Every error message is documentation. Read it carefully, query the platform for what it actually supports, and fix one thing at a time. That systematic approach is what DevOps engineering is really about.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow along as I document this full Terraform journey. I write about DevOps, cloud infrastructure, and what it actually looks like to transition into tech from a completely different background.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;GitHub: &lt;a href="https://github.com/vivianokose" rel="noopener noreferrer"&gt;vivianokose&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>devops</category>
      <category>infrastructureascode</category>
    </item>
  </channel>
</rss>
