I spend a lot of time architecting production Kubernetes environments on AKS, but I often get asked by teammates: "Where do I even start with containers and CI/CD?"
This post is my answer to that. It's a complete lab, start to finish: containerizing a simple web app, pushing it to Azure Container Registry, and wiring up a GitHub Actions workflow that automatically deploys to Azure Container Instances on every push. No click-ops, no manual FTP uploads, just a clean automated pipeline.
The whole thing should take you around 40–50 minutes on first run.
Why ACI and Not AKS?
Fair question. In production, I'd use AKS for anything serious. But ACI is genuinely perfect for demos, dev environments, and internal tooling: it's serverless containers with zero cluster overhead. You push an image, Azure runs it. That simplicity makes it an ideal first step for people wrapping their head around the CI/CD loop.
What We're Building
Here's the pipeline:
- A minimal Node.js web app (HTML/CSS/JS, no framework)
- A Dockerfile to containerize it
- An Azure Container Registry (ACR) to store the image
- An Azure Container Instance (ACI) to run it
- A GitHub Actions workflow that ties it all together: push to main, and your app updates automatically
Prerequisites:
- Docker Desktop installed and running
- A GitHub account and a repo
- An Azure subscription (free trial works)
- Azure CLI installed
Step 1: The Application
I'm deliberately keeping the app trivial. The goal here is understanding the deployment pipeline, not React routing or API design.
index.html
<!DOCTYPE html>
<html>
<head>
<title>My CI/CD Demo App</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<div class="container">
<h1>Hello from Azure!</h1>
<p>Deployed automatically via GitHub Actions</p>
<div class="version">Version: 1.0.0</div>
<div class="stack">Node.js · Docker · Azure Container Instances</div>
</div>
</body>
</html>
style.css
* { box-sizing: border-box; }
body {
font-family: 'Segoe UI', sans-serif;
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
margin: 0;
background: linear-gradient(135deg, #0078d4 0%, #001a3a 100%);
color: white;
}
.container { text-align: center; padding: 2rem; }
h1 { font-size: 2.5rem; margin-bottom: 0.5rem; }
p { font-size: 1.2rem; opacity: 0.85; }
.version {
margin-top: 2rem;
padding: 1rem 2rem;
background: rgba(255,255,255,0.12);
border-radius: 8px;
border: 1px solid rgba(255,255,255,0.2);
}
.stack { margin-top: 1rem; font-size: 0.85rem; opacity: 0.7; }
server.js
const http = require('http');
const fs = require('fs');
const path = require('path');
const server = http.createServer((req, res) => {
let filePath = req.url === '/' ? 'index.html' : req.url.substring(1);
const ext = path.extname(filePath);
const contentTypes = {
'.html': 'text/html',
'.css': 'text/css',
'.js': 'application/javascript'
};
const contentType = contentTypes[ext] || 'text/plain';
fs.readFile(filePath, (err, content) => {
if (err) {
res.writeHead(404);
res.end('Not found');
} else {
res.writeHead(200, { 'Content-Type': contentType + '; charset=utf-8' });
res.end(content);
}
});
});
const port = process.env.PORT || 3000;
server.listen(port, () => console.log(`Listening on port ${port}`));
package.json
{
"name": "cicd-demo-app",
"version": "1.0.0",
"main": "server.js",
"scripts": {
"start": "node server.js"
}
}
Step 2: Containerize with Docker
Add a Dockerfile at the root of your project:
FROM node:20-alpine
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY server.js ./
COPY index.html ./
COPY style.css ./
EXPOSE 3000
CMD ["npm", "start"]
What each line does:
- `FROM node:20-alpine`: lightweight Linux base image with Node pre-installed
- `WORKDIR /app`: sets the working directory inside the container
- `COPY package.json` then `RUN npm install`: installs dependencies first (layer caching optimization)
- `COPY` remaining files: copies your app code
- `EXPOSE 3000`: documents the port your app listens on
- `CMD`: the command that runs when the container starts
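One small addition I'd suggest (it's not part of the original file list, so treat it as optional): a .dockerignore keeps things like `.git` and any local `node_modules` out of the build context, which speeds up builds and keeps the layer caching mentioned above effective. A minimal sketch:

```shell
# Hypothetical .dockerignore for this project (my addition, not required for the lab).
cat > .dockerignore <<'EOF'
.git
node_modules
*.md
EOF
cat .dockerignore
```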
One thing I find useful to emphasize: I didn't have Node.js installed locally when I built this. The container brings its own runtime. That's exactly the point of Docker: reproducible environments that eliminate "works on my machine" problems.
Build and test locally:
docker build -t cicd-demo-app .
docker run -d -p 3000:3000 --name cicd-demo cicd-demo-app
Open http://localhost:3000 and verify your app is running. Once confirmed, stop the container:
docker stop cicd-demo && docker rm cicd-demo
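If you'd rather script that manual check, here's a rough smoke test. The container name `cicd-demo-smoke` and the two-second sleep are my own choices, and the curl flags just print the HTTP status code; the whole thing is guarded so it only runs where Docker is available:

```shell
# Guarded local smoke test: start the container, check for HTTP 200, tear down.
URL="http://localhost:3000"
if command -v docker >/dev/null 2>&1; then
  docker run -d -p 3000:3000 --name cicd-demo-smoke cicd-demo-app
  sleep 2  # give the server a moment to start
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
  docker rm -f cicd-demo-smoke >/dev/null
  echo "HTTP $STATUS"  # expect 200 if the app is serving
fi
```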
Step 3: Set Up Azure Resources
Everything from here runs in Azure CLI. Start by setting some variables so you're not repeating yourself:
RG_NAME="rg-cicd-demo"
ACR_NAME="acrcicddemodemo" # Must be globally unique, lowercase only
LOCATION="eastus"
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
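A naming pattern worth internalizing early: the registry's login server is always `<name>.azurecr.io`, and full image references are built from it. Deriving these once avoids typos later. A sketch using the same variable names as above:

```shell
# Derive the registry login server and a full image reference from ACR_NAME.
ACR_NAME="acrcicddemodemo"
ACR_LOGIN_SERVER="${ACR_NAME}.azurecr.io"         # ACR's fixed naming pattern
IMAGE="${ACR_LOGIN_SERVER}/cicd-demo-app:latest"  # registry/repository:tag
echo "$IMAGE"
```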
Create the Resource Group and Container Registry:
az group create --name $RG_NAME --location $LOCATION
az acr create \
--resource-group $RG_NAME \
--name $ACR_NAME \
--sku Basic
ACR is essentially your private Docker Hub hosted inside Azure. Every image your pipeline builds gets stored here before it's deployed.
Create a Service Principal for GitHub Actions:
GitHub Actions needs an identity to authenticate with Azure and push images, create/update containers, etc. A Service Principal scoped to your resource group is the right approach here:
az ad sp create-for-rbac \
--name "sp-github-actions-cicd" \
--role contributor \
--scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RG_NAME
The output JSON contains three of the four values you'll need as GitHub secrets (the fourth is your subscription ID); keep them safe:
{
"appId": "...", → AZURE_CLIENT_ID
"password": "...", → AZURE_CLIENT_SECRET
"tenant": "..." → AZURE_TENANT_ID
}
And your AZURE_SUBSCRIPTION_ID is just your subscription ID from earlier.
Add secrets to your GitHub repository:
Go to your repo → Settings → Secrets and variables → Actions → New repository secret
Create these four secrets:
- AZURE_CLIENT_ID
- AZURE_CLIENT_SECRET
- AZURE_TENANT_ID
- AZURE_SUBSCRIPTION_ID
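If you prefer the terminal over clicking through Settings, the GitHub CLI can set these too. This is a sketch assuming `gh` is installed, authenticated, and run from inside your repo clone; it prompts you to paste each value:

```shell
# Set the four repository secrets via the GitHub CLI (runs only if gh is present).
SECRETS="AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID AZURE_SUBSCRIPTION_ID"
if command -v gh >/dev/null 2>&1; then
  for NAME in $SECRETS; do
    # With no --body flag, gh prompts interactively; paste the matching SP output value.
    gh secret set "$NAME"
  done
fi
echo "$SECRETS"
```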
This is critical for security. We never want credentials hardcoded in workflow files, especially in a public repo. GitHub Secrets are encrypted at rest and only exposed to workflow runs.
Step 4: The GitHub Actions Workflow
Create the file .github/workflows/deploy.yml in your repository:
name: Build and Deploy to Azure
on:
push:
branches:
- main
workflow_dispatch:
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Azure login
uses: azure/login@v2
with:
creds: |
{
"clientId": "${{ secrets.AZURE_CLIENT_ID }}",
"clientSecret": "${{ secrets.AZURE_CLIENT_SECRET }}",
"subscriptionId": "${{ secrets.AZURE_SUBSCRIPTION_ID }}",
"tenantId": "${{ secrets.AZURE_TENANT_ID }}"
}
- name: Login to Azure Container Registry
run: az acr login --name acrcicddemodemo
- name: Build and push image to ACR
run: |
az acr build \
--registry acrcicddemodemo \
--image cicd-demo-app:${{ github.sha }} \
--image cicd-demo-app:latest \
--file Dockerfile \
.
- name: Deploy to Azure Container Instances
run: |
az container create \
--resource-group rg-cicd-demo \
--name cicd-demo-app \
--image acrcicddemodemo.azurecr.io/cicd-demo-app:latest \
--registry-login-server acrcicddemodemo.azurecr.io \
--registry-username ${{ secrets.AZURE_CLIENT_ID }} \
--registry-password ${{ secrets.AZURE_CLIENT_SECRET }} \
--dns-name-label cicd-demo-app \
--ports 80 \
--os-type Linux \
--cpu 1 \
--memory 1 \
--environment-variables PORT=80
Breaking down what happens on each run:
| Step | What it does |
|---|---|
| `checkout` | Pulls the latest code from your repo |
| `azure/login` | Authenticates to Azure using the Service Principal |
| `acr login` | Authenticates to your private container registry |
| `acr build` | Builds the Docker image in Azure (not locally) and stores it in ACR with two tags: the commit SHA and `latest` |
| `container create` | Creates or updates the ACI container group with the new image |
A couple of things worth highlighting:
- `az acr build` runs the Docker build on Azure's infrastructure, not on the GitHub runner. This means your GitHub runner doesn't need Docker installed, and you avoid pushing large images over the network.
- Tagging with `${{ github.sha }}` gives you immutable image versions per commit: you can always roll back to a specific SHA if something breaks.
- `workflow_dispatch` lets you trigger the pipeline manually from the GitHub UI without needing to push code. Useful for re-deploys or debugging.
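The SHA-tagging point above is worth making concrete: because every commit produced a SHA-tagged image, rolling back is just redeploying an older tag. A sketch using the same resource names as the workflow; the SHA here is a made-up example, so substitute a real commit from your history (and add the `--registry-*` flags from the workflow, since the registry is private):

```shell
# Roll back ACI to a previous commit's image (GOOD_SHA is an example placeholder).
GOOD_SHA="abc1234def5678"
IMAGE="acrcicddemodemo.azurecr.io/cicd-demo-app:${GOOD_SHA}"
if command -v az >/dev/null 2>&1; then
  az container create \
    --resource-group rg-cicd-demo \
    --name cicd-demo-app \
    --image "$IMAGE" \
    --dns-name-label cicd-demo-app \
    --ports 80 --os-type Linux --cpu 1 --memory 1 \
    --environment-variables PORT=80
fi
echo "$IMAGE"
```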
Step 5: Push and Watch It Run
Commit everything and push to main:
git add .
git commit -m "Initial deployment: v1.0.0"
git push origin main
Head to your repository on GitHub → Actions tab. You'll see the workflow kick off automatically. Each step will show its status in real time.
Once the workflow completes (usually 2–4 minutes), you can find your container's public URL in the Azure Portal under your resource group, or via CLI:
az container show \
--resource-group rg-cicd-demo \
--name cicd-demo-app \
--query ipAddress.fqdn \
-o tsv
The URL will look something like cicd-demo-app.eastus.azurecontainer.io. One important note: ACI by default only exposes HTTP on port 80, but modern browsers may try to force HTTPS. If you get a connection error, explicitly use http:// in the URL.
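The FQDN follows a predictable pattern, `<dns-name-label>.<region>.azurecontainer.io`, so you can construct the URL without querying Azure at all. A sketch with the values used in this lab:

```shell
# Construct the public URL from the DNS label and region used in this lab.
DNS_LABEL="cicd-demo-app"
LOCATION="eastus"
FQDN="${DNS_LABEL}.${LOCATION}.azurecontainer.io"
echo "http://${FQDN}"  # plain http: ACI doesn't terminate TLS for you
```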
Step 6: Test the Continuous Deployment
This is the part that usually gets a reaction from people who haven't seen CI/CD before.
Open index.html and change Version: 1.0.0 to Version: 2.0.0. Then:
git add .
git commit -m "Bump to v2.0.0"
git push
Watch the Actions tab. The pipeline triggers again automatically, rebuilds the image with the new code, and updates the running container. No FTP, no SSH, no manual steps. The container updates itself.
If you're impatient and want to see the change immediately without waiting for the browser cache or Azure DNS to update:
az container restart \
--name cicd-demo-app \
--resource-group rg-cicd-demo
Cleanup
Don't forget to clean up your Azure resources when you're done to avoid unexpected charges:
az group delete --name rg-cicd-demo --yes --no-wait
This removes the resource group, ACR, and ACI in one shot.
What's Next From Here
This lab uses ACI, which is great for experimentation and internal tooling, but has real limitations for production workloads (no auto-scaling, limited health check options, etc.).
Here's how I'd think about the progression:
- Azure Container Apps if you need autoscaling and KEDA-based event-driven scaling without managing a cluster
- Azure Kubernetes Service (AKS) when you're running microservices, need fine-grained networking, or require advanced deployment strategies (blue/green, canary)
- Azure App Service if you just need a web app and don't want to think about containers at all
The CI/CD workflow pattern we built here translates directly to all three. The az container create command becomes az containerapp update, kubectl apply, or az webapp deploy; the GitHub Actions structure stays the same.
Key Takeaways
Working through this lab cements a few things that are easy to miss when you're reading about DevOps in the abstract:
Containers solve environment parity. The app runs identically on your laptop, the GitHub runner, and Azure because it carries its own runtime. The classic "works on my machine" problem disappears.
CI/CD is just automation with a trigger. Once you see it work (push code, pipeline runs, container updates), it's hard to go back to manual deployments.
Infrastructure as code from day one. The workflow YAML lives in your repository. Anyone can clone the repo and reproduce the exact same pipeline. That's the difference between a one-off script and an actual engineering practice.
Secrets management matters. Even in a simple demo, keeping credentials out of code and using GitHub Secrets is non-negotiable, especially with public repositories.
If you have questions or want to extend this (adding health checks, multi-environment pipelines, or integrating with Key Vault for secrets), drop a comment below. Happy to dig into any of it.