Anthony Uketui

From Zero to CI/CD: How I Built and Deployed a Containerized App to Azure

Setting up a cloud pipeline often involves connecting several moving parts that must work together perfectly. I recently completed a project where I moved a Python application from a local development environment to a professional cloud hosting setup. The goal was to build a system that was secure, automated, and reliable. Here is a breakdown of how I built it, the technical decisions I made, and the lessons I learned.


The Project Overview

I built a web application using the FastAPI framework and hosted it on Azure Container Apps. This service allows the application to run in a managed environment that handles networking and scaling automatically.

Technical Features:

  • Docker Containerization: The application is packaged into a standardized unit that includes everything it needs to run.
  • Infrastructure as Code: I used Terraform scripts to create all the Azure resources instead of setting them up manually in a browser.
  • Secure CI/CD: I configured a GitHub Actions pipeline that uses OpenID Connect (OIDC) for authentication, which eliminates the need for long-term passwords.
  • Custom Interface: The application features a black and gold theme for a professional look.

Live URL: lab1-anthony.ashyocean-d74db9c0.westus2.azurecontainerapps.io

Note: The live URL may be unavailable if infrastructure has been destroyed to avoid ongoing Azure charges. The entire setup can be recreated with a single terraform apply command.


The Technology Stack

| Tool | Purpose |
| --- | --- |
| FastAPI | The web framework used to build the application. |
| Uvicorn | The ASGI server that handles incoming HTTP requests and passes them to FastAPI. |
| Docker | Used to create a consistent environment for the code. |
| Azure (ACR/ACA) | Stores the application images (ACR) and hosts the live app (ACA). |
| Terraform | The tool used to define and deploy the cloud infrastructure. |
| GitHub Actions | The automation tool that deploys code updates. |

Step 1: Application Architecture

I designed the application to be configuration-driven. This means it does not have hardcoded names or settings. When the app starts, it retrieves its settings from environment variables provided by the host.

```python
import os

def get_app_config() -> dict[str, str]:
    # Each setting is read from the environment, with a placeholder fallback.
    return {
        "app_name": os.getenv("APP_NAME", "Cloud Lab Starter App"),
        "intern_name": os.getenv("INTERN_NAME", "Replace Me"),
        "cloud_platform": os.getenv("CLOUD_PLATFORM", "Replace Me"),
    }
```

Each variable has a fallback default value. If someone clones the repository and runs the application without configuring anything, it still works — it simply displays placeholder text as a hint to update the values.

This approach allows the same code to run in a local test environment or a production cloud environment without modification. For local testing, I used a .env file with python-dotenv that is excluded from the code repository to keep local settings private.
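The local setup can be sketched like this (assuming python-dotenv is installed for development; the try/except keeps the dependency optional, which is my illustration rather than the project's exact code):

```python
import os

# Locally, python-dotenv loads variables from a git-ignored .env file.
# In the cloud, the platform injects them, so the import stays optional.
try:
    from dotenv import load_dotenv
    load_dotenv()  # reads .env from the working directory, if present
except ImportError:
    pass

APP_NAME = os.getenv("APP_NAME", "Cloud Lab Starter App")
```

Either way, the application code only ever sees `os.getenv`, so it cannot tell the two environments apart.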


Step 2: Building the Docker Image

I used Docker to package the application. To ensure the image was secure and efficient, I followed three main practices:

  1. Non-Root User: By default, the process inside a Docker container runs as root. I created a dedicated "appuser" with limited permissions and no login shell (/bin/false) to run the application. Even if the app is compromised, an attacker cannot open an interactive session and has very limited access to the system.

  2. Layer Optimization: I structured the Dockerfile to install dependencies before copying the application code. Docker caches each completed layer, so when only the application code changes, the dependency-install layer is reused and just the copy step reruns. This reduces build times from minutes to seconds.

  3. Clean Image: The .env file is not copied into the Docker image. All configuration is injected at runtime by the hosting platform. This means the image contains zero sensitive information and can be safely reused across different environments.

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app ./app

# Run as a non-root user with no login shell.
RUN useradd --no-create-home -s /bin/false appuser
USER appuser

EXPOSE 8000
# --proxy-headers makes Uvicorn trust the X-Forwarded-* headers
# set by Azure's load balancer (see Step 4).
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--proxy-headers", "--forwarded-allow-ips", "*"]
```

Platform Compatibility

One early lesson was about CPU architecture. My development machine is an Apple Silicon Mac, which builds ARM images by default, while Azure Container Apps runs containers on the linux/amd64 architecture. Without specifying the target platform during the build, the container fails to run on Azure. The fix was straightforward:

```shell
docker build --platform linux/amd64 -t lab1-starter-app .
```

Step 3: Pushing to Azure Container Registry

Azure Container Registry (ACR) is a private image storage service. Before the application can be deployed, the Docker image must be uploaded to ACR so Azure can pull it.

```shell
az acr login --name cogneticsregistry
docker tag lab1-starter-app cogneticsregistry.azurecr.io/lab1-starter-app:v1.4
docker push cogneticsregistry.azurecr.io/lab1-starter-app:v1.4
```

The docker tag step is essential. It embeds the registry address into the image name so Docker knows where to send it.

Registry Permissions

I initially created the ACR with the "ABAC Repository Permissions" mode. This caused persistent authorization failures even after assigning the correct roles. The solution was to recreate the registry with standard RBAC mode, which provided the necessary permissions for image pushes without overcomplicating access control.


Step 4: Infrastructure as Code with Terraform

Instead of manually creating resources in the Azure Portal, I used Terraform. This tool uses code to describe the exact setup required.

Resources Created

| Resource | Purpose |
| --- | --- |
| Resource Group | The container that holds all Azure resources for the project. |
| Azure Container Registry | Stores the Docker images, with admin access enabled. |
| Log Analytics Workspace | A central place to store and view application logs. |
| Container App Environment | The surrounding network and security boundary. |
| Container App | The instance where the code runs, pulling its image from ACR. |

All resources are fully managed by Terraform. One terraform apply creates everything, and one terraform destroy removes everything.
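The Container App itself takes only a few lines of HCL. This is an illustrative sketch rather than the project's actual file — the resource names and sizing values here are my assumptions:

```hcl
resource "azurerm_container_app" "lab1" {
  name                         = "lab1-anthony"
  resource_group_name          = azurerm_resource_group.main.name
  container_app_environment_id = azurerm_container_app_environment.main.id
  revision_mode                = "Single"

  template {
    container {
      name   = "lab1-starter-app"
      image  = "${azurerm_container_registry.main.login_server}/lab1-starter-app:v1.4"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }

  # External ingress routes public HTTPS traffic to the container's port 8000.
  ingress {
    external_enabled = true
    target_port      = 8000
    traffic_weight {
      latest_revision = true
      percentage      = 100
    }
  }
}
```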

Configuration Management

Sensitive and personal values are stored in a terraform.tfvars file that is excluded from the repository. The Terraform code itself uses variable references to keep the deployment clean:

```hcl
env {
  name  = "INTERN_NAME"
  value = var.intern_name
}
```
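The variable itself is declared without a default, so Terraform refuses to run until terraform.tfvars (or a -var flag) supplies a value. A minimal sketch:

```hcl
# variables.tf — no default, so the value must come from outside the repo
variable "intern_name" {
  type        = string
  description = "Display name shown in the app; supplied via terraform.tfvars"
}

# terraform.tfvars (git-ignored)
# intern_name = "Your Name Here"
```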

The HTTPS Proxy Issue

During this step, I encountered an issue where the application's CSS would not load. Azure Container Apps terminates HTTPS at the load balancer and forwards plain HTTP to the container. FastAPI was generating http:// URLs for static files, which the browser blocked as mixed content.

The fix was adding --proxy-headers and --forwarded-allow-ips * to the Uvicorn startup command. This tells the server to trust the X-Forwarded-* headers set by Azure's load balancer, so it generates correct https:// URLs.
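Conceptually, the flag works like this — a simplified, stdlib-only illustration of the proxy-headers logic, not Uvicorn's actual code:

```python
def effective_scheme(headers: dict[str, str], from_trusted_proxy: bool) -> str:
    # If the request arrived via a trusted proxy, believe its
    # X-Forwarded-Proto header; otherwise keep the raw connection's scheme.
    if from_trusted_proxy and "x-forwarded-proto" in headers:
        return headers["x-forwarded-proto"]
    return "http"

# Azure's load balancer terminates TLS and forwards plain HTTP,
# but sets X-Forwarded-Proto so the app can build https:// URLs.
print(effective_scheme({"x-forwarded-proto": "https"}, from_trusted_proxy=True))
```

Without the flags, the server sits in the third case — plain HTTP with no trusted proxy — which is exactly why the generated URLs came out as http://.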


Step 5: Automated Deployment with OIDC

I set up a GitHub Actions pipeline so that every time I push code to the repository, the application is automatically rebuilt and deployed.

Why OIDC Over Stored Credentials

To make this secure, I used OpenID Connect (OIDC). Traditionally, you would store a client secret in GitHub. With OIDC, GitHub and Azure establish a trust relationship. When a deployment starts, Azure verifies the identity of the GitHub repository and issues a temporary token. This removes the risk of a permanent password being stolen or leaked.
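The relevant section of the workflow looks roughly like this (a sketch — the secret names are illustrative, and with OIDC they hold identifiers, not passwords):

```yaml
permissions:
  id-token: write   # allow the job to request an OIDC token from GitHub
  contents: read

steps:
  - uses: actions/checkout@v4
  - name: Log in to Azure via OIDC
    uses: azure/login@v2
    with:
      client-id: ${{ secrets.AZURE_CLIENT_ID }}
      tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
```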

Image Tagging Strategy

Instead of manual version tags, the pipeline tags each image with the git commit SHA. This means every deployed version is directly traceable to an exact commit in the code history.
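In the workflow, that tagging step looks something like this (the registry name is taken from the earlier steps; the step itself is a sketch):

```yaml
  - name: Build and push image tagged with the commit SHA
    run: |
      IMAGE=cogneticsregistry.azurecr.io/lab1-starter-app:${{ github.sha }}
      docker build --platform linux/amd64 -t "$IMAGE" .
      docker push "$IMAGE"
```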


Problems Encountered and Solutions

| Problem | Cause | Solution |
| --- | --- | --- |
| `docker push` failed | ACR was in ABAC mode | Switched to standard RBAC mode |
| Architecture mismatch | Built for ARM (Mac) instead of AMD64 | Used `--platform linux/amd64` in the build |
| CSS not loading | HTTPS termination at the load balancer | Added `--proxy-headers` to Uvicorn |
| MissingSubscriptionRegistration | Provider not registered | Ran `az provider register` |

Security and Operational Summary

| Practice | Benefit |
| --- | --- |
| Restricted User Account | Protects the server by limiting what the app can do. |
| OIDC Authentication | Replaces permanent passwords with temporary tokens. |
| Infrastructure as Code | Allows for perfectly repeatable deployments. |
| Commit SHA Tagging | Provides a clear audit trail for every deployment. |

What I Would Do Differently in Production

  • Remote Terraform State — Store the state file in Azure Blob Storage for team collaboration.
  • Separate Environments — Maintain distinct dev, staging, and production configurations.
  • Azure Key Vault — Store sensitive credentials in a managed vault rather than variables.
  • Image Scanning — Use security tools like Trivy to check for vulnerabilities before deploying.
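For the remote state in particular, the change is a small backend block (the names here are placeholders; the storage account would need to exist first):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestore"   # must be globally unique
    container_name       = "tfstate"
    key                  = "lab1.terraform.tfstate"
  }
}
```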

Conclusion

This project demonstrates how to move beyond simply "making an app work" to building a professional deployment system. By focusing on automation and security from the start, I created a workflow that is easy to manage and resistant to common security threats.

View the Source Code

🔗 GitHub Repository

