Ebelechukwu Lucy Okafor
How I Built a Two-Pipeline Enterprise CI/CD System on Azure DevOps EpicBook Deployment

Introduction

This week, I completed what has been the most challenging and rewarding assignment of my DevOps learning journey: deploying the EpicBook online bookstore application using a two-repository, two-pipeline enterprise architecture on Azure DevOps.
No tutorials. No hand-holding. Just a goal, a set of tools, and a lot of errors to fix.
Here is the full story of what I built, what broke, and what I learned.

What Is the Two-Pipeline Model?
In enterprise DevOps, infrastructure code and application code are kept in separate repositories and deployed by separate pipelines. This mirrors how real teams work; infrastructure engineers and developers operate independently.

Repository 1: infra-epicbook → Terraform IaC
Repository 2: theepicbook → Node.js app + Ansible playbooks

Pipeline 1: Infra Pipeline → provisions Azure resources
Pipeline 2: App Pipeline → configures servers + deploys app

Pipeline 1 runs first and produces outputs (VM IP, MySQL FQDN) that Pipeline 2 consumes. This handoff is automated through Azure DevOps pipeline artifacts.
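A minimal sketch of that handoff, using the standard Azure DevOps artifact tasks (the artifact name `infra-outputs` and the file names are my illustrative choices, not necessarily what my repos use):

```yaml
# Pipeline 1: capture Terraform outputs and publish them as an artifact
- script: |
    terraform output -raw public_ip > $(Build.ArtifactStagingDirectory)/vm_ip.txt
    terraform output -raw db_host   > $(Build.ArtifactStagingDirectory)/db_host.txt
  displayName: 'Capture Terraform outputs'

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: $(Build.ArtifactStagingDirectory)
    artifact: 'infra-outputs'

# Pipeline 2: download the artifact from the infra pipeline's latest run
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: '$(System.TeamProject)'
    pipeline: 'infra-epicbook'        # name/ID of Pipeline 1 (placeholder)
    artifact: 'infra-outputs'
    path: $(Pipeline.Workspace)/infra
```

Pipeline 2 can then read the values with something like `VM_IP=$(cat $(Pipeline.Workspace)/infra/vm_ip.txt)`.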
The Full Stack

| Layer | Tool |
| --- | --- |
| Cloud | Microsoft Azure — South Africa North |
| IaC | Terraform v1.8.5 (azurerm ~3.100) |
| Config | Ansible |
| CI/CD | Azure DevOps — self-hosted agent (WSL) |
| Authentication | Azure Service Principal (SPN) |
| App | Node.js 18 + Express + Sequelize ORM |
| Database | Azure MySQL Flexible Server 8.0 |
| Web server | Nginx — reverse proxy on port 80 |
| Process manager | PM2 — keeps Node.js alive |
| OS | Ubuntu 22.04 LTS — Standard_D2s_v3 |
| Live URL | http://20.164.211.94 |

Architecture Overview
```
Developer → git push to main

Azure DevOps
┌─────────────────────────────────────────┐
│ Pipeline 1 (Terraform)                  │
│ terraform init → plan → apply           │
│ → outputs: VM IP + MySQL FQDN           │
│                                         │
│ Pipeline 2 (Ansible)                    │
│ install Node.js + PM2 + Nginx           │
│ deploy EpicBook → configure MySQL       │
└─────────────────────────────────────────┘

Azure Virtual Network (10.0.0.0/16)
┌─────────────────────┬───────────────────┐
│ Public Subnet       │ Private Subnet    │
│ 10.0.1.0/24         │ 10.0.2.0/24       │
│                     │                   │
│ epicbook-prod-vm    │ MySQL Flexible    │
│ Ubuntu 22.04        │ Server 8.0        │
│ Nginx :80           │ port 3306         │
│ Node.js :8080       │ VNet only         │
│ PM2                 │ SSL required      │
└─────────────────────┴───────────────────┘

Browser → http://20.164.211.94
```

Step 1: Service Principal (SPN) Setup
Terraform needs permission to create Azure resources. I created an App Registration in Azure AD and assigned it the Contributor role on my subscription.
```shell
az ad sp create-for-rbac \
  --name "epicbook-devops-spn" \
  --role Contributor \
  --scopes /subscriptions/$(az account show --query id -o tsv) \
  --output json
```

This outputs four credentials: Client ID, Client Secret, Tenant ID, and Subscription ID. These go into an Azure DevOps service connection named azure-devops-connection.
Key lesson: The Client Secret is shown exactly once. Copy it immediately before closing the page.

Step 2: Terraform Remote State Backend
For pipeline-based Terraform, the state file must live in Azure Blob Storage so the agent can access it between runs.
```shell
az storage account create \
  --name epicbooklucytfstate01 \
  --resource-group rg-epicbook \
  --location southafricanorth \
  --sku Standard_LRS

az storage container create \
  --name tfstate \
  --account-name epicbooklucytfstate01 \
  --auth-mode login
```

The backend.tf configuration:
```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-epicbook"
    storage_account_name = "epicbooklucytfstate01"
    container_name       = "tfstate"
    key                  = "epicbook.terraform.tfstate"
  }
}
```

Critical lesson: You cannot use variables in a Terraform backend block. Pass the storage account key as the ARM_ACCESS_KEY environment variable instead.

Step 3: Infrastructure Pipeline (Terraform)

The Infra pipeline provisions 13 Azure resources in one run:
Resource Group
Virtual Network (10.0.0.0/16)
Public subnet (10.0.1.0/24) + NSG (ports 22 and 80)
Private subnet (10.0.2.0/24) + NSG (port 3306 from VNet only)
Static Public IP (Standard SKU)
Network Interface
Private DNS Zone (linked to VNet)
Linux VM (Ubuntu 22.04, Standard_D2s_v3)
MySQL Flexible Server 8.0
MySQL database (bookstore)
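As one illustrative fragment, the private-subnet NSG rule that restricts MySQL to VNet traffic could look like this in the azurerm provider (the resource references `azurerm_resource_group.main` and `azurerm_network_security_group.private` are my placeholders, not necessarily the names in my repo):

```hcl
resource "azurerm_network_security_rule" "mysql_from_vnet" {
  name                        = "allow-mysql-from-vnet"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "3306"
  source_address_prefix       = "VirtualNetwork" # only traffic originating inside the VNet
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.main.name
  network_security_group_name = azurerm_network_security_group.private.name
}
```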

The pipeline YAML authenticates using AzureCLI@2 with addSpnToEnvironment: true, which injects the SPN credentials automatically:

```yaml
- task: AzureCLI@2
  displayName: 'Terraform Apply'
  inputs:
    azureSubscription: 'azure-devops-connection'
    addSpnToEnvironment: true
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      export ARM_CLIENT_ID="$servicePrincipalId"
      export ARM_CLIENT_SECRET="$servicePrincipalKey"
      export ARM_TENANT_ID="$tenantId"
      export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv | tr -d '\r')
      export ARM_ACCESS_KEY="$STORAGE_KEY"
      terraform -chdir="$(Build.SourcesDirectory)" apply -auto-approve tfplan
  env:
    STORAGE_KEY: $(storage_account_key)
```

After applying, the pipeline captures and publishes the outputs:
```
public_ip = 20.164.211.94
db_host   = epicbook-prod-mysql-nrbhh4.mysql.database.azure.com
```

Step 4: Application Pipeline (Ansible)
The App pipeline connects to the VM over SSH using a private key stored in Azure DevOps Secure Files, then runs an Ansible playbook that:
Installs Node.js 18, PM2, and Nginx
Clones the EpicBook repository
Installs npm dependencies
Creates .env and config/config.json with MySQL connection details
Starts the app with PM2
Configures Nginx as a reverse proxy to port 8080
Verifies the app responds with HTTP 200
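A condensed sketch of what a few of those plays can look like (module names are standard Ansible; the paths and the host group `web` are illustrative, and `mysql_host` matches the pipeline's extra-vars):

```yaml
- name: Configure and deploy EpicBook
  hosts: web
  become: true
  tasks:
    - name: Install Nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Render .env with MySQL connection details
      ansible.builtin.template:
        src: templates/env.j2
        dest: /home/azureuser/theepicbook/.env

    - name: Start the app under PM2
      ansible.builtin.command: pm2 start server.js --name epicbook
      args:
        chdir: /home/azureuser/theepicbook
```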

```yaml
- script: |
    ansible-playbook \
      -i inventory.ini \
      ansible/site.yml \
      --extra-vars "mysql_host=$(MYSQL_FQDN) db_name=bookstore db_user=epicbook_user" \
      -v
  displayName: 'Run Ansible playbook'
```

The Errors I Hit (And Fixed)
This deployment took two weeks. Here is what broke and how I fixed it.
Error 1: SPN Cannot Read Subscription (403 Forbidden)
The SPN was missing the Reader role. Fixed by assigning both Contributor and Reader.

Error 2: Variables Not Allowed in Backend Block
```
Error: Variables not allowed
on backend.tf line 7: access_key = var.storage_account_key
```

Terraform evaluates the backend before loading variables. Removed the access_key line and passed it via ARM_ACCESS_KEY environment variable instead.

Error 3: Carriage Return in Subscription ID (400 Bad Request)
The Windows build of the Azure CLI adds \r to command output. This corrupted the subscription ID in the request URL:

```
parse "https://management.azure.com/subscriptions/a75e93ba\r/providers"
```

Fixed by piping through tr -d '\r':

```shell
export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv | tr -d '\r')
```
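You can reproduce the symptom locally without touching Azure. This sketch simulates CRLF-contaminated CLI output (using the truncated subscription ID from the error above as sample data) and shows the fix:

```shell
# Simulate az CLI output on Windows: a trailing carriage return.
# Note: command substitution strips trailing newlines, but NOT a trailing \r.
raw=$(printf 'a75e93ba\r')

# Strip the carriage return before embedding the value in a URL
clean=$(printf '%s' "$raw" | tr -d '\r')

echo "https://management.azure.com/subscriptions/${clean}/providers"
# → https://management.azure.com/subscriptions/a75e93ba/providers
```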

Error 4: VM Size Quota Exceeded
Standard_B2ats_v2 has zero quota in South Africa North. Checked available sizes:

```shell
az vm list-skus --location southafricanorth --size Standard_D2s --output table
```

Switched to Standard_D2s_v3, which had no restrictions.

Error 5: App Connects to localhost Instead of MySQL

```
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
```

The EpicBook app reads database config from config/config.json, not .env. The file had host: 127.0.0.1 hardcoded. Updated it with the actual MySQL FQDN and SSL settings.
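A sketch of what the corrected production block in config/config.json can look like (the key names follow Sequelize's standard per-environment config format; the password placeholder is illustrative and should come from a secret store, not the file):

```json
{
  "production": {
    "username": "epicbook_user",
    "password": "<from-variable-group>",
    "database": "bookstore",
    "host": "epicbook-prod-mysql-nrbhh4.mysql.database.azure.com",
    "dialect": "mysql",
    "dialectOptions": {
      "ssl": { "require": true }
    }
  }
}
```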
Error 6: 502 Bad Gateway
The app listens on port 8080 by default (process.env.PORT || 8080), but Nginx was proxying to port 3000. Fixed by checking the actual port in server.js (note the -E flag, which plain grep needs for the | alternation):

```shell
grep -nE "port|PORT|listen" ~/theepicbook/server.js
```

```js
const PORT = process.env.PORT || 8080;
```

Then updated both .env and the Nginx config to use port 8080.
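The corrected reverse-proxy block looks roughly like this (a minimal sketch; the server_name and proxy headers are typical choices, not copied from my actual config):

```nginx
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;  # must match the app's actual port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```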

Key Lessons Learned

  1. Always check what port your app actually uses before configuring Nginx. Never assume; grep the source code.
  2. Terraform backend blocks are special. They are evaluated before any variable loading. Credentials must come from environment variables, not var. references.
  3. WSL + Azure CLI adds \r to output. Always pipe through tr -d '\r' when using az CLI output in bash scripts that go into URLs or environment variables.
  4. Azure VM size quotas vary by region. Have fallback sizes ready. Standard_D2s_v3 is reliably available across most regions.
  5. Check where your app actually reads config. The EpicBook app ignored .env for database settings and read from config/config.json instead. Always check the source code.
  6. Private MySQL = no public endpoint. The MySQL Flexible Server in the private subnet has no public IP; it is only reachable from within the VNet. This is production-grade security done right.
  7. PM2 is non-negotiable for Node.js in production. Without it, the app dies the moment your Ansible playbook finishes.

What I Would Do Differently
Add an admin_ssh_key block to Terraform from the start instead of manually pushing keys via az vm run-command
Use Terraform workspaces for environment separation (dev/staging/prod)
Store db_password in Azure Key Vault and reference it from the variable group
Add a health check endpoint to the app for proper pipeline verification
Set up a PM2 startup script so the app survives VM reboots automatically
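For the reboot-survival point, the usual PM2 sequence is the following (a sketch, run as the deploy user on the VM; `pm2 startup` prints a system-specific command you then run with sudo):

```
pm2 start server.js --name epicbook   # launch the app under PM2
pm2 startup systemd                   # print the command that registers PM2 as a systemd service
pm2 save                              # persist the current process list for restore on boot
```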

The Result
After two weeks, two pipelines, thirteen Azure resources, and six documented errors, EpicBook is live.
The full architecture (Terraform provisioning infrastructure, Ansible configuring servers, Node.js serving the application, MySQL storing data, Nginx proxying requests, PM2 keeping everything running) is automated end-to-end by Azure DevOps on every commit to main.
That is CI/CD done the enterprise way.
Resources
Azure DevOps documentation: docs.microsoft.com/azure/devops
Terraform azurerm provider: registry.terraform.io/providers/hashicorp/azurerm
Ansible documentation: docs.ansible.com
PM2 documentation: pm2.keymetrics.io
EpicBook app: github.com/pravinmishraaws/theepicbook
