
Rahul Singh

Originally published at aicodereview.cc

Azure DevOps Pipelines: Complete CI/CD Guide (2026)

What are Azure DevOps Pipelines

Azure DevOps Pipelines is Microsoft's CI/CD service for automating the build, test, and deployment of code across any language, platform, and cloud. It is part of the broader Azure DevOps suite alongside Azure Repos, Azure Boards, Azure Test Plans, and Azure Artifacts. Whether you are deploying a Node.js web application to Azure App Service, a Python API to Kubernetes, or a .NET microservices architecture to AWS, Azure Pipelines handles the automation from commit to production.

The service has evolved significantly since its origins as Team Foundation Build. Today it supports two pipeline authoring models: YAML pipelines defined as code in your repository, and Classic pipelines built through a visual editor in the Azure DevOps web interface. YAML pipelines are the strategic direction -- Microsoft has been investing all new features exclusively in the YAML model since 2023, and Classic pipelines are in maintenance mode.

Azure Pipelines runs on agents -- compute environments that execute your pipeline steps. You can use Microsoft-hosted agents (managed VMs with pre-installed tooling for Windows, Linux, and macOS) or self-hosted agents (your own machines, VMs, or containers). Microsoft-hosted agents are zero-maintenance but have fixed specifications and time limits. Self-hosted agents give you full control over the environment but require you to manage updates, scaling, and security.

Key concepts

Before writing your first pipeline, you need to understand the hierarchy of concepts that Azure Pipelines uses:

Pipeline is the top-level definition. It defines the entire CI/CD process -- what triggers it, what agents to use, and what work to perform. A YAML pipeline lives in a file (typically azure-pipelines.yml) at the root of your repository.

Stages are logical divisions of the pipeline. A typical pipeline has stages like Build, Test, and Deploy. Stages run sequentially by default, but you can configure them to run in parallel or with dependencies. Each stage contains one or more jobs.

Jobs are units of work that run on a single agent. All steps within a job execute on the same machine, which means they share the file system and environment variables. Jobs within a stage can run in parallel across different agents.
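Parallel jobs within a stage are most commonly expressed with a matrix strategy, which fans a single job definition out across multiple agents. A minimal sketch (the `npm test` command stands in for whatever your test step is):

```yaml
jobs:
  - job: Test
    strategy:
      matrix:
        linux:
          imageName: 'ubuntu-latest'
        windows:
          imageName: 'windows-latest'
      maxParallel: 2
    pool:
      vmImage: $(imageName)
    steps:
      - script: npm test
        displayName: 'Run tests on $(Agent.OS)'
```

Each matrix entry produces a separate job running on its own agent, with the entry's variables (here `imageName`) injected into that job.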

Steps are the individual actions within a job. A step is either a task (a pre-packaged unit of functionality from the marketplace) or a script (inline bash, PowerShell, or command-line commands). Steps run sequentially within a job.

Tasks are reusable building blocks published to the Visual Studio Marketplace. There are over 1,000 tasks available, covering everything from building Docker images to deploying Azure resources to running static analysis tools. Each task has versioned inputs and outputs, making them predictable and composable.

Triggers define when a pipeline runs. Push triggers start the pipeline when code is pushed to specific branches. PR triggers start the pipeline when a pull request is created or updated. Scheduled triggers run pipelines on a cron schedule. Pipeline triggers start one pipeline when another completes.
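Scheduled and pipeline-completion triggers are not shown in the examples later in this guide; here is a minimal sketch of both (the pipeline name `upstream-build` and alias `upstream` are placeholders):

```yaml
# Cron schedule: weekdays at 03:00 UTC; skip if main has no new changes
schedules:
  - cron: '0 3 * * 1-5'
    displayName: 'Nightly build'
    branches:
      include:
        - main
    always: false

# Run when another pipeline in the same project completes
resources:
  pipelines:
    - pipeline: upstream        # alias used to reference artifacts in this pipeline
      source: 'upstream-build'  # name of the triggering pipeline
      trigger:
        branches:
          include:
            - main
```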

Variables store values that you reference throughout the pipeline. They can be defined inline in the YAML, in variable groups shared across pipelines, or linked from Azure Key Vault for secrets. Variables support runtime expressions, compile-time expressions, and macro expansion.

Service connections are authenticated links to external services -- Azure subscriptions, Docker registries, Kubernetes clusters, SSH endpoints, NuGet feeds, and more. They centralize credential management so individual pipelines never contain secrets directly.

Microsoft-hosted vs self-hosted agents

The choice between Microsoft-hosted and self-hosted agents is one of the first decisions you will make, and it has implications for cost, performance, and security.

Microsoft-hosted agents are fresh VMs provisioned for each job and destroyed afterward. They come with a comprehensive set of pre-installed software (Node.js, Python, .NET, Java, Docker, kubectl, and dozens more). You get a clean environment every time, which eliminates "works on my machine" problems but means you cannot cache state between runs at the OS level (though Azure Pipelines has a built-in caching mechanism). Microsoft-hosted agents run on Azure infrastructure and are available for Windows (Server 2022 and 2019), Ubuntu (22.04 and 20.04), and macOS (13 and 14).

Self-hosted agents are machines you manage. They can be physical servers, VMs, containers, or even Raspberry Pis. The agent software is a lightweight process that polls Azure DevOps for work. Self-hosted agents retain their state between runs (including file system caches, installed tools, and build outputs), which can dramatically speed up builds. They are also free -- you pay nothing to Azure DevOps for self-hosted agent parallel jobs, only for the infrastructure itself.

For most teams, the best approach is to start with Microsoft-hosted agents for simplicity and add self-hosted agents when you need specialized hardware, persistent caches, or network access to on-premises resources.

YAML vs Classic pipelines

Azure DevOps supports two fundamentally different ways to author pipelines. Understanding the differences is critical for making the right choice and for planning migrations.

Feature comparison

| Feature | YAML Pipelines | Classic Pipelines |
| --- | --- | --- |
| Definition | Code in repository | Visual editor in browser |
| Version control | Yes, tracked with source | No, stored in Azure DevOps database |
| Code review | Yes, through pull requests | No |
| Templates | Yes, full template system | Limited task groups |
| Multi-stage | Yes, build and release in one file | Separate build and release definitions |
| Environments | Yes, with approvals and checks | Release gates (different model) |
| Pipeline as code | Yes | No |
| Branch-specific config | Yes, different YAML per branch | No, single definition for all branches |
| New features | All new features added here | Maintenance mode only |
| Learning curve | Moderate (requires YAML knowledge) | Low (visual drag and drop) |
| Debugging | Harder (YAML validation) | Easier (visual feedback) |

Why YAML is the future

Microsoft made the strategic decision to invest exclusively in YAML pipelines for all new capabilities. Features like pipeline templates, environment approvals, container jobs, and pipeline resources are YAML-only. Classic pipelines continue to work and are not being removed, but they will not receive new features.

The advantages of YAML pipelines compound over time:

Version control. Your pipeline definition lives alongside your application code. When you change your build process, that change is in the same commit as the code change that requires it. You can see the full history of pipeline changes in Git, revert to previous configurations, and understand exactly what pipeline ran for any given commit.

Code review. Pipeline changes go through your normal pull request process. A teammate can review a change to the build configuration the same way they review a change to the application code. This catches misconfigurations, security issues, and inefficiencies before they reach the main branch.

Templates. YAML pipelines support a powerful template system that lets you define reusable pieces of pipeline logic -- from individual steps to entire multi-stage pipelines -- and share them across repositories. This enables platform engineering teams to provide standardized CI/CD patterns that application teams extend.

Branch-specific configuration. The YAML file can differ between branches. Your main branch pipeline might include production deployment steps that do not exist in the feature branch YAML. This is impossible with Classic pipelines, which use a single definition for all branches.
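A related pattern within a single YAML file is conditional insertion with compile-time expressions, so a stage only exists when the pipeline is compiled for a given branch. A sketch (not from the original article):

```yaml
stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - script: echo "Building"

  # This stage is only inserted into the compiled pipeline on main
  - ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/main') }}:
    - stage: DeployProduction
      jobs:
        - job: Deploy
          steps:
            - script: echo "Deploying"
```

Unlike a runtime `condition:`, a compile-time `${{ if }}` removes the stage from the pipeline entirely, so it never appears as skipped in the run view.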

Migrating from Classic to YAML

If you have existing Classic pipelines, here is the pragmatic approach to migration:

  1. Export the Classic pipeline. Azure DevOps provides a "View YAML" option in the Classic editor that generates an approximate YAML equivalent. It is not perfect, but it gives you a starting point.

  2. Restructure the YAML. The exported YAML is typically a flat list of tasks. Organize it into stages (Build, Test, Deploy) and jobs (compile, unit-test, integration-test).

  3. Replace task groups with templates. If your Classic pipeline uses task groups, create YAML templates that provide the same functionality.

  4. Set up triggers. Classic build triggers need to be replicated as YAML trigger and PR trigger blocks.

  5. Migrate variables. Move Classic pipeline variables to YAML variable blocks or, better yet, variable groups that can be shared.

  6. Test in parallel. Run the new YAML pipeline alongside the Classic pipeline for several weeks before decommissioning the old one.

Getting started: your first pipeline

Let us build a pipeline from scratch. This section walks through creating a basic CI pipeline for a Node.js application, explaining each section of the YAML file.

Creating the pipeline

There are two ways to create a new YAML pipeline:

From the Azure DevOps web interface. Go to Pipelines, click "New Pipeline," select your repository source (Azure Repos Git, GitHub, Bitbucket, etc.), choose a starter template or start with an empty YAML file, and save.

By committing a YAML file directly. Create an azure-pipelines.yml file in the root of your repository and push it. Azure DevOps automatically detects the file and offers to create a pipeline from it.

azure-pipelines.yml basics

Here is a complete, production-ready pipeline for a Node.js application:

trigger:
  branches:
    include:
      - main
      - develop
  paths:
    exclude:
      - '**/*.md'
      - docs/**

pr:
  branches:
    include:
      - main
  paths:
    exclude:
      - '**/*.md'

pool:
  vmImage: 'ubuntu-latest'

variables:
  nodeVersion: '20.x'
  npmCacheFolder: $(Pipeline.Workspace)/.npm

stages:
  - stage: Build
    displayName: 'Build and Test'
    jobs:
      - job: BuildJob
        displayName: 'Build, Lint, and Test'
        steps:
          - task: NodeTool@0
            displayName: 'Install Node.js $(nodeVersion)'
            inputs:
              versionSpec: '$(nodeVersion)'

          - task: Cache@2
            displayName: 'Cache npm packages'
            inputs:
              key: 'npm | "$(Agent.OS)" | package-lock.json'
              restoreKeys: |
                npm | "$(Agent.OS)"
              path: $(npmCacheFolder)

          - script: npm ci
            displayName: 'Install dependencies'

          - script: npm run lint
            displayName: 'Run linter'

          - script: npm run build
            displayName: 'Build application'

          - script: npm test -- --coverage
            displayName: 'Run tests with coverage'

          - task: PublishTestResults@2
            displayName: 'Publish test results'
            inputs:
              testResultsFormat: 'JUnit'
              testResultsFiles: '**/junit.xml'
            condition: succeededOrFailed()

          - task: PublishCodeCoverageResults@2
            displayName: 'Publish code coverage'
            inputs:
              summaryFileLocation: '**/coverage/cobertura-coverage.xml'
            condition: succeededOrFailed()

Understanding the trigger configuration

The trigger block controls when the pipeline runs on push events:

trigger:
  branches:
    include:
      - main
      - develop
    exclude:
      - 'releases/*'
  paths:
    include:
      - src/**
    exclude:
      - '**/*.md'
      - docs/**
  tags:
    include:
      - 'v*'

Branch filters control which branches trigger the pipeline. Use include to whitelist specific branches or patterns, and exclude to blacklist them. Glob patterns like feature/* and releases/** are supported.

Path filters prevent the pipeline from running when only certain files change. This is essential for monorepos and repositories with documentation alongside code. If a commit only changes Markdown files, the pipeline will not run.

Tag filters trigger the pipeline when tags matching a pattern are pushed. This is commonly used for release pipelines that build and publish when a version tag like v1.2.3 is created.

The pr block is similar but controls when the pipeline runs as build validation on pull requests:

pr:
  branches:
    include:
      - main
      - develop
  paths:
    exclude:
      - '**/*.md'
  drafts: false

Setting drafts: false skips running the pipeline on draft pull requests, which saves agent minutes during early development.

Pool selection

The pool block specifies where the pipeline runs:

# Microsoft-hosted agent
pool:
  vmImage: 'ubuntu-latest'

# Self-hosted agent by pool name
pool:
  name: 'MyPrivatePool'

# Self-hosted agent with demands
pool:
  name: 'MyPrivatePool'
  demands:
    - docker
    - Agent.OS -equals Linux

Available Microsoft-hosted images include ubuntu-latest (Ubuntu 22.04), ubuntu-20.04, windows-latest (Windows Server 2022), windows-2019, macos-latest (macOS 14), and macos-13. The ubuntu-latest image is the most commonly used and generally the fastest.

Pipeline architecture

A well-structured pipeline separates concerns into stages, manages dependencies explicitly, and uses variables and conditions to control flow.

Stages, jobs, and steps in depth

Here is a multi-stage pipeline that demonstrates the full hierarchy:

stages:
  - stage: Build
    displayName: 'Build Stage'
    jobs:
      - job: Compile
        displayName: 'Compile Application'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: dotnet build --configuration Release
            displayName: 'Build .NET application'

          - task: PublishBuildArtifacts@1
            displayName: 'Publish build artifacts'
            inputs:
              PathtoPublish: '$(Build.ArtifactStagingDirectory)'
              ArtifactName: 'drop'

      - job: UnitTests
        displayName: 'Run Unit Tests'
        dependsOn: Compile
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: dotnet test --configuration Release --logger trx
            displayName: 'Execute unit tests'

  - stage: IntegrationTests
    displayName: 'Integration Testing'
    dependsOn: Build
    jobs:
      - job: APITests
        displayName: 'API Integration Tests'
        pool:
          vmImage: 'ubuntu-latest'
        services:
          postgres:
            image: postgres:16
            ports:
              - 5432:5432
            env:
              POSTGRES_PASSWORD: testpassword
              POSTGRES_DB: testdb
        steps:
          - script: dotnet test --filter Category=Integration
            displayName: 'Run integration tests'

  - stage: Deploy
    displayName: 'Deploy to Production'
    dependsOn: IntegrationTests
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: Production
        displayName: 'Deploy to Production'
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying to production"
                  displayName: 'Deploy application'

Dependencies between stages

Stages run sequentially by default, each waiting for the previous to complete. You can create parallel execution and complex dependency graphs using dependsOn:

stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - script: echo "Building"

  - stage: TestUnit
    dependsOn: Build
    jobs:
      - job: UnitTests
        steps:
          - script: echo "Unit tests"

  - stage: TestIntegration
    dependsOn: Build
    jobs:
      - job: IntegrationTests
        steps:
          - script: echo "Integration tests"

  # This stage runs after BOTH test stages complete
  - stage: Deploy
    dependsOn:
      - TestUnit
      - TestIntegration
    jobs:
      - job: DeployApp
        steps:
          - script: echo "Deploying"

This creates a diamond dependency pattern: Build runs first, then TestUnit and TestIntegration run in parallel, and Deploy runs only after both test stages succeed.

Conditions and expressions

Conditions control whether a stage, job, or step executes. They use a functional expression language:

# Run only on main branch
condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

# Run only if previous stage succeeded
condition: succeeded()

# Run even if previous stage failed (useful for cleanup)
condition: always()

# Run only on PR builds
condition: eq(variables['Build.Reason'], 'PullRequest')

# Combine conditions
condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/main'))

# Run based on a variable value
condition: eq(variables['deployTarget'], 'production')

Variables and variable groups

Variables can be defined at multiple levels with different scopes:

# Pipeline-level variables
variables:
  buildConfiguration: 'Release'
  dotnetVersion: '8.0.x'

# Variable groups (defined in Azure DevOps Library)
variables:
  - group: 'production-secrets'
  - name: buildConfiguration
    value: 'Release'

# Stage-level variables
stages:
  - stage: Build
    variables:
      stageSpecificVar: 'build-value'

# Template expressions (compile-time)
variables:
  isMain: ${{ eq(variables['Build.SourceBranch'], 'refs/heads/main') }}

# Runtime expressions
steps:
  - script: echo "Branch is $(Build.SourceBranch)"
  - script: echo "Is main: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')]"

Variable groups are collections of variables defined in the Azure DevOps Library that can be shared across multiple pipelines. They are the recommended way to manage environment-specific configuration and secrets. Variable groups can be linked to Azure Key Vault, which means secrets are fetched at runtime and never stored in Azure DevOps itself.
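As a hedged sketch, a Key Vault-backed group is consumed like any other group, and the AzureKeyVault task offers an alternative way to fetch secrets directly inside a job (the group, connection, vault, and secret names below are placeholders):

```yaml
variables:
  - group: 'production-secrets'   # linked to Azure Key Vault in the Library UI

steps:
  # Alternative: fetch secrets from Key Vault directly in the job
  - task: AzureKeyVault@2
    displayName: 'Fetch secrets from Key Vault'
    inputs:
      azureSubscription: 'my-azure-connection'
      KeyVaultName: 'my-keyvault'
      SecretsFilter: 'DbPassword,ApiKey'
      RunAsPreJob: false

  # Secret variables are not exposed to scripts automatically;
  # map them explicitly into the environment
  - script: ./deploy.sh
    displayName: 'Deploy with secret'
    env:
      DB_PASSWORD: $(DbPassword)
```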

Service connections

Service connections authenticate pipelines to external services. Common types include:

  • Azure Resource Manager -- deploy to Azure subscriptions using service principal or managed identity
  • Docker Registry -- push and pull images from Docker Hub, Azure Container Registry, or private registries
  • Kubernetes -- deploy to any Kubernetes cluster
  • SSH -- connect to remote servers
  • Generic -- store any URL and credentials for custom integrations

Service connections are created in Project Settings and referenced by name in pipeline YAML:

steps:
  - task: AzureWebApp@1
    inputs:
      azureSubscription: 'my-azure-connection'
      appName: 'my-web-app'
      package: '$(Build.ArtifactStagingDirectory)/**/*.zip'

Multi-stage pipelines

Multi-stage pipelines combine build, test, and deployment into a single YAML file, replacing the separate Build and Release definitions from Classic pipelines.

Build, test, and deploy

Here is a complete multi-stage pipeline for a containerized application:

trigger:
  branches:
    include:
      - main

variables:
  dockerRegistry: 'myregistry.azurecr.io'
  imageName: 'my-api'
  tag: '$(Build.BuildId)'

stages:
  - stage: Build
    displayName: 'Build and Push Image'
    jobs:
      - job: DockerBuild
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: Docker@2
            displayName: 'Build Docker image'
            inputs:
              containerRegistry: 'acr-connection'
              repository: '$(imageName)'
              command: 'buildAndPush'
              Dockerfile: '**/Dockerfile'
              tags: |
                $(tag)
                latest

  - stage: DeployStaging
    displayName: 'Deploy to Staging'
    dependsOn: Build
    jobs:
      - deployment: StagingDeploy
        displayName: 'Staging Deployment'
        pool:
          vmImage: 'ubuntu-latest'
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@1
                  displayName: 'Deploy to staging cluster'
                  inputs:
                    action: 'deploy'
                    connectionType: 'kubernetesServiceConnection'
                    kubernetesServiceConnection: 'k8s-staging'
                    namespace: 'staging'
                    manifests: |
                      k8s/deployment.yml
                      k8s/service.yml
                    containers: '$(dockerRegistry)/$(imageName):$(tag)'

  - stage: DeployProduction
    displayName: 'Deploy to Production'
    dependsOn: DeployStaging
    condition: succeeded()
    jobs:
      - deployment: ProductionDeploy
        displayName: 'Production Deployment'
        pool:
          vmImage: 'ubuntu-latest'
        environment: 'production'
        strategy:
          canary:
            increments: [10, 50]
            deploy:
              steps:
                - task: KubernetesManifest@1
                  displayName: 'Deploy canary to production'
                  inputs:
                    action: 'deploy'
                    connectionType: 'kubernetesServiceConnection'
                    kubernetesServiceConnection: 'k8s-production'
                    namespace: 'production'
                    manifests: |
                      k8s/deployment.yml
                      k8s/service.yml
                    containers: '$(dockerRegistry)/$(imageName):$(tag)'
                    strategy: canary
                    percentage: $(strategy.increment)
            on:
              failure:
                steps:
                  - task: KubernetesManifest@1
                    displayName: 'Rollback canary'
                    inputs:
                      action: 'reject'
                      connectionType: 'kubernetesServiceConnection'
                      kubernetesServiceConnection: 'k8s-production'
                      namespace: 'production'
                      manifests: |
                        k8s/deployment.yml

Environment approvals

Environments in Azure DevOps provide approval gates and deployment history. When you target an environment in a deployment job, Azure DevOps tracks every deployment to that environment with full traceability.

To configure approvals, go to Pipelines > Environments in Azure DevOps, select an environment, and add checks:

  • Approvals -- require one or more people to approve before the deployment proceeds
  • Branch control -- only allow deployments from specific branches
  • Business hours -- restrict deployments to specific time windows
  • Exclusive lock -- prevent concurrent deployments to the same environment
  • Template validation -- require the pipeline to extend from an approved template

These checks are configured in the Azure DevOps web interface, not in the YAML file. This is intentional -- it means the people responsible for an environment control its protections, not the pipeline authors.

Deployment strategies

Azure Pipelines supports three built-in deployment strategies for deployment jobs:

runOnce deploys to all targets simultaneously. It is the simplest strategy and appropriate for non-production environments or applications that can tolerate brief downtime:

strategy:
  runOnce:
    deploy:
      steps:
        - script: echo "Deploy all at once"

Rolling deploys to targets in batches. You specify how many targets to update simultaneously, either as a count or a percentage:

strategy:
  rolling:
    maxParallel: 2
    deploy:
      steps:
        - script: echo "Deploy in rolling batches"

Canary deploys to a small subset first, validates, and then rolls out incrementally. This is the safest strategy for production deployments:

strategy:
  canary:
    increments: [10, 25, 50, 100]
    deploy:
      steps:
        - script: echo "Deploy $(strategy.increment)% canary"
    on:
      success:
        steps:
          - script: echo "Canary succeeded, continuing"
      failure:
        steps:
          - script: echo "Canary failed, rolling back"

Pipeline templates

Templates are the most powerful feature in Azure Pipelines YAML. They enable platform teams to create standardized, reusable pipeline components that application teams consume. Templates promote consistency, reduce duplication, and enforce organizational standards.

Template types

Azure Pipelines supports four types of templates:

Step templates define reusable sequences of steps:

# templates/steps/npm-build.yml
parameters:
  - name: nodeVersion
    type: string
    default: '20.x'
  - name: buildCommand
    type: string
    default: 'build'

steps:
  - task: NodeTool@0
    displayName: 'Install Node.js ${{ parameters.nodeVersion }}'
    inputs:
      versionSpec: '${{ parameters.nodeVersion }}'

  - script: npm ci
    displayName: 'Install dependencies'

  - script: npm run ${{ parameters.buildCommand }}
    displayName: 'Run build command'

Job templates define reusable jobs with their own pool and steps:

# templates/jobs/dotnet-test.yml
parameters:
  - name: projects
    type: string
    default: '**/*.Tests.csproj'
  - name: configuration
    type: string
    default: 'Release'

jobs:
  - job: DotNetTest
    displayName: '.NET Unit Tests'
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: UseDotNet@2
        inputs:
          packageType: 'sdk'
          version: '8.0.x'

      - task: DotNetCoreCLI@2
        displayName: 'Run tests'
        inputs:
          command: 'test'
          projects: '${{ parameters.projects }}'
          arguments: '--configuration ${{ parameters.configuration }} --collect:"XPlat Code Coverage"'

Stage templates define reusable stages with jobs and deployment logic:

# templates/stages/deploy-to-environment.yml
parameters:
  - name: environment
    type: string
  - name: azureSubscription
    type: string
  - name: appName
    type: string
  - name: resourceGroup
    type: string

stages:
  - stage: Deploy_${{ parameters.environment }}
    displayName: 'Deploy to ${{ parameters.environment }}'
    jobs:
      - deployment: Deploy
        displayName: 'Deploy ${{ parameters.appName }}'
        pool:
          vmImage: 'ubuntu-latest'
        environment: '${{ parameters.environment }}'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure Web App'
                  inputs:
                    azureSubscription: '${{ parameters.azureSubscription }}'
                    appType: 'webAppLinux'
                    appName: '${{ parameters.appName }}'
                    resourceGroupName: '${{ parameters.resourceGroup }}'
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'

Variable templates define reusable variable sets:

# templates/variables/production.yml
variables:
  environment: 'production'
  azureSubscription: 'prod-azure-connection'
  resourceGroup: 'rg-production'
  appServicePlan: 'asp-production'

Consuming templates

Templates are consumed in your pipeline YAML using the template keyword:

trigger:
  branches:
    include:
      - main

stages:
  - stage: Build
    jobs:
      - job: BuildApp
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - template: templates/steps/npm-build.yml
            parameters:
              nodeVersion: '20.x'
              buildCommand: 'build:prod'

  - template: templates/stages/deploy-to-environment.yml
    parameters:
      environment: 'staging'
      azureSubscription: 'staging-connection'
      appName: 'my-app-staging'
      resourceGroup: 'rg-staging'

  - template: templates/stages/deploy-to-environment.yml
    parameters:
      environment: 'production'
      azureSubscription: 'production-connection'
      appName: 'my-app-production'
      resourceGroup: 'rg-production'

Template repositories

Templates do not have to live in the same repository as the pipeline. You can reference templates from other repositories using the resources block:

resources:
  repositories:
    - repository: templates
      type: git
      name: 'MyProject/pipeline-templates'
      ref: 'refs/tags/v2.0'

stages:
  - template: stages/deploy-to-environment.yml@templates
    parameters:
      environment: 'production'
      azureSubscription: 'prod-connection'
      appName: 'my-app'
      resourceGroup: 'rg-prod'

The ref field pins the template to a specific tag, branch, or commit. This is critical for stability -- you do not want a template change in the shared repository to break all consuming pipelines. Teams typically tag template releases and update consumers deliberately.

Extending from templates

The extends keyword is a more restrictive form of template usage. When a pipeline extends a template, the template defines the entire structure, and the consuming pipeline can only fill in parameters:

# templates/pipeline-base.yml
parameters:
  - name: buildSteps
    type: stepList
    default: []

stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - checkout: self
          - ${{ each step in parameters.buildSteps }}:
            - ${{ step }}
          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: '$(Build.ArtifactStagingDirectory)'

  - stage: SecurityScan
    jobs:
      - job: Scan
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: echo "Running mandatory security scan"
# azure-pipelines.yml
extends:
  template: templates/pipeline-base.yml@templates
  parameters:
    buildSteps:
      - script: npm ci
      - script: npm run build

This pattern lets platform teams enforce that every pipeline includes certain steps (like security scanning) while giving application teams flexibility in their build steps. The consuming pipeline cannot remove or skip the SecurityScan stage because it is part of the template structure.

Integrating code review tools

One of the most effective uses of Azure Pipelines is running code analysis tools as part of your build validation. When configured as a branch policy, these tools run on every pull request and can block merge if quality or security standards are not met.

SonarQube quality gate

SonarQube is the most widely used code quality platform. The Azure DevOps integration is available through the SonarQube extension in the Visual Studio Marketplace.

After installing the extension and configuring a SonarQube service connection in your project settings, add the analysis to your pipeline:

trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: SonarQubePrepare@6
    displayName: 'Prepare SonarQube analysis'
    inputs:
      SonarQube: 'sonarqube-connection'
      scannerMode: 'CLI'
      configMode: 'manual'
      cliProjectKey: 'my-project'
      cliProjectName: 'My Project'
      cliSources: 'src'

  - script: npm ci
    displayName: 'Install dependencies'

  - script: npm run build
    displayName: 'Build application'

  - script: npm test -- --coverage
    displayName: 'Run tests with coverage'

  - task: SonarQubeAnalyze@6
    displayName: 'Run SonarQube analysis'

  - task: SonarQubePublish@6
    displayName: 'Publish quality gate result'
    inputs:
      pollingTimeoutSec: '300'

The SonarQubePrepare task configures the scanner before your build. SonarQubeAnalyze runs the analysis after the build and tests complete. SonarQubePublish polls the SonarQube server for the quality gate result and publishes it as a pipeline status.

To make this a required check, go to your repository's branch policies for the main branch, add a build validation policy pointing to this pipeline, and set it as required. Pull requests cannot be completed until SonarQube's quality gate passes.

Semgrep security scanning

Semgrep is a fast, open-source static analysis tool particularly strong at finding security vulnerabilities. It can run directly in an Azure Pipeline without a marketplace extension:

trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: |
      python3 -m pip install semgrep
    displayName: 'Install Semgrep'

  - script: |
      semgrep scan \
        --config auto \
        --error \
        --json \
        --output semgrep-results.json \
        src/
    displayName: 'Run Semgrep scan'
    continueOnError: true

  - script: |
      semgrep scan \
        --config auto \
        --sarif \
        --output semgrep-results.sarif \
        src/
    displayName: 'Generate SARIF report'
    continueOnError: true

  - task: PublishBuildArtifacts@1
    displayName: 'Publish Semgrep results'
    inputs:
      PathtoPublish: 'semgrep-results.json'
      ArtifactName: 'SemgrepResults'
    condition: always()

For teams using Semgrep Cloud Platform (the managed offering), you can authenticate with an app token:

steps:
  - script: |
      python3 -m pip install semgrep
    displayName: 'Install Semgrep'

  - script: |
      semgrep ci
    displayName: 'Run Semgrep CI'
    env:
      SEMGREP_APP_TOKEN: $(SEMGREP_APP_TOKEN)

The semgrep ci command automatically detects pull request context, uploads results to the Semgrep dashboard, and provides diff-aware scanning so it only reports issues introduced in the current change rather than pre-existing findings.
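The token itself should live in a secret pipeline variable or a variable group rather than in the YAML. A sketch assuming a variable group named security-scan-secrets that holds SEMGREP_APP_TOKEN as a secret (both names are placeholders):

```yaml
variables:
  - group: security-scan-secrets   # defined under Pipelines > Library

steps:
  - script: python3 -m pip install semgrep
    displayName: 'Install Semgrep'

  - script: semgrep ci
    displayName: 'Run Semgrep CI'
    env:
      SEMGREP_APP_TOKEN: $(SEMGREP_APP_TOKEN)  # secret variables must be mapped into env explicitly
```

Note that secret variables are never exposed to scripts automatically; the explicit env mapping is required.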

Snyk vulnerability scanning

Snyk provides both dependency vulnerability scanning (Snyk Open Source) and source code analysis (Snyk Code). The Snyk Security Scan task is available from the Visual Studio Marketplace:

trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '20.x'

  - script: npm ci
    displayName: 'Install dependencies'

  - task: SnykSecurityScan@1
    displayName: 'Snyk Open Source scan'
    inputs:
      serviceConnectionEndpoint: 'snyk-connection'
      testType: 'app'
      monitorWhen: 'always'
      failOnIssues: true
      severityThreshold: 'high'

  - task: SnykSecurityScan@1
    displayName: 'Snyk Code scan'
    inputs:
      serviceConnectionEndpoint: 'snyk-connection'
      testType: 'code'
      failOnIssues: true
      severityThreshold: 'high'

Alternatively, you can use the Snyk CLI directly without the marketplace task:

steps:
  - script: |
      npm install -g snyk
      snyk auth $(SNYK_TOKEN)
    displayName: 'Install and authenticate Snyk'

  - script: |
      snyk test --severity-threshold=high --json > snyk-results.json || true
    displayName: 'Snyk dependency scan'

  - script: |
      snyk code test --severity-threshold=high --json > snyk-code-results.json || true
    displayName: 'Snyk code analysis'

  - task: PublishBuildArtifacts@1
    displayName: 'Publish Snyk results'
    inputs:
      PathtoPublish: 'snyk-results.json'
      ArtifactName: 'SnykResults'
    condition: always()

Build validation for PRs

To enforce code quality standards on every pull request, configure branch policies in Azure DevOps:

  1. Go to Repos > Branches in your Azure DevOps project
  2. Click the three-dot menu on your target branch (typically main) and select "Branch policies"
  3. Under "Build Validation," click "Add build policy"
  4. Select the pipeline that includes your code analysis steps
  5. Set "Trigger" to "Automatic" so it runs on every PR update
  6. Set "Policy requirement" to "Required" so PRs cannot be completed without a passing build
  7. Optionally set "Build expiration" so that stale PR builds require re-validation after a period of inactivity

With this configuration, every pull request must pass your SonarQube quality gate, Semgrep security scan, and Snyk vulnerability check before it can be merged. Developers see the results directly in the Azure DevOps pull request interface, with links to detailed findings.

A combined pipeline that runs all three tools looks like this:

trigger: none

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

jobs:
  - job: CodeQuality
    displayName: 'Code Quality Analysis'
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '20.x'

      - script: npm ci
        displayName: 'Install dependencies'

      - script: npm test -- --coverage
        displayName: 'Run tests'

      # SonarQube
      - task: SonarQubePrepare@6
        inputs:
          SonarQube: 'sonarqube-connection'
          scannerMode: 'CLI'
          configMode: 'manual'
          cliProjectKey: 'my-project'
          cliSources: 'src'

      - task: SonarQubeAnalyze@6
        displayName: 'SonarQube analysis'

      - task: SonarQubePublish@6
        displayName: 'SonarQube quality gate'

  - job: SecurityScan
    displayName: 'Security Scanning'
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '20.x'

      - script: npm ci
        displayName: 'Install dependencies'

      # Semgrep
      - script: |
          python3 -m pip install semgrep
          semgrep scan --config auto --error src/
        displayName: 'Semgrep security scan'

      # Snyk
      - script: |
          npm install -g snyk
          snyk auth $(SNYK_TOKEN)
          snyk test --severity-threshold=high
          snyk code test --severity-threshold=high
        displayName: 'Snyk vulnerability scan'

Running SonarQube and security scanning as separate parallel jobs reduces total pipeline time because they execute on different agents simultaneously.

Advanced features

Pipeline caching

Caching significantly reduces build times by persisting files between pipeline runs. The Cache task supports keyed caching based on file fingerprints:

variables:
  YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn

steps:
  - task: Cache@2
    displayName: 'Cache Yarn packages'
    inputs:
      key: 'yarn | "$(Agent.OS)" | yarn.lock'
      restoreKeys: |
        yarn | "$(Agent.OS)"
        yarn
      path: $(YARN_CACHE_FOLDER)

  - script: yarn install --frozen-lockfile
    displayName: 'Install dependencies'

The cache key is computed from the specified components. When yarn.lock changes, a new cache is created. The restoreKeys provide fallback keys -- if no exact match is found, the most recent cache matching a fallback key is restored. This means you still get a partial cache hit even when dependencies change.
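The Cache task also supports a cacheHitVar input that records whether the restore hit, which lets later steps skip work entirely on an exact hit. A sketch caching node_modules directly (the variable name is arbitrary):

```yaml
steps:
  - task: Cache@2
    displayName: 'Cache node_modules'
    inputs:
      key: 'node_modules | "$(Agent.OS)" | package-lock.json'
      path: 'node_modules'
      cacheHitVar: 'CACHE_RESTORED'   # set to 'true' on an exact key hit

  - script: npm ci
    displayName: 'Install dependencies'
    condition: ne(variables.CACHE_RESTORED, 'true')   # skip install when the exact cache hit
```

This only pays off when the cached path is the tool's output (node_modules) rather than its download cache; with a download cache, the install step must still run.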

Common caching patterns include:

# Python pip cache (PIP_CACHE_DIR is not predefined -- set it as a
# pipeline variable, e.g. $(Pipeline.Workspace)/.pip; pip honors it as an env var)
- task: Cache@2
  inputs:
    key: 'pip | "$(Agent.OS)" | requirements.txt'
    path: $(PIP_CACHE_DIR)

# .NET NuGet cache (define NUGET_PACKAGES, e.g. $(Pipeline.Workspace)/.nuget/packages)
- task: Cache@2
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    path: $(NUGET_PACKAGES)

# Maven cache (define MAVEN_CACHE_FOLDER and point Maven at it with -Dmaven.repo.local)
- task: Cache@2
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'
    path: $(MAVEN_CACHE_FOLDER)

# Go modules cache (define GOPATH as a pipeline variable if it is not set on your agent)
- task: Cache@2
  inputs:
    key: 'go | "$(Agent.OS)" | go.sum'
    path: $(GOPATH)/pkg/mod

Artifacts and publishing

Artifacts are files produced by a pipeline that need to be consumed by later stages or kept for reference. Azure Pipelines has two artifact systems:

Pipeline Artifacts (recommended) use the PublishPipelineArtifact and DownloadPipelineArtifact tasks. They are faster, support deduplication, and integrate with Azure Artifacts:

# Publish in build stage
- task: PublishPipelineArtifact@1
  displayName: 'Publish build output'
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'webapp'

# Download in deploy stage
- task: DownloadPipelineArtifact@2
  displayName: 'Download build output'
  inputs:
    buildType: 'current'
    artifactName: 'webapp'
    targetPath: '$(Pipeline.Workspace)/webapp'

Build Artifacts (legacy) use the PublishBuildArtifacts and DownloadBuildArtifacts tasks. They are older and slower but still widely used. New pipelines should use Pipeline Artifacts.
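Pipeline Artifacts also have YAML shorthand steps, publish and download, which are equivalent to the tasks shown above; a sketch (the artifact name is a placeholder):

```yaml
# Shorthand for PublishPipelineArtifact@1
- publish: $(Build.ArtifactStagingDirectory)
  artifact: webapp

# Shorthand for DownloadPipelineArtifact@2;
# files land under $(Pipeline.Workspace)/webapp
- download: current
  artifact: webapp
```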

Matrix strategies

Matrix strategies let you run the same job across multiple configurations -- different operating systems, language versions, or frameworks:

jobs:
  - job: Test
    strategy:
      matrix:
        linux_node18:
          vmImage: 'ubuntu-latest'
          nodeVersion: '18.x'
        linux_node20:
          vmImage: 'ubuntu-latest'
          nodeVersion: '20.x'
        windows_node20:
          vmImage: 'windows-latest'
          nodeVersion: '20.x'
        mac_node20:
          vmImage: 'macos-latest'
          nodeVersion: '20.x'
      maxParallel: 4
    pool:
      vmImage: $(vmImage)
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: $(nodeVersion)
      - script: npm ci
      - script: npm test

This creates four parallel jobs covering all your target platforms and Node.js versions. The maxParallel setting controls how many jobs can run simultaneously, limited by your available parallel job capacity.

Container jobs

Container jobs run your pipeline steps inside a Docker container, giving you precise control over the execution environment:

pool:
  vmImage: 'ubuntu-latest'

container: node:20-alpine

steps:
  - script: node --version
    displayName: 'Check Node version'
  - script: npm ci
    displayName: 'Install dependencies'
  - script: npm test
    displayName: 'Run tests'

You can also run sidecar services alongside your main container:

resources:
  containers:
    - container: app
      image: node:20
    - container: redis
      image: redis:7
      ports:
        - 6379:6379
    - container: postgres
      image: postgres:16
      ports:
        - 5432:5432
      env:
        POSTGRES_PASSWORD: testpw
        POSTGRES_DB: testdb

pool:
  vmImage: 'ubuntu-latest'

container: app

services:
  redis: redis
  postgres: postgres

steps:
  - script: npm ci && npm test
    displayName: 'Run tests with Redis and Postgres'
    env:
      REDIS_URL: 'redis://redis:6379'
      DATABASE_URL: 'postgresql://postgres:testpw@postgres:5432/testdb'

Pipeline resources

The resources block defines external resources that your pipeline depends on:

resources:
  # Other repositories
  repositories:
    - repository: templates
      type: git
      name: 'SharedProject/pipeline-templates'
      ref: 'refs/tags/v2.0'

  # Container images
  containers:
    - container: build-env
      image: myregistry.azurecr.io/build-tools:latest
      endpoint: 'acr-connection'

  # Other pipelines (trigger when they complete)
  pipelines:
    - pipeline: upstream-build
      source: 'Upstream-CI'
      trigger:
        branches:
          include:
            - main

  # Webhooks (trigger from external events)
  webhooks:
    - webhook: deployment-trigger
      connection: 'incoming-webhook'

Pipeline resources enable powerful cross-pipeline orchestration. For example, you can trigger a deployment pipeline automatically when an upstream build pipeline completes successfully, passing the build artifacts along.
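For example, a downstream pipeline can pull artifacts from a pipeline resource like the upstream-build defined above using the download shortcut (the artifact name is an assumption):

```yaml
steps:
  # Downloads artifacts from the 'upstream-build' pipeline resource
  # into $(Pipeline.Workspace)/upstream-build/<artifact>
  - download: upstream-build
    artifact: webapp   # omit 'artifact' to download all artifacts

  - script: ls $(Pipeline.Workspace)/upstream-build/webapp
    displayName: 'Inspect downloaded artifacts'
```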

Cost optimization

Azure Pipelines costs are driven primarily by parallel job capacity and agent minutes. Understanding the pricing model helps you make informed decisions about architecture and infrastructure.

Microsoft-hosted vs self-hosted cost analysis

Microsoft-hosted agents provide 1 free parallel job with 1,800 minutes per month for private projects. Additional parallel jobs cost $40 per month each, and paid parallel jobs have no monthly minute limit. For open-source (public) projects, Microsoft provides 10 free parallel jobs with unlimited minutes.

Self-hosted agents have no per-agent cost from Azure DevOps. You get unlimited parallel jobs on self-hosted agents. The cost is entirely the infrastructure: VMs, containers, or physical machines that you manage. For teams running more than a few hundred builds per month, self-hosted agents almost always save money.

A practical cost comparison: a team running 500 builds per month averaging 10 minutes each needs 5,000 agent minutes. With Microsoft-hosted agents, they would exceed the 1,800 free minutes and need at least one paid parallel job ($40/month). With a self-hosted agent on a $20/month Linux VM, they get unlimited builds, plus faster execution because the workspace and tool caches persist between runs.

Caching strategies for cost reduction

Pipeline caching reduces build times, which directly reduces Microsoft-hosted agent minute consumption:

  • Dependency caching (npm, pip, NuGet) typically saves 30-60 seconds per build
  • Build output caching (compiled artifacts, Docker layers) can save minutes per build
  • Incremental builds (only rebuild what changed) can reduce build times by 80% or more for large projects
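For Docker images specifically, a common approach is inline layer caching with BuildKit, pulling the previously pushed image as a cache source. A sketch with placeholder registry and image names:

```yaml
steps:
  - script: |
      # Pull the last published image so its layers can seed the cache;
      # '|| true' keeps the first-ever build from failing when no image exists yet
      docker pull myregistry.azurecr.io/my-app:latest || true
      docker build \
        --cache-from myregistry.azurecr.io/my-app:latest \
        --build-arg BUILDKIT_INLINE_CACHE=1 \
        -t myregistry.azurecr.io/my-app:$(Build.BuildId) .
    displayName: 'Build image with layer cache'
    env:
      DOCKER_BUILDKIT: 1
```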

Pipeline efficiency tips

Minimize unnecessary triggers. Use path filters so pipelines do not run when only documentation or other non-code files change, and set drafts: false in the pr trigger so draft pull requests do not queue validation builds.
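A sketch of both filters combined (paths are examples):

```yaml
trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - docs      # skip CI for documentation-only commits
      - '*.md'

pr:
  branches:
    include:
      - main
  drafts: false   # do not validate draft pull requests
  paths:
    exclude:
      - docs
```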

Parallelize where possible. Run independent jobs in parallel rather than sequentially. Code quality checks, security scans, and unit tests can usually run simultaneously.

Right-size your agents. If your builds are I/O bound (dependency installation), a cheaper agent with fast storage is better than an expensive agent with more CPU. If your builds are CPU bound (compilation), invest in more cores.

Use pipeline artifacts instead of build artifacts. Pipeline Artifacts are faster to upload and download, reducing the time each job spends on artifact management.

Cancel superseded runs. In branch policies, enable "Cancel superseded runs" so that when a developer pushes a new commit to a PR, the previous validation build is cancelled immediately rather than consuming minutes on an outdated commit.

Migration guides

From Jenkins

Jenkins and Azure Pipelines share similar concepts but differ in implementation. Here is how to map them:

| Jenkins | Azure Pipelines |
| --- | --- |
| Jenkinsfile | azure-pipelines.yml |
| Stages | Stages |
| Steps | Steps |
| Agent | Pool / Agent |
| Plugins | Tasks (Marketplace) |
| Shared Libraries | Template repositories |
| Credentials | Service connections / Variable groups |
| Multibranch Pipeline | PR / branch triggers |
| Jenkins Controller | Azure DevOps service (managed) |

A Jenkins pipeline like this:

pipeline {
    agent { docker { image 'node:20' } }
    stages {
        stage('Install') {
            steps {
                sh 'npm ci'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                sh 'npm run deploy'
            }
        }
    }
    post {
        always {
            junit '**/test-results.xml'
        }
    }
}

Translates to this Azure Pipeline:

trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

container: node:20

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: npm ci
            displayName: 'Install dependencies'

          - script: npm test
            displayName: 'Run tests'

          - script: npm run build
            displayName: 'Build application'

          - task: PublishTestResults@2
            inputs:
              testResultsFiles: '**/test-results.xml'
            condition: always()

  - stage: Deploy
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployApp
        steps:
          - script: npm run deploy
            displayName: 'Deploy application'

Key migration considerations from Jenkins:

  • Jenkins plugins become Azure Pipeline tasks. Check the Visual Studio Marketplace for equivalents. Most popular Jenkins plugins have Azure Pipeline task counterparts.
  • Jenkins shared libraries become template repositories. The pattern is similar -- centralized reusable pipeline code consumed by individual project pipelines.
  • Jenkins credentials become service connections and variable groups. Service connections handle external service authentication. Variable groups handle environment-specific secrets.
  • Jenkins agent management is eliminated if you use Microsoft-hosted agents. If you need self-hosted agents, the Azure Pipelines agent is simpler to manage than Jenkins agents.
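As an illustration of the credentials mapping, a secret that Jenkins would inject via withCredentials becomes a variable group reference plus an explicit env mapping (group and variable names are placeholders):

```yaml
variables:
  - group: production-secrets   # defined under Pipelines > Library

steps:
  - script: ./deploy.sh
    displayName: 'Deploy with secrets'
    env:
      API_KEY: $(API_KEY)   # secret variables are never passed to scripts implicitly
```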

From GitHub Actions

GitHub Actions and Azure Pipelines have nearly identical concepts but different syntax. The migration is mostly a syntax translation:

| GitHub Actions | Azure Pipelines |
| --- | --- |
| .github/workflows/*.yml | azure-pipelines.yml |
| on: push / pull_request | trigger: / pr: |
| jobs | jobs (within stages) |
| steps | steps |
| uses: action/name@v1 | task: TaskName@1 |
| run: | script: |
| env: | env: / variables: |
| secrets.NAME | $(NAME) from variable groups |
| matrix | strategy: matrix |
| needs: | dependsOn: |
| if: | condition: |
| artifacts | Pipeline Artifacts |

A GitHub Actions workflow like this:

name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm test
      - run: npm run build

  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Deploying"

Translates to this Azure Pipeline:

trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          matrix:
            node18:
              nodeVersion: '18.x'
            node20:
              nodeVersion: '20.x'
        steps:
          - checkout: self

          - task: NodeTool@0
            inputs:
              versionSpec: $(nodeVersion)

          - task: Cache@2
            inputs:
              key: 'npm | "$(Agent.OS)" | package-lock.json'
              path: '$(Pipeline.Workspace)/.npm'

          - script: npm ci
            displayName: 'Install dependencies'

          - script: npm test
            displayName: 'Run tests'

          - script: npm run build
            displayName: 'Build application'

  - stage: Deploy
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployApp
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - checkout: self
          - script: echo "Deploying"
            displayName: 'Deploy application'

Key differences to be aware of:

  • GitHub Actions has a richer marketplace of community actions. Azure Pipelines tasks are more curated but fewer in number. You may need to replace some GitHub Actions with inline scripts.
  • GitHub Actions uses GITHUB_TOKEN automatically. Azure Pipelines requires explicit service connections for each external service.
  • Azure Pipelines has stages as a first-class concept. In GitHub Actions, you achieve similar separation using workflow files or job dependencies, but there is no explicit stage concept with its own approval gates.
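The closest equivalent of GITHUB_TOKEN is the predefined System.AccessToken, which, unlike its GitHub counterpart, must be mapped into a step's environment explicitly; a sketch:

```yaml
steps:
  - script: |
      # Authenticate a git call against the Azure DevOps repo using the job access token
      git -c http.extraHeader="AUTHORIZATION: bearer $SYSTEM_ACCESSTOKEN" fetch origin
    displayName: 'Authenticated call against Azure DevOps'
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)  # scope is governed by the project's job authorization settings
```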

From GitLab CI

GitLab CI and Azure Pipelines have different philosophies but similar capabilities:

| GitLab CI | Azure Pipelines |
| --- | --- |
| .gitlab-ci.yml | azure-pipelines.yml |
| stages | stages |
| jobs (named blocks) | jobs (within stages) |
| script | steps with script |
| image | container / pool |
| variables | variables |
| rules / only / except | condition / trigger filters |
| cache | Cache@2 task |
| artifacts | Pipeline Artifacts |
| include | template |
| environment | environment |
| needs | dependsOn |

A GitLab CI configuration like this:

stages:
  - build
  - test
  - deploy

variables:
  NODE_VERSION: "20"

build:
  stage: build
  image: node:${NODE_VERSION}
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/

test:
  stage: test
  image: node:${NODE_VERSION}
  script:
    - npm ci
    - npm test
  coverage: '/Lines\s*:\s*(\d+\.?\d*)%/'

deploy:
  stage: deploy
  image: node:${NODE_VERSION}
  script:
    - npm run deploy
  environment:
    name: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

Translates to this Azure Pipeline:

trigger:
  branches:
    include:
      - main

variables:
  nodeVersion: '20.x'

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        pool:
          vmImage: 'ubuntu-latest'
        container: node:20
        steps:
          - script: npm ci
            displayName: 'Install dependencies'
          - script: npm run build
            displayName: 'Build application'
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: 'dist'
              artifact: 'build-output'

  - stage: Test
    dependsOn: Build
    jobs:
      - job: TestJob
        pool:
          vmImage: 'ubuntu-latest'
        container: node:20
        steps:
          - script: npm ci
            displayName: 'Install dependencies'
          - script: npm test
            displayName: 'Run tests'
          - task: PublishCodeCoverageResults@2
            inputs:
              summaryFileLocation: '**/coverage/cobertura-coverage.xml'

  - stage: Deploy
    dependsOn: Test
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: ProductionDeploy
        pool:
          vmImage: 'ubuntu-latest'
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: npm run deploy
                  displayName: 'Deploy to production'

The biggest adjustment from GitLab CI is the verbosity. Azure Pipelines YAML is more explicit, which makes it more self-documenting but also longer. The trade-off is that Azure Pipelines gives you more control over agent selection, deployment strategies, and environment approvals.

Downloadable pipeline templates

Here are complete, production-ready pipeline templates for common application types. Copy them into your repository and customize the variables for your project.

Node.js web application

# azure-pipelines.yml - Node.js Web Application
trigger:
  branches:
    include:
      - main
      - develop

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  nodeVersion: '20.x'
  npmCacheFolder: $(Pipeline.Workspace)/.npm

stages:
  - stage: CI
    displayName: 'Build and Test'
    jobs:
      - job: Build
        displayName: 'Build, Lint, Test'
        steps:
          - task: NodeTool@0
            displayName: 'Use Node.js $(nodeVersion)'
            inputs:
              versionSpec: '$(nodeVersion)'

          - task: Cache@2
            displayName: 'Cache npm'
            inputs:
              key: 'npm | "$(Agent.OS)" | package-lock.json'
              restoreKeys: 'npm | "$(Agent.OS)"'
              path: $(npmCacheFolder)

          - script: npm ci
            displayName: 'Install dependencies'

          - script: npm run lint
            displayName: 'Lint'

          - script: npm run build
            displayName: 'Build'

          - script: npm test -- --coverage --reporters=default --reporters=jest-junit
            displayName: 'Test'
            env:
              JEST_JUNIT_OUTPUT_DIR: $(Build.ArtifactStagingDirectory)/test-results

          - task: PublishTestResults@2
            displayName: 'Publish test results'
            inputs:
              testResultsFormat: 'JUnit'
              testResultsFiles: '$(Build.ArtifactStagingDirectory)/test-results/junit.xml'
            condition: succeededOrFailed()

          - task: PublishCodeCoverageResults@2
            displayName: 'Publish coverage'
            inputs:
              summaryFileLocation: '**/coverage/cobertura-coverage.xml'
            condition: succeededOrFailed()

          - task: PublishPipelineArtifact@1
            displayName: 'Publish build artifact'
            inputs:
              targetPath: 'dist'
              artifact: 'webapp'

  - stage: DeployStaging
    displayName: 'Deploy to Staging'
    dependsOn: CI
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: Staging
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure App Service'
                  inputs:
                    azureSubscription: 'azure-staging'
                    appType: 'webAppLinux'
                    appName: 'my-app-staging'
                    package: '$(Pipeline.Workspace)/webapp'

  - stage: DeployProduction
    displayName: 'Deploy to Production'
    dependsOn: DeployStaging
    condition: succeeded()
    jobs:
      - deployment: Production
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure App Service'
                  inputs:
                    azureSubscription: 'azure-production'
                    appType: 'webAppLinux'
                    appName: 'my-app-production'
                    package: '$(Pipeline.Workspace)/webapp'

Python API

# azure-pipelines.yml - Python FastAPI / Flask Application
trigger:
  branches:
    include:
      - main
      - develop

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  pythonVersion: '3.12'
  pipCacheDir: $(Pipeline.Workspace)/.pip

stages:
  - stage: CI
    displayName: 'Build and Test'
    jobs:
      - job: Test
        displayName: 'Lint and Test'
        steps:
          - task: UsePythonVersion@0
            displayName: 'Use Python $(pythonVersion)'
            inputs:
              versionSpec: '$(pythonVersion)'

          - task: Cache@2
            displayName: 'Cache pip'
            inputs:
              key: 'pip | "$(Agent.OS)" | requirements.txt'
              restoreKeys: 'pip | "$(Agent.OS)"'
              path: $(pipCacheDir)

          - script: |
              python -m pip install --upgrade pip
              pip install -r requirements.txt
              pip install -r requirements-dev.txt
            displayName: 'Install dependencies'

          - script: |
              pip install ruff
              ruff check src/
              ruff format --check src/
            displayName: 'Lint with Ruff'

          - script: |
              pip install mypy
              mypy src/ --ignore-missing-imports
            displayName: 'Type check with mypy'

          - script: |
              pytest tests/ \
                --junitxml=$(Build.ArtifactStagingDirectory)/test-results/results.xml \
                --cov=src \
                --cov-report=xml:$(Build.ArtifactStagingDirectory)/coverage/coverage.xml \
                --cov-report=html:$(Build.ArtifactStagingDirectory)/coverage/htmlcov
            displayName: 'Run tests with coverage'

          - task: PublishTestResults@2
            displayName: 'Publish test results'
            inputs:
              testResultsFormat: 'JUnit'
              testResultsFiles: '$(Build.ArtifactStagingDirectory)/test-results/results.xml'
            condition: succeededOrFailed()

          - task: PublishCodeCoverageResults@2
            displayName: 'Publish coverage'
            inputs:
              summaryFileLocation: '$(Build.ArtifactStagingDirectory)/coverage/coverage.xml'
            condition: succeededOrFailed()

  - stage: DockerBuild
    displayName: 'Build Docker Image'
    dependsOn: CI
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: Docker
        displayName: 'Build and Push'
        steps:
          - task: Docker@2
            displayName: 'Build and push image'
            inputs:
              containerRegistry: 'acr-connection'
              repository: 'my-python-api'
              command: 'buildAndPush'
              Dockerfile: 'Dockerfile'
              tags: |
                $(Build.BuildId)
                latest

  - stage: Deploy
    displayName: 'Deploy to Production'
    dependsOn: DockerBuild
    jobs:
      - deployment: Production
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureContainerApps@1
                  displayName: 'Deploy to Container Apps'
                  inputs:
                    azureSubscription: 'azure-production'
                    containerAppName: 'my-python-api'
                    resourceGroup: 'rg-production'
                    imageToDeploy: 'myregistry.azurecr.io/my-python-api:$(Build.BuildId)'

.NET application

# azure-pipelines.yml - .NET 8 Application
trigger:
  branches:
    include:
      - main
      - develop

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'
  dotnetVersion: '8.0.x'
  nugetCacheFolder: $(Pipeline.Workspace)/.nuget/packages

stages:
  - stage: CI
    displayName: 'Build and Test'
    jobs:
      - job: Build
        displayName: 'Build and Test .NET'
        steps:
          - task: UseDotNet@2
            displayName: 'Use .NET SDK $(dotnetVersion)'
            inputs:
              packageType: 'sdk'
              version: '$(dotnetVersion)'

          - task: Cache@2
            displayName: 'Cache NuGet'
            inputs:
              key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
              restoreKeys: 'nuget | "$(Agent.OS)"'
              path: $(nugetCacheFolder)

          - task: DotNetCoreCLI@2
            displayName: 'Restore packages'
            inputs:
              command: 'restore'
              projects: '**/*.csproj'

          - task: DotNetCoreCLI@2
            displayName: 'Build solution'
            inputs:
              command: 'build'
              projects: '**/*.csproj'
              arguments: '--configuration $(buildConfiguration) --no-restore'

          - task: DotNetCoreCLI@2
            displayName: 'Run unit tests'
            inputs:
              command: 'test'
              projects: '**/*Tests.csproj'
              arguments: >-
                --configuration $(buildConfiguration)
                --no-build
                --collect:"XPlat Code Coverage"
                --logger trx
                --results-directory $(Build.ArtifactStagingDirectory)/TestResults

          - task: PublishTestResults@2
            displayName: 'Publish test results'
            inputs:
              testResultsFormat: 'VSTest'
              testResultsFiles: '$(Build.ArtifactStagingDirectory)/TestResults/**/*.trx'
            condition: succeededOrFailed()

          - task: PublishCodeCoverageResults@2
            displayName: 'Publish coverage'
            inputs:
              summaryFileLocation: '$(Build.ArtifactStagingDirectory)/TestResults/**/coverage.cobertura.xml'
            condition: succeededOrFailed()

          - task: DotNetCoreCLI@2
            displayName: 'Publish application'
            inputs:
              command: 'publish'
              publishWebProjects: true
              arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)/publish'
              zipAfterPublish: true

          - task: PublishPipelineArtifact@1
            displayName: 'Publish artifact'
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)/publish'
              artifact: 'dotnet-app'

  - stage: DeployStaging
    displayName: 'Deploy to Staging'
    dependsOn: CI
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: Staging
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure App Service'
                  inputs:
                    azureSubscription: 'azure-staging'
                    appType: 'webAppLinux'
                    appName: 'my-dotnet-app-staging'
                    package: '$(Pipeline.Workspace)/dotnet-app/**/*.zip'
                    runtimeStack: 'DOTNETCORE|8.0'

  - stage: DeployProduction
    displayName: 'Deploy to Production'
    dependsOn: DeployStaging
    condition: succeeded()
    jobs:
      - deployment: Production
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure App Service'
                  inputs:
                    azureSubscription: 'azure-production'
                    appType: 'webAppLinux'
                    appName: 'my-dotnet-app-production'
                    package: '$(Pipeline.Workspace)/dotnet-app/**/*.zip'
                    runtimeStack: 'DOTNETCORE|8.0'
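One caveat with the NuGet cache step above: its key references `**/packages.lock.json`, and lock files are not generated by default, so the cache will never get a stable key until you opt in. The standard opt-in is an MSBuild property in each project file (shown here as a sketch; you can also pass `--use-lock-file` to `dotnet restore` once to generate the files):

```xml
<!-- In each .csproj: enable repeatable restore so packages.lock.json
     is generated and kept up to date on every restore -->
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>
```

Commit the generated lock files to the repository so the cache key is deterministic across agents.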

Docker build and push

# azure-pipelines.yml - Docker Multi-Stage Build
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/**
      - Dockerfile
      - docker-compose*.yml

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  dockerRegistry: 'myregistry.azurecr.io'
  imageName: 'my-application'
  tag: '$(Build.BuildId)'

stages:
  - stage: Build
    displayName: 'Build and Push Docker Image'
    jobs:
      - job: Docker
        displayName: 'Docker Build'
        steps:
          - task: Docker@2
            displayName: 'Login to ACR'
            inputs:
              containerRegistry: 'acr-connection'
              command: 'login'

          - task: Cache@2
            displayName: 'Cache Docker layers'
            inputs:
              key: 'docker | "$(Agent.OS)" | Dockerfile'
              path: '$(Pipeline.Workspace)/docker-cache'
              restoreKeys: 'docker | "$(Agent.OS)"'

          - script: |
              docker buildx create --use
              docker buildx build \
                --cache-from type=local,src=$(Pipeline.Workspace)/docker-cache \
                --cache-to type=local,dest=$(Pipeline.Workspace)/docker-cache,mode=max \
                --tag $(dockerRegistry)/$(imageName):$(tag) \
                --tag $(dockerRegistry)/$(imageName):latest \
                --push \
                --file Dockerfile \
                .
            displayName: 'Build and push with BuildKit caching'

          - script: |
              docker run --rm \
                -v /var/run/docker.sock:/var/run/docker.sock \
                aquasec/trivy:latest image \
                --exit-code 1 \
                --severity HIGH,CRITICAL \
                $(dockerRegistry)/$(imageName):$(tag)
            displayName: 'Scan image with Trivy'
            continueOnError: true

  - stage: DeployStaging
    displayName: 'Deploy to Staging'
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: Staging
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@1
                  displayName: 'Deploy to staging K8s'
                  inputs:
                    action: 'deploy'
                    connectionType: 'kubernetesServiceConnection'
                    kubernetesServiceConnection: 'k8s-staging'
                    namespace: 'staging'
                    manifests: 'k8s/*.yml'
                    containers: '$(dockerRegistry)/$(imageName):$(tag)'

  - stage: DeployProduction
    displayName: 'Deploy to Production'
    dependsOn: DeployStaging
    condition: succeeded()
    jobs:
      - deployment: Production
        environment: 'production'
        strategy:
          canary:
            increments: [25, 50, 100]
            deploy:
              steps:
                - task: KubernetesManifest@1
                  displayName: 'Deploy canary to production'
                  inputs:
                    action: 'deploy'
                    connectionType: 'kubernetesServiceConnection'
                    kubernetesServiceConnection: 'k8s-production'
                    namespace: 'production'
                    manifests: 'k8s/*.yml'
                    containers: '$(dockerRegistry)/$(imageName):$(tag)'
                    strategy: canary
                    percentage: $(strategy.increment)
            on:
              failure:
                steps:
                  - task: KubernetesManifest@1
                    displayName: 'Rollback canary'
                    inputs:
                      action: 'reject'
                      connectionType: 'kubernetesServiceConnection'
                      kubernetesServiceConnection: 'k8s-production'
                      namespace: 'production'
                      manifests: 'k8s/*.yml'
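The canary strategy above runs the deploy steps once per increment, exposing the current percentage as `$(strategy.increment)`, and the `on: failure` hook rejects the canary if any increment fails. To clean up after a successful rollout, you can also promote the canary to the stable baseline. A sketch, using the same task and connection names, to merge under the strategy alongside the existing failure hook:

```yaml
# Added under the canary strategy, next to the existing on: failure: hook
on:
  success:
    steps:
      - task: KubernetesManifest@1
        displayName: 'Promote canary to stable'
        inputs:
          action: 'promote'
          connectionType: 'kubernetesServiceConnection'
          kubernetesServiceConnection: 'k8s-production'
          namespace: 'production'
          manifests: 'k8s/*.yml'
          containers: '$(dockerRegistry)/$(imageName):$(tag)'
          strategy: canary
```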

Wrapping up

Azure DevOps Pipelines is a mature CI/CD platform that handles everything from simple single-stage builds to complex multi-environment deployments with approval gates, canary rollouts, and cross-pipeline orchestration. The YAML pipeline model gives you version-controlled, reviewable, and templatable infrastructure as code for your entire build and release process.

For teams starting fresh, begin with a single-stage YAML pipeline using Microsoft-hosted agents. Add stages as your deployment process matures. Introduce templates when you have multiple repositories that share pipeline patterns. Add self-hosted agents when performance or cost demands it.
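A minimal starting point can be a single file at the repository root. This sketch uses placeholder commands; swap in your own build and test steps:

```yaml
# azure-pipelines.yml -- minimal single-stage starter (placeholder commands)
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: echo "build commands go here"
    displayName: 'Build'
  - script: echo "test commands go here"
    displayName: 'Test'
```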

For teams migrating from Jenkins, GitHub Actions, or GitLab CI, the concept mapping is straightforward even if the syntax differs. The biggest advantage Azure Pipelines offers over these alternatives is deep integration with the rest of the Azure DevOps suite -- work item tracking, test plans, artifact feeds, and repository management all live in the same platform with built-in traceability between code changes, builds, and deployments.

Regardless of where you are starting from, invest time in configuring code quality gates with tools like SonarQube, Semgrep, and Snyk as build validation steps. Automated quality enforcement on every pull request is the single highest-leverage improvement most teams can make to their CI/CD pipeline. It catches issues early, reduces review burden on senior engineers, and builds a culture of quality that compounds over time.

Frequently Asked Questions

What is an Azure DevOps Pipeline?

An Azure DevOps Pipeline is a CI/CD automation service that builds, tests, and deploys code. It supports YAML-based pipelines (infrastructure as code) and Classic pipelines (visual editor). Pipelines can deploy to Azure, AWS, GCP, Kubernetes, or any custom target.

Should I use YAML or Classic pipelines?

Use YAML pipelines for new projects. They support version control, code review, and templates, and they are the strategic direction for Azure DevOps. Classic pipelines no longer receive new feature investment. Existing Classic pipelines continue to work but should be migrated to YAML over time.

How do I migrate from Jenkins to Azure Pipelines?

Map Jenkins concepts: Jenkinsfile stages become YAML stages, Jenkins plugins become Azure Pipeline tasks (from the Visual Studio Marketplace), Jenkins agents become Azure Pipeline agents (Microsoft-hosted or self-hosted). Azure provides a Jenkins-to-Azure Pipelines migration guide with task-by-task equivalents.
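As a rough illustration of the mapping (a sketch, not a full conversion), here is a declarative Jenkinsfile stage next to an approximate Azure Pipelines equivalent:

```yaml
# Jenkinsfile (declarative):
#   stage('Build') {
#     agent { label 'linux' }
#     steps { sh 'make build' }
#   }
#
# Approximate Azure Pipelines YAML equivalent:
stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: make build
            displayName: 'Build'
```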

How much do Azure DevOps Pipelines cost?

For private projects, Azure Pipelines includes 1 free Microsoft-hosted parallel job with 1,800 minutes/month; public projects get 10 free Microsoft-hosted parallel jobs with unlimited minutes. Additional Microsoft-hosted parallel jobs cost $40/month each. Self-hosted agents have no minute limits: the first self-hosted parallel job is free, additional ones cost $15/month, and you pay only for your own infrastructure.
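To make the hosted-job math concrete, here is a toy calculation assuming the one-free-job private-project grant and the $40/month figure above:

```python
def hosted_jobs_monthly_cost(parallel_jobs, free_jobs=1, price_per_job=40):
    """Monthly USD cost for Microsoft-hosted parallel jobs beyond the free grant."""
    return max(0, parallel_jobs - free_jobs) * price_per_job

print(hosted_jobs_monthly_cost(1))  # 0: covered by the free grant
print(hosted_jobs_monthly_cost(4))  # 120: three additional jobs at $40 each
```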

Can I integrate code review tools with Azure Pipelines?

Yes. SonarQube, Semgrep, Snyk, and other analysis tools are available as Azure Pipeline tasks from the Visual Studio Marketplace. You can add them as build validation steps that run on every pull request, blocking merge if quality gates fail.
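As a sketch of what this looks like for SonarQube (task names come from the SonarQube marketplace extension; the service connection and project key are placeholders), a PR validation job wraps the existing build like this:

```yaml
steps:
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'sonarqube-connection'   # placeholder service connection
      scannerMode: 'CLI'
      configMode: 'manual'
      cliProjectKey: 'my-project'         # placeholder project key
  - script: make build                    # your existing build and test steps
  - task: SonarQubeAnalyze@5
  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'            # wait for the server-side quality gate
```

Combined with a branch policy that requires the build to pass, a failed quality gate blocks the merge.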
