What Is CI/CD?
CI/CD stands for Continuous Integration and Continuous Delivery (or Continuous Deployment). It is a set of practices and tools that automate the process of building, testing, and deploying software. A CI/CD pipeline is the automated workflow that executes these steps every time a developer pushes code changes to a repository.
Before CI/CD existed, software releases were painful. Teams would develop features in isolation for weeks or months, then spend days merging everything together and fixing conflicts. Testing happened at the end, usually manually, and deployments were high-stakes events that everyone dreaded. A single release might take an entire weekend, with the team sitting in a war room waiting for things to break.
CI/CD changed all of that. By automating the integration, testing, and deployment steps, teams can ship changes in minutes instead of weeks. Every commit gets built and tested automatically. Every passing build can be deployed to production with a single click or no clicks at all.
Continuous Integration
Continuous Integration is the practice of automatically building and testing code every time a developer pushes changes to a shared repository. The core idea is simple: merge small changes frequently, and verify each merge immediately.
When a developer pushes a commit or opens a pull request, the CI system automatically checks out the code, installs dependencies, compiles the application (if applicable), and runs the test suite. If anything fails, the developer is notified within minutes.
CI solves the "integration hell" problem. When ten developers work on separate branches for two weeks and then try to merge everything at once, the resulting conflicts can take days to resolve. When those same ten developers merge small changes multiple times per day, each integration is trivial.
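As a concrete sketch, the whole check-out/install/test loop fits in a few lines of pipeline config. This uses GitHub Actions syntax (covered later in this guide) and assumes a Node.js project; the commands are illustrative:

```yaml
name: CI
on: [push, pull_request]          # verify every push and every pull request
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # check out the code
      - run: npm ci               # install dependencies
      - run: npm test             # run the test suite; a failure fails the build
```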
Continuous Delivery vs Continuous Deployment
This is where terminology gets confusing, because CD means two different things depending on who you ask.
Continuous Delivery means that every code change, after passing automated tests, is automatically prepared for release. The code is built, tested, and packaged into a deployable artifact. However, the actual deployment to production requires manual approval. A human clicks the "deploy" button after reviewing the change.
Continuous Deployment goes one step further. Every code change that passes the automated pipeline is automatically deployed to production without any manual intervention. There is no deploy button. If the tests pass, the code ships.
Most teams practice Continuous Delivery. Continuous Deployment requires a very mature test suite and robust rollback mechanisms. Companies like Netflix, Meta, and Etsy practice Continuous Deployment at scale, deploying hundreds or thousands of times per day.
A brief history of CI/CD
The concepts behind CI/CD evolved over decades:
- 2000: Martin Fowler and Kent Beck popularize Continuous Integration as part of Extreme Programming (XP).
- 2001: CruiseControl, one of the first CI servers, is released as open source.
- 2004: Hudson (later forked as Jenkins) is created by Kohsuke Kawaguchi at Sun Microsystems.
- 2010: Jez Humble and David Farley publish Continuous Delivery, the definitive book on the topic.
- 2011: Hudson is forked as Jenkins after an Oracle trademark dispute, and Travis CI launches as a hosted CI service for open-source projects.
- 2014: GitLab introduces GitLab CI, integrating CI/CD directly into the Git hosting platform.
- 2017: CircleCI 2.0 launches with Docker-native workflows.
- 2019: GitHub Actions launches, making CI/CD a native feature of the world's largest code hosting platform.
- 2020-present: CI/CD becomes table stakes. Every major development platform includes built-in pipeline support.
Why CI/CD matters
The business case for CI/CD is well-established. According to the DORA (DevOps Research and Assessment) State of DevOps reports, elite-performing teams deploy 973 times more frequently than low performers, with 6,570 times faster lead times from commit to deploy (figures from the 2021 report). These teams also have lower change failure rates and faster recovery times.
CI/CD delivers four key benefits:
- Speed. Automated builds and tests complete in minutes, not days. Deployments that used to take a weekend happen in seconds.
- Reliability. Every change goes through the same automated quality gates. There is no "we forgot to run the tests" scenario.
- Developer experience. Developers get fast feedback on their changes. Instead of finding out a week later that their code broke something, they know within minutes.
- Risk reduction. Small, frequent deployments are inherently lower risk than large, infrequent releases. If something breaks, the blast radius is small and the change is easy to revert.
How a CI/CD Pipeline Works
A CI/CD pipeline is a series of automated stages that code changes pass through on their way from a developer's machine to production. Each stage performs a specific function, and the pipeline stops if any stage fails.
Pipeline stages
A typical CI/CD pipeline has four to six stages:
1. Source. The pipeline is triggered when code changes are pushed to the repository. The CI system checks out the code and prepares the build environment.
2. Build. The application is compiled (for compiled languages like Java, Go, or C++) or dependencies are installed (for interpreted languages like Python, JavaScript, or Ruby). Build artifacts are generated.
3. Test. Automated tests run against the built application. This typically includes unit tests, integration tests, and sometimes end-to-end tests. Code quality checks and security scans also run in this stage.
4. Package. The tested application is packaged into a deployable artifact - a Docker image, a JAR file, a ZIP archive, or whatever format the deployment target expects.
5. Deploy to staging. The packaged artifact is deployed to a staging or pre-production environment that mirrors production as closely as possible.
6. Deploy to production. After validation in staging (which may include manual approval, automated smoke tests, or canary analysis), the artifact is deployed to production.
Triggers
Pipelines can be triggered by different events:
- Push triggers run the pipeline on every push to specific branches (e.g., `main`, `develop`).
- Pull request triggers run the pipeline when a PR is opened or updated, typically running tests and code analysis without deploying.
- Schedule triggers run the pipeline at fixed intervals (e.g., nightly builds, weekly security scans).
- Manual triggers allow developers to start a pipeline run on demand, useful for production deployments that require human approval.
- Tag triggers run the pipeline when a Git tag is created, commonly used for release workflows.
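All five trigger types map to concrete configuration. In GitHub Actions syntax, for example, a single workflow can declare several at once (the cron expression and tag pattern here are illustrative):

```yaml
on:
  push:
    branches: [main]      # push trigger
    tags: ['v*']          # tag trigger, common for release workflows
  pull_request:
    branches: [main]      # PR trigger: test and analyze, don't deploy
  schedule:
    - cron: '0 2 * * *'   # schedule trigger: nightly at 02:00 UTC
  workflow_dispatch:      # manual trigger, started from the Actions UI
```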
Artifacts and environments
Artifacts are the outputs of pipeline stages - compiled binaries, Docker images, test reports, code coverage reports, and security scan results. CI/CD systems store these artifacts so they can be passed between stages and downloaded later for debugging.
Environments are the deployment targets that the pipeline pushes code to. A typical setup includes:
- Development - automatically deployed on every commit to `main` or `develop`
- Staging - deployed after tests pass, used for final validation
- Production - deployed after staging validation, often with manual approval
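In GitHub Actions, for instance, artifacts are handed between jobs with the upload/download actions, so the deploy job ships exactly the bytes that were built and tested (the artifact name and paths are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist            # artifact name referenced by later jobs
          path: dist/
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: ./deploy.sh dist/  # deploy the same artifact that was tested
```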
How the stages connect
Here is how a typical pipeline flows from start to finish:
```
Developer pushes code
        |
        v
[Source: Checkout]
        |
        v
[Build: Compile + Install deps]
        |
        v
[Test: Unit + Integration + Security]
        |
        v
[Package: Docker image / artifact]
        |
        v
[Deploy: Staging]
        |
        v
[Validate: Smoke tests on staging]
        |
        v
[Approval gate: Manual or automatic]
        |
        v
[Deploy: Production]
```
If any stage fails, the pipeline stops and the developer is notified. The build is marked as failed, and if the pipeline was triggered by a pull request, the PR is marked with a red status check that blocks merging.
CI/CD Tool Comparison
There are dozens of CI/CD tools available. Here is a detailed comparison of the six most widely used platforms in 2026.
GitHub Actions
GitHub Actions is the CI/CD platform built directly into GitHub. It launched in 2019 and has rapidly become the most popular choice for open-source projects and teams already using GitHub for version control.
Pricing:
- Free for public repositories (unlimited minutes)
- 2,000 minutes/month free for private repos on the Free plan
- 3,000 minutes/month on the Team plan ($4/user/month)
- 50,000 minutes/month on the Enterprise plan ($21/user/month)
- Additional minutes cost $0.008/minute for Linux runners
Config example (.github/workflows/ci.yml):
```yaml
name: CI Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      - run: npm test
      - run: npm run lint
```
Pros:
- Zero setup required if you are already on GitHub
- Massive marketplace with 20,000+ pre-built actions
- Generous free tier for open-source projects
- Matrix builds, reusable workflows, and composite actions
- Tight integration with GitHub PRs, issues, and deployments
Cons:
- Vendor lock-in to GitHub
- Debugging failed workflows means reading logs in the browser; there is no first-party local execution (third-party tools like act only approximate it)
- YAML syntax can become complex for large pipelines
- Slower runner startup times compared to self-hosted solutions
GitLab CI
GitLab CI is the CI/CD platform built into GitLab. It was one of the first platforms to integrate CI/CD directly into the Git hosting experience, and it remains the most tightly integrated CI/CD solution in any development platform.
Pricing:
- 400 compute minutes/month on the Free plan
- 10,000 minutes/month on the Premium plan ($29/user/month)
- 50,000 minutes/month on the Ultimate plan ($99/user/month)
- Self-hosted runners are free (you pay for infrastructure)
Config example (.gitlab-ci.yml):
```yaml
stages:
  - build
  - test
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

build:
  stage: build
  image: python:3.12
  script:
    - python -m venv venv
    - source venv/bin/activate
    - pip install -r requirements.txt
  artifacts:
    paths:
      - venv/

test:
  stage: test
  image: python:3.12
  script:
    - source venv/bin/activate
    - pytest --cov=app --cov-report=xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - main
```
Pros:
- Best-in-class integrated DevOps experience (source control, CI/CD, registry, monitoring in one platform)
- Powerful auto-merge and merge train features
- Built-in container registry and artifact management
- Excellent self-hosted option for air-gapped environments
- Native DORA metrics and value stream analytics
Cons:
- Free tier is limited to 400 minutes (compared to GitHub Actions' 2,000)
- Premium/Ultimate plans are expensive per user
- Self-hosted GitLab requires significant infrastructure
- UI can feel overwhelming with so many features
Jenkins
Jenkins is the original CI server. It is an open-source automation server written in Java that has been around since 2011 (as a fork of Hudson, which dates back to 2004). Jenkins is the most flexible CI/CD tool available, but that flexibility comes at a cost: setup and maintenance are entirely your responsibility.
Pricing:
- Free and open source (MIT License)
- You pay for infrastructure to host the Jenkins server and build agents
- Typical cost: $50-500/month for cloud-hosted Jenkins, depending on scale
- CloudBees (the commercial Jenkins company) offers paid enterprise support starting around $30,000/year
Config example (Jenkinsfile):
```groovy
pipeline {
    agent any
    environment {
        JAVA_HOME = tool('JDK-17')
        MAVEN_HOME = tool('Maven-3.9')
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh '${MAVEN_HOME}/bin/mvn clean compile -DskipTests'
            }
        }
        stage('Unit Tests') {
            steps {
                sh '${MAVEN_HOME}/bin/mvn test'
            }
            post {
                always {
                    junit '**/target/surefire-reports/*.xml'
                }
            }
        }
        stage('Integration Tests') {
            steps {
                sh '${MAVEN_HOME}/bin/mvn verify -DskipUnitTests'
            }
        }
        stage('Package') {
            steps {
                sh '${MAVEN_HOME}/bin/mvn package -DskipTests'
                archiveArtifacts artifacts: 'target/*.jar'
            }
        }
        stage('Deploy to Staging') {
            when {
                branch 'main'
            }
            steps {
                sh './deploy.sh staging'
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            input {
                message 'Deploy to production?'
                ok 'Deploy'
            }
            steps {
                sh './deploy.sh production'
            }
        }
    }
    post {
        failure {
            slackSend channel: '#builds', message: "Build FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
```
Pros:
- Completely free and open source
- The most plugins of any CI/CD tool (1,800+)
- Maximum flexibility - you can build literally any workflow
- No vendor lock-in
- Strong community and decades of documentation
- Works in air-gapped and on-premise environments
Cons:
- You are responsible for all infrastructure and maintenance
- Plugin ecosystem can be fragile (compatibility issues, abandoned plugins)
- Groovy-based Jenkinsfiles have a steep learning curve
- Secrets management relies on the Credentials plugin or external tools rather than a modern built-in secrets store
- Security vulnerabilities in Jenkins itself require frequent updates
- UI feels dated compared to modern alternatives
Azure Pipelines
Azure Pipelines is the CI/CD platform within Microsoft's Azure DevOps suite. It is the natural choice for teams deploying to Azure or working in a Microsoft ecosystem.
Pricing:
- 10 free parallel jobs with unlimited minutes for public projects
- 1 free parallel job with 1,800 minutes/month for private projects
- Additional parallel jobs cost $40/month each
- Self-hosted agents are free (unlimited parallel jobs)
Config example (azure-pipelines.yml):
```yaml
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'
  dotnetVersion: '8.x'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - task: UseDotNet@2
            inputs:
              packageType: 'sdk'
              version: '$(dotnetVersion)'
          - task: DotNetCoreCLI@2
            displayName: 'Restore packages'
            inputs:
              command: 'restore'
              projects: '**/*.csproj'
          - task: DotNetCoreCLI@2
            displayName: 'Build'
            inputs:
              command: 'build'
              projects: '**/*.csproj'
              arguments: '--configuration $(buildConfiguration) --no-restore'
          - task: DotNetCoreCLI@2
            displayName: 'Run tests'
            inputs:
              command: 'test'
              projects: '**/*Tests.csproj'
              arguments: '--configuration $(buildConfiguration) --collect:"XPlat Code Coverage"'
          - task: PublishCodeCoverageResults@2
            inputs:
              summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
  - stage: Deploy
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployToStaging
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: 'my-azure-subscription'
                    appName: 'my-app-staging'
                    package: '$(Pipeline.Workspace)/**/*.zip'
```
Pros:
- Best integration with Azure services and Microsoft ecosystem
- Strong support for .NET, Visual Studio, and Windows-based builds
- Built-in deployment environments with approval gates
- YAML and classic (visual designer) pipeline options
- Good free tier for open-source projects
Cons:
- Azure DevOps UI can be confusing for newcomers
- YAML syntax differs from GitHub Actions and GitLab CI
- Marketplace has fewer extensions than GitHub Actions
- Documentation can be hard to navigate
- Less community content and tutorials compared to GitHub Actions
CircleCI
CircleCI is a hosted CI/CD platform known for fast build times and powerful caching. It is a popular choice for teams that want high-performance pipelines without managing infrastructure.
Pricing:
- Free plan includes 6,000 build minutes/month (on resource class medium)
- Performance plan starts at $15/month with 80,000 credits (approximately 5,000 minutes)
- Scale plan for large organizations with custom pricing
- Self-hosted runner option available
Config example (.circleci/config.yml):
```yaml
version: 2.1
orbs:
  node: circleci/node@5.2
jobs:
  build-and-test:
    docker:
      - image: cimg/node:20.11
      - image: cimg/postgres:15.4
        environment:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: npm
      - run:
          name: Run tests
          command: npm test
      - run:
          name: Run linter
          command: npm run lint
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: coverage
workflows:
  build-test-deploy:
    jobs:
      - build-and-test
      # assumes a deploy job is defined alongside build-and-test
      - deploy:
          requires:
            - build-and-test
          filters:
            branches:
              only: main
```
Pros:
- Excellent caching and parallelism features
- Orbs (reusable config packages) simplify common setups
- SSH debugging into failed builds
- Fast runner startup times
- Good Docker layer caching
Cons:
- Credit-based pricing can be confusing
- Free tier has become less generous over time
- Config syntax has a learning curve compared to GitHub Actions
- Fewer integrations than GitHub Actions marketplace
Buildkite
Buildkite takes a different approach: it provides the orchestration layer in the cloud, but your builds run on your own infrastructure using self-hosted agents. This gives you maximum control over the build environment while offloading pipeline management.
Pricing:
- Free for open-source and educational projects
- Starts at $15/user/month for teams
- No per-minute charges (you provide the compute)
- Enterprise plan with custom pricing
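The article includes no config sample for Buildkite, so here is a hedged sketch of its pipeline.yml format (the step labels, commands, and queue name are illustrative):

```yaml
# .buildkite/pipeline.yml
steps:
  - label: ":hammer: Build"
    command: "make build"
    agents:
      queue: "linux"      # routes the step to your self-hosted agents

  - wait                  # barrier: everything above must pass first

  - label: ":test_tube: Test"
    command: "make test"

  - block: ":rocket: Deploy to production?"   # manual approval gate

  - label: "Deploy"
    command: "./deploy.sh production"
```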
Pros:
- Builds run on your own infrastructure for maximum speed and security
- Simple YAML-based pipeline configuration
- Excellent for large monorepos (used by Shopify, Canva, and Wayfair)
- Very fast pipeline startup and execution
- Plugins ecosystem for common build tools
Cons:
- You must manage your own build agents
- Less turnkey than fully hosted solutions
- Smaller community than GitHub Actions or Jenkins
- No free hosted compute (unlike GitHub Actions or CircleCI)
Quick comparison table
| Feature | GitHub Actions | GitLab CI | Jenkins | Azure Pipelines | CircleCI | Buildkite |
|---|---|---|---|---|---|---|
| Free tier | 2,000 min/mo | 400 min/mo | Free (self-hosted) | 1,800 min/mo | 6,000 min/mo | Free for OSS (your compute) |
| Config format | YAML | YAML | Groovy | YAML | YAML | YAML |
| Self-hosted runners | Yes | Yes | Required | Yes | Yes | Required |
| Container support | Excellent | Excellent | Via plugins | Good | Excellent | Via plugins |
| Marketplace/Plugins | 20,000+ | 500+ | 1,800+ | 1,000+ | 400+ orbs | Plugin directory |
| Learning curve | Low | Medium | High | Medium | Medium | Medium |
| Best for | GitHub teams | GitLab teams | Enterprise/custom | Azure/Microsoft | Performance | Monorepos/self-hosted |
Real Config Files
Here are production-ready CI/CD configurations for four common technology stacks. Each example includes build, test, security scanning, and deployment stages.
GitHub Actions workflow for a Node.js app
This workflow builds a Node.js application, runs tests with coverage, performs security scanning with Semgrep, and deploys to AWS.
```yaml
name: CI/CD Pipeline
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
env:
  NODE_VERSION: 20
  AWS_REGION: us-east-1
jobs:
  lint-and-typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: testdb
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm test -- --coverage --ci
        env:
          DATABASE_URL: postgresql://testuser:testpass@localhost:5432/testdb
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/javascript
            p/typescript
            p/nodejs
            p/owasp-top-ten
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
  build-and-push:
    needs: [lint-and-typecheck, test, security-scan]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr-login
      - run: |
          docker build -t ${{ steps.ecr-login.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.ecr-login.outputs.registry }}/my-app:${{ github.sha }}
  deploy-staging:
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - run: |
          aws ecs update-service \
            --cluster staging-cluster \
            --service my-app \
            --force-new-deployment
  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - run: |
          aws ecs update-service \
            --cluster production-cluster \
            --service my-app \
            --force-new-deployment
```
GitLab CI for a Python app
This configuration builds a Python application with Poetry, runs tests with pytest, scans with SonarQube, and deploys to Kubernetes.
```yaml
stages:
  - build
  - test
  - security
  - deploy

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  POETRY_VIRTUALENVS_IN_PROJECT: "true"
  SONAR_HOST_URL: "https://sonarqube.example.com"

cache:
  key:
    files:
      - poetry.lock
  paths:
    - .cache/pip
    - .venv/

build:
  stage: build
  image: python:3.12
  script:
    - pip install poetry
    - poetry install --no-interaction
  artifacts:
    paths:
      - .venv/
    expire_in: 1 hour

unit-tests:
  stage: test
  image: python:3.12
  script:
    - pip install poetry
    - poetry run pytest tests/unit --cov=app --cov-report=xml --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
  coverage: '/(?i)total.*? (100(?:\.0+)?\%|[1-9]?\d(?:\.\d+)?\%)$/'

integration-tests:
  stage: test
  image: python:3.12
  services:
    - postgres:16
    - redis:7
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: testuser
    POSTGRES_PASSWORD: testpass
    DATABASE_URL: "postgresql://testuser:testpass@postgres:5432/testdb"
    REDIS_URL: "redis://redis:6379"
  script:
    - pip install poetry
    - poetry run pytest tests/integration --timeout=120

sonarqube-check:
  stage: security
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  script:
    - sonar-scanner
      -Dsonar.projectKey=$CI_PROJECT_NAME
      -Dsonar.sources=app/
      -Dsonar.tests=tests/
      -Dsonar.python.coverage.reportPaths=coverage.xml
      -Dsonar.host.url=$SONAR_HOST_URL
      -Dsonar.token=$SONAR_TOKEN

semgrep-scan:
  stage: security
  image: returntocorp/semgrep
  script:
    - semgrep ci --config p/python --config p/flask --config p/django

deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/my-app
      my-app=registry.example.com/my-app:$CI_COMMIT_SHA
      --namespace=staging
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/my-app
      my-app=registry.example.com/my-app:$CI_COMMIT_SHA
      --namespace=production
  environment:
    name: production
    url: https://app.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```
Jenkinsfile for a Java app
This Jenkinsfile builds a Java application with Maven, runs tests, performs SonarQube analysis, and deploys to multiple environments.
```groovy
pipeline {
    agent any
    tools {
        jdk 'JDK-17'
        maven 'Maven-3.9'
    }
    environment {
        SONAR_HOST = 'https://sonarqube.example.com'
        DOCKER_REGISTRY = 'registry.example.com'
        APP_NAME = 'my-java-app'
    }
    options {
        timeout(time: 30, unit: 'MINUTES')
        buildDiscarder(logRotator(numToKeepStr: '10'))
        disableConcurrentBuilds()
    }
    stages {
        stage('Compile') {
            steps {
                sh 'mvn clean compile -DskipTests'
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit '**/target/surefire-reports/*.xml'
                    jacoco execPattern: '**/target/jacoco.exec'
                }
            }
        }
        stage('Integration Tests') {
            steps {
                sh 'mvn verify -DskipUnitTests -Pintegration-tests'
            }
            post {
                always {
                    junit '**/target/failsafe-reports/*.xml'
                }
            }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('SonarQube') {
                    sh '''
                        mvn sonar:sonar \
                            -Dsonar.projectKey=${APP_NAME} \
                            -Dsonar.host.url=${SONAR_HOST}
                    '''
                }
            }
        }
        stage('Quality Gate') {
            steps {
                timeout(time: 5, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
        stage('Build Docker Image') {
            when {
                branch 'main'
            }
            steps {
                sh """
                    docker build -t ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER} .
                    docker push ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}
                """
            }
        }
        stage('Deploy to Staging') {
            when {
                branch 'main'
            }
            steps {
                sh "./deploy.sh staging ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}"
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            input {
                message 'Deploy to production?'
                ok 'Yes, deploy it'
                submitter 'devops-team'
            }
            steps {
                sh "./deploy.sh production ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}"
            }
        }
    }
    post {
        success {
            slackSend color: 'good', message: "Build succeeded: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        failure {
            slackSend color: 'danger', message: "Build FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
```
Azure Pipelines for a .NET app
This pipeline builds a .NET application, runs tests, performs security scanning with Snyk, and deploys to Azure App Service.
```yaml
trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - '*.md'
      - 'docs/**'

pr:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'
  dotnetVersion: '8.x'
  azureSubscription: 'my-azure-subscription'
  appNameStaging: 'my-app-staging'
  appNameProd: 'my-app-prod'

stages:
  - stage: Build
    displayName: 'Build and Test'
    jobs:
      - job: Build
        steps:
          - task: UseDotNet@2
            displayName: 'Install .NET SDK'
            inputs:
              packageType: 'sdk'
              version: '$(dotnetVersion)'
          - task: DotNetCoreCLI@2
            displayName: 'Restore packages'
            inputs:
              command: 'restore'
              projects: '**/*.csproj'
          - task: DotNetCoreCLI@2
            displayName: 'Build'
            inputs:
              command: 'build'
              projects: '**/*.csproj'
              arguments: '--configuration $(buildConfiguration) --no-restore'
          - task: DotNetCoreCLI@2
            displayName: 'Run unit tests'
            inputs:
              command: 'test'
              projects: '**/*Tests.csproj'
              arguments: '--configuration $(buildConfiguration) --collect:"XPlat Code Coverage" --results-directory $(Agent.TempDirectory)'
          - task: PublishCodeCoverageResults@2
            displayName: 'Publish coverage'
            inputs:
              summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'
          - task: DotNetCoreCLI@2
            displayName: 'Publish'
            inputs:
              command: 'publish'
              projects: 'src/MyApp/MyApp.csproj'
              arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
          - task: PublishBuildArtifacts@1
            displayName: 'Upload artifacts'
            inputs:
              pathToPublish: '$(Build.ArtifactStagingDirectory)'
              artifactName: 'drop'
  - stage: SecurityScan
    displayName: 'Security Scan'
    dependsOn: Build
    jobs:
      - job: SnykScan
        steps:
          - task: SnykSecurityScan@1
            inputs:
              serviceConnectionEndpoint: 'snyk-connection'
              testType: 'app'
              monitorWhen: 'always'
              failOnIssues: true
              severityThreshold: 'high'
  - stage: DeployStaging
    displayName: 'Deploy to Staging'
    dependsOn: SecurityScan
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployStaging
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appType: 'webAppLinux'
                    appName: '$(appNameStaging)'
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
  - stage: DeployProduction
    displayName: 'Deploy to Production'
    dependsOn: DeployStaging
    condition: succeeded()
    jobs:
      - deployment: DeployProd
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appType: 'webAppLinux'
                    appName: '$(appNameProd)'
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
```
Pipeline Design Patterns
There is no single correct way to design a CI/CD pipeline. The right pattern depends on your team's branching strategy, repository structure, and deployment requirements. Here are the most common patterns.
Trunk-based development pipeline
In trunk-based development, all developers commit directly to the main branch (or use very short-lived feature branches that last hours, not days). The pipeline runs on every commit to main and deploys automatically.
```
main branch ──> Build ──> Test ──> Deploy staging ──> Auto-deploy prod
```
This pattern works best for small teams with strong test coverage and fast pipelines. It requires discipline: every commit to main must be deployable. Feature flags are used to hide incomplete work.
When to use: Teams of 2-10 developers, mature test suites, web applications with fast rollback capability.
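The feature-flag technique mentioned above can be sketched in a few lines. This is a minimal illustration, not a production flag system; the flag name, the environment-variable convention, and the checkout functions are all hypothetical:

```python
# Minimal feature-flag gate: incomplete work merges to main but ships dark.
# Flag names and the FEATURE_* env-var convention are illustrative only.
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from the environment, e.g. FEATURE_NEW_CHECKOUT=1."""
    value = os.environ.get(f"FEATURE_{name.upper()}", "")
    if value == "":
        return default
    return value.lower() in ("1", "true", "yes", "on")

def new_checkout(cart):
    return {"flow": "new", "items": cart}       # in-progress code path

def legacy_checkout(cart):
    return {"flow": "legacy", "items": cart}    # current production path

def checkout(cart):
    # Every commit is deployable: the new path is merged but hidden
    # until the flag flips, and can be turned off without a deploy.
    if flag_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

In production, teams typically swap the environment lookup for a flag service so flags can change at runtime without redeploying.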
Feature branch pipeline
Feature branch workflows are the most common pattern. Developers create branches for each feature, open pull requests, and merge after CI passes and code review is approved.
```
feature branch ──> Build ──> Test ──> Security scan ──> PR review
                                                            |
                                                      merge to main
                                                            |
                     Build ──> Deploy staging ──> Deploy prod
```
The pipeline runs differently depending on the trigger. On pull requests, it runs build, test, and analysis steps but does not deploy. On merges to main, it runs the full pipeline including deployment.
When to use: Most teams. This is the default pattern for teams of any size using GitHub or GitLab.
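In GitHub Actions, the "runs differently depending on the trigger" behavior is usually expressed with a job-level condition (job names and scripts here are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest    # runs for both pull requests and pushes to main
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
  deploy:
    needs: test
    # deploy only on merges to main, never on pull request builds
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh staging
```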
Monorepo pipeline
In a monorepo, multiple applications or services live in a single repository. The pipeline must be smart enough to only build and deploy the services that actually changed.
```yaml
# GitHub Actions monorepo example
on:
  push:
    branches: [main]
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.changes.outputs.api }}
      web: ${{ steps.changes.outputs.web }}
      shared: ${{ steps.changes.outputs.shared }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: changes
        with:
          filters: |
            api:
              - 'services/api/**'
              - 'packages/shared/**'
            web:
              - 'services/web/**'
              - 'packages/shared/**'
            shared:
              - 'packages/shared/**'
  build-api:
    needs: detect-changes
    if: needs.detect-changes.outputs.api == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cd services/api && npm ci && npm test && npm run build
  build-web:
    needs: detect-changes
    if: needs.detect-changes.outputs.web == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cd services/web && npm ci && npm test && npm run build
```
When to use: Teams managing multiple related services, large codebases with shared libraries, organizations that want a single source of truth.
Multi-environment deployment
Most production pipelines deploy through multiple environments. The key decisions are how many environments you need and what gates exist between them.
```
Build ──> Test ──> Deploy DEV (auto) ──> Deploy STAGING (auto) ──> Smoke tests ──> Deploy PROD (manual approval)
```
A common setup uses three environments:
- Development - Automatically deployed on every merge to `main`. Used by developers for integration testing.
- Staging - Automatically deployed after dev succeeds. Mirrors production configuration. Used by QA for final validation.
- Production - Deployed after manual approval. May use canary or blue-green deployment strategies.
```yaml
# GitHub Actions multi-environment example
deploy-dev:
  needs: test
  runs-on: ubuntu-latest
  environment: development
  steps:
    - run: ./deploy.sh dev

deploy-staging:
  needs: deploy-dev
  runs-on: ubuntu-latest
  environment: staging
  steps:
    - run: ./deploy.sh staging
    - run: ./run-smoke-tests.sh staging

deploy-production:
  needs: deploy-staging
  runs-on: ubuntu-latest
  environment:
    name: production
    url: https://app.example.com
  steps:
    - run: ./deploy.sh production
    - run: ./run-smoke-tests.sh production
```
Parallel testing
For large test suites, running all tests sequentially can take too long. Parallel testing splits the test suite across multiple runners to reduce total execution time.
# GitHub Actions parallel testing with matrix
test:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      shard: [1, 2, 3, 4]
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
        cache: 'npm'
    - run: npm ci
    - run: npx jest --shard=${{ matrix.shard }}/4
# GitLab CI parallel testing (the --splits/--group flags require the pytest-split plugin)
test:
  stage: test
  parallel: 4
  script:
    - pytest tests/ --splits 4 --group $CI_NODE_INDEX
A test suite that takes 20 minutes sequentially can complete in 5 minutes when split across four parallel runners. The trade-off is increased compute cost, but for most teams the time savings justify the expense.
Security in CI/CD
A CI/CD pipeline is the perfect place to enforce security checks because every code change must pass through it. Security scanning in the pipeline catches vulnerabilities before they reach production.
Secret management
Never hardcode secrets in pipeline configuration files. Every CI/CD platform provides a secrets management feature:
- GitHub Actions: Repository secrets and environment secrets, accessed via ${{ secrets.MY_SECRET }}
- GitLab CI: CI/CD variables with masking and protection options
- Jenkins: Credentials plugin with secret text, username/password, and SSH key types
- Azure Pipelines: Variable groups and Azure Key Vault integration
Best practices for secrets in CI/CD:
- Rotate secrets regularly (at least every 90 days)
- Use environment-specific secrets (different credentials for staging vs production)
- Never print secrets in build logs (most platforms mask them automatically)
- Use short-lived tokens instead of long-lived credentials when possible
- Audit secret access with your platform's audit log
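As a concrete sketch, here is how environment-scoped secrets might look in a GitHub Actions deploy job. The secret name and deploy script are hypothetical; the point is that the job's environment determines which copy of the secret is injected.

```yaml
deploy-staging:
  runs-on: ubuntu-latest
  environment: staging            # pulls secrets scoped to the staging environment
  steps:
    - uses: actions/checkout@v4
    - run: ./deploy.sh staging
      env:
        # DEPLOY_API_KEY is a hypothetical environment secret; staging and
        # production can each hold a different value under the same name
        DEPLOY_API_KEY: ${{ secrets.DEPLOY_API_KEY }}
```

Because the value is registered as a secret, the platform masks it if it ever appears in build logs.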
SAST integration
Static Application Security Testing (SAST) tools scan your source code for security vulnerabilities without executing the application. Two of the most popular SAST tools for CI/CD pipelines are Semgrep and SonarQube.
Semgrep in GitHub Actions:
security-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: returntocorp/semgrep-action@v1
      with:
        config: >-
          p/owasp-top-ten
          p/cwe-top-25
          p/security-audit
      env:
        SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
Semgrep scans in seconds, supports 30+ languages, and can be configured with custom rules for your organization's specific security requirements. The free tier supports teams of up to 10 developers with full cross-file analysis.
SonarQube quality gates:
SonarQube can be configured to fail the pipeline if the code does not meet predefined quality and security thresholds:
- No new critical or blocker security issues
- Code coverage on new code above 80%
- No new security hotspots without review
- Duplication on new code below 3%
These quality gates ensure that the security bar never drops, even as the codebase grows.
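One way this can be wired into a GitHub Actions pipeline is with SonarSource's published actions: a scan step followed by a quality-gate check that fails the job when the gate fails. Treat the action versions and secret names here as placeholders to verify against SonarSource's current docs.

```yaml
sonarqube:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0                # full history improves new-code analysis
    - uses: sonarsource/sonarqube-scan-action@v2
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
    - uses: sonarsource/sonarqube-quality-gate-action@v1
      timeout-minutes: 5              # fails this job if the quality gate fails
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```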
Dependency scanning
Your application's dependencies are a major attack surface. Dependency scanning tools check your third-party libraries against databases of known vulnerabilities (CVEs).
# GitHub Actions dependency scanning with Snyk
dependency-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: snyk/actions/node@master
      env:
        SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      with:
        args: --severity-threshold=high
Other dependency scanning options include GitHub's built-in Dependabot, GitLab's dependency scanning, and OWASP Dependency-Check (free and open source).
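Dependabot, for instance, is enabled with a small config file committed to the repository. A minimal setup for an npm project looks like this:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"      # other ecosystems include pip, gomod, docker, github-actions
    directory: "/"                # where the package manifest lives
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5   # cap the number of open update PRs
```

With this in place, Dependabot opens pull requests for outdated or vulnerable dependencies, and those PRs flow through the same CI pipeline as any other change.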
Container scanning
If your pipeline builds Docker images, scan those images for vulnerabilities before pushing them to a registry.
# Trivy container scanning
container-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: docker build -t my-app:${{ github.sha }} .
    - uses: aquasecurity/trivy-action@master
      with:
        image-ref: my-app:${{ github.sha }}
        format: 'sarif'
        output: 'trivy-results.sarif'
        severity: 'CRITICAL,HIGH'
    - uses: github/codeql-action/upload-sarif@v3
      with:
        sarif_file: 'trivy-results.sarif'
Trivy, Grype, and Snyk Container are popular choices for container scanning. They check the base image, OS packages, and application dependencies within the container.
Supply chain security
Supply chain attacks target the tools and dependencies that your pipeline relies on. Notable incidents like the SolarWinds attack and the xz backdoor have made supply chain security a top priority.
Protections include:
- Pin action versions by SHA instead of tag: uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29 instead of uses: actions/checkout@v4
- Sign and verify artifacts with Sigstore/Cosign
- Use lockfiles (package-lock.json, poetry.lock, go.sum) and verify checksums
- Limit pipeline permissions with the principle of least privilege
- Enable GitHub's artifact attestations for provenance tracking
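The SHA-pinning and least-privilege protections take only a few lines of workflow config. A sketch in GitHub Actions (the pinned SHA is the checkout commit mentioned above):

```yaml
# Workflow-wide default: the GITHUB_TOKEN can only read repository contents
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pinned by commit SHA rather than a movable tag like @v4
      - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
      - run: npm ci && npm test
```

Jobs that genuinely need more (for example, to push a package or comment on a PR) can widen their own permissions block without widening the default for everything else.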
Code Review in CI/CD
Automated code review is a natural extension of the CI/CD pipeline. When a developer opens a pull request, the pipeline can run code analysis tools that post findings directly as PR comments.
Automated review integration
Tools like CodeRabbit, PR-Agent, and GitHub Copilot can be integrated into your CI/CD pipeline to provide AI-powered code review on every pull request. These tools analyze the diff, identify potential issues, and post inline comments with explanations and fix suggestions.
For rule-based review, Semgrep and SonarQube post findings as PR comments when configured with their respective GitHub or GitLab integrations. This gives developers immediate feedback on security and quality issues without waiting for a human reviewer.
# Example: Semgrep PR comments in GitHub Actions
name: Semgrep PR Analysis
on:
  pull_request: {}
jobs:
  semgrep:
    runs-on: ubuntu-latest
    container:
      image: returntocorp/semgrep
    steps:
      - uses: actions/checkout@v4
      - run: semgrep ci
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
When connected to the Semgrep Cloud Platform, findings from PR scans appear as inline comments directly on the pull request, showing the exact line of code with the vulnerability and a recommended fix.
Quality gates
Quality gates are pass/fail criteria that the pipeline enforces on every PR. If the code does not meet the defined thresholds, the pipeline fails and the PR cannot be merged.
Common quality gate criteria:
- Zero new critical security issues - Any new critical or high-severity vulnerability blocks the merge
- Test coverage threshold - New code must have at least 80% test coverage
- No new code smells - Maintainability issues above a certain severity block the merge
- Linting passes - Code must conform to the team's style rules
Quality gates work best when they are strict on new code but lenient on existing code. This prevents the situation where a developer fixing a small bug is blocked because an unrelated part of the codebase has pre-existing issues.
Build validation for PRs
GitHub's branch protection rules and GitLab's merge request approvals can require that specific CI pipeline checks pass before a PR can be merged. This ensures that no code reaches main without going through the full quality pipeline.
Recommended branch protection settings:
- Require status checks to pass (build, test, lint, security scan)
- Require at least one approving review
- Require branches to be up-to-date before merging
- Dismiss stale reviews when new commits are pushed
- Require conversation resolution (ensures developers address automated feedback)
Monitoring and Observability
A CI/CD pipeline is a system that needs monitoring just like any other production system. When pipelines are slow or unreliable, developer productivity suffers.
Pipeline metrics
The key metrics to track for your CI/CD pipeline are:
- Pipeline duration - How long does the full pipeline take to complete? Aim for under 10 minutes for PR pipelines.
- Success rate - What percentage of pipeline runs succeed? Below 90% indicates flaky tests or infrastructure issues.
- Queue time - How long do jobs wait before a runner picks them up? High queue times mean you need more runners.
- Recovery time - How long does it take to fix a broken pipeline? This is a leading indicator of team health.
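Success rate is the easiest of these to compute yourself: fetch the conclusions of recent runs from your platform's API (for example, `gh run list --limit 100 --json conclusion` on GitHub) and count. A toy sketch with hard-coded sample conclusions standing in for real API output:

```shell
# Sample data: four run conclusions as the gh CLI would report them
runs="success failure success success"

total=0; ok=0
for c in $runs; do
  total=$((total + 1))
  if [ "$c" = "success" ]; then ok=$((ok + 1)); fi
done
echo "success rate: $(( ok * 100 / total ))%"   # prints: success rate: 75%
```

The same loop works unchanged on larger samples; track the number weekly and investigate whenever it dips below your threshold.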
Failure alerting
Pipeline failures should be surfaced immediately to the developers who caused them. Every CI/CD platform supports notifications via:
- Slack or Microsoft Teams messages
- Email notifications
- GitHub commit status checks
- Mobile push notifications (via platform apps)
Configure alerts to be specific and actionable. A message that says "Build #1234 failed" is far less useful than "Build #1234 failed at the unit test stage: 3 tests failed in auth.test.ts (lines 45, 78, and 112)."
DORA metrics
The DORA (DevOps Research and Assessment) framework defines four key metrics for measuring software delivery performance:
- Deployment frequency - How often does your team deploy to production? Elite teams deploy multiple times per day.
- Lead time for changes - How long from commit to production deployment? Elite teams achieve less than one day.
- Change failure rate - What percentage of deployments cause failures? Elite teams keep this below 5%.
- Time to restore service - How long to recover from a production failure? Elite teams restore service in under one hour.
These metrics provide an objective measure of your CI/CD pipeline's effectiveness. If your deployment frequency is increasing while your change failure rate is decreasing, your pipeline is working well.
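Lead time for changes is just the gap between a commit timestamp (from git) and the deploy timestamp (from your CD tool). A toy calculation with made-up timestamps, using GNU date for the epoch conversion:

```shell
# Hypothetical timestamps; in practice pull these from `git log` and your deploy logs.
# Note: `date -d` as used here is GNU date (standard on Linux CI runners).
commit_time="2024-05-01T09:00:00Z"
deploy_time="2024-05-01T15:30:00Z"

lead=$(( $(date -u -d "$deploy_time" +%s) - $(date -u -d "$commit_time" +%s) ))
echo "lead time: $(( lead / 3600 ))h $(( lead % 3600 / 60 ))m"   # prints: lead time: 6h 30m
```

Averaging this over all deployments in a month gives you a DORA lead-time number without any extra tooling.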
Dashboard tools
Several tools provide CI/CD pipeline observability:
- GitLab Value Stream Analytics - Built into GitLab, tracks DORA metrics natively
- Grafana - Create custom dashboards pulling data from CI/CD platform APIs
- Datadog CI Visibility - Tracks pipeline performance, flaky tests, and bottlenecks
- Sleuth - DORA metrics tracking with deploy tracking and change failure detection
- LinearB - Engineering metrics including CI/CD pipeline performance
The simplest approach for most teams is to start with your CI/CD platform's built-in analytics (GitHub Actions has workflow run insights, GitLab has CI/CD analytics) and add a dedicated tool only if you need deeper visibility.
Best Practices
After working with CI/CD pipelines across dozens of teams and technology stacks, here are the practices that consistently separate fast, reliable pipelines from slow, frustrating ones.
Keep pipelines fast (under 10 minutes)
Developer productivity drops dramatically when pipelines take longer than 10 minutes. A study by the Continuous Delivery Foundation found that developers context-switch to other work when builds take more than 5 minutes, and the cost of that context-switching often exceeds the cost of faster build infrastructure.
Strategies to keep pipelines fast:
- Run linting and unit tests in parallel, not sequentially
- Use test splitting to distribute tests across multiple runners
- Move slow integration tests to a separate pipeline that runs post-merge
- Use incremental builds instead of full rebuilds
- Start with the fastest checks (linting) and fail fast before running slower checks (integration tests)
Cache dependencies
Downloading and installing dependencies on every pipeline run wastes time and bandwidth. Every CI/CD platform supports caching.
# GitHub Actions caching
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: 'npm' # Built-in npm caching
# GitLab CI caching
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/
# CircleCI caching
- restore_cache:
    keys:
      - deps-{{ checksum "package-lock.json" }}
      - deps-
Cache keys should be based on lockfile checksums so the cache is invalidated when dependencies change. A well-configured cache can reduce pipeline duration by 30-60%.
Fail fast
Order pipeline stages so that the fastest, most likely-to-fail steps run first. If the linter catches a formatting issue in 10 seconds, there is no reason to wait for a 5-minute test suite to tell you the same thing.
A good stage order is:
- Lint and format check (10-30 seconds)
- Type checking (30-60 seconds)
- Unit tests (1-3 minutes)
- Security scanning (1-2 minutes)
- Integration tests (3-10 minutes)
- Build and package (1-3 minutes)
- Deploy
If step 1 fails, the developer gets feedback in under a minute and can fix the issue before steps 2-7 even start.
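In GitHub Actions terms, this ordering falls out naturally: steps within a job stop at the first failure, and needs keeps slow jobs from starting until cheap ones pass. A sketch, assuming typical Node project scripts (lint, test, test:integration are placeholders for your own commands):

```yaml
quick-checks:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
        cache: 'npm'
    - run: npm ci
    - run: npm run lint       # fastest check first; a failure stops the job here
    - run: npx tsc --noEmit   # type checking
    - run: npm test           # unit tests run only if the cheap checks passed
integration-tests:
  needs: quick-checks         # the slow suite never starts if quick checks fail
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    - run: npm run test:integration
```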
Infrastructure as Code
Your deployment infrastructure should be defined in code, version-controlled alongside your application, and deployed through your CI/CD pipeline. Tools like Terraform, Pulumi, and AWS CDK let you define infrastructure in files that can be reviewed, tested, and rolled back just like application code.
# Terraform deployment in GitHub Actions
deploy-infrastructure:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: hashicorp/setup-terraform@v3
    - run: terraform init
    - run: terraform plan -out=tfplan
    - run: terraform apply tfplan
      if: github.ref == 'refs/heads/main'
This eliminates configuration drift, makes infrastructure changes auditable, and ensures that staging and production environments stay in sync.
Pipeline as Code
Your CI/CD pipeline definition should live in your repository alongside the code it builds and deploys. This is the "Pipeline as Code" principle, and every modern CI/CD tool supports it:
- GitHub Actions: .github/workflows/*.yml
- GitLab CI: .gitlab-ci.yml
- Jenkins: Jenkinsfile
- Azure Pipelines: azure-pipelines.yml
- CircleCI: .circleci/config.yml
Pipeline as Code means that pipeline changes go through the same code review process as application changes. When someone modifies the deployment pipeline, that change is reviewed, tested, and tracked in version control. This is far safer than clicking through a web UI to modify pipeline settings.
Use branch protection
Never allow direct pushes to your main branch. Require that all changes go through a pull request with:
- At least one CI pipeline passing
- At least one human approval
- Up-to-date branch (rebased on latest main)
This ensures that no code reaches production without being built, tested, scanned, and reviewed.
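Branch protection itself can be managed as code rather than clicked through the UI. GitHub's REST API accepts a payload like the one below via something like `gh api -X PUT repos/<owner>/<repo>/branches/main/protection --input protection.json` (field names per GitHub's REST docs; the check names and counts are examples to adapt):

```json
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["build", "test", "lint", "security-scan"]
  },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "dismiss_stale_reviews": true
  },
  "restrictions": null
}
```

Here "strict": true is the "require branches to be up-to-date" rule, and "dismiss_stale_reviews" invalidates approvals when new commits are pushed.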
Handle flaky tests
Flaky tests - tests that sometimes pass and sometimes fail without any code change - are the most insidious threat to CI/CD pipeline reliability. When developers start ignoring test failures because "that test is just flaky," you have lost the safety net that CI provides.
Address flaky tests by:
- Quarantining known flaky tests (run them but do not block the pipeline)
- Tracking flakiness rates and fixing the worst offenders first
- Setting a team policy that flaky tests must be fixed or deleted within a week
- Using tools like Datadog CI Visibility or BuildPulse that automatically detect and track flaky tests
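Quarantining maps neatly onto CI config. In GitHub Actions, for example, continue-on-error lets a step run without failing the job; the quarantine directory and jest flags below are illustrative, not a fixed convention:

```yaml
test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    # Stable tests: any failure here blocks the pipeline
    - run: npx jest --testPathIgnorePatterns=tests/quarantine
    # Quarantined flaky tests: still run so flakiness data keeps accruing,
    # but a failure here never blocks the merge
    - run: npx jest tests/quarantine
      continue-on-error: true
```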
Implement rollback strategies
Every deployment strategy should include a rollback plan. Common approaches:
- Blue-green deployment: Maintain two identical production environments. Deploy to the inactive one, verify, then switch traffic. Roll back by switching traffic back.
- Canary deployment: Deploy the new version to a small percentage of traffic (e.g., 5%), monitor for errors, then gradually increase. Roll back by routing all traffic to the old version.
- Feature flags: Deploy the code but hide new features behind flags. If something breaks, disable the flag without redeploying.
- Database migrations: Always make migrations backward-compatible so that the previous application version can work with the new schema.
Getting started
If you are setting up CI/CD for the first time, here is the simplest path:
- Choose your platform. If your code is on GitHub, use GitHub Actions. If it is on GitLab, use GitLab CI. Do not overthink this.
- Start with a basic pipeline. Build and test on every pull request. Nothing more.
- Add caching. Once the basic pipeline works, add dependency caching to speed it up.
- Add security scanning. Add Semgrep or Snyk to scan for vulnerabilities on every PR.
- Add deployment. Start with automated deployment to a staging environment. Add production deployment with manual approval.
- Add quality gates. Configure branch protection rules to require CI checks to pass before merging.
- Monitor and iterate. Track pipeline duration and success rate. Fix flaky tests. Optimize slow stages.
You do not need a perfect pipeline on day one. Start simple, measure what matters, and improve incrementally. A basic pipeline that runs on every PR is infinitely better than a sophisticated pipeline that nobody sets up.
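Step 2's "basic pipeline" really is just a dozen lines. A minimal GitHub Actions starter, assuming a Node project (swap in your own setup, test, and build commands):

```yaml
# .github/workflows/ci.yml - build and test on every pull request
name: CI
on:
  pull_request:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
      - run: npm run build
```

Everything else in this article (caching, scanning, deployments, quality gates) layers on top of a file like this one.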
Frequently Asked Questions
What is a CI/CD pipeline?
A CI/CD pipeline is an automated workflow that builds, tests, and deploys code changes. CI (Continuous Integration) automatically builds and tests code on every commit. CD (Continuous Delivery/Deployment) automatically deploys tested code to staging or production. The pipeline ensures every change goes through a consistent quality and security process.
What is the difference between CI and CD?
CI (Continuous Integration) focuses on automatically building and testing code when developers push changes. CD has two meanings: Continuous Delivery automatically prepares code for release (requires manual deployment approval), while Continuous Deployment automatically deploys every passing change to production without manual intervention.
Which CI/CD tool should I use?
GitHub Actions is the best choice for GitHub-hosted projects — it's free for public repos and deeply integrated. GitLab CI is best for GitLab users. Jenkins is best for complex enterprise pipelines with custom requirements. Azure Pipelines is best for Microsoft/Azure shops. CircleCI and Buildkite are strong alternatives for high-performance needs.
How much does CI/CD cost?
Many CI/CD tools have free tiers. GitHub Actions includes 2,000 minutes/month free for private repos (unlimited for public). GitLab CI includes 400 minutes/month free. Jenkins is free and open source (you pay for infrastructure). Paid plans typically start at $15-50/month for additional compute minutes.
How do I add code review to a CI/CD pipeline?
Add a code analysis step to your pipeline that runs on pull requests. Common tools include Semgrep (security scanning), SonarQube (quality gates), and AI reviewers (CodeRabbit, PR-Agent). Configure them to post findings as PR comments and optionally block merges when critical issues are found.
Originally published at aicodereview.cc