GitHub Actions Patterns & Best Practices
A comprehensive guide to building production-grade CI/CD pipelines with GitHub Actions. This document covers reusable workflows, composite actions, matrix strategies, caching, secrets management, and advanced patterns that reduce duplication and improve pipeline reliability.
Table of Contents
- Workflow Organization
- Reusable Workflows
- Composite Actions
- Matrix Strategies
- Caching & Performance
- Secrets & Authentication
- OIDC & Keyless Authentication
- Concurrency Controls
- Environment Protection Rules
- Error Handling & Debugging
- Security Hardening
- Monorepo Patterns
Workflow Organization
A well-organized .github/workflows/ directory is crucial as your pipeline grows. Group workflows by purpose and use clear naming conventions.
Recommended structure:
.github/
workflows/
ci.yml # Primary CI: test + lint on every push/PR
deploy-staging.yml # Deploy to staging on develop branch
deploy-production.yml # Deploy to production on main (with approval)
release.yml # Create releases on version tags
security-scan.yml # Scheduled security scans
dependency-update.yml # Weekly dependency audit
stale-issues.yml # Housekeeping: close stale issues
composite-actions/
setup-python/action.yml # Reusable setup steps
docker-build-push/action.yml
Naming conventions:
- Use kebab-case for filenames: deploy-staging.yml, not DeployStaging.yml
- Prefix scheduled jobs with scheduled-, or use descriptive names like dependency-update
- Use the name: field to give workflows human-readable names in the GitHub UI
Keep workflows focused. Each workflow should have a single responsibility. A CI workflow tests code; a deploy workflow deploys it. Avoid combining unrelated jobs in a single workflow — it makes debugging harder and prevents independent re-runs.
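As a sketch of this single-responsibility principle, a minimal CI workflow that only lints and tests (tool choices here are illustrative, not prescriptive):

```yaml
# .github/workflows/ci.yml -- one workflow, one job: validate code
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip
      - run: pip install -r requirements.txt
      - run: pytest   # deployment lives in a separate workflow, not here
```

Deployment, releases, and housekeeping each get their own file, so a flaky deploy never blocks a CI re-run.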
Reusable Workflows
Reusable workflows (workflow_call trigger) let you define a workflow once and call it from multiple repositories or other workflows. This is the primary mechanism for DRY pipelines across an organization.
Defining a reusable workflow:
# .github/workflows/reusable-test.yml
name: Reusable Test Suite
on:
workflow_call:
inputs:
python-version:
type: string
default: "3.12"
test-command:
type: string
default: "pytest --cov"
secrets:
codecov-token:
required: false
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: ${{ inputs.python-version }}
cache: pip
- run: pip install -r requirements.txt
- run: ${{ inputs.test-command }}
Calling it from another workflow:
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
test:
uses: ./.github/workflows/reusable-test.yml
with:
python-version: "3.12"
secrets:
codecov-token: ${{ secrets.CODECOV_TOKEN }}
Key constraints:
- Reusable workflows can be nested up to 4 levels deep
- A reusable workflow job cannot use the env context from the caller
- Called workflows run in the same repository context (or cross-repo with owner/repo/.github/workflows/file.yml@ref)
- Secrets must be explicitly passed (or use secrets: inherit to pass all)

When to use reusable workflows vs composite actions:
- Use reusable workflows when you need to define entire jobs with their own runs-on, services, or strategy matrix
- Use composite actions when you need a reusable set of steps within a job
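To illustrate the cross-repo form, a hedged sketch of calling a shared workflow from a central repository (my-org/workflows is a hypothetical repository name):

```yaml
jobs:
  test:
    uses: my-org/workflows/.github/workflows/reusable-test.yml@v1
    secrets: inherit  # passes all caller secrets; explicit passing is easier to audit
```

Pinning the @ref to a tag or SHA gives consumers a stable interface while the central workflow evolves.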
Composite Actions
Composite actions bundle multiple steps into a single action that can be used as a step in any job. They live in a directory with an action.yml file.
Key advantages over reusable workflows:
- Can be used as a step alongside other steps in the same job
- Support inputs and outputs for data passing
- Can be versioned and shared via marketplace or repository references
Example — Setup with caching:
# .github/composite-actions/setup-python/action.yml
name: Setup Python Environment
description: Install Python with pip caching and project dependencies
inputs:
python-version:
description: Python version to install
default: "3.12"
runs:
using: composite
steps:
- uses: actions/setup-python@v5
with:
python-version: ${{ inputs.python-version }}
cache: pip
- run: pip install -r requirements.txt
shell: bash
Important: Every run step in a composite action must specify shell:. This is required because composite actions don't inherit the workflow's default shell.
Passing data between steps:
- name: Compute version
id: version
shell: bash
run: echo "tag=v$(date +%Y%m%d.%H%M%S)" >> $GITHUB_OUTPUT
- name: Use computed version
shell: bash
run: echo "Deploying ${{ steps.version.outputs.tag }}"
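Composite actions can also surface a step's output to the calling workflow through an outputs: mapping in action.yml. A minimal sketch, assuming a step with id: version as in the example above:

```yaml
# action.yml (fragment) -- expose a step output as an action output
outputs:
  tag:
    description: Computed version tag
    value: ${{ steps.version.outputs.tag }}
```

The caller then reads it as steps.<step-id>.outputs.tag on the step that uses the composite action.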
Matrix Strategies
Matrix builds run the same job configuration across multiple parameter combinations. Use them to test across language versions, operating systems, or configuration variants.
Basic matrix:
strategy:
fail-fast: false # Don't cancel other jobs if one fails
matrix:
python-version: ["3.10", "3.11", "3.12"]
os: [ubuntu-latest, macos-latest]
This creates 6 jobs (3 versions x 2 operating systems).
Include/exclude for fine-grained control:
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12"]
os: [ubuntu-latest, macos-latest]
exclude:
# Skip Python 3.10 on macOS (not a supported combo)
- python-version: "3.10"
os: macos-latest
include:
# Add a special Windows job for the latest Python only
- python-version: "3.12"
os: windows-latest
extra-args: "--timeout=300"
Dynamic matrix from JSON:
jobs:
generate:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- id: set-matrix
run: |
echo 'matrix={"service":["api","worker","frontend"]}' >> $GITHUB_OUTPUT
build:
needs: generate
strategy:
matrix: ${{ fromJSON(needs.generate.outputs.matrix) }}
runs-on: ubuntu-latest
steps:
- run: echo "Building ${{ matrix.service }}"
Best practices:
- Set fail-fast: false for test matrices so you can see all failures at once
- Use include to add one-off jobs without expanding the entire matrix
- Keep matrices under 256 combinations (the GitHub limit); typically under 20 for practical purposes
Caching & Performance
Caching dependencies is the single biggest performance improvement for most workflows. GitHub provides 10 GB of cache storage per repository.
Language-specific caching (recommended):
# Python — built-in cache support
- uses: actions/setup-python@v5
with:
python-version: "3.12"
cache: pip
# Node.js — built-in cache support
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
# Go — built-in cache support
- uses: actions/setup-go@v5
with:
go-version: "1.22"
cache: true
Manual caching for other tools:
- uses: actions/cache@v4
with:
path: |
~/.cache/pre-commit
~/.local/share/virtualenvs
key: ${{ runner.os }}-precommit-${{ hashFiles('.pre-commit-config.yaml') }}
restore-keys: |
${{ runner.os }}-precommit-
Docker layer caching:
- uses: docker/build-push-action@v5
with:
cache-from: type=gha
cache-to: type=gha,mode=max
The mode=max option caches all layers, not just the final image layers. This dramatically improves cache hit rates for multi-stage builds.
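One caveat worth noting: the type=gha cache backend requires Buildx, so a setup step typically precedes the build. A sketch (the image tag is hypothetical):

```yaml
- uses: docker/setup-buildx-action@v3   # gha cache backend needs Buildx
- uses: docker/build-push-action@v5
  with:
    push: true
    tags: ghcr.io/my-org/my-app:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
```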
Performance tips:
- Use npm ci instead of npm install; it's faster and deterministic
- Use --frozen-lockfile with yarn/pnpm
- Run independent jobs in parallel (don't add unnecessary needs: dependencies)
- Use concurrency to cancel redundant in-progress runs
- Upload artifacts only when needed, with an appropriate retention-days
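As an example of the last point, retention can be set per upload rather than relying on the repository default (the artifact name and path here are hypothetical):

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: coverage-report
    path: htmlcov/
    retention-days: 7  # shorter than the 90-day default; saves artifact storage
```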
Secrets & Authentication
GitHub Actions provides several mechanisms for managing sensitive values.
Repository secrets:
env:
API_KEY: ${{ secrets.API_KEY }}
Secrets are masked in logs automatically. Never echo them or write them to files that get uploaded as artifacts.
Environment secrets allow different values per environment (staging, production):
jobs:
deploy:
environment: production # Uses production-specific secrets
steps:
- run: deploy --token ${{ secrets.DEPLOY_TOKEN }}
Organization secrets are shared across repositories:
- Configured at the organization level
- Can be scoped to specific repositories or all repositories
- Repository secrets override organization secrets with the same name
Best practices:
- Use environment-scoped secrets for deployment credentials
- Rotate secrets regularly; GitHub provides an API for this
- Never store secrets in workflow files, even in encoded form
- Use secrets: inherit sparingly; prefer explicit secret passing for auditability
- For third-party actions, pin to a specific commit SHA rather than a tag to prevent supply chain attacks
OIDC & Keyless Authentication
OpenID Connect (OIDC) eliminates the need for long-lived cloud credentials stored as secrets. Instead, GitHub issues a short-lived JWT that your cloud provider trusts.
AWS with OIDC:
permissions:
id-token: write # Required for OIDC
contents: read
steps:
- uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789012:role/github-actions
aws-region: us-east-1
# No access keys needed!
Setting up AWS OIDC trust:
- Create an OIDC identity provider in AWS IAM pointing to token.actions.githubusercontent.com
- Create an IAM role with a trust policy that restricts access to your specific repository and branch
- Reference the role ARN in your workflow
Trust policy example:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
},
"StringLike": {
"token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
}
}
}]
}
OIDC is supported by AWS, Azure, GCP, HashiCorp Vault, and many other providers. It is strongly recommended over static credentials for all production deployments.
Concurrency Controls
Concurrency groups prevent multiple instances of the same workflow from running simultaneously, which is critical for deployments and resource-heavy jobs.
Cancel redundant PR checks:
concurrency:
group: ci-${{ github.ref }}
cancel-in-progress: true
This cancels any in-progress CI run when a new commit is pushed to the same branch. Saves runner minutes and prevents stale results.
Serialize deployments (never cancel):
concurrency:
group: deploy-production
cancel-in-progress: false # Queue instead of cancelling
For deployments, you typically want queuing rather than cancellation. Setting cancel-in-progress: false ensures the current deployment finishes before the next one starts.
Per-environment concurrency:
concurrency:
group: deploy-${{ github.event.inputs.environment }}
cancel-in-progress: false
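The github.event.inputs.environment value above comes from a manually triggered workflow; a sketch of the corresponding workflow_dispatch input declaration:

```yaml
on:
  workflow_dispatch:
    inputs:
      environment:
        description: Target environment
        type: choice
        required: true
        options: [staging, production]
```

With this, two dispatches targeting different environments run in parallel, while two targeting the same environment queue behind each other.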
Environment Protection Rules
GitHub Environments add approval gates, wait timers, and deployment branch restrictions.
Configure in Settings > Environments:
- Required reviewers: One or more team members must approve before the job runs
- Wait timer: Delay deployment by N minutes (useful for canary deployments)
- Deployment branches: Restrict which branches can deploy to this environment
Using environments in workflows:
jobs:
deploy-staging:
environment: staging
runs-on: ubuntu-latest
steps:
- run: ./deploy.sh staging
deploy-production:
needs: deploy-staging
environment:
name: production
url: https://myapp.example.com # Shows in the GitHub UI
runs-on: ubuntu-latest
steps:
- run: ./deploy.sh production
Error Handling & Debugging
Conditional execution:
- name: Upload coverage
if: always() # Run even if previous steps failed
uses: actions/upload-artifact@v4
- name: Notify on failure
if: failure() # Only run when a previous step failed
run: curl -X POST ${{ secrets.SLACK_WEBHOOK }} -d '{"text":"Build failed!"}'
- name: Cleanup
if: cancelled() # Only run when the workflow was cancelled
run: ./cleanup.sh
Step outputs for conditional logic:
- name: Check for changes
id: changes
run: |
if git diff --name-only HEAD~1 | grep -q '^src/'; then
echo "src_changed=true" >> $GITHUB_OUTPUT
fi
- name: Run tests
if: steps.changes.outputs.src_changed == 'true'
run: pytest
Debug logging: Re-run a failed workflow with the "Enable debug logging" checkbox selected, or set the ACTIONS_STEP_DEBUG secret (or repository variable) to true for verbose output.
Timeout protection:
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 15 # Kill if stuck (default is 360 minutes!)
Always set explicit timeouts. The 6-hour default can waste runner minutes if a test hangs.
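timeout-minutes also works at the step level, which is useful for isolating a single hang-prone step without failing the whole job early (the script name is hypothetical):

```yaml
- name: Integration tests
  run: ./run-integration-tests.sh
  timeout-minutes: 10  # kill just this step if it hangs; job-level timeout still applies
```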
Security Hardening
Pin action versions to commit SHAs:
# Bad — tag can be moved by the action author
- uses: actions/checkout@v4
# Good — immutable reference
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
Use tools like Dependabot or pin-github-action to automate SHA pinning.
Minimal permissions:
# Set restrictive defaults at the workflow level
permissions:
contents: read
jobs:
deploy:
permissions:
contents: read
deployments: write # Only this job gets extra permissions
Always follow the principle of least privilege. Start with permissions: read-all or specific read permissions, then add write permissions only where needed.
Protect against script injection:
# Vulnerable — PR title could contain malicious commands
- run: echo "PR: ${{ github.event.pull_request.title }}"
# Safe — use an intermediate environment variable
- env:
PR_TITLE: ${{ github.event.pull_request.title }}
run: echo "PR: $PR_TITLE"
Any ${{ }} expression in a run: block is interpolated before the shell runs, making it vulnerable to injection. Always assign untrusted inputs to environment variables first.
Fork safety for pull_request_target:
The pull_request_target trigger runs with the base branch's code but gives access to secrets. Never use it to execute code from the PR (e.g., don't checkout the PR head and run its tests with secrets available). Use pull_request for untrusted code execution.
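A sketch of a safe pull_request_target use, one that operates only on PR metadata and never checks out or executes PR code (assumes a .github/labeler.yml config exists in the base branch):

```yaml
on: pull_request_target
permissions:
  contents: read
  pull-requests: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/labeler@v5  # applies labels by changed paths; runs no PR code
```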
Monorepo Patterns
For monorepos with multiple services, use path filters and dynamic matrices to run only relevant jobs.
Path-filtered triggers:
on:
push:
paths:
- "services/api/**"
- "shared/lib/**" # Shared code affects the API too
Dynamic service detection:
jobs:
detect-changes:
runs-on: ubuntu-latest
outputs:
services: ${{ steps.filter.outputs.changes }}
steps:
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v3
id: filter
with:
filters: |
api:
- 'services/api/**'
frontend:
- 'services/frontend/**'
worker:
- 'services/worker/**'
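The filter output can then drive a matrix so only changed services are built. A sketch, assuming dorny/paths-filter's changes output (a JSON array of matched filter names):

```yaml
  build:
    needs: detect-changes
    if: needs.detect-changes.outputs.services != '[]'
    strategy:
      matrix:
        service: ${{ fromJSON(needs.detect-changes.outputs.services) }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "Building ${{ matrix.service }}"
```

The if: guard skips the job entirely when no service paths changed, rather than spawning an empty matrix.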