# How to Add Screenshot Tests to Your GitHub Actions CI Pipeline
Your team ships a feature. The PR passes all tests. Code review approves it. It gets merged.
Three hours later, a Slack message: "Homepage looks broken on mobile."
The tests all passed. The code was reviewed. But something visual broke. No one caught it.
This happens because your tests don't look at what users see. They only test code logic.
Visual regression testing solves this: every PR automatically captures screenshots, compares them against a baseline, and posts the results as a PR comment.
No more surprises in production.
## The Problem: Code Tests Miss Visual Regressions
Your CI pipeline looks like this:

```
PR submitted
  → Run unit tests ✅
  → Run integration tests ✅
  → Run linting ✅
  → Approve and merge
```
But nobody checks:
- Does the homepage still look right?
- Did the CSS load?
- Are images displaying?
- Is the mobile layout broken?
- Did the button move?
These aren't code bugs. They're visual regressions. And they slip through because your tests don't capture screenshots.
Real examples:
- CSS minification breaks selectors — Code tests pass, but styling is gone
- Image CDN timeout — No HTTP errors, but images don't load
- Mobile breakpoint CSS removed — Desktop still works, but mobile is broken
- Responsive grid changes — Layout looks fine locally, broken on production
- Gradient colors changed — Tests pass, but design doesn't match brand
Code-only testing can't catch any of these.
## The Solution: Automated Screenshot Tests in GitHub Actions
Instead of waiting for users to report visual bugs, your CI captures screenshots automatically:
```
PR submitted
  → Run code tests ✅
  → Run screenshot tests
    → Spin up preview environment
    → Take screenshots of key pages
    → Compare with baseline
    → Post results to PR comment
  → Approve based on visual diff
```
Every PR gets visual validation before it ships.
Here's the complete GitHub Actions workflow:
```yaml
# .github/workflows/screenshot-tests.yml
name: Visual Regression Tests

on:
  pull_request:
    branches: [main]

jobs:
  screenshot-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Deploy to preview environment
        run: |
          # Deploy your app to a staging URL
          # Example: Vercel preview, Netlify, or custom staging
          npm run build
          npm run deploy:preview
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}

      - name: Wait for deployment
        run: sleep 30

      - name: Run screenshot tests
        id: screenshots
        run: |
          PREVIEW_URL="${{ secrets.PREVIEW_URL }}"

          # Create directory for screenshots
          mkdir -p screenshots/current

          # Pages to test
          pages=(
            "/"
            "/pricing"
            "/docs"
            "/dashboard"
          )

          # Capture a screenshot of each page
          for page in "${pages[@]}"; do
            echo "📸 Screenshotting $page..."
            curl -X POST https://api.pagebolt.dev/v1/screenshot \
              -H "Authorization: Bearer ${{ secrets.PAGEBOLT_API_KEY }}" \
              -H "Content-Type: application/json" \
              -d '{
                "url": "'"$PREVIEW_URL$page"'",
                "format": "png",
                "fullPage": true,
                "width": 1920,
                "height": 1080
              }' \
              -o "screenshots/current/${page//\//_}.png"
          done

          echo "✅ Screenshots captured"
          echo "screenshot_count=${#pages[@]}" >> "$GITHUB_OUTPUT"

      - name: Compare with baseline
        id: comparison
        run: |
          # Compare each screenshot with its baseline using ImageMagick
          # (preinstalled on ubuntu-latest runners)
          changes_found=false

          for file in screenshots/current/*.png; do
            filename=$(basename "$file")
            baseline="screenshots/baseline/$filename"

            if [ ! -f "$baseline" ]; then
              echo "📌 No baseline for $filename (first run)"
              cp "$file" "$baseline"
              continue
            fi

            # AE = absolute error: the count of differing pixels
            diff=$(compare -metric AE "$baseline" "$file" /dev/null 2>&1 || true)

            if [ "$diff" -gt 100 ]; then
              echo "⚠️ Visual changes detected in $filename"
              changes_found=true
            else
              echo "✅ No changes in $filename"
            fi
          done

          echo "changes_detected=$changes_found" >> "$GITHUB_OUTPUT"

      - name: Upload screenshots
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: screenshots
          path: screenshots/current/

      - name: Post results to PR
        if: always()
        uses: actions/github-script@v7
        with:
          script: |
            const changesDetected = '${{ steps.comparison.outputs.changes_detected }}' === 'true';
            const status = changesDetected ? '⚠️ Visual changes detected' : '✅ No visual changes';

            let comment = `## ${status}\n\n`;
            comment += `Screenshot tests completed at ${new Date().toISOString()}\n\n`;
            comment += `**Pages tested:** 4\n`;
            comment += `**Baseline comparison:** Complete\n\n`;

            if (changesDetected) {
              comment += '### 🔍 Review Changes\n';
              comment += 'Visual differences were detected. Review the [screenshot artifacts](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) to see the changes.\n\n';
              comment += '**To update baseline:**\n';
              comment += '```bash\n';
              comment += 'git add screenshots/baseline/\n';
              comment += 'git commit -m "Update visual baselines"\n';
              comment += '```\n';
            } else {
              comment += 'All pages match their visual baselines. Safe to merge.\n';
            }

            // Post the comment to the PR
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: comment
            });
```
What this workflow does:
- Deploys to preview environment (Vercel, Netlify, etc.)
- Waits for deployment to be ready
- Calls PageBolt API to screenshot each page
- Compares with baseline using ImageMagick
- Posts results to PR comment with status + links to artifacts
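Under the hood, ImageMagick's AE metric is just a count of differing pixels. A minimal Python sketch of the same check, with images modeled as flat lists of RGB tuples (in the real pipeline the pixels would come from the decoded PNG files; the 100-pixel threshold mirrors the workflow above):

```python
def absolute_error(baseline, current, fuzz=0):
    """Count pixels whose channels differ by more than `fuzz`
    (0 = exact match, mirroring `compare -metric AE`)."""
    if len(baseline) != len(current):
        raise ValueError("images must have the same dimensions")
    return sum(
        1
        for a, b in zip(baseline, current)
        if any(abs(ca - cb) > fuzz for ca, cb in zip(a, b))
    )

def has_visual_changes(baseline, current, threshold=100):
    """Mirror the workflow's `[ "$diff" -gt 100 ]` check."""
    return absolute_error(baseline, current) > threshold

# Example: a 200-pixel image where 150 pixels changed color.
white = (255, 255, 255)
purple = (128, 0, 128)
baseline = [white] * 200
current = [purple] * 150 + [white] * 50
print(absolute_error(baseline, current))      # → 150
print(has_visual_changes(baseline, current))  # → True
```

A `fuzz` tolerance is useful in practice: anti-aliasing and font rendering can shift a few pixel values between runs without any real visual change.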
## Real-World Setup

### Step 1: Add secrets to GitHub

Go to Settings → Secrets and variables → Actions and add:

- `PAGEBOLT_API_KEY` — your PageBolt API key
- `PREVIEW_URL` — your preview environment URL (e.g., `https://my-app-pr-123.vercel.app`)
- `VERCEL_TOKEN` — optional, only if you deploy with Vercel
### Step 2: Store baseline screenshots in git

```bash
# First run creates baselines
git add screenshots/baseline/
git commit -m "Add visual baselines"
git push
```
### Step 3: Every PR now runs visual tests

When a contributor opens a PR:

```
PR opens
  → GitHub Actions triggers
  → Screenshots captured
  → Compared with baseline
  → Results posted to PR comment
  → Reviewer can approve or request changes
```
## Advanced: Multi-Device Testing
Test across desktop, tablet, and mobile:
```yaml
- name: Screenshot multiple devices
  run: |
    devices=(
      '{"width": 1920, "height": 1080, "label": "Desktop"}'
      '{"width": 768, "height": 1024, "label": "Tablet"}'
      '{"width": 375, "height": 667, "label": "Mobile"}'
    )

    for device in "${devices[@]}"; do
      width=$(echo "$device" | jq -r '.width')
      height=$(echo "$device" | jq -r '.height')
      label=$(echo "$device" | jq -r '.label')

      curl -X POST https://api.pagebolt.dev/v1/screenshot \
        -H "Authorization: Bearer ${{ secrets.PAGEBOLT_API_KEY }}" \
        -H "Content-Type: application/json" \
        -d '{
          "url": "${{ env.PREVIEW_URL }}/",
          "format": "png",
          "width": '"$width"',
          "height": '"$height"'
        }' \
        -o "screenshots/current/home-${label}.png"
    done
```
Result: One PR tests desktop + tablet + mobile automatically.
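If you script this step outside bash, the per-device request bodies can be generated programmatically. A sketch in Python, assuming the same (hypothetical) PageBolt payload shape used above:

```python
import json

# One viewport per device class, matching the bash loop above.
DEVICES = [
    {"width": 1920, "height": 1080, "label": "Desktop"},
    {"width": 768, "height": 1024, "label": "Tablet"},
    {"width": 375, "height": 667, "label": "Mobile"},
]

def build_payloads(preview_url, devices=DEVICES):
    """Return one JSON request body per device label."""
    return {
        d["label"]: json.dumps({
            "url": f"{preview_url}/",
            "format": "png",
            "width": d["width"],
            "height": d["height"],
        })
        for d in devices
    }

payloads = build_payloads("https://my-app-pr-123.vercel.app")
print(payloads["Mobile"])
```

Building the JSON with `json.dumps` rather than string concatenation also avoids the quoting pitfalls that make the bash version fragile.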
## Integration with Popular Platforms

### Vercel (Recommended)

Vercel creates a preview deployment for every PR. Just pass its URL to PageBolt:
```yaml
# Assumes an earlier step exported the preview URL as VERCEL_PREVIEW_URL
- name: Screenshot Vercel preview
  run: |
    curl -X POST https://api.pagebolt.dev/v1/screenshot \
      -H "Authorization: Bearer ${{ secrets.PAGEBOLT_API_KEY }}" \
      -H "Content-Type: application/json" \
      -d '{
        "url": "${{ env.VERCEL_PREVIEW_URL }}",
        "format": "png"
      }' \
      -o screenshot.png
```
### Netlify

```yaml
- name: Deploy to Netlify
  id: netlify
  uses: netlify/actions/cli@master
  with:
    # A draft deploy (no --prod) gives each PR its own preview URL
    args: deploy

- name: Get Netlify preview URL
  run: |
    PREVIEW_URL="${{ steps.netlify.outputs.deploy-url }}"
    echo "PREVIEW_URL=$PREVIEW_URL" >> "$GITHUB_ENV"
```
### Docker / Custom Staging

```yaml
- name: Start Docker container
  run: docker run -d -p 3000:3000 my-app:latest

- name: Wait for app
  run: sleep 10

- name: Screenshot app
  run: |
    # NOTE: a hosted screenshot API cannot reach localhost on the runner;
    # expose the app through a public tunnel and use that URL instead.
    curl -X POST https://api.pagebolt.dev/v1/screenshot \
      -H "Authorization: Bearer ${{ secrets.PAGEBOLT_API_KEY }}" \
      -H "Content-Type: application/json" \
      -d '{
        "url": "http://localhost:3000",
        "format": "png"
      }' \
      -o screenshot.png
```
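The fixed `sleep 10` above is fragile: a slow container start fails the run, a fast one wastes time. A readiness poll is sturdier; a sketch in Python (the same idea works as a bash `until curl --fail ...; do sleep 2; done` loop):

```python
import time
import urllib.error
import urllib.request

def wait_for_app(url, timeout=60, interval=2.0):
    """Poll `url` until it answers or `timeout` seconds elapse.
    Returns True once the app responds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True  # 2xx/3xx: app is serving
        except urllib.error.HTTPError as err:
            if err.code < 500:
                return True  # app is up, just returned 4xx
        except (urllib.error.URLError, OSError):
            pass  # connection refused / DNS / timeout: keep polling
        time.sleep(interval)
    return False

# Usage in CI: fail fast instead of screenshotting a dead app.
# if not wait_for_app("http://localhost:3000", timeout=60):
#     raise SystemExit("app never became ready")
```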
## Cost & Scale
Every PR triggers screenshots. This drives recurring daily API usage:
| Team Size | PRs/Week | Screenshots/Week | Plan |
|---|---|---|---|
| 2-5 devs | 5 | 20 (4 pages × 5 PRs) | Free (100/mo) |
| 10-20 devs | 20 | 80 | Starter ($29/mo, 5K/mo) |
| 30-50 devs | 50 | 200 | Growth ($99/mo, 50K/mo) |
| 100+ devs | 200+ | 800+ | Scale ($299/mo, unlimited) |
Key insight: a team shipping 50 PRs in a week generates 200 API calls (4 pages per PR) — roughly 800 screenshots a month, and multi-device testing triples that.
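The volume math above can be sketched as a quick estimator (4 pages per PR, 4 weeks per month, optional multi-device multiplier):

```python
def monthly_screenshots(prs_per_week, pages_per_pr=4, devices=1, weeks=4):
    """Estimate monthly volume: PRs/week x pages/PR x devices x weeks."""
    return prs_per_week * pages_per_pr * devices * weeks

print(monthly_screenshots(5))              # → 80 (fits the Free tier's 100/mo)
print(monthly_screenshots(50))             # → 800
print(monthly_screenshots(50, devices=3))  # → 2400 with multi-device testing
```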
## Real-World Scenario
Your design system changes a color from blue to purple.
Without visual tests:
- PR merged
- Deployed to production
- Users see unexpected color change
- Slack: "Did we change the brand color?"
- Rollback required
With visual tests:
- PR submitted
- Screenshots captured (shows purple everywhere)
- Comparison: "Homepage changed significantly"
- Reviewer sees the visual diff in PR comment
- Approves or requests changes before merge
Result: Visual regressions caught before they hit production.
## Best Practices
1. Choose the right pages to test
- Homepage (most visible)
- Key user flows (signup, checkout, etc.)
- Mobile breakpoints
- Dark mode (if applicable)
2. Use meaningful baseline commits

```bash
git commit -m "Update baselines after design refresh"
```
3. Review visual diffs carefully
- Expected changes? Approve the new baseline
- Unexpected? Request changes or rollback
4. Run on every PR
- Not just on `main` merges
- Catch regressions before they're merged
5. Keep baselines in git
- Version control your visual standards
- Track design changes over time
- Easy to see what changed and when
## The Bottom Line
Visual regression testing in GitHub Actions is the missing link in modern CI/CD pipelines.
Code tests verify logic. Screenshot tests verify what users see.
Together, they ensure:
- ✅ Code works correctly
- ✅ Visuals look correct
- ✅ Both pass before any PR merges
Every PR = automatic screenshots = visual regressions caught before users see them.
Ready to add visual testing to your CI pipeline?
Copy the workflow above, add your PageBolt API key, and run. Free tier: 100 screenshots/month. Growth plan: 50,000/month (perfect for teams with 20+ weekly PRs).