TL;DR: Scanners that run automatically, findings that aggregate in one place, reports that don't make stakeholders' eyes glaze over. For small-to-medium engineering teams who need real security without hiring a dedicated AppSec team.
Who is this guide for?
Teams without a dedicated AppSec function, platform engineers, or DevOps teams who want a practical, tool-agnostic blueprint for continuous security in CI/CD.
Security scanners are cheap. Security architecture that developers don't hate is expensive.
Most teams end up with a mess: SAST runs somewhere in Jenkins, Snyk emails get ignored, and pentest reports live in Google Drive where findings go to die. Developers don't ignore security findings because they're lazy—they ignore them because findings arrive in 47 different places with zero context.
This is the architecture I built and actually use. Not a vendor pitch. Not enterprise theater. Just the stack that works when you need security that scales.
The End-to-End Flow
Everything flows into Faraday for deduplication and triage. Developers see findings in their PRs. Security team manages everything from one dashboard.
Design Philosophy
Before throwing tools at the problem, get the fundamentals right:
- Shift-left without becoming a bottleneck — Security runs in CI/CD, not as a gate that developers circumvent with "temporary" bypasses that become permanent.
- Single source of truth — One dashboard to rule them all. No more "where did I see that SQL injection again?"
- Humans still matter — Automated scanners catch the obvious. Manual testing finds the business logic flaws that actually get exploited.
- Actionable by default — Every finding needs an owner, a severity that makes sense, and remediation guidance that isn't "fix it."
- Tool-agnostic — Your architecture shouldn't implode when you swap one scanner for another.
The Stack (Recommended + Alternatives)
SAST (Static Analysis)
Recommended: Semgrep — Fast, multilingual, doesn't make PRs take 10 minutes. Free tier is generous.
Alternative: SonarQube — Consider if you need code quality metrics beyond security or have 50+ microservices needing centralized quality gates.
SCA (Dependency Scanning)
Recommended: Snyk — Free tier is excellent. GitHub integration is native. Dashboard is actually usable.
Alternative: Trivy — Great for container/IaC scanning. Better for cost-conscious teams scaling up.
Critical detail: For most teams, the Snyk web dashboard is sufficient—just connect your repos in their UI. If you want findings in GitHub Security tab, add the optional SARIF upload workflow (shown below), but it's not required.
DAST (Dynamic Analysis)
Recommended: OWASP ZAP — Industry standard, great for CI/CD, active community.
Alternative: Nuclei — Template-based scanning. Faster for API-focused testing.
Manual: Burp Suite Professional — Non-negotiable for auth bypasses, IDOR, race conditions, and business logic bugs that scanners miss entirely.
Infrastructure Visibility
Recommended: Nmap — Know what ports you've exposed before attackers do.
Alternative: Masscan + Naabu — Faster for large IP ranges, but Nmap's service detection is unmatched.
Aggregation
Recommended: Faraday — I tried DefectDojo. The UI made me sad. Faraday's API is cleaner and the workflow makes more sense for my team.
Alternative: DefectDojo — Broader importer ecosystem. Better if you're integrating 15+ tools.
Reporting
Recommended: SysReptor — Write in Markdown, design with HTML/CSS. Most flexible.
Alternative: PwnDoc — Simpler templates, less customization. Good for teams who just want "done."
Implementation: Week by Week
Week 1: The Foundation
Start with the highest ROI, lowest friction changes.
1. SAST on Every PR
Semgrep catches SQL injection, XSS, and crypto misuse before code merges:
```yaml
name: Semgrep Security Scan
on:
  pull_request:
  push:
    branches: [main, develop]

jobs:
  semgrep:
    runs-on: ubuntu-latest
    container:
      image: returntocorp/semgrep
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep
        run: semgrep ci --sarif --output=semgrep.sarif
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
      - name: Upload to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: semgrep.sarif
```
Findings show up inline in PR reviews. Developers actually see them. That's the whole point.
2. Dependency Scanning (The Easy Win)
Don't overthink this.
1. Go to snyk.io
2. Sign up with GitHub OAuth
3. Click "Add Projects"
4. Select your repos
5. Done
Snyk will scan on every PR automatically. Their dashboard shows you everything. You don't need a GitHub Action for this unless you want SARIF uploads to GitHub Security tab (which is nice but optional).
If you DO want the GitHub integration for SARIF uploads (optional):
```yaml
name: Snyk Security
on:
  pull_request:

jobs:
  snyk:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --sarif-file-output=snyk.sarif
      - name: Upload SARIF to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: snyk.sarif
```
Real talk: The Snyk dashboard already shows everything. This workflow just puts findings in GitHub's Security tab too. Nice to have, not required.
3. Nightly DAST Baseline
Run passive scans against staging without auth complexity:
```yaml
name: ZAP Baseline Scan
on:
  schedule:
    - cron: '0 2 * * *'  # 2 AM daily
  workflow_dispatch:

jobs:
  zap:
    runs-on: ubuntu-latest
    steps:
      - name: ZAP Baseline Scan
        uses: zaproxy/action-baseline@v0.12.0
        with:
          target: 'https://staging.example.com'
          rules_file_name: '.zap/rules.tsv'
          cmd_options: '-a'
      - name: Upload Report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: zap-report
          path: report_html.html
```
Start with baseline mode. It's passive, won't break anything, and you can tune false positives before enabling active attacks.
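Tuning happens in the `.zap/rules.tsv` file referenced in the workflow: each tab-separated line is a rule ID, an action (IGNORE, WARN, or FAIL), and a free-form comment. A sketch — the rule IDs below are common ZAP passive-scan rules, but verify them against the alerts your own scans actually raise:

```tsv
10015	IGNORE	(Cache-control header findings — noisy on static assets)
10021	WARN	(X-Content-Type-Options header missing — fix eventually)
10038	FAIL	(Content Security Policy header not set — must-have on staging)
```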
4. Know Your Attack Surface
Weekly infrastructure scans:
```yaml
name: Nmap Perimeter Scan
on:
  schedule:
    - cron: '0 4 * * 0'  # Sunday 4 AM
  workflow_dispatch:

jobs:
  nmap:
    runs-on: ubuntu-latest
    steps:
      - name: Install Nmap
        run: sudo apt-get update && sudo apt-get install -y nmap
      - name: Scan Production
        run: |
          nmap -sV --top-ports 1000 \
            -oX nmap-scan.xml \
            prod.example.com api.example.com
      - name: Upload Results
        uses: actions/upload-artifact@v4
        with:
          name: nmap-report
          path: nmap-scan.xml
```
Establish a baseline of expected ports. Alert on changes. New port 3306 exposed? Someone probably didn't mean to do that.
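The baseline-and-diff step can be a short script that parses the Nmap XML artifact and compares open ports against an expected set. A sketch — the host addresses and file names are placeholders, not part of any real setup:

```python
# compare_ports.py — diff an Nmap XML report against a known-good
# baseline of expected open ports. Hosts/files here are placeholders.
import xml.etree.ElementTree as ET


def open_ports(nmap_xml: str) -> set:
    """Extract (host, port) pairs for every open port in an Nmap XML report."""
    root = ET.fromstring(nmap_xml)
    found = set()
    for host in root.iter('host'):
        addr = host.find('address').get('addr')
        for port in host.iter('port'):
            state = port.find('state')
            if state is not None and state.get('state') == 'open':
                found.add((addr, int(port.get('portid'))))
    return found


def unexpected_ports(current: set, baseline: set) -> set:
    """Anything open now that wasn't in the baseline deserves an alert."""
    return current - baseline


# In CI: run open_ports() on nmap-scan.xml, diff against the baseline,
# and exit non-zero (failing the workflow) when unexpected_ports() is
# non-empty — e.g. when 3306 suddenly shows up.
```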
Week 2-4: Centralization
You're now collecting findings from 4+ sources. Time to aggregate them before you drown in noise.
Setting Up Faraday
Faraday gives you:
- API-first design (easy to script imports)
- Workspace management per project/client
- Web UI that doesn't make you want to quit security
- Deduplication that actually works
Docker setup:
```shell
docker run -d \
  --name faraday \
  -p 5985:5985 \
  -e PGSQL_HOST=postgres \
  -e PGSQL_USER=faraday \
  -e PGSQL_PASSWD=changeme \
  faradaysec/faraday
```
Access at http://localhost:5985 (default creds: faraday/changeme—change them immediately).
Automated Imports
Push findings from CI into Faraday:
```python
import requests

FARADAY_URL = "https://faraday.example.com"
API_TOKEN = "your-api-token"

def upload_to_faraday(workspace, tool_name, report_file):
    """Import scan results into a Faraday workspace."""
    # Don't set Content-Type manually: requests generates the
    # multipart boundary for file uploads itself.
    headers = {"Authorization": f"Token {API_TOKEN}"}

    with open(report_file, 'rb') as f:
        # Faraday auto-detects the report format
        response = requests.post(
            f"{FARADAY_URL}/api/v3/ws/{workspace}/upload_report",
            headers=headers,
            files={'file': (report_file, f)}
        )

    if response.status_code == 200:
        print(f"✓ Uploaded {tool_name} scan successfully")
        return response.json()
    print(f"✗ Upload failed: {response.text}")
    return None

# Usage in CI:
# upload_to_faraday('project-alpha', 'ZAP', 'zap-report.xml')
# upload_to_faraday('project-alpha', 'Semgrep', 'semgrep.sarif')
```
GitHub Action integration:
```yaml
- name: Upload to Faraday
  env:
    FARADAY_URL: ${{ secrets.FARADAY_URL }}
    FARADAY_TOKEN: ${{ secrets.FARADAY_TOKEN }}
  run: |
    python scripts/faraday_upload.py \
      --workspace ${{ github.event.repository.name }} \
      --tool ZAP \
      --file zap-report.xml
```
Handling False Positives
Build a suppression system or you'll go insane:
```yaml
# .security-exceptions.yml
suppressions:
  - tool: semgrep
    rule: "javascript.lang.security.audit.xss.template-string"
    paths:
      - "src/__tests__/*"
      - "src/fixtures/*"
    reason: "Test fixtures intentionally use unsafe patterns"
    approved_by: "security-team@example.com"
    expires: "2025-12-31"
  - tool: snyk
    cve: "CVE-2023-12345"
    reason: "False positive - we don't use vulnerable code path"
    ticket: "SEC-456"
    approved_by: "security-team@example.com"
```
Filter findings in CI before uploading:
```python
# filter_findings.py
import fnmatch

def should_suppress(finding, suppressions):
    """Check whether a finding matches any suppression rule."""
    for rule in suppressions:
        if rule['tool'] != finding['tool']:
            continue
        if 'rule' in rule and rule['rule'] == finding.get('rule_id'):
            if 'paths' in rule:
                # Suppressed paths are globs, so match them with fnmatch
                if any(fnmatch.fnmatch(finding['file'], pattern)
                       for pattern in rule['paths']):
                    return True
            else:
                return True
        if 'cve' in rule and rule['cve'] == finding.get('cve'):
            return True
    return False

# Use in GitHub Actions:
# python filter_findings.py --input semgrep.json --output filtered.json
```
In your upload script:
```python
import json
import yaml

with open('.security-exceptions.yml') as f:
    suppressions = yaml.safe_load(f)['suppressions']

filtered_findings = [
    f for f in findings
    if not should_suppress(f, suppressions)
]

# Write the filtered results to a file, since the upload helper
# takes a report path rather than a list of findings
with open('filtered-findings.json', 'w') as out:
    json.dump(filtered_findings, out)

upload_to_faraday(workspace, 'Semgrep', 'filtered-findings.json')
```
This keeps false positives out of Faraday entirely. Much cleaner than triaging the same noise every week.
Ongoing: Enhancement
Authenticated DAST
Baseline scans only cover logged-out pages. Upgrade to scan the 80% of your app behind authentication:
ZAP Authentication Context (create .zap/auth-context.conf):
```yaml
# Form-based authentication example
env:
  contexts:
    - name: "staging-auth"
      urls:
        - "https://staging.example.com.*"
      authentication:
        method: "formBasedAuthentication"
        parameters:
          loginUrl: "https://staging.example.com/login"
          loginRequestData: "username={%username%}&password={%password%}"
        verification:
          method: "response"
          loggedInRegex: "\\Qlogout\\E"
          loggedOutRegex: "\\Qlogin\\E"
      users:
        - name: "test-user"
          credentials:
            username: "${TEST_USER}"
            password: "${TEST_PASS}"
```
GitHub Action with auth:
```yaml
- name: ZAP Full Scan with Auth
  uses: zaproxy/action-full-scan@v0.10.0
  with:
    target: 'https://staging.example.com'
    cmd_options: '-a -j -n auth-context.conf'
  env:
    TEST_USER: ${{ secrets.TEST_USER }}
    TEST_PASS: ${{ secrets.TEST_PASS }}
```
Real example: We caught a path traversal vulnerability in our file download API that only manifested for authenticated users. ZAP's authenticated scan found it; baseline mode never would have.
Manual Testing (The Important Part)
Quarterly security sprints for high-risk features:
Week 1: Scope
- New payment integration? Test it.
- Admin panel rewrite? Test it.
- Multi-tenant feature? Definitely test it.

Week 2-3: Test with Burp
- IDOR across tenant boundaries
- Race conditions in payment flows
- Session handling edge cases
- Auth bypasses through state manipulation

Week 4: Document & Track
- Log findings in Faraday with PoCs
- Create Jira tickets with severity labels
- Generate stakeholder report
No scanner will catch:
- User A accessing User B's data through ID manipulation
- Payment bypass by canceling during webhook processing
- Admin access through malformed OAuth state
- SSRF chains that require manual exploitation
Real examples:
- Prototype pollution from Semgrep: Caught unsafe `Object.assign()` usage in our webhook handler that could've let attackers pollute global object properties. Fixed before merge.
- Path traversal from ZAP: Authenticated scan found a `../` injection in our file download API that let users access arbitrary files. Would've missed this without auth context enabled.
- IDOR from manual Burp testing: Found that changing the `user_id` parameter in the profile update endpoint let users modify other accounts. Simple, devastating, and scanners never flagged it because the API response looked identical for authorized/unauthorized attempts.
That's why you still need humans.
Reporting That Doesn't Suck
Export from Faraday and generate proper reports:
Option 1: SysReptor (my recommendation)
- Write findings in Markdown
- Customize templates with HTML/CSS
- Generate PDF with one click
- Self-hosted or cloud

Option 2: PwnDoc
- Simpler, less customization
- Good default templates
- Vulnerability database built-in
```shell
# Export findings from Faraday
curl -X GET "${FARADAY_URL}/api/v3/ws/project-alpha/vulns" \
  -H "Authorization: Token ${API_TOKEN}" \
  -o findings.json

# Import to SysReptor or PwnDoc, then generate the quarterly report PDF
```
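If you'd rather not copy-paste findings by hand, a small converter can turn the exported findings.json into Markdown sections ready for SysReptor. A sketch — the field names (`name`, `severity`, `desc`) are assumptions about the export shape; check them against what your Faraday version actually returns:

```python
# findings_to_markdown.py — render exported Faraday findings as
# Markdown, worst severity first. Field names are assumptions.
def findings_to_markdown(findings):
    """Render findings as Markdown sections, sorted worst-first."""
    order = {'critical': 0, 'high': 1, 'medium': 2, 'low': 3, 'info': 4}
    ranked = sorted(findings, key=lambda v: order.get(v['severity'], 9))
    return "\n".join(
        f"## {v['name']} ({v['severity'].title()})\n\n{v['desc']}\n"
        for v in ranked
    )
```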
The Glue Code That Matters
Success lives in the 5-10 scripts that bridge tools:
1. SARIF Normalizer
Different tools output SARIF differently. Normalize it:
```python
import json

def normalize_sarif(file_path, tool_name):
    """Fix common SARIF inconsistencies across tools."""
    with open(file_path) as f:
        sarif = json.load(f)

    for run in sarif.get('runs', []):
        # Ensure the tool name is set
        run['tool']['driver']['name'] = tool_name
        # Fix relative paths
        for result in run.get('results', []):
            for loc in result.get('locations', []):
                artifact = loc['physicalLocation']['artifactLocation']
                if not artifact['uri'].startswith(('file://', 'https://')):
                    artifact['uri'] = f"file://{artifact['uri']}"

    with open(file_path, 'w') as f:
        json.dump(sarif, f, indent=2)
```
2. Slack Alerts for Critical Findings
```python
import os
import requests

# Webhook URL comes from the environment (e.g. a CI secret)
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def notify_slack(finding):
    """Alert on High/Critical findings."""
    if finding['severity'] not in ['Critical', 'High']:
        return
    message = {
        "text": f"🚨 {finding['severity']} Security Finding",
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*{finding['title']}*\n{finding['description'][:200]}"
                }
            },
            {
                "type": "section",
                "fields": [
                    {"type": "mrkdwn", "text": f"*Tool:*\n{finding['tool']}"},
                    {"type": "mrkdwn", "text": f"*Severity:*\n{finding['severity']}"},
                    {"type": "mrkdwn", "text": f"*Location:*\n`{finding['file']}:{finding['line']}`"}
                ]
            },
            {
                "type": "actions",
                "elements": [{
                    "type": "button",
                    "text": {"type": "plain_text", "text": "View in Faraday"},
                    "url": finding['faraday_url']
                }]
            }
        ]
    }
    requests.post(SLACK_WEBHOOK_URL, json=message)
```
Metrics That Actually Matter
Track these in Faraday custom fields and Jira labels:
Mean Time to Remediate (MTTR):
- Critical: < 7 days (Faraday field: `remediation_deadline`)
- High: < 30 days
- Medium: < 90 days

Track in Faraday using the custom field `date_closed - date_created`. Alert on SLA breaches via Slack webhook.
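The SLA-breach check can be a few lines of Python run against the exported findings. A sketch — the windows mirror the targets above, but the `severity`, `status`, and `date_created` field names are assumptions about your Faraday export:

```python
# sla_check.py — flag open findings that have blown their SLA window.
# Field names are assumptions; map them to your actual export.
from datetime import date

SLA_DAYS = {'critical': 7, 'high': 30, 'medium': 90}


def is_sla_breach(finding: dict, today: date) -> bool:
    """True if an open finding has been open longer than its SLA allows."""
    limit = SLA_DAYS.get(finding['severity'])
    if limit is None or finding.get('status') == 'closed':
        return False
    opened = date.fromisoformat(finding['date_created'])
    return (today - opened).days > limit
```

Feed whatever this flags into the same Slack webhook used for critical findings.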
Coverage:
- % of repos with SAST enabled (GitHub API: count repos with `.github/workflows/semgrep.yml`)
- % of deployments scanned by DAST (track in deployment metadata)
- Manual testing hours per quarter (Jira issue time tracking: sum hours on tickets tagged `security-sprint`)
Trends (Faraday dashboard queries):
- New vulnerabilities per sprint: `GET /api/v3/ws/{workspace}/vulns?created_after=2024-01-01`
- Remediation velocity: count of `status=closed` grouped by week
- Most common CWE: `GROUP BY cwe` to identify patterns worth fixing upstream
Developer Experience:
- False positive rate: `(marked_as_false_positive / total_findings) × 100` (should be <20%)
- Time to triage: track via Faraday field `date_triaged - date_created`
- Average PR scan duration: GitHub Actions metrics (should be <3 min)
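The false positive rate formula above is trivial to compute, but worth codifying so it's reported the same way every sprint. A minimal sketch:

```python
def false_positive_rate(marked_fp: int, total: int) -> float:
    """Percentage of findings triaged as false positives.

    Returns 0.0 for an empty period rather than dividing by zero.
    """
    return 0.0 if total == 0 else marked_fp / total * 100
```

Keep this under roughly 20%; past that, developers start ignoring the tooling.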
Dashboard query example:
```python
# Weekly remediation report
response = requests.get(
    f"{FARADAY_URL}/api/v3/ws/project-alpha/vulns",
    params={
        'status': 'closed',
        'closed_after': '2024-11-01',
        'group_by': 'severity'
    },
    headers={'Authorization': f'Token {API_TOKEN}'}
)
print(f"Closed this month: {response.json()}")
```
If MTTR is trending up, you're creating more findings than your team can fix. Scale back scanning frequency or prioritize better.
What NOT to Do
- Running multiple dashboards — Pick Faraday OR DefectDojo. Not both. Tool sprawl kills adoption faster than false positives.
- Emailing PDF reports — Evidence belongs with findings in your aggregation platform. Email is where information goes to die.
- Treating all severities equally — A critical SQL injection in production auth is not the same as a medium XSS in your 404 page. Prioritize by (exploitability × impact × exposure).
- Unauthenticated DAST only — 80% of your attack surface is behind login. Scan it.
- No suppression workflow — 30% false positive rate = developers ignore your tools. Build a clear process for accepting/dismissing findings.
- Forgetting APIs — Modern apps are API-first. Don't just scan the web UI. Use Postman collections or OpenAPI specs with ZAP.
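The "(exploitability × impact × exposure)" rule can be as simple as a three-factor score. A sketch, with the 1-3 scales chosen arbitrarily for illustration:

```python
def risk_score(exploitability: int, impact: int, exposure: int) -> int:
    """Multiply 1-3 ratings; triage the highest scores first (max 27)."""
    for factor in (exploitability, impact, exposure):
        if not 1 <= factor <= 3:
            raise ValueError("rate each factor from 1 (low) to 3 (high)")
    return exploitability * impact * exposure

# A critical SQLi in production auth: risk_score(3, 3, 3) == 27
# A medium XSS on the 404 page:      risk_score(2, 2, 1) == 4
```

Crude, but it forces the severity conversation onto three concrete questions instead of gut feel.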
What This Actually Looks Like
Monday morning: Developer opens PR. Semgrep flags a SQL injection inline in the PR comments. They fix it before requesting review.
Tuesday afternoon: You review Faraday dashboard, triage 8 new ZAP findings, mark 3 as false positives, create Jira tickets for 5.
Wednesday: Product asks about security for the upcoming release. You filter Faraday by "High+ findings added this sprint" and export a CSV. Takes 30 seconds.
End of quarter: Security sprint finds an IDOR that would've let users access other accounts. Document in Faraday, create emergency Jira ticket, generate SysReptor report for leadership with PoC screenshots.
Compliance audit: Auditor wants proof of regular scanning. Export 6 months of scan history from Faraday. They're happy. You're happy.
A Practical Closing Thought
You don’t need to implement everything at once. Start with whatever gives your team the most visibility — SAST, SCA, DAST, infra scans — and evolve the rest over time. The architecture I shared is just what worked for me; the best security setup is the one your developers actually stick to. Build it incrementally, tune aggressively, and keep the feedback loop tight.
Questions about implementing this? Already running something similar? Drop your thoughts in the comments.