Every team has had a bug slip through code review and make it to production.
Not because the developer didn't know better. Not because the reviewer wasn't paying attention. But because code review is a human process — and humans get tired, rushed, and distracted.
Quality gates fix this by automating the first pass. If your code scores below a minimum threshold, the build fails automatically, with no human intervention needed, and substandard code gets stopped before it can reach production.
In this tutorial I'll show you how to set one up in GitHub Actions in under 5 minutes.
What is a code quality gate?
A quality gate is a pass/fail check in your CI/CD pipeline that evaluates code against a defined standard. If the code doesn't meet the standard — the pipeline fails and the PR cannot be merged.
You've probably used basic quality gates already without calling them that:
- ESLint failing a build on lint errors
- Tests failing before a deploy
- TypeScript refusing to compile with type errors
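Stripped to its essence, every one of these gates is the same mechanism: run a check, and exit non-zero to fail the pipeline. Here's a minimal sketch in shell (the `run_gate` function and its inputs are illustrative, not part of any real tool):

```shell
# Minimal quality gate: compare a score against a threshold and use the
# exit code to signal pass/fail. CI systems fail the job on any non-zero exit.
run_gate() {
  score="$1"   # quality score from some tool (linter, test suite, AI review)
  min="$2"     # minimum acceptable score
  if [ "$score" -lt "$min" ]; then
    echo "Gate failed: $score < $min"
    return 1   # non-zero exit fails the CI step
  fi
  echo "Gate passed: $score >= $min"
}
```

Everything that follows is this same pattern, with the score coming from an AI review API instead of a linter.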
AI-powered quality gates take this further by catching things static analysis can't — logic bugs, security vulnerabilities, missing error handling, and performance anti-patterns.
What you need
- A GitHub repository
- A GetCodeReviews API key (free at getcodereviews.com)
- 5 minutes
Step 1 — Get your API key
Sign up at getcodereviews.com and go to Dashboard → API Keys → Generate Key.
You'll see a key that looks like this:
csai_a1b2c3d4e5f6...
Save this — you'll add it to GitHub Secrets in the next step.
Step 2 — Add your API key to GitHub Secrets
- Go to your GitHub repo
- Click Settings → Secrets and variables → Actions
- Click New repository secret
- Name: `CODESCAN_API_KEY`
- Value: paste your API key
- Click Add secret
Step 3 — Create the workflow file
Create .github/workflows/code-quality.yml in your repo:
```yaml
name: Code Quality Gate

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  pull-requests: write
  contents: read

jobs:
  quality-gate:
    name: AI Code Review
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history, so the diff against the base branch works

      - name: Get changed files
        id: changed
        run: |
          echo "files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD \
            | grep -E '\.(js|ts|jsx|tsx|py|go|rs|java|php|rb)$' \
            | head -10 \
            | tr '\n' ' ')" >> "$GITHUB_OUTPUT"

      - name: Run quality gate
        if: steps.changed.outputs.files != ''
        run: |
          MIN_SCORE=70
          TOTAL=0
          COUNT=0
          FAILED=0
          for FILE in ${{ steps.changed.outputs.files }}; do
            if [ -f "$FILE" ]; then
              echo "Reviewing: $FILE"
              CODE=$(head -200 "$FILE")
              EXT="${FILE##*.}"
              RESPONSE=$(curl -s -X POST https://getcodereviews.com/api/review \
                -H "Content-Type: application/json" \
                -H "x-api-key: ${{ secrets.CODESCAN_API_KEY }}" \
                -d "{\"code\": $(echo "$CODE" | jq -Rs .), \"language\": \"$EXT\", \"source\": \"github\"}")
              SCORE=$(echo "$RESPONSE" | jq -r '.result.score // 0')
              SUMMARY=$(echo "$RESPONSE" | jq -r '.result.summary // "No summary"')
              echo "  Score: $SCORE/100 — $SUMMARY"
              TOTAL=$((TOTAL + SCORE))
              COUNT=$((COUNT + 1))
              if [ "$SCORE" -lt "$MIN_SCORE" ]; then
                FAILED=$((FAILED + 1))
                echo "  ❌ Below minimum score of $MIN_SCORE"
              else
                echo "  ✅ Passed"
              fi
            fi
          done
          if [ "$COUNT" -gt 0 ]; then
            AVG=$((TOTAL / COUNT))
            echo ""
            echo "Average score: $AVG/100 across $COUNT files"
            echo "Failed files: $FAILED"
            if [ "$FAILED" -gt 0 ]; then
              echo ""
              echo "❌ Quality gate failed — $FAILED file(s) scored below $MIN_SCORE/100"
              echo "Fix the issues flagged above and push again."
              exit 1
            else
              echo "✅ Quality gate passed!"
            fi
          fi
```

Note the `fetch-depth: 0` on the checkout step — without it, the shallow default checkout doesn't have the base branch, and the `git diff` against `origin/${{ github.base_ref }}` fails.
Commit and push this file. That's it — your quality gate is live.
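If you want to sanity-check the extension filter from the "Get changed files" step before opening a PR, you can run the same grep pattern locally against sample paths instead of `git diff` output:

```shell
# Run the workflow's file filter against sample file names.
# Only the recognised source extensions survive.
printf '%s\n' src/app.ts README.md scripts/build.sh lib/db.py \
  | grep -E '\.(js|ts|jsx|tsx|py|go|rs|java|php|rb)$' \
  | head -10 \
  | tr '\n' ' '
# Output: src/app.ts lib/db.py
```

Markdown, shell scripts, and anything else outside the extension list get skipped, so docs-only PRs won't burn reviews.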
Step 4 — Test it
Open a pull request with any code change. You'll see the AI Code Review check appear in the PR checks section.
If the code scores above 70/100 it passes with a green checkmark. Below 70 and it fails with details of exactly what needs fixing.
Here's an example of what the output looks like in the Actions log:
```
Reviewing: src/api/users.js
  Score: 45/100 — Critical security issues found
  ❌ Below minimum score of 70
Reviewing: src/utils/helpers.ts
  Score: 88/100 — Clean code with minor suggestions
  ✅ Passed

Average score: 66/100 across 2 files
Failed files: 1
❌ Quality gate failed — 1 file(s) scored below 70/100
Fix the issues flagged above and push again.
```
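The summary numbers in that log come from plain integer arithmetic. Here's the same calculation in isolation, with the two example scores hardcoded in place of real API responses:

```shell
# Same gate arithmetic as the workflow, with example scores 45 and 88
# standing in for real API results.
MIN_SCORE=70
TOTAL=0; COUNT=0; FAILED=0
for SCORE in 45 88; do
  TOTAL=$((TOTAL + SCORE))
  COUNT=$((COUNT + 1))
  if [ "$SCORE" -lt "$MIN_SCORE" ]; then
    FAILED=$((FAILED + 1))
  fi
done
AVG=$((TOTAL / COUNT))   # integer division: (45 + 88) / 2 = 66
echo "Average score: $AVG/100 across $COUNT files"
echo "Failed files: $FAILED"
```

Note that the gate fails on any single file below the threshold, not on the average — one 45 sinks the PR even when the average is passing.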
Step 5 — Customize your threshold
Change the MIN_SCORE value in the workflow to match your team's standards:
```shell
MIN_SCORE=50   # Relaxed — catches only critical issues
MIN_SCORE=70   # Standard — balanced quality control (recommended)
MIN_SCORE=85   # Strict — high quality standard
```
Start at 70 and adjust based on your codebase. If you're adding this to an existing project, start lower (50-60) and raise it gradually as you fix existing issues.
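One optional refinement as you tune the number: read the threshold from an environment variable with a fallback, so it can be set per-repo (for example via a GitHub Actions repository variable) without editing the workflow each time. The `QUALITY_MIN_SCORE` name here is my own invention, not something the API defines:

```shell
# Use QUALITY_MIN_SCORE from the environment if set, otherwise fall back to 70.
# In the workflow, this could be wired to a repository variable via env:.
MIN_SCORE="${QUALITY_MIN_SCORE:-70}"
echo "Using threshold: $MIN_SCORE"
```

That way, raising the bar from 60 to 70 as the codebase improves is a settings change, not a commit.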
What it catches
The AI reviews each changed file for:
🔴 Critical — SQL injection, XSS, hardcoded secrets, missing authentication checks
⚠️ Warnings — missing error handling, off-by-one errors, memory leaks, deprecated patterns
💡 Suggestions — performance improvements, best practice violations, refactoring opportunities
Each issue comes with a specific fix — not vague advice like "improve this function" but actual code you can copy and apply.
Blocking merges on failure (optional)
To prevent PRs from being merged when the quality gate fails:
- Go to your repo → Settings → Branches
- Click Add branch protection rule
- Branch name pattern: `main`
- Check Require status checks to pass before merging
- Search for and select AI Code Review
- Save
Now the merge button is greyed out until the quality gate passes, so failing code cannot be merged into your main branch.
Why this matters
Most security vulnerabilities don't get caught in code review because reviewers are focused on logic and functionality — not scanning every line for SQL injection patterns or checking if secrets are hardcoded.
Automated quality gates handle the first pass automatically. Your team still does the architectural review. The AI handles the security and bug check first — every single time, without getting tired.
The result: your code review process focuses on what humans are actually good at — understanding business logic, architecture decisions, and edge cases — instead of playing grep for security vulnerabilities.
Try it free
GetCodeReviews has a free tier with 10 reviews/month — enough to try it on your next PR.
Pro plan ($29/month) includes 200 reviews, full GitHub Actions integration, VS Code inline diagnostics, and analytics to track your team's code quality over time.
Got questions or hit an issue setting this up? Drop a comment below — I read and reply to every one.