What if every pull request got a security review before merge?
Not a linter check. Not a regex-based scanner. An actual review — the kind a senior security engineer would do — pointing out SQL injection, hardcoded secrets, command injection, and path traversal bugs, with inline comments on the exact lines that are broken.
I built that. It took a weekend. It costs about $0.003 per review. And it runs on Cloudflare Workers with zero servers to manage.
Let me show you how.
The Problem Nobody Talks About
Here's a dirty secret about most engineering teams: security reviews don't happen.
Oh sure, there's a quarterly pen test. Maybe a SAST tool that generates 400 findings nobody reads. But per-PR, inline, "hey this specific line has a command injection" review? That requires a security engineer looking at every diff. And most teams don't have one. Even the ones that do — they can't review every PR.
Meanwhile, a large share of breaches start with a vulnerability that was already visible in the code at commit time.
I got tired of this gap. So I built CodeGuardAI — a GitHub App that hooks into your PRs and posts security-focused reviews using Claude as the analysis engine.
Architecture: Embarrassingly Simple
The entire thing is four TypeScript files running on a single Cloudflare Worker:
src/
├── index.ts # Webhook handler + routing (Hono)
├── github.ts # GitHub API client (auth, fetch diffs, post reviews)
├── reviewer.ts # AI review engine (Claude API + diff parsing)
├── types.ts # TypeScript interfaces
└── (that's it)
The flow:
- GitHub sends a webhook when a PR is opened or updated
- Worker verifies the HMAC signature
- Fetches the PR diff via GitHub API
- Sends the diff to Claude with a security-focused system prompt
- Parses the structured JSON response
- Posts inline review comments back on the PR
No queues. No containers. No Redis. Just a Worker that wakes up, does the job, and goes back to sleep. Cloudflare's waitUntil() handles the async processing so the webhook returns 202 Accepted immediately.
Data layer? A single D1 (SQLite) database tracking installations, reviews, and usage. Three tables. That's the whole backend.
The Interesting Parts
Let me walk through the pieces that actually matter.
1. Webhook Security
Every GitHub webhook comes with an HMAC-SHA256 signature. You must verify it, and you must do it with constant-time comparison (or you're vulnerable to timing attacks):
export async function verifyWebhookSignature(
  payload: string,
  signature: string | null,
  secret: string
): Promise<boolean> {
  if (!signature) return false;

  const sig = signature.startsWith("sha256=")
    ? signature.slice(7)
    : signature;

  const encoder = new TextEncoder();
  const key = await crypto.subtle.importKey(
    "raw",
    encoder.encode(secret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"]
  );
  const signed = await crypto.subtle.sign(
    "HMAC", key, encoder.encode(payload)
  );
  const expected = Array.from(new Uint8Array(signed))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  // Constant-time comparison
  if (sig.length !== expected.length) return false;
  let result = 0;
  for (let i = 0; i < sig.length; i++) {
    result |= sig.charCodeAt(i) ^ expected.charCodeAt(i);
  }
  return result === 0;
}
This runs on the Web Crypto API — no Node.js crypto module needed. Works perfectly in Workers.
2. The System Prompt (Where the Magic Lives)
This is the part I iterated on the most. The system prompt turns a general-purpose LLM into a focused security reviewer:
const SYSTEM_PROMPT = `You are CodeGuardAI, an expert security-focused
code reviewer. You analyze pull request diffs and identify issues.
Focus areas (in priority order):
1. Security vulnerabilities: SQL injection, XSS, SSRF, path traversal,
command injection, prototype pollution, ReDoS
2. Hardcoded secrets: API keys, passwords, tokens, private keys
3. Authentication/Authorization flaws: Missing auth checks, broken
access control
4. Race conditions: TOCTOU bugs, unprotected shared state
5. Error handling: Information leakage, missing validation
6. Performance anti-patterns: N+1 queries, unbounded loops
Rules:
- Only comment on ADDED or MODIFIED lines (lines starting with +)
- Be specific — reference the exact code pattern
- Provide a fix suggestion when possible
- Don't flag style/formatting issues
- If the diff looks clean, say so briefly`;
Key decisions:
- Priority ordering matters. Claude respects the hierarchy — it won't waste comments on style when there's a SQL injection.
- "Only comment on added lines" prevents noise from reviewing unchanged context.
- Structured JSON output makes parsing deterministic. No regex extraction of natural language.
- "If clean, say so briefly" prevents the AI from inventing problems to justify its existence.
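The exact JSON schema the prompt requests is elided above, but the response shape and the defensive parse look roughly like this — an illustrative sketch, since the real interface names may differ:

```typescript
// Plausible shape of the structured review response (illustrative; the real
// schema requested by the prompt isn't shown in the excerpt above).
interface Finding {
  file: string;
  line: number;
  severity: "critical" | "warning" | "info";
  message: string;
}
interface ReviewResult {
  summary: string;
  findings: Finding[];
}

// Defensive parse: models occasionally wrap JSON in a markdown fence.
function parseReview(raw: string): ReviewResult {
  const stripped = raw
    .replace(/^```(?:json)?\s*/, "")
    .replace(/\s*```$/, "")
    .trim();
  const data = JSON.parse(stripped);
  if (typeof data.summary !== "string" || !Array.isArray(data.findings)) {
    throw new Error("unexpected review shape");
  }
  return data as ReviewResult;
}
```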
3. Diff Context Building
You can't just dump the entire repo into the context window. I build a focused diff payload with smart filtering:
function buildDiffContext(
  files: FileChange[],
  maxChars: number = 80000
): string {
  let context = "";
  let truncated = false;

  for (const file of files) {
    if (isIgnoredFile(file.filename)) continue;

    const fileBlock = `\n--- ${file.filename} (${file.status}) ---\n` +
      `${file.patch}\n`;

    if (context.length + fileBlock.length > maxChars) {
      truncated = true;
      break;
    }
    context += fileBlock;
  }

  if (truncated) {
    context += "\n[... additional files truncated ...]\n";
  }
  return context;
}
The isIgnoredFile() function skips lockfiles, sourcemaps, images, build artifacts, and vendored dependencies. No point burning tokens reviewing package-lock.json.
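A plausible isIgnoredFile() for the categories just described — the patterns here are illustrative, not the exact list from the real code:

```typescript
// Skip files that burn tokens without yielding security findings
// (illustrative pattern list; the real implementation may differ).
const IGNORED_PATTERNS: RegExp[] = [
  /(^|\/)(package-lock\.json|yarn\.lock|pnpm-lock\.yaml)$/, // lockfiles
  /\.map$/,                                                 // sourcemaps
  /\.(png|jpe?g|gif|svg|ico|webp)$/i,                       // images
  /(^|\/)(dist|build|vendor|node_modules)\//,               // artifacts, vendored deps
];

function isIgnoredFile(filename: string): boolean {
  return IGNORED_PATTERNS.some((re) => re.test(filename));
}
```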
For large PRs (5000+ changed lines), I truncate to the first ~3000 lines and leave a comment telling the author to break up the PR. This is both a cost safeguard and genuinely good advice.
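The size check behind that guard is plain line counting over the patches — a sketch (the 5000-line threshold comes from above; the helper names are mine):

```typescript
// Count added/removed lines across all patches, skipping the +++/--- file
// headers, so oversized PRs can be truncated and flagged (sketch; helper
// names are illustrative).
function countChangedLines(files: { patch?: string }[]): number {
  return files.reduce((total, f) => {
    const lines = (f.patch ?? "").split("\n");
    return total + lines.filter(
      (l) => (l.startsWith("+") && !l.startsWith("+++")) ||
             (l.startsWith("-") && !l.startsWith("---"))
    ).length;
  }, 0);
}

const TOO_BIG = 5000;

function isOversized(files: { patch?: string }[]): boolean {
  return countChangedLines(files) > TOO_BIG;
}
```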
4. Posting Inline Reviews
GitHub's review API is powerful but finicky. You post a "review" with inline comments attached to specific lines:
export async function postReview(
  token: string,
  owner: string,
  repo: string,
  prNumber: number,
  commitSha: string,
  comments: ReviewComment[],
  summary: string
): Promise<void> {
  const body: Record<string, unknown> = {
    commit_id: commitSha,
    body: summary,
    event: "COMMENT",
  };
  if (comments.length > 0) {
    body.comments = comments;
  }

  const resp = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/pulls/${prNumber}/reviews`,
    {
      method: "POST",
      headers: {
        Authorization: `token ${token}`,
        Accept: "application/vnd.github+json",
        "User-Agent": "CodeGuardAI/1.0",
      },
      body: JSON.stringify(body),
    }
  );

  // If inline comments fail (line not in diff), retry summary-only
  if (!resp.ok && comments.length > 0 && resp.status === 422) {
    await postReview(token, owner, repo, prNumber, commitSha, [], summary);
    return;
  }

  // Surface any other failure instead of swallowing it silently
  if (!resp.ok) {
    throw new Error(`GitHub review API returned ${resp.status}`);
  }
}
The fallback logic is important — GitHub returns 422 if a comment references a line that's not in the diff context. Rather than losing the entire review, we retry with just the summary.
Each comment gets a severity emoji and a formatted body:
const severityEmoji =
  comment.severity === "critical" ? "🚨"
  : comment.severity === "warning" ? "⚠️"
  : "💡";

reviewComments.push({
  path: comment.file,
  line: comment.line,
  side: "RIGHT",
  body: `${severityEmoji} **${comment.severity.toUpperCase()}**\n\n${comment.message}`,
});
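The top-level summary can be assembled the same way — one plausible approach (the function and risk labels here are illustrative, not the exact ones in the real code):

```typescript
// Build the review summary from findings (illustrative sketch; the real
// summary format and risk labels may differ).
function summarize(findings: { severity: string }[]): string {
  const counts: Record<string, number> = { critical: 0, warning: 0, info: 0 };
  for (const f of findings) counts[f.severity] = (counts[f.severity] ?? 0) + 1;

  if (findings.length === 0) return "✅ No security issues found in this diff.";

  const risk = counts.critical > 0 ? "CRITICAL"
    : counts.warning > 0 ? "MODERATE"
    : "LOW";
  return `Risk: **${risk}** — ${counts.critical} critical, ` +
    `${counts.warning} warning, ${counts.info} info.`;
}
```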
What It Actually Finds
I tested this against a deliberately vulnerable PR with common security anti-patterns. Here's what CodeGuardAI flagged:
🚨 CRITICAL — Command Injection:
// The PR had this:
const output = execSync(`git log --author=${req.query.author}`);
// CodeGuardAI flagged it immediately with a fix:
// Use execFileSync with argument array instead
🚨 CRITICAL — Path Traversal:
// The PR had this:
const file = fs.readFileSync(`./uploads/${req.params.filename}`);
// CodeGuardAI: "User input in file path without sanitization.
// An attacker can use ../../../etc/passwd to read arbitrary files."
⚠️ WARNING — Hardcoded Secret:
// The PR had this:
const API_KEY = "sk-proj-abc123...";
// CodeGuardAI: "Hardcoded API key. Use environment variables."
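Both criticals have mechanical fixes — sketched here with Node's standard APIs (function names are mine, not the bot's suggested code verbatim):

```typescript
import { execFileSync } from "node:child_process";
import * as path from "node:path";

// Command injection fix: arguments passed as an array are never shell-parsed
function gitLogByAuthor(author: string): string {
  return execFileSync("git", ["log", `--author=${author}`], { encoding: "utf8" });
}

// Path traversal fix: resolve the path and confirm it stays inside uploads/
function safeUploadPath(filename: string): string {
  const base = path.resolve("./uploads");
  const resolved = path.resolve(base, filename);
  if (!resolved.startsWith(base + path.sep)) {
    throw new Error("path traversal blocked");
  }
  return resolved;
}
```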
All three findings appeared as inline comments on the exact lines in the PR diff. The summary at the top rated it CRITICAL risk.
The Economics
Let's talk money, because this is where it gets interesting.
A typical PR diff is 200-500 lines. With Claude Sonnet, that's roughly:
- Input tokens: ~2,000-4,000 (system prompt + diff)
- Output tokens: ~500-1,000 (JSON response)
- Cost per review: ~$0.01-0.03
With Claude Haiku, it's even cheaper — around $0.003 per review.
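The back-of-envelope math, assuming Sonnet rates of roughly $3 per million input tokens and $15 per million output tokens (prices change — verify against current pricing):

```typescript
// Cost estimate for one review (assumed rates: ~$3/M input, ~$15/M output
// tokens for Sonnet — check current pricing before relying on this).
const inputTokens = 3000;   // system prompt + diff
const outputTokens = 750;   // JSON findings
const costUSD = (inputTokens * 3 + outputTokens * 15) / 1_000_000;
// lands around two cents per review, inside the $0.01–0.03 range above
```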
For a team doing 50 PRs/week, that's roughly $0.60/week (about $2.50/month) at the low end of Sonnet pricing, or closer to $0.15/week on Haiku. The Cloudflare Workers free tier handles up to 100K requests/day, and D1's free tier covers around 5M row reads/day.
Total infrastructure cost for a small team: basically zero.
Compare that to the cost of one security incident that could have been caught in code review.
What I'd Do Differently
After running this for a few weeks, here's what I've learned:
Model choice matters. Sonnet is better at catching subtle logic bugs. Haiku is fine for the obvious stuff (hardcoded secrets, injection patterns). I'm considering a tiered approach — Haiku for initial scan, Sonnet for files touching auth/crypto/networking.
False positives are the enemy. If the bot cries wolf too often, developers ignore it. The "don't flag style/formatting issues" rule in the system prompt helps, but I'm still tuning.
Context window limits hurt. For massive PRs (1000+ files), you can't review everything. The truncation strategy works, but ideally you'd prioritize high-risk files (auth handlers, API routes, database queries) over UI components.
GitHub App auth is painful. JWT generation, installation tokens, the whole dance. Once it works, it works. But expect to spend time debugging RSA key formatting.
Try It
CodeGuardAI is live and free for public repos:
- Install: github.com/apps/codeguard-ai
- Landing page: codeguard-ai.nopkt.com
Install it, open a PR, and watch it work. The whole thing is ~600 lines of TypeScript running on the edge.
If you're building AI-powered developer tools, I'd love to hear what you're working on. The combination of LLMs + GitHub webhooks + edge computing is wildly underexplored. We're just scratching the surface.
Built with Claude, Hono, Cloudflare Workers, and D1. Deployed in under 300ms globally.