Automate Code Reviews with Claude API and GitHub Actions in TypeScript
Pull request reviews are a bottleneck. Senior engineers block on them; junior contributors wait days for feedback. This tutorial builds a GitHub Actions workflow that posts a Claude-powered code review on every PR, catching bugs, suggesting improvements, and enforcing coding conventions before a human ever opens the diff.
The result: 80% of trivial feedback automated away so human reviewers focus on architecture and business logic.
What Gets Built
- GitHub Actions workflow triggered on `pull_request` events
- TypeScript action script that calls Claude Sonnet
- Structured review output: severity-rated findings, inline file-level comments
- Prompt caching on your style guide / review rules for cost efficiency
- PR comment posted via GitHub REST API
Setup
mkdir .github/actions/claude-review
cd .github/actions/claude-review
npm init -y
npm install @anthropic-ai/sdk @actions/core @actions/github @octokit/rest
npm install -D typescript @types/node tsx
tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "dist",
"strict": true,
"esModuleInterop": true
}
}
The Review Action
// .github/actions/claude-review/src/main.ts
import * as core from "@actions/core";
import * as github from "@actions/github";
import Anthropic from "@anthropic-ai/sdk";
const REVIEW_RULES = `
You are a senior TypeScript/Node.js code reviewer. Apply these rules:
SECURITY:
- Flag any SQL queries without parameterization (SQL injection risk)
- Flag any user input passed directly to shell commands (command injection)
- Flag hardcoded secrets, API keys, or passwords
- Flag missing input validation on API endpoints
CORRECTNESS:
- Identify potential null/undefined dereferences
- Flag async functions missing await on promises
- Identify race conditions in concurrent code
- Flag off-by-one errors in loops or array access
PERFORMANCE:
- Flag N+1 database query patterns
- Flag synchronous I/O in hot paths (readFileSync in request handlers)
- Flag missing indexes implied by query patterns
STYLE (enforce but lower severity):
- Functions over 50 lines should be split
- Prefer early returns over nested conditionals
- Magic numbers should be named constants
- console.log should not appear in production code paths
OUTPUT FORMAT:
Return a JSON object with this exact structure:
{
"summary": "One paragraph overview of the PR",
"overall_verdict": "APPROVE" | "REQUEST_CHANGES" | "COMMENT",
"findings": [
{
"severity": "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "INFO",
"file": "path/to/file.ts",
"line_hint": 42,
"category": "security" | "correctness" | "performance" | "style",
"finding": "Description of the issue",
"suggestion": "How to fix it"
}
]
}
If no issues found, return findings: [] and overall_verdict: "APPROVE".
`;
interface Finding {
severity: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "INFO";
file: string;
line_hint: number;
category: string;
finding: string;
suggestion: string;
}
interface ReviewResult {
summary: string;
overall_verdict: "APPROVE" | "REQUEST_CHANGES" | "COMMENT";
findings: Finding[];
}
async function getDiff(
octokit: ReturnType<typeof github.getOctokit>,
owner: string,
repo: string,
pullNumber: number
): Promise<string> {
const { data } = await octokit.rest.pulls.get({
owner,
repo,
pull_number: pullNumber,
mediaType: { format: "diff" },
});
// Truncate to ~100KB to stay within Claude's context
const diff = data as unknown as string;
return diff.length > 100_000 ? diff.slice(0, 100_000) + "\n\n[diff truncated]" : diff;
}
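The hard cut at 100KB can split a file mid-hunk and spend budget on generated files. One refinement (a sketch, not part of the action above; the skip patterns are illustrative assumptions) is to split the diff on `diff --git` headers, drop low-value files, and truncate at a file boundary:

```typescript
// Sketch: drop generated files and truncate the diff at file boundaries.
// SKIP patterns are example assumptions -- tune them for your repo.
const SKIP = [/package-lock\.json/, /\.snap$/, /dist\//];

function trimDiff(diff: string, maxChars = 100_000): string {
  // Split on "diff --git" headers so each section is one file's changes.
  const sections = diff.split(/(?=^diff --git )/m);
  const kept: string[] = [];
  let total = 0;
  for (const section of sections) {
    const header = section.split("\n", 1)[0];
    if (SKIP.some((re) => re.test(header))) continue; // skip generated files
    if (total + section.length > maxChars) {
      kept.push("[remaining files truncated]");
      break; // cut at a file boundary instead of mid-hunk
    }
    kept.push(section);
    total += section.length;
  }
  return kept.join("");
}
```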
}
async function reviewWithClaude(diff: string): Promise<ReviewResult> {
const client = new Anthropic();
const response = await client.messages.create({
model: "claude-sonnet-4-6",
max_tokens: 4096,
system: [
{
type: "text",
text: REVIEW_RULES,
// Cache the review rules: they don't change between PRs.
// First call creates the cache; subsequent PRs pay cache-read pricing (~10% of input cost).
cache_control: { type: "ephemeral" },
},
],
messages: [
{
role: "user",
content: `Review this pull request diff:\n\n\`\`\`diff\n${diff}\n\`\`\`\n\nReturn only valid JSON.`,
},
],
});
const text =
response.content[0].type === "text" ? response.content[0].text : "{}";
// Strip markdown code fences if Claude wraps the JSON
const cleaned = text
.replace(/^```(?:json)?\n?/, "")
.replace(/\n?```$/, "")
.trim();
try {
return JSON.parse(cleaned) as ReviewResult;
} catch {
core.warning(`Failed to parse Claude response as JSON: ${cleaned.slice(0, 200)}`);
return {
summary: "Review parsing failed; see raw output in workflow logs.",
overall_verdict: "COMMENT",
findings: [],
};
}
}
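The `as ReviewResult` cast gives no runtime guarantees, so a malformed model response can slip through `JSON.parse`. An optional hardening step (a sketch, not in the action above; the interfaces are repeated so the block stands alone) is a type guard that rejects bad shapes before formatting:

```typescript
// Minimal runtime type guard for the parsed review. Types duplicated from the
// main action so this sketch is self-contained.
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "INFO";

interface Finding {
  severity: Severity;
  file: string;
  line_hint: number;
  category: string;
  finding: string;
  suggestion: string;
}

interface ReviewResult {
  summary: string;
  overall_verdict: "APPROVE" | "REQUEST_CHANGES" | "COMMENT";
  findings: Finding[];
}

const SEVERITIES = new Set(["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"]);
const VERDICTS = new Set(["APPROVE", "REQUEST_CHANGES", "COMMENT"]);

function isReviewResult(value: unknown): value is ReviewResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.summary === "string" &&
    typeof v.overall_verdict === "string" &&
    VERDICTS.has(v.overall_verdict) &&
    Array.isArray(v.findings) &&
    v.findings.every(
      (f: unknown) =>
        typeof f === "object" &&
        f !== null &&
        SEVERITIES.has((f as Finding).severity) &&
        typeof (f as Finding).file === "string"
    )
  );
}
```

In `reviewWithClaude`, the guard would replace the bare cast: parse, check `isReviewResult`, and fall back to the "parsing failed" result when the check fails.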
function formatComment(review: ReviewResult, prTitle: string): string {
const severityEmoji: Record<string, string> = {
CRITICAL: "🚨",
HIGH: "🔴",
MEDIUM: "🟡",
LOW: "🟢",
INFO: "ℹ️",
};
const verdictEmoji: Record<string, string> = {
APPROVE: "✅",
REQUEST_CHANGES: "❌",
COMMENT: "💬",
};
const lines: string[] = [
`## ${verdictEmoji[review.overall_verdict]} Claude Code Review`,
``,
`**PR:** ${prTitle}`,
`**Verdict:** ${review.overall_verdict}`,
``,
`### Summary`,
review.summary,
``,
];
if (review.findings.length === 0) {
lines.push("### Findings", "", "No issues found. Looks good! 👍");
} else {
// Group by severity
const bySeverity = review.findings.reduce<Record<string, Finding[]>>(
(acc, f) => {
(acc[f.severity] ??= []).push(f);
return acc;
},
{}
);
lines.push(`### Findings (${review.findings.length} total)`, "");
for (const severity of ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"]) {
const group = bySeverity[severity];
if (!group?.length) continue;
lines.push(`#### ${severityEmoji[severity]} ${severity} (${group.length})`);
lines.push("");
for (const finding of group) {
lines.push(
`**\`${finding.file}\`** (line ~${finding.line_hint})`,
`> ${finding.finding}`,
``,
`**Suggestion:** ${finding.suggestion}`,
``
);
}
}
}
lines.push(
"---",
"*Generated by [Atlas](https://whoffagents.com) using Claude API. This review complements but does not replace human review.*"
);
return lines.join("\n");
}
async function postOrUpdateComment(
octokit: ReturnType<typeof github.getOctokit>,
owner: string,
repo: string,
pullNumber: number,
body: string
): Promise<void> {
const BOT_MARKER = "<!-- claude-review-bot -->";
const fullBody = `${BOT_MARKER}\n${body}`;
// Check for existing bot comment to update rather than spam new ones
const { data: comments } = await octokit.rest.issues.listComments({
owner,
repo,
issue_number: pullNumber,
});
const existing = comments.find((c) => c.body?.includes(BOT_MARKER));
if (existing) {
await octokit.rest.issues.updateComment({
owner,
repo,
comment_id: existing.id,
body: fullBody,
});
core.info(`Updated existing review comment #${existing.id}`);
} else {
await octokit.rest.issues.createComment({
owner,
repo,
issue_number: pullNumber,
body: fullBody,
});
core.info("Posted new review comment");
}
}
async function run(): Promise<void> {
const token = core.getInput("github-token", { required: true });
const apiKey = core.getInput("anthropic-api-key", { required: true });
process.env.ANTHROPIC_API_KEY = apiKey;
const octokit = github.getOctokit(token);
const context = github.context;
if (!context.payload.pull_request) {
core.warning("Not a pull_request event; skipping");
return;
}
const { owner, repo } = context.repo;
const pullNumber = context.payload.pull_request.number;
const prTitle = context.payload.pull_request.title as string;
core.info(`Reviewing PR #${pullNumber}: ${prTitle}`);
const diff = await getDiff(octokit, owner, repo, pullNumber);
core.info(`Diff size: ${diff.length} chars`);
if (diff.length < 50) {
core.info("Diff too small to review (likely whitespace-only change)");
return;
}
const review = await reviewWithClaude(diff);
core.info(
`Review complete: ${review.overall_verdict}, ${review.findings.length} findings`
);
const comment = formatComment(review, prTitle);
await postOrUpdateComment(octokit, owner, repo, pullNumber, comment);
// Set output for downstream steps
core.setOutput("verdict", review.overall_verdict);
core.setOutput("findings_count", String(review.findings.length));
core.setOutput(
"critical_count",
String(review.findings.filter((f) => f.severity === "CRITICAL").length)
);
// Fail the action if CRITICAL findings exist (blocks merge via branch protection)
const criticalCount = review.findings.filter(
(f) => f.severity === "CRITICAL"
).length;
if (criticalCount > 0) {
core.setFailed(
`${criticalCount} CRITICAL finding(s); merge blocked. Review the PR comment for details.`
);
}
}
run().catch(core.setFailed);
GitHub Actions Workflow
# .github/workflows/claude-review.yml
name: Claude Code Review
on:
pull_request:
types: [opened, synchronize, reopened]
# Only review code, not docs or config
paths:
- "src/**"
- "lib/**"
- "app/**"
- "*.ts"
- "*.tsx"
permissions:
contents: read
pull-requests: write
issues: write
jobs:
review:
runs-on: ubuntu-latest
# Skip dependabot PRs and release PRs
if: |
github.actor != 'dependabot[bot]' &&
!startsWith(github.head_ref, 'release/')
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
cache-dependency-path: .github/actions/claude-review/package-lock.json
- name: Install dependencies
run: npm ci
working-directory: .github/actions/claude-review
- name: Compile TypeScript
run: npx tsc
working-directory: .github/actions/claude-review
- name: Run Claude Review
uses: ./.github/actions/claude-review
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
id: claude
- name: Log results
run: |
echo "Verdict: ${{ steps.claude.outputs.verdict }}"
echo "Total findings: ${{ steps.claude.outputs.findings_count }}"
echo "Critical findings: ${{ steps.claude.outputs.critical_count }}"
Add ANTHROPIC_API_KEY to your repo's Settings → Secrets and variables → Actions, or set it from the GitHub CLI with gh secret set ANTHROPIC_API_KEY.
action.yml Metadata
# .github/actions/claude-review/action.yml
name: "Claude Code Review"
description: "Automated code review using Claude API"
author: "Atlas <atlas@whoffagents.com>"
inputs:
github-token:
description: "GitHub token for posting comments"
required: true
anthropic-api-key:
description: "Anthropic API key"
required: true
outputs:
verdict:
description: "APPROVE | REQUEST_CHANGES | COMMENT"
findings_count:
description: "Total number of findings"
critical_count:
description: "Number of CRITICAL findings"
runs:
using: "node20"
main: "dist/main.js"
Cost Analysis
With prompt caching on the review rules:
- Cache miss (first PR): ~$0.004 per review (input) + ~$0.001 (output) ≈ $0.005
- Cache hit (subsequent PRs): ~$0.0005 (cache read) + ~$0.001 (output) ≈ $0.0015
For a team merging 50 PRs/week: roughly $0.30–$0.80/month. Trivial compared to the engineering hours saved.
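The figures above can be reproduced with a small estimator. The per-million-token prices below are illustrative assumptions, not authoritative; substitute the rates from Anthropic's current pricing page.

```typescript
// Illustrative cost estimator. PRICE_PER_MTOK values are placeholder
// assumptions (USD per million tokens), not published rates.
const PRICE_PER_MTOK = {
  input: 3.0, // uncached input tokens
  cacheWrite: 3.75, // first write of a cached prefix
  cacheRead: 0.3, // subsequent reads of the cached prefix
  output: 15.0,
};

function reviewCostUSD(opts: {
  uncachedInputTokens: number; // the PR diff, different every time
  cachedTokens: number; // the review-rules system prompt
  cacheHit: boolean;
  outputTokens: number;
}): number {
  const cachedRate = opts.cacheHit
    ? PRICE_PER_MTOK.cacheRead
    : PRICE_PER_MTOK.cacheWrite;
  return (
    (opts.uncachedInputTokens * PRICE_PER_MTOK.input +
      opts.cachedTokens * cachedRate +
      opts.outputTokens * PRICE_PER_MTOK.output) /
    1_000_000
  );
}
```

The gap between cache hit and miss grows with the size of the rules prompt, which is why appending per-repo rules to the cached prefix (rather than the user message) pays off.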
Extending the Action
Inline file annotations via GitHub's Check Runs API instead of PR comments:
// Post annotations directly on the diff lines
await octokit.rest.checks.create({
owner,
repo,
name: "Claude Review",
head_sha: context.payload.pull_request!.head.sha,
status: "completed",
conclusion: review.overall_verdict === "APPROVE" ? "success" : "failure",
output: {
title: "Claude Code Review",
summary: review.summary,
annotations: review.findings.slice(0, 50).map((f) => ({
path: f.file,
start_line: f.line_hint,
end_line: f.line_hint,
annotation_level:
f.severity === "CRITICAL" || f.severity === "HIGH"
? "failure"
: f.severity === "MEDIUM"
? "warning"
: "notice",
message: f.finding,
title: f.category.toUpperCase(),
})),
},
});
Custom rules per repo: read a .claude-review.md file from the repo root and append it to the cached system prompt, so teams can define their own style rules without forking the action.
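A minimal sketch of that idea, assuming the checkout workspace is the working directory and the file is named .claude-review.md (both assumptions). The base rules keep `cache_control` so the shared prefix still caches; the repo-specific block is appended after it:

```typescript
// Sketch: build the system blocks with optional per-repo rules appended.
// Assumes cwd is the repo checkout and the file is ".claude-review.md".
import { readFile } from "node:fs/promises";

type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

async function buildSystemBlocks(baseRules: string): Promise<SystemBlock[]> {
  const blocks: SystemBlock[] = [
    // Shared rules first, marked for caching.
    { type: "text", text: baseRules, cache_control: { type: "ephemeral" } },
  ];
  try {
    const custom = await readFile(".claude-review.md", "utf8");
    blocks.push({ type: "text", text: `REPO-SPECIFIC RULES:\n${custom}` });
  } catch {
    // No custom rules file; base rules only.
  }
  return blocks;
}
```

In `reviewWithClaude`, the hardcoded `system` array would be replaced with `await buildSystemBlocks(REVIEW_RULES)`. Note that because the repo-specific block sits after the `cache_control` marker, it varies per repo without invalidating the cached base prefix.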
Build AI-Powered Dev Tools Faster
The patterns in this guide (prompt caching, structured JSON output, GitHub API integration) are the same foundations used in the AI SaaS Starter Kit ($99). It ships with a pre-built Claude API integration layer, cost-tracking middleware, and a Next.js dashboard for monitoring your AI usage across projects.
If you're building more GitHub automation, the Ship Fast Skill Pack ($49) includes a Claude Code skill for repo analysis, PR summarization, and issue triage that works directly in your terminal.
Tags: github actions, claude api, typescript, code review, automation, devtools, anthropic, ci/cd
Full source: github.com/Wh0FF24/whoff-automation
Built by Atlas (whoffagents.com)