Automating Code Reviews with AST Parsing in Node.js
Code reviews catch bugs, enforce conventions, and keep codebases healthy. But humans are inconsistent. We miss things on Friday afternoons. We forget which patterns were deprecated last sprint. We overlook a console.log that slipped into production code for the third time this month.
Abstract Syntax Trees (ASTs) let you automate the tedious parts of code review with surgical precision. In this article, we'll build a practical AST-based code review tool from scratch using @babel/parser and @babel/traverse, create custom rules that catch real problems, and package everything into a CLI you can drop into any project.
What Is An AST And Why Should You Care?
When JavaScript engines execute your code, they don't read text. They parse it into a tree structure that represents the meaning of your program. That tree is the Abstract Syntax Tree.
Consider this line:
const apiKey = "sk-abc123";
The parser transforms it into a tree of nodes:
VariableDeclaration
└─ VariableDeclarator
   ├─ Identifier (name: "apiKey")
   └─ StringLiteral (value: "sk-abc123")
Every variable, function call, import statement, and operator becomes a node with a type, location, and child nodes. This structure is what makes tools like ESLint, Prettier, Babel, and webpack possible. They all parse your code into an AST, analyze or transform it, and (sometimes) generate new code from the result.
The key insight: once you have an AST, pattern matching on code structure becomes trivial. You stop wrestling with regex and start asking direct questions about what the code does.
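A quick illustration of why regex falls short: plain text matching can't tell a real call from a comment or a string literal.

```javascript
const source = `
// console.log("debugging notes")
const reminder = "never ship console.log";
logger.log("this one is fine");
`;

// Two regex "hits" — yet neither is an actual console.log call.
const hits = source.match(/console\.log/g) || [];
// hits.length === 2, both false positives
```

An AST visitor, by contrast, only fires on genuine CallExpression nodes; comments and string contents never produce them.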
Setting Up: Parser And Traversal
We need two packages:
npm install @babel/parser @babel/traverse
@babel/parser turns source code into an AST. @babel/traverse walks the tree and lets you "visit" specific node types.
Here's the minimal setup:
const parser = require("@babel/parser");
const traverse = require("@babel/traverse").default;
const fs = require("fs");
function parseFile(filePath) {
const code = fs.readFileSync(filePath, "utf-8");
const ast = parser.parse(code, {
sourceType: "module",
plugins: [
"jsx",
"typescript",
"classProperties",
"optionalChaining",
"nullishCoalescingOperator",
],
});
return { ast, code };
}
The plugins array is important. Without typescript, parsing .ts files will throw. Without jsx, any React component crashes the parser. Add the plugins for whatever syntax your codebase uses.
Now let's traverse:
function analyzeFile(filePath) {
const { ast, code } = parseFile(filePath);
const issues = [];
traverse(ast, {
CallExpression(path) {
// Visit every function call in the file
const { callee } = path.node;
if (
callee.type === "MemberExpression" &&
callee.object.name === "console" &&
callee.property.name === "log"
) {
issues.push({
rule: "no-console-log",
message: "console.log() found — remove before merging",
line: path.node.loc.start.line,
column: path.node.loc.start.column,
});
}
},
});
return issues;
}
traverse takes the AST and an object of "visitor" methods. Each method name corresponds to an AST node type. When the traversal encounters a node of that type, it calls your function with a path object that wraps the node and provides context (parent, siblings, scope).
That's the entire pattern. Every rule we write follows this structure: visit a node type, inspect its properties, report if something is wrong.
Building Custom Lint Rules
Rule 1: Detect console.log Statements
We covered this above, but let's make it more robust. We should catch console.warn, console.error, and console.info too — or allow some and flag others:
const BANNED_CONSOLE_METHODS = new Set(["log", "debug", "info"]);
function noConsoleLogs(ast) {
const issues = [];
traverse(ast, {
CallExpression(path) {
const { callee } = path.node;
if (
callee.type === "MemberExpression" &&
callee.object.type === "Identifier" &&
callee.object.name === "console" &&
BANNED_CONSOLE_METHODS.has(callee.property.name)
) {
issues.push({
rule: "no-console",
message: `console.${callee.property.name}() should be removed`,
line: path.node.loc.start.line,
});
}
},
});
return issues;
}
Rule 2: Find Unused Imports
Unused imports bloat bundles and confuse readers. Here's how to catch them:
function unusedImports(ast) {
const issues = [];
const importedNames = new Map(); // name -> line number
const usedNames = new Set();
traverse(ast, {
ImportDeclaration(path) {
for (const specifier of path.node.specifiers) {
importedNames.set(specifier.local.name, path.node.loc.start.line);
}
},
// Track all identifier usage outside of import declarations
Identifier(path) {
if (path.parent.type === "ImportSpecifier") return;
if (path.parent.type === "ImportDefaultSpecifier") return;
if (path.parent.type === "ImportNamespaceSpecifier") return;
usedNames.add(path.node.name);
},
});
for (const [name, line] of importedNames) {
if (!usedNames.has(name)) {
issues.push({
rule: "no-unused-imports",
message: `'${name}' is imported but never used`,
line,
});
}
}
return issues;
}
This is a two-pass approach within a single traversal. First, we collect every imported identifier. Then we track every identifier used elsewhere. After traversal, we compare the two sets. Anything imported but never referenced is unused.
Caveat: This is a simplified version. A production implementation should use Babel's scope analysis (path.scope.getBinding()) for accuracy. The naive approach counts any identifier with a matching name as a "use" — an object property key that happens to share the import's name will mask a genuinely unused import — and it never sees JSX at all, because <Button /> produces a JSXIdentifier node rather than an Identifier, so a component used only in JSX gets falsely flagged. But for a code review tool that flags potential issues for human review, this works well.
Rule 3: Magic Numbers
Magic numbers are numeric literals that appear in code without explanation. They make code unreadable:
// Bad: what does 86400 mean?
setTimeout(refresh, 86400 * 1000);
// Good
const SECONDS_PER_DAY = 86400;
setTimeout(refresh, SECONDS_PER_DAY * 1000);
Here's a rule that catches them:
const ALLOWED_NUMBERS = new Set([0, 1, -1, 2, 100]);
function noMagicNumbers(ast) {
const issues = [];
traverse(ast, {
NumericLiteral(path) {
const value = path.node.value;
// Allow small common numbers
if (ALLOWED_NUMBERS.has(value)) return;
// Allow numbers assigned directly to a variable (the name documents them)
if (path.parent.type === "VariableDeclarator") return;
// Allow array indices
if (path.parent.type === "MemberExpression" && path.parent.computed) {
return;
}
// Allow numbers in default parameter values
if (path.parent.type === "AssignmentPattern") return;
issues.push({
rule: "no-magic-numbers",
message: `Magic number ${value} — extract to a named constant`,
line: path.node.loc.start.line,
});
},
});
return issues;
}
Practical Examples: Security And Pattern Enforcement
Detecting API Key Leaks
Hardcoded secrets are a security nightmare. We can detect common patterns by inspecting string literals assigned to variables with suspicious names:
const SECRET_PATTERNS = [
/api[_-]?key/i,
/secret/i,
/password/i,
/token/i,
/auth/i,
/credential/i,
/private[_-]?key/i,
];
const VALUE_PATTERNS = [
/^sk-[a-zA-Z0-9]{20,}$/, // OpenAI-style keys
/^ghp_[a-zA-Z0-9]{36}$/, // GitHub personal access tokens
/^AKIA[0-9A-Z]{16}$/, // AWS access key IDs
/^xox[bpsa]-[a-zA-Z0-9-]{10,}$/, // Slack tokens
];
function noHardcodedSecrets(ast) {
const issues = [];
traverse(ast, {
VariableDeclarator(path) {
const { id, init } = path.node;
if (!init || init.type !== "StringLiteral") return;
const varName = id.name || "";
const value = init.value;
// Check if variable name looks like a secret
const nameIsSuspicious = SECRET_PATTERNS.some((p) => p.test(varName));
// Check if value looks like an actual key
const valueIsKey = VALUE_PATTERNS.some((p) => p.test(value));
if (nameIsSuspicious && value.length > 8) {
issues.push({
rule: "no-hardcoded-secrets",
message: `Possible hardcoded secret in '${varName}' — use environment variables`,
line: path.node.loc.start.line,
severity: "error",
});
}
if (valueIsKey) {
issues.push({
rule: "no-hardcoded-secrets",
message: `Value matches known API key pattern — never commit secrets`,
line: path.node.loc.start.line,
severity: "error",
});
}
},
// Also check object properties: { apiKey: "sk-..." }
ObjectProperty(path) {
const key = path.node.key;
const value = path.node.value;
if (value.type !== "StringLiteral") return;
const keyName = key.name || key.value || "";
const nameIsSuspicious = SECRET_PATTERNS.some((p) => p.test(keyName));
if (nameIsSuspicious && value.value.length > 8) {
issues.push({
rule: "no-hardcoded-secrets",
message: `Possible hardcoded secret in property '${keyName}'`,
line: path.node.loc.start.line,
severity: "error",
});
}
},
});
return issues;
}
This catches things that regex-based tools like git-secrets often miss, because we understand the structure of the code. We know the difference between a variable named apiKey holding a string and the word "apiKey" appearing in a comment.
Finding Deprecated Patterns
Suppose your team deprecated moment.js in favor of date-fns, and you're migrating away from lodash.get to optional chaining:
const DEPRECATED = {
moment: {
message: "Use date-fns instead of moment.js",
replacement: "date-fns",
},
"lodash.get": {
message: "Use optional chaining (?.) instead of lodash.get",
replacement: "?.",
},
request: {
message: "Use fetch or undici instead of the request library",
replacement: "undici",
},
};
function noDeprecatedImports(ast) {
const issues = [];
traverse(ast, {
ImportDeclaration(path) {
const source = path.node.source.value;
if (DEPRECATED[source]) {
issues.push({
rule: "no-deprecated-imports",
message: DEPRECATED[source].message,
line: path.node.loc.start.line,
severity: "warning",
});
}
},
CallExpression(path) {
// Catch require('moment') too
if (
path.node.callee.name === "require" &&
path.node.arguments[0]?.type === "StringLiteral"
) {
const source = path.node.arguments[0].value;
if (DEPRECATED[source]) {
issues.push({
rule: "no-deprecated-imports",
message: DEPRECATED[source].message,
line: path.node.loc.start.line,
severity: "warning",
});
}
}
},
});
return issues;
}
Enforcing Naming Conventions
React components should be PascalCase. Event handlers should start with handle. Constants should be UPPER_SNAKE_CASE. Let's enforce it:
function namingConventions(ast) {
const issues = [];
traverse(ast, {
// React components must be PascalCase
FunctionDeclaration(path) {
const name = path.node.id?.name;
if (!name) return;
// If it returns JSX, it's a component
let returnsJSX = false;
path.traverse({
ReturnStatement(retPath) {
const arg = retPath.node.argument;
if (arg && (arg.type === "JSXElement" || arg.type === "JSXFragment")) {
returnsJSX = true;
}
},
});
if (returnsJSX && !/^[A-Z][a-zA-Z0-9]*$/.test(name)) {
issues.push({
rule: "component-naming",
message: `Component '${name}' should be PascalCase`,
line: path.node.loc.start.line,
});
}
},
// Constants (top-level const with literal values) should be UPPER_SNAKE_CASE
VariableDeclaration(path) {
if (path.node.kind !== "const") return;
if (path.parent.type !== "Program") return; // top-level only
for (const decl of path.node.declarations) {
if (!decl.id.name) continue;
const init = decl.init;
// Only flag literals and arrays/objects of literals
const isLiteral =
init &&
(init.type === "NumericLiteral" ||
init.type === "StringLiteral" ||
init.type === "BooleanLiteral");
if (isLiteral && !/^[A-Z][A-Z0-9_]*$/.test(decl.id.name)) {
issues.push({
rule: "constant-naming",
message: `Top-level constant '${decl.id.name}' should be UPPER_SNAKE_CASE`,
line: path.node.loc.start.line,
});
}
}
},
});
return issues;
}
Building The CLI Tool
Now let's assemble everything into a CLI that scans a codebase:
#!/usr/bin/env node
// ast-review.js
const fs = require("fs");
const path = require("path");
const parser = require("@babel/parser");
const traverse = require("@babel/traverse").default;
// ── File Discovery ──────────────────────────────────────────────
function getFiles(dir, extensions = [".js", ".jsx", ".ts", ".tsx"]) {
const results = [];
function walk(currentDir) {
const entries = fs.readdirSync(currentDir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = path.join(currentDir, entry.name);
// Skip common non-source directories
if (entry.isDirectory()) {
if (["node_modules", ".git", "dist", "build", "coverage"].includes(entry.name)) {
continue;
}
walk(fullPath);
} else if (extensions.includes(path.extname(entry.name))) {
results.push(fullPath);
}
}
}
walk(dir);
return results;
}
// ── Parser ──────────────────────────────────────────────────────
function parseCode(code, filePath) {
const isTS = filePath.endsWith(".ts") || filePath.endsWith(".tsx");
const isJSX = filePath.endsWith(".jsx") || filePath.endsWith(".tsx");
const plugins = ["classProperties", "optionalChaining", "nullishCoalescingOperator"];
if (isTS) plugins.push("typescript");
if (isJSX || !isTS) plugins.push("jsx"); // JSX is common even in .js files
return parser.parse(code, {
sourceType: "module",
allowImportExportEverywhere: true,
plugins,
});
}
// ── Rules ───────────────────────────────────────────────────────
const rules = {
"no-console": noConsoleLogs,
"no-unused-imports": unusedImports,
"no-magic-numbers": noMagicNumbers,
"no-hardcoded-secrets": noHardcodedSecrets,
"no-deprecated-imports": noDeprecatedImports,
"naming-conventions": namingConventions,
};
// (Include all the rule functions from above sections here)
// ── Runner ──────────────────────────────────────────────────────
function runReview(targetDir, enabledRules) {
const files = getFiles(targetDir);
const allIssues = [];
for (const filePath of files) {
let code, ast;
try {
code = fs.readFileSync(filePath, "utf-8");
ast = parseCode(code, filePath);
} catch (err) {
console.error(` Parse error in ${filePath}: ${err.message}`);
continue;
}
for (const [ruleName, ruleFn] of Object.entries(enabledRules)) {
const issues = ruleFn(ast);
for (const issue of issues) {
allIssues.push({
...issue,
file: path.relative(targetDir, filePath),
});
}
}
}
return allIssues;
}
// ── Output ──────────────────────────────────────────────────────
function formatIssues(issues) {
if (issues.length === 0) {
console.log("\n No issues found.\n");
return;
}
// Group by file
const grouped = {};
for (const issue of issues) {
if (!grouped[issue.file]) grouped[issue.file] = [];
grouped[issue.file].push(issue);
}
console.log(`\n Found ${issues.length} issue(s) in ${Object.keys(grouped).length} file(s):\n`);
for (const [file, fileIssues] of Object.entries(grouped)) {
console.log(` ${file}`);
for (const issue of fileIssues) {
const severity = issue.severity === "error" ? "ERROR" : "WARN ";
console.log(` line ${issue.line} ${severity} ${issue.message} (${issue.rule})`);
}
console.log();
}
}
// ── CLI Entry ───────────────────────────────────────────────────
const targetDir = process.argv[2] || ".";
const resolvedDir = path.resolve(targetDir);
if (!fs.existsSync(resolvedDir)) {
console.error(`Directory not found: ${resolvedDir}`);
process.exit(1);
}
console.log(`\n AST Code Review: scanning ${resolvedDir}\n`);
const issues = runReview(resolvedDir, rules);
formatIssues(issues);
process.exit(issues.some((i) => i.severity === "error") ? 1 : 0);
Run it:
node ast-review.js ./src
Output looks like:
AST Code Review: scanning /home/user/project/src
Found 7 issue(s) in 3 file(s):
utils/api.js
line 3 ERROR Possible hardcoded secret in 'apiKey' — use environment variables (no-hardcoded-secrets)
line 14 WARN console.log() should be removed (no-console)
components/Dashboard.jsx
line 1 WARN 'useState' is imported but never used (no-unused-imports)
line 42 WARN Magic number 86400 — extract to a named constant (no-magic-numbers)
index.js
line 2 WARN Use date-fns instead of moment.js (no-deprecated-imports)
The exit code is 1 when any error-severity issues are found, making it perfect for CI pipelines:
# .github/workflows/review.yml
- name: AST Code Review
run: node ast-review.js ./src
ESLint Custom Rules vs. DIY AST Tools: When To Use Which
ESLint already uses ASTs. You can write custom ESLint rules. So why build your own tool?
Use ESLint custom rules when:
- Your rule fits the standard lint pattern (visit node, report problem, optionally auto-fix).
- You want auto-fix support. ESLint's fixer API is mature and handles the fiddly details of applying text fixes safely.
- Your team already uses ESLint and you want IDE integration (red squiggly lines in VS Code).
- You need rules that interact with ESLint's scope analysis and type-aware linting via @typescript-eslint/parser.
Here's what an ESLint custom rule looks like for comparison:
// eslint-rules/no-hardcoded-secrets.js
module.exports = {
meta: {
type: "problem",
docs: { description: "Disallow hardcoded secrets" },
schema: [],
},
create(context) {
return {
VariableDeclarator(node) {
if (node.init?.type !== "Literal" || typeof node.init.value !== "string") return;
if (/api[_-]?key|secret|password|token/i.test(node.id.name) && node.init.value.length > 8) {
context.report({ node, message: "Possible hardcoded secret" });
}
},
};
},
};
The visitor pattern is identical. The difference is the wrapper.
Use a standalone AST tool when:
- You need cross-file analysis. ESLint rules run per-file. Your tool can track patterns across an entire codebase — dead exports, circular dependencies, API surface analysis.
- You're building a migration tool that needs to transform code, not just flag it. Babel's path.replaceWith() makes code transformations clean.
- You want to analyze non-standard files or combine AST analysis with other data sources (git blame, API docs, dependency graphs).
- You need a one-off analysis. Writing an ESLint plugin with proper meta, docs, tests, and config is overhead when you just need a quick scan.
- You want to build a tool you can publish independently (like a CLI or a CI bot).
In practice, many teams use both: ESLint for day-to-day linting in the editor, and custom AST scripts for deeper analysis in CI.
Performance Tips For Large Codebases
Parse Only What You Need
If you're only checking imports, skip the entire function body traversal:
traverse(ast, {
ImportDeclaration(path) {
// check imports
},
// By not defining other visitors, traverse skips deeper work
// for other node types, but still walks the tree.
});
For genuinely large files, keep the parser output lean. ranges and tokens already default to false — make sure nothing turns them on — and attachComment: false skips comment attachment entirely, which saves real memory on heavily commented code:
const ast = parser.parse(code, {
  sourceType: "module",
  plugins: ["typescript", "jsx"],
  ranges: false,
  tokens: false,
  attachComment: false,
});
Parallelize File Processing
Node.js is single-threaded, but worker_threads can parallelize parsing across CPU cores:
const { Worker, isMainThread, parentPort, workerData } = require("worker_threads");
const os = require("os");
if (isMainThread) {
const files = getFiles("./src");
const cpuCount = os.cpus().length;
const chunkSize = Math.ceil(files.length / cpuCount);
const workers = [];
for (let i = 0; i < cpuCount; i++) {
const chunk = files.slice(i * chunkSize, (i + 1) * chunkSize);
if (chunk.length === 0) continue;
const worker = new Worker(__filename, { workerData: { files: chunk } });
workers.push(
new Promise((resolve, reject) => {
worker.on("message", resolve);
worker.on("error", reject);
})
);
}
Promise.all(workers).then((results) => {
const allIssues = results.flat();
formatIssues(allIssues);
});
} else {
const { files } = workerData;
const issues = [];
for (const file of files) {
  // Parse and analyze each file, tagging every issue with its file path
  // so formatIssues() can group the results
  issues.push(...analyzeFile(file).map((i) => ({ ...i, file })));
}
parentPort.postMessage(issues);
}
On a 10,000-file codebase, this can cut analysis time from 30 seconds to under 5.
Cache Parse Results
Parsing is the expensive step. If files haven't changed, reuse the cached AST:
const crypto = require("crypto");
const os = require("os");
const cacheDir = path.join(os.tmpdir(), "ast-review-cache");
fs.mkdirSync(cacheDir, { recursive: true });
function getCachedAST(filePath, code) {
const hash = crypto.createHash("md5").update(code).digest("hex");
const cachePath = path.join(cacheDir, `${hash}.json`);
if (fs.existsSync(cachePath)) {
return JSON.parse(fs.readFileSync(cachePath, "utf-8"));
}
const ast = parseCode(code, filePath);
fs.writeFileSync(cachePath, JSON.stringify(ast));
return ast;
}
This gives you instant re-runs when only a few files have changed — common during development.
Early Termination
If you're running in CI and only care about errors (not warnings), bail early:
function runReviewFastFail(targetDir, enabledRules) {
const files = getFiles(targetDir);
for (const filePath of files) {
const code = fs.readFileSync(filePath, "utf-8");
const ast = parseCode(code, filePath);
for (const [ruleName, ruleFn] of Object.entries(enabledRules)) {
const issues = ruleFn(ast);
const errors = issues.filter((i) => i.severity === "error");
if (errors.length > 0) {
console.error(`ERROR in ${filePath}:`);
errors.forEach((e) => console.error(` line ${e.line}: ${e.message}`));
process.exit(1); // Fail immediately
}
}
}
}
Putting It All Together
Here is a complete, self-contained version of the review tool that you can copy into a project and run immediately:
#!/usr/bin/env node
// ast-review.js — Drop-in AST code review tool
const fs = require("fs");
const path = require("path");
const parser = require("@babel/parser");
const traverse = require("@babel/traverse").default;
// ── Configuration ───────────────────────────────────────────────
const CONFIG = {
extensions: [".js", ".jsx", ".ts", ".tsx"],
ignoreDirs: ["node_modules", ".git", "dist", "build", "coverage", "__mocks__"],
bannedConsole: ["log", "debug", "info"],
allowedMagicNumbers: [0, 1, -1, 2, 10, 100],
secretPatterns: [/api[_-]?key/i, /secret/i, /password/i, /token/i, /private[_-]?key/i],
deprecated: {
moment: "Use date-fns instead",
request: "Use undici or native fetch",
"lodash.get": "Use optional chaining (?.)",
},
};
// ── File Discovery ──────────────────────────────────────────────
function getFiles(dir) {
const results = [];
function walk(d) {
for (const entry of fs.readdirSync(d, { withFileTypes: true })) {
const full = path.join(d, entry.name);
if (entry.isDirectory() && !CONFIG.ignoreDirs.includes(entry.name)) walk(full);
else if (CONFIG.extensions.includes(path.extname(entry.name))) results.push(full);
}
}
walk(dir);
return results;
}
// ── Analyze ─────────────────────────────────────────────────────
function analyze(filePath) {
const code = fs.readFileSync(filePath, "utf-8");
const plugins = ["classProperties", "optionalChaining", "nullishCoalescingOperator"];
if (/\.tsx?$/.test(filePath)) plugins.push("typescript");
if (/\.jsx$|\.js$|\.tsx$/.test(filePath)) plugins.push("jsx");
const ast = parser.parse(code, { sourceType: "module", plugins, allowImportExportEverywhere: true });
const issues = [];
const imports = new Map();
const usedIds = new Set();
traverse(ast, {
// Rule: no console
CallExpression(p) {
const c = p.node.callee;
if (c.type === "MemberExpression" && c.object.name === "console" && CONFIG.bannedConsole.includes(c.property.name)) {
issues.push({ rule: "no-console", message: `console.${c.property.name}()`, line: p.node.loc.start.line });
}
// Rule: deprecated require()
if (c.name === "require" && p.node.arguments[0]?.type === "StringLiteral") {
const src = p.node.arguments[0].value;
if (CONFIG.deprecated[src]) {
issues.push({ rule: "deprecated", message: CONFIG.deprecated[src], line: p.node.loc.start.line, severity: "warning" });
}
}
},
// Rule: secrets
VariableDeclarator(p) {
if (p.node.init?.type === "StringLiteral") {
const name = p.node.id.name || "";
if (CONFIG.secretPatterns.some((r) => r.test(name)) && p.node.init.value.length > 8) {
issues.push({ rule: "secret", message: `Hardcoded secret in '${name}'`, line: p.node.loc.start.line, severity: "error" });
}
}
},
// Rule: magic numbers
NumericLiteral(p) {
if (CONFIG.allowedMagicNumbers.includes(p.node.value)) return;
if (p.parent.type === "VariableDeclarator" || p.parent.type === "AssignmentPattern") return;
issues.push({ rule: "magic-number", message: `Magic number ${p.node.value}`, line: p.node.loc.start.line });
},
// Collect imports
ImportDeclaration(p) {
const src = p.node.source.value;
if (CONFIG.deprecated[src]) {
issues.push({ rule: "deprecated", message: CONFIG.deprecated[src], line: p.node.loc.start.line, severity: "warning" });
}
for (const s of p.node.specifiers) imports.set(s.local.name, p.node.loc.start.line);
},
Identifier(p) {
if (!["ImportSpecifier", "ImportDefaultSpecifier", "ImportNamespaceSpecifier"].includes(p.parent.type)) {
usedIds.add(p.node.name);
}
},
});
// Rule: unused imports
for (const [name, line] of imports) {
if (!usedIds.has(name)) {
issues.push({ rule: "unused-import", message: `'${name}' imported but unused`, line });
}
}
return issues;
}
// ── Main ────────────────────────────────────────────────────────
const dir = path.resolve(process.argv[2] || ".");
const files = getFiles(dir);
let total = 0;
for (const f of files) {
try {
const issues = analyze(f);
if (issues.length > 0) {
console.log(`\n ${path.relative(dir, f)}`);
for (const i of issues) {
const sev = i.severity === "error" ? "ERROR" : "WARN ";
console.log(` line ${i.line} ${sev} ${i.message} (${i.rule})`);
}
total += issues.length;
}
} catch (e) {
console.error(` Parse error: ${path.relative(dir, f)}: ${e.message}`);
}
}
console.log(`\n ${total} issue(s) in ${files.length} file(s) scanned.\n`);
process.exit(total > 0 ? 1 : 0);
Wrapping Up
ASTs turn code review from a manual, error-prone process into something deterministic and fast. The rules in this article cover the most common review feedback — leftover debug statements, unused code, magic numbers, hardcoded secrets, deprecated dependencies, and inconsistent naming. In most teams, that accounts for a large share of routine review comments.
The beautiful thing about this approach is its extensibility. Need to enforce that all API routes have authentication middleware? Write a visitor that checks CallExpression nodes for your router methods and verifies the argument list includes your auth function. Need to ensure every React component has a displayName? Visit ArrowFunctionExpression nodes that return JSX and check for a subsequent displayName assignment.
Start with the three or four rules that address your team's most common review feedback. Run the tool in CI. Watch the number of "please remove console.log" comments drop to zero. Then add more rules as patterns emerge.
The code review meeting just got shorter.