With the rise of AI-generated code, reviewing pull requests has become more challenging than before.
On several projects, I noticed the same pattern: pull requests were getting bigger and more frequent, making thorough review increasingly difficult. The challenge was not complexity but volume.
With AI accelerating code production, the gap became obvious. We can generate code fast, but reviewing with the same level of rigor is harder.
Instead of trying to review faster, I chose to review differently. I started extracting my own review patterns and turned them into an AI Skill, now available as Open Source.
Context: code review is the new bottleneck
Code review used to scale with the team. More developers meant more reviewers, and the balance stayed relatively stable.
This is no longer the case.
With AI-assisted development, code volume has grown dramatically. Pull requests are more frequent and often larger, while reviews happen under constant time pressure. Feedback tends to become superficial, architectural issues can slip through, and coding standards slowly drift.
The problem is no longer speed; it is keeping a consistent level of quality across the codebase.
From intuition to system
Experienced developers rely on a set of implicit rules when reviewing code. Over time, we build a mental model of what good code looks like. We expect naming to reflect behavior, side effects to be explicit, and UI layers to remain isolated from business logic.
The problem is that these rules are rarely formalized. They live in experience, which makes them hard to scale across a team.
The AI Skill was designed to take these patterns and turn them into something structured and reusable. It does not replace the reviewer. Instead, it supports them by surfacing relevant issues earlier, reducing cognitive load, and making expectations explicit.
Setup: getting the skill ready
The skill is built on the Model Context Protocol (MCP), allowing it to integrate into any compatible environment like Cursor.
For the skill to function, your editor must have an active MCP connection to GitHub or GitLab. I highly recommend using the native MCP servers provided by these platforms to ensure the best stability, security, and performance when fetching pull request data.
Once your environment is connected to your repository, the specific installation of the frontend code review skill is detailed in the repository's documentation:
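As an illustration, an MCP connection to GitHub in Cursor is typically declared in a .cursor/mcp.json file. The sketch below is an assumption, not the skill's own setup: the exact file location, server image, and token variable depend on your editor version and the MCP server you choose, so refer to the platform's documentation.

```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```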
Implementation: how the skill actually works
Once configured, the skill fits directly into your existing workflow. It starts with a simple command in your editor:
/frontend-code-review Please review this pull request <link>
The skill retrieves the pull request and begins a discovery phase. It detects the stack, identifies the tools in use, and tries to understand the nature of the changes. This step is critical because it allows the skill to stay contextual and avoid irrelevant checks.
Only relevant references are loaded based on the changed file types. A CSS change triggers CSS-specific validations, while a TypeScript component is analyzed with frontend architecture patterns in mind.
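To make this concrete, the discovery step can be sketched as a mapping from changed file extensions to reference modules. The module names and mapping below are illustrative assumptions, not the skill's actual internals:

```typescript
// Hypothetical sketch of the discovery step: map changed file types
// to the reference modules that should be loaded for the review.
// Module names are illustrative, not the skill's real file names.

type ReferenceModule = string;

const REFERENCES_BY_EXTENSION: Record<string, ReferenceModule[]> = {
  ".css": ["css-styling.md"],
  ".scss": ["css-styling.md"],
  ".ts": ["modern-js-ts.md", "architecture-logic.md"],
  ".tsx": ["modern-js-ts.md", "architecture-logic.md", "accessibility.md"],
};

function selectReferences(changedFiles: string[]): ReferenceModule[] {
  const selected = new Set<ReferenceModule>();
  for (const file of changedFiles) {
    const ext = file.slice(file.lastIndexOf("."));
    for (const ref of REFERENCES_BY_EXTENSION[ext] ?? []) {
      selected.add(ref);
    }
  }
  return [...selected];
}

// A pure CSS change only triggers CSS-specific references:
console.log(selectReferences(["src/styles/theme.css"]));
```

The point of this shape is that a CSS-only diff never pulls in TypeScript architecture rules, which keeps the analysis contextual.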
Internally, the analysis relies on structured review patterns. Nothing is posted automatically. The developer reviews the report, filters the findings, and decides what should actually be shared on the pull request.
The Knowledge Base: modular reference guides
The power of the skill lies in its reference modules. Instead of a "black box" logic, the AI uses specific Markdown files as a source of truth for each domain.
You can explore the full set of rules in the repository, which covers:
- Security & Reliability - Detects XSS vulnerabilities, sanitization issues, and PII protection in logs or storage.
- Accessibility (WCAG) - Covers focus management, ARIA roles, and keyboard accessibility for custom interactive elements.
- Performance & DOM - Targets layout thrashing, memory leaks (missing listener cleanup), and script loading strategies.
- Architecture & Logic - Validates separation of concerns, naming semantics, and boundary conditions.
- Modern JS/TS - Enforces type safety, explicit return types, and modern syntax.
- Project Conventions - Adapts to the project's specific coding style, linter rules, and module format detected during discovery.
This modularity allows the skill to be highly precise. If you only want to focus on a specific area, you can simply instruct it: "Review only the accessibility and security aspects of this PR".
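As an illustration only, a reference module could look like a short Markdown rule file. The rule name, severity, and wording below are hypothetical; the actual format used in the repository may differ:

```markdown
## Rule: cleanup-event-listeners
Severity: Important (Performance & DOM)

Every listener added by a component must be removed when the
component unmounts; otherwise the handler leaks memory.

Bad:  window.addEventListener("resize", onResize) with no cleanup
Good: a cleanup path that calls window.removeEventListener("resize", onResize)
```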
Structuring feedback: reducing noise
Noise is not unique to AI-assisted reviews. Even in human reviews, too many comments at once can bury important feedback or make it harder to prioritize.
The Skill addresses this by introducing a clear classification of findings. Each comment is labeled with a level that helps prioritize the review:
- Blocking - Security issues, critical bugs, or broken logic
- Important - Architecture, performance, and accessibility
- Suggestion - Readability improvements
- Minor - Grouped hygiene items
Another important feature is the 'Attention Required' flag. Some situations cannot be reliably evaluated by AI, especially when visual impact or complex business intent is involved. In these cases, the skill explicitly requests human validation.
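Conceptually, the classification and the 'Attention Required' flag can be modeled as a small severity type plus a prioritization step. The names below are hypothetical, not the skill's internal representation:

```typescript
// Hypothetical model of how findings could be classified and ordered.
// Type and field names are illustrative assumptions.

type Severity = "Blocking" | "Important" | "Suggestion" | "Minor";

interface Finding {
  severity: Severity;
  message: string;
  attentionRequired?: boolean; // flags cases needing human validation
}

const SEVERITY_ORDER: Severity[] = ["Blocking", "Important", "Suggestion", "Minor"];

// Sort findings so the most critical levels surface first,
// making it easy to filter out low-priority noise before posting.
function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) => SEVERITY_ORDER.indexOf(a.severity) - SEVERITY_ORDER.indexOf(b.severity)
  );
}
```

Ordering by severity is what lets a reviewer share only the Blocking and Important findings on a large diff while grouping the Minor hygiene items separately.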
Practical takeaways
AI does not remove the need for human expertise; it shifts where effort is applied.
Using AI as a detection layer rather than a decision-maker works well. It surfaces patterns quickly, while we validate the final output. This keeps the review process reliable and reduces the mental overhead of scanning large diffs.
Generating too many comments dilutes the value of the review. Filtering and prioritizing matter more than coverage. On larger frontend pull requests, we have seen reviews become up to 8x faster by focusing only on the high-priority findings.
Open Source and next steps
This project started as an internal experiment to automate my own review patterns.
I decided to open source it because opening it to the community accelerates its evolution. Rules can be discussed, challenged, and improved collectively. The goal is not to create a perfect reviewer, but to provide a shared baseline of structured review patterns.
All the code and the skill configuration are available on my GitHub via the link below.
Conclusion
Code review must evolve to keep up with accelerated development. Relying solely on manual review is no longer sustainable, especially as AI generates code at an unprecedented rate.
By using a structured AI Skill, we can automate the detection of high-level patterns, including security and accessibility issues, before a human even looks at the diff. This restores a necessary balance where AI handles the repetitive and tedious scanning, while human expertise stays focused on the architectural decisions and business logic that require real judgment.
Ultimately, the goal is not to delegate our responsibility, but to exercise it where it provides the most value.
Discover the frontend code review skill on GitHub