Introduction
As AI‑generated code becomes more common in everyday development, teams need a
reliable way to spot the subtle issues that arise when code is accepted
without proper human review. The OpenClaw skill called vibe‑check fills that
gap by acting as an automated auditor that looks for patterns often labelled
"vibe coding sins". This article walks through what the skill does, how it
works, and how you can integrate it into your workflow to keep your codebase
clean and maintainable.
What Is Vibe‑Check?
Vibe‑check is a skill hosted in the OpenClaw skills repository. Its purpose is
to analyse source files or directories and produce a scored report card that
highlights problematic patterns. The skill does not rely on a single
language‑specific linter; instead it combines heuristic checks with optional
LLM‑powered analysis to detect issues that are typical of code that has been
copied from AI suggestions without sufficient scrutiny.
The skill is invoked through a simple command line interface wrapped in a bash
script. When activated, it asks the user what to analyse, runs a series of
checks, and then presents a markdown report that is designed to be easy to
read and screenshot‑worthy.
How the Skill Works
Step 1 – Determine the Target
The first interaction asks the user to specify the code to be examined.
Accepted inputs include a single file path, a directory path, or a git diff
specification. Examples of valid targets are:
- app.py
- src/
- .
- my-project/
- HEAD~3 (last three commits)
This flexibility lets you run a quick check on a single file, a deep dive on
an entire repository, or a focused review of recent changes.
Step 2 – Run the Analysis
Once the target is set, the skill executes its main script
$SKILL_DIR/scripts/vibe-check.sh. The script accepts several flags that
modify its behaviour:
- No flags – basic analysis without fix suggestions.
- --fix – includes unified diff patches that show how each finding could be corrected.
- --diff HEAD~N – analyses only the changes introduced in the last N commits.
- --staged – limits the scan to files that are currently staged for commit.
- --output report.md – writes the markdown report to a file instead of printing to stdout.
The analysis pipeline consists of several helper scripts:
- scripts/analyze.sh – performs the core linting, falling back to heuristic rules when no LLM API key is available.
- scripts/git-diff.sh – extracts the relevant files from a git diff when the --diff flag is used.
- scripts/report.sh – formats the results into a markdown report card.
- scripts/common.sh – holds shared constants and utility functions used by the other scripts.
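The article does not show how the heuristic fallback in analyze.sh is implemented, but a regex-based check of the kind listed in the sin table can be sketched in a few lines of Python. This is purely illustrative: the function name, patterns, and skip rules are assumptions, not the skill's actual rules.

```python
import re

# Hypothetical "magic values" heuristic: flag bare multi-digit numbers and
# hard-coded URLs outside of constant definitions. Illustrative only; the
# skill's real fallback rules are not documented here.
MAGIC_NUMBER = re.compile(r"(?<![\w.])\d{2,}(?![\w.])")
HARDCODED_URL = re.compile(r"https?://[^\s'\"]+")

def find_magic_values(source: str) -> list[tuple[int, str]]:
    """Return (line number, label) pairs for suspected magic values."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        # Skip comments and lines that look like UPPER_CASE constant definitions.
        if stripped.startswith("#") or re.match(r"^[A-Z_]+\s*=", stripped):
            continue
        for pattern, label in ((MAGIC_NUMBER, "magic number"),
                               (HARDCODED_URL, "hard-coded URL")):
            if pattern.search(line):
                findings.append((lineno, label))
    return findings
```

A real implementation would need per-language rules, but even a check this small catches the hard-coded numbers and URLs described in the Magic Values category.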
Step 3 – Present the Report
The output of vibe-check.sh is a markdown document that contains:
- An overall vibe score expressed as a percentage and a letter grade.
- A breakdown of scores per sin category.
- A list of the top findings across the codebase.
- Per‑file details when needed.
- If --fix was used, a set of unified diff patches that illustrate suggested fixes.
The report is deliberately designed to be visual‑friendly, making it easy to
share in chat tools or embed in documentation.
Sin Categories and Their Weights
Vibe‑check evaluates code across eight distinct categories. Each category
contributes a specific weight to the final score, reflecting its relative
importance in maintaining code quality.
| Category | Weight | What It Catches |
|---|---|---|
| Error Handling | 20% | Missing try/catch blocks, bare exceptions, lack of edge‑case handling. |
| Input Validation | 15% | Absence of type checks, bounds checking, or any validation of external data. |
| Duplication | 15% | Copy‑pasted logic, violations of the DRY principle. |
| Dead Code | 10% | Unused imports, commented‑out blocks, unreachable statements. |
| Magic Values | 10% | Hard‑coded strings, numbers, or URLs that should be defined as constants. |
| Test Coverage | 10% | Missing test files, lack of test patterns, or no assertions. |
| Naming Quality | 10% | Vague names like data, result, temp, or single‑letter variables; misleading identifiers. |
| Security | 10% | Use of dangerous functions such as eval or exec, hard‑coded secrets, potential SQL injection. |
The weighted sum produces a score from 0 to 100, which is then mapped to a
letter grade.
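The weighted sum described above follows directly from the table. As a sketch (the category keys and the shape of the per-category scores are assumptions; only the weights come from the table):

```python
# Weights taken from the sin-category table above, as fractions of the
# final 0-100 score. The dictionary keys are illustrative names.
WEIGHTS = {
    "error_handling": 0.20,
    "input_validation": 0.15,
    "duplication": 0.15,
    "dead_code": 0.10,
    "magic_values": 0.10,
    "test_coverage": 0.10,
    "naming_quality": 0.10,
    "security": 0.10,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Combine per-category scores (each 0-100) into a weighted 0-100 total."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(WEIGHTS[cat] * category_scores[cat] for cat in WEIGHTS)
```

For example, a file scoring 100 everywhere except a 50 in Error Handling would land at 90, since that category alone carries a fifth of the total.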
Scoring Guide
The skill uses the following grading scale:
- A (90‑100) – Pristine code with only minor issues.
- B (80‑89) – Clean code that may have a few negligible problems.
- C (70‑79) – Decent code but lazy patterns have started to appear.
- D (60‑69) – Code that needs human attention; several vibe‑coding sins are present.
- F (<60) – Heavy vibe coding detected; substantial refactoring is recommended.
Seeing a grade of D or F should prompt a closer look at the flagged
categories, while an A or B indicates that the codebase is generally healthy.
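The scale above is a plain threshold check; mapping a numeric score to a letter grade can be sketched as:

```python
def letter_grade(score: float) -> str:
    """Map a 0-100 vibe score to the letter grades in the scoring guide."""
    if score >= 90:
        return "A"  # pristine code, minor issues only
    if score >= 80:
        return "B"  # clean, a few negligible problems
    if score >= 70:
        return "C"  # decent, but lazy patterns appearing
    if score >= 60:
        return "D"  # needs human attention
    return "F"      # heavy vibe coding detected
```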
Supported Languages and Limitations
As of version 0.1.1, vibe‑check provides reliable analysis for Python,
TypeScript, and JavaScript files. The heuristic fallback ensures that the tool
still works even when no LLM API key is configured, although the depth of
insight may be reduced in that mode.
The skill does not currently support languages such as Java, Go, or Rust. If
your project contains a mix of supported and unsupported files, the analysis
will skip the unsupported ones and report only on the compatible code.
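The skipping behaviour described above amounts to filtering by file extension. A minimal sketch, assuming a plausible extension list for the three supported languages (the skill's actual list may differ):

```python
from pathlib import Path

# Extensions for the languages supported as of v0.1.1 (Python, TypeScript,
# JavaScript). The exact set used by the skill is an assumption.
SUPPORTED_EXTENSIONS = {".py", ".ts", ".tsx", ".js", ".jsx"}

def supported_files(paths: list[str]) -> list[str]:
    """Keep only files the analyser can handle; unsupported ones are skipped."""
    return [p for p in paths if Path(p).suffix in SUPPORTED_EXTENSIONS]
```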
Practical Examples
Auditing a Directory
Suppose you want to check the entire src/ folder of a Node.js project. You
would run:
$SKILL_DIR/scripts/vibe-check.sh src/
The output will be a markdown report showing an overall score, category
breakdown, and a list of the top findings across all files in src/.
Checking with Fix Suggestions
If you also want concrete patches to apply, add the --fix flag:
$SKILL_DIR/scripts/vibe-check.sh --fix src/
The report will now include, after the scorecard, a series of unified diff
blocks that you can copy‑paste or apply with git apply to remediate each
issue.
Reviewing Recent Changes
To focus only on the code that changed in the last three commits, use the diff
mode:
$SKILL_DIR/scripts/vibe-check.sh --diff HEAD~3
This is especially useful during pull request reviews, as it limits the
analysis to the delta rather than the whole repository.
Saving the Report
For archival or sharing purposes, you can write the markdown output to a file:
$SKILL_DIR/scripts/vibe-check.sh --fix --output vibe-report.md src/
The file vibe-report.md can then be attached to a ticket, posted in a wiki,
or included in a continuous integration comment.
Discord v2 Delivery Mode
When the skill is invoked from within a Discord channel that supports OpenClaw
v2026.2.14 or newer, the agent adapts its response to fit the chat
environment:
- It first sends a compact summary containing the overall grade, numeric score, number of files analysed, and the top three findings.
- The summary is kept under approximately 1200 characters and avoids wide markdown tables to keep the message readable.
- If Discord components (buttons, select menus) are available, the agent adds quick‑action buttons such as "Show Top Findings", "Show Fix Suggestions", and "Run Diff Mode".
- When components are not available, the same options are presented as a numbered list for the user to choose from.
- Should the user request the full report, the agent sends it in short chunks of no more than 15 lines each to avoid overwhelming the chat.
This approach ensures that the skill remains useful in both low‑bandwidth text
environments and richer GUI‑based chats.
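The chunked delivery of the full report can be sketched as splitting the markdown into fixed-size line groups. The 15-line limit comes from the behaviour described above; the function itself is illustrative, not the skill's actual code:

```python
def chunk_report(report: str, max_lines: int = 15) -> list[str]:
    """Split a markdown report into chunks of at most max_lines lines each,
    so a long report can be sent as a series of short chat messages."""
    lines = report.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]
```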
Best Practices for Using Vibe‑Check
To get the most out of the vibe‑check skill, consider the following
recommendations:
- Run the skill regularly as part of your pre‑commit hook or CI pipeline to catch vibe‑coding issues early.
- When you receive a low grade, start by addressing the highest‑weight categories (Error Handling and Input Validation) because they have the biggest impact on the overall score.
- Use the --fix mode as a learning tool; examine the suggested patches to understand why a particular pattern was flagged.
- For large monorepos, limit the scope to specific directories or use the --diff flag to focus on recent work.
- If you have access to an LLM API key, configure it to enable the more accurate analysis mode; otherwise, rely on the heuristic fallback, which still catches many common problems.
- Share the markdown report with your team during code reviews to foster a shared understanding of what constitutes good vibe‑free code.
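For the CI-pipeline suggestion above, a simple quality gate can parse the score out of a saved report and fail the build below a threshold. The report line format assumed here (something like "Grade: C (74%)") is hypothetical, so adapt the regex to the actual report layout:

```python
import re

# Matches a hypothetical report line such as "Grade: B (84%)". The real
# report layout may differ; adjust the pattern accordingly.
GRADE_LINE = re.compile(r"Grade:\s*([A-F])\s*\((\d+)%\)")

def gate(report_text: str, minimum: int = 70) -> bool:
    """Return True if the report's vibe score meets the minimum threshold."""
    match = GRADE_LINE.search(report_text)
    if match is None:
        raise ValueError("no grade line found in report")
    return int(match.group(2)) >= minimum
```

Wired into a pipeline step after `--output vibe-report.md`, a False result would exit non-zero and block the merge until the flagged categories are addressed.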
Conclusion
The OpenClaw vibe‑check skill offers a pragmatic, automated way to detect the
subtle quality issues that often slip into codebases when AI‑generated
snippets are accepted without sufficient review. By breaking down problems
into clearly weighted categories, providing a readable scored report, and
offering optional fix suggestions, the skill gives developers actionable
insights they can apply immediately.
Whether you are auditing a single script, scanning an entire repository, or
reviewing recent commits, vibe‑check adapts to your workflow and delivers a
report that is both informative and easy to share. Integrating this skill into
your development process helps keep your code clean, maintainable, and free of
the telltale signs of vibe coding, ultimately leading to fewer bugs and a more
confident team.
The skill can be found at: check/SKILL.md>