Maxi Contieri
AI Coding Tip 006 - Review Every Line Before Commit

You own the code, not the AI

TL;DR: If you can't explain all your code, don't commit it.

Common Mistake ❌

You prompt and paste AI-generated code directly into your project without thinking twice.

You trust the AI without verification and create workslop that ~someone else~ you will have to clean up later.

You assume the code works because it looks correct (or complicated enough to impress anyone).

You skip a manual review when the AI assistant generates large blocks because, well, it's a lot of code.

You treat AI output as production-ready code and ship it without a second thought.

If you review code, you get tired of large pull requests (probably generated by AI) that feel like reviewing a novel.

Let's be honest: AI isn't accountable for your mistakes; you are. And you want to keep your job and remain indispensable to the software engineering process.

Problems Addressed 😔

  • Security vulnerabilities and flaws: AI generates code with unsanitized inputs (SQL injection, XSS, email injection), hallucinated packages, or hardcoded credentials
  • Logic errors: The AI misunderstands your requirements and solves the wrong problem
  • Technical debt: Generated code uses outdated patterns or creates maintenance nightmares
  • Lost accountability: You cannot explain code you didn't review
  • Hidden defects: Issues that appear in production cost 30-100x more to fix
  • Knowledge gaps: You miss learning opportunities when you blindly accept solutions
  • Team friction: Your reviewers waste time catching issues you should have found
  • Productivity paradox: AI shifts the bottleneck from writing code to integrating it
  • Lack of trust: The team's trust erodes when unowned code causes failures
  • Noisier code: AI-authored PRs contained 1.7x more issues than human-only PRs

How to Do It 🛠️

  1. Ask the AI to generate the code you need using a clear natural-language prompt
  2. Read every single line the AI produced, understand it, and challenge it if necessary
  3. Check that the solution matches your actual requirements
  4. Verify the code handles edge cases and errors
  5. Look for security issues (injection, auth, data exposure)
  6. Test the code locally with real scenarios
  7. Run your linters, prettifiers and security scanners
  8. Remove any debug code or comments you don't need
  9. Refactor the code to match your team's style
  10. Add or update tests for the new functionality (ask the AI for help)
  11. Write a clear commit message explaining what changed
  12. Only then commit the code
  13. You are not going to lose your job (for now)
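Steps 7 and 8 can be partly automated. Here is a minimal sketch of a check for debug leftovers that could run from a git pre-commit hook; the marker list and the git invocation are illustrative assumptions, not the behavior of any specific tool:

```python
import subprocess

# Hypothetical markers of debug code that should never reach a commit.
DEBUG_MARKERS = ("breakpoint()", "console.log(", "TODO: remove")

def find_debug_leftovers(source: str) -> list[int]:
    """Return 1-based line numbers that still contain a debug marker."""
    return [n for n, line in enumerate(source.splitlines(), start=1)
            if any(marker in line for marker in DEBUG_MARKERS)]

def staged_files() -> list[str]:
    """Ask git for the paths staged for the next commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

# Wired into .git/hooks/pre-commit, a loop over staged_files() calling
# find_debug_leftovers() can fail the commit before review even starts.
```

A hook like this complements, but never replaces, reading every line yourself.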

Benefits 🎯

You catch defects before they reach production.

You understand the code you commit.

You maintain accountability for your changes.

You learn from your copilot's approach and become a better developer in the process.

You build personal accountability.

You build better human team collaboration and trust.

You prevent security breaches like the Moltbook incident.

You avoid long-term maintenance costs.

You keep your reputation and accountability intact.

You're a professional who shows respect for your human code reviewers.

You are not disposable.

Context 🧠

AI assistants like GitHub Copilot, ChatGPT, and Claude help you code faster.

These tools generate code from natural language prompts, enabling vibe coding.

AI models are probabilistic, not logical.

They predict the next token based on patterns.

When you work on complex systems, the AI might miss a specific edge case that only a human knows.

Manual review is the only way to close the gap between "code that looks good" and "code that is correct."
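A hypothetical illustration of that gap: this discount helper reads fine and works on the happy path, but a line-by-line review surfaces the edge cases it silently mishandles.

```python
def apply_discount(price: float, percent: float) -> float:
    """Plausible AI output: correct only on the happy path."""
    return price - price * percent / 100

# Review reveals the hidden defects: a percent above 100 produces a
# negative price, and a negative percent silently raises the price.
def apply_discount_reviewed(price: float, percent: float) -> float:
    """Same logic with the edge cases made explicit."""
    if price < 0:
        raise ValueError(f"negative price: {price}")
    if not 0 <= percent <= 100:
        raise ValueError(f"percent out of range: {percent}")
    return price * (1 - percent / 100)
```

Both versions "look good"; only one is correct, and nothing but a human reading the code tells them apart.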

The AI doesn't understand your business logic or the real world bijection between your MAPPER and your model.

The AI cannot know your security requirements (unless you are explicit or execute a skill).

The AI cannot test the code against your specific environment.

You remain responsible for every line in your codebase.

Production defects from unreviewed AI code cost companies millions.

Code review catches many security risks that automated tools miss.

Your organization holds you accountable for the code you commit.

This applies whether you write code manually or use AI assistance.

Prompt Reference 📝

Bad Prompts

class DatabaseManager:
    _instance = None # Singleton Anti Pattern
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance
    def get_data(self, id):
        return eval(f"SELECT * FROM users WHERE id={id}") 
        # SQL injection!

    ## 741 more cryptic lines

Good Prompts

from typing import Optional
import sqlite3

class DatabaseManager:
  def __init__(self, db_path: str):
    self.db_path = db_path

  def get_user(self, user_id: int) -> Optional[dict]:
    try:
      with sqlite3.connect(self.db_path) as conn:
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
        row = cursor.fetchone()
        return dict(row) if row else None
    except sqlite3.Error as e:
        print(f"Database error: {e}")
        return None

db = DatabaseManager("app.db")
user = db.get_user(123)

Considerations ⚠️

You cannot blame the AI when defects appear in production.

The human is accountable, not the AI.

AI-generated code might violate your company's licensing policies.

The AI might use deprecated libraries or outdated patterns.

Generated code might not follow your team's conventions.

You need to understand the code to maintain it later.

Other developers will review your AI-assisted code just like any other.

Some AI models train on public repositories and might leak patterns.

Type 📝

[X] Semi-Automatic

Limitations ⚠️

You should use this tip for every code change. You should not skip it even for "simple" refactors.

Tags 🏷️

  • Readability

Level 🔋

[X] Beginner

Related Tips 🔗

  • Self-Review Your Code Before Requesting a Peer Review
  • Write Tests for AI-Generated Functions
  • Document AI-Assisted Code Decisions
  • Use Static Analysis on Generated Code
  • Understand Before You Commit

Conclusion 🏁

AI assistants accelerate your coding speed.

You still own every line you commit.

Manual review and code inspections catch what automated tools miss.

Before AI code generators became mainstream, a very good practice was to self-review your code before requesting a peer review.

You learn more when you question the AI's choices and understand the 'why' behind them.

Your reputation depends on code quality, not how fast you can churn out code.

Take responsibility for the code you ship—your name is on it.

Review everything. Commit nothing blindly. Your future self will thank you. 🔍

Be incremental, make very small commits, and keep your content fresh.

More Information ℹ️

Martin Fowler's code review

Shortcut on performing reviews

Code Rabbit's findings on AI-generated code

The Productivity Paradox

Google Engineering Practices - Code Review

Code Review Best Practices by Atlassian

The Pragmatic Programmer - Code Ownership

IEEE Standards for Software Reviews

Also Known As 🎭

  • Human-in-the-Loop Code Review
  • AI Code Verification
  • AI-Assisted Development Accountability
  • LLM Output Validation
  • Copilot Code Inspection

Tools 🧰

  • SonarQube (static analysis)
  • Snyk (security scanning)
  • ESLint / Pylint (linters)
  • GitLab / GitHub (code review platforms)
  • Semgrep (pattern-based scanning)
  • CodeRabbit / AI-assisted code reviews

Disclaimer 📢

The views expressed here are my own.

I am a human who writes as best as possible for other humans.

I use AI proofreading tools to improve some texts.

I welcome constructive criticism and dialogue.

I shape these insights through 30 years in the software industry, 25 years of teaching, and writing over 500 articles and a book.


This article is part of the AI Coding Tip series.
