Althaf

Posted on • Originally published at hackernoon.com

AI Tools Read Your Code But Not Your Mind. I Built a Fix.

I Built a Tool That Writes Documentation for AI Coding Assistants - At Commit Time

The Problem Every Developer Using AI Tools Ignores

Here's a scenario you've lived through. You open Cursor or Claude Code, point it at a component, and ask it to add a feature. It generates code that compiles, passes lint, and looks reasonable. Then you realize it completely ignored the business rule that says users under 18 can't purchase certain items - a rule that exists nowhere in the code itself, only in your head and a Slack thread from six months ago.

This is what I call the context gap. AI tools can read your code. They cannot read your mind.

They see syntax, types, imports, and exports. They don't see why you chose that specific validation approach, what edge case you discovered in production last quarter, or why that seemingly redundant null check exists. That knowledge lives in developer brains, scattered docs, and tribal memory.

I built a tool to fix this.

What contextify-ai Does

contextify-ai is an npm package that auto-generates .context.md files for your components every time you commit. It hooks into your git pre-commit workflow, analyzes what changed using AST parsing, asks you what you intended, and generates a structured context file that sits right next to your component.

The key idea: one file, two audiences.

The top half is written in plain English for humans - purpose, business rules, edge cases, design decisions. The bottom half is structured YAML for AI tools - props, state, dependencies, render conditions. When an AI tool opens your component directory, it finds everything it needs to understand not just what the code does but why.

```
src/
  PaymentForm/
    PaymentForm.tsx
    PaymentForm.test.tsx
    PaymentForm.module.css
    PaymentForm.context.md    <-- this is new
```

The Dual-Section Format

A .context.md file looks like this:

```
# PaymentForm

## Purpose
Handles credit card payment submission with real-time validation.
Enforces PCI compliance by never storing raw card numbers in component state.

## Business Rules
- Luhn algorithm validation runs on blur, not on every keystroke
- Submit button disables during processing to prevent double-charges
  (this was added after a production incident in Q3)
- Currency formatting follows the locale passed via props, not browser locale

## Edge Cases
- Expired cards show inline error, do not clear the form
- Network timeout after 30s triggers retry prompt, not automatic retry
- Zero-amount transactions are blocked client-side (API also validates)

## Decision Log
- Chose client-side Luhn over API validation to reduce round-trips
- Disabled autofill on CVV field for PCI compliance
- Used controlled inputs over uncontrolled to support the "save draft" feature

---

component:
  name: PaymentForm
  type: functional
  framework: react

interface:
  props:
    - name: amount
      type: number
      required: true
    - name: currency
      type: string
      required: true
      default: "USD"
    - name: onSuccess
      type: "(transactionId: string) => void"
      required: true

state:
  internal:
    - cardNumber: string
    - expiryDate: string
    - cvv: string
    - isProcessing: boolean
    - error: string | null

dependencies:
  external:
    - payment-gateway-sdk
    - date-fns

render_logic:
  conditions:
    - idle: "Default form state"
    - validating: "Luhn check in progress"
    - processing: "API call in flight, submit disabled"
    - success: "Shows confirmation, clears form"
    - error: "Shows inline error, form remains populated"
```

A human developer reads the top section during onboarding or code review. An AI tool parses the YAML section to understand the component interface without reading 400 lines of source code.

The Smart-Diff: Why Not Regenerate Every Time?

If you regenerated the context file on every single commit, you'd burn through API credits fast and add latency to every commit. Most commits to a component are cosmetic - formatting fixes, variable renames, tweaking a string. These don't change the component's structure or behavior.

contextify-ai solves this with structural hashing. On each commit, it:

  1. Parses the changed file with Babel
  2. Extracts structural elements: exports, props, hooks, imports, and function signatures
  3. Normalizes them into a sorted JSON structure
  4. Hashes the result with SHA-256
  5. Compares against the hash stored in the existing .context.md

If the hash matches, the commit gets tagged [context: no-change] and no LLM call happens. If it changed, regeneration kicks in.

What triggers regeneration:

  • Added or removed props
  • New or deleted exports
  • Changed hook dependencies
  • Modified imports
  • Altered function signatures

What gets skipped:

  • Formatting and whitespace
  • Variable renames inside function bodies
  • String literal changes
  • Comment edits

In my analysis of commit patterns across open-source React repos, roughly 50-70% of component-touching commits are cosmetic. That's 50-70% fewer API calls.
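The hashing step can be sketched in a few lines. This is a minimal TypeScript sketch, not the tool's actual implementation: the real pipeline extracts these elements from a Babel AST, and the `StructuralElements` shape here is my own assumption for illustration.

```typescript
import { createHash } from "crypto";

// Assumed shape for the structural elements pulled out of a component's AST.
interface StructuralElements {
  exports: string[];
  props: { name: string; type: string; required: boolean }[];
  hooks: string[];
  imports: string[];
}

// Normalize into a sorted, stable JSON string before hashing, so that
// ordering differences and cosmetic edits never change the digest.
function structuralHash(el: StructuralElements): string {
  const normalized = {
    exports: [...el.exports].sort(),
    props: [...el.props].sort((a, b) => a.name.localeCompare(b.name)),
    hooks: [...el.hooks].sort(),
    imports: [...el.imports].sort(),
  };
  return createHash("sha256").update(JSON.stringify(normalized)).digest("hex");
}
```

The sorting is the important part: two commits that declare the same props in a different order produce the same hash, so only genuine interface changes trigger an LLM call.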

The Developer-in-the-Loop Prompt

Here's where it gets interesting. When the smart-diff determines a component needs regeneration, the tool doesn't just throw code at an LLM. It asks you first:

```
contextify-ai: PaymentForm.tsx has structural changes.
> What changed and why?
```

You type something like: "Added retry logic for network timeouts. Product wanted a retry prompt instead of auto-retry after the incident last week."

The tool then sends three things to the LLM:

  • Your stated intent
  • The AST-extracted structural changes
  • The raw code diff

The LLM cross-references all three. If you said "added retry logic" but the diff shows you also changed the props interface, it flags that:
```
contextify-ai: Warning - detected changes not mentioned in your description:
  - New prop: maxRetries (number, default: 3)
  Proceed anyway? [y/n/revise]
```

This catches two things: changes you forgot to mention and changes you didn't intend to make. It's a lightweight code review built into the commit flow.
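The core of that mismatch check can be illustrated with a simplified sketch. The real tool uses the LLM for the cross-reference; a plain string heuristic with hypothetical types is enough to show the idea.

```typescript
// Hypothetical type: one structural change detected in the AST diff.
interface DetectedChange {
  kind: "prop-added" | "prop-removed" | "export-changed" | "hook-changed";
  name: string; // identifier pulled from the diff, e.g. "maxRetries"
}

// Flag changes whose identifiers never appear in the developer's stated
// intent - these are the candidates for the warning prompt.
function unmentionedChanges(
  intent: string,
  changes: DetectedChange[],
): DetectedChange[] {
  const text = intent.toLowerCase();
  return changes.filter((c) => !text.includes(c.name.toLowerCase()));
}
```

If you type "Added retry logic for network timeouts" but the diff introduced a `maxRetries` prop, the prop shows up as unmentioned and the tool asks you to confirm or revise.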

Provider Agnostic

Not everyone has an Anthropic or OpenAI API key with budget to spare. contextify-ai supports:

  • Claude (Anthropic API)
  • GPT-4 / GPT-4o (OpenAI API)
  • GitHub Models - free tier, great for open-source contributors
  • Google Gemini - free tier available
  • Ollama - runs locally, zero cost, full privacy

You configure your provider once and the tool handles the rest. For teams worried about sending code to external APIs, Ollama means everything stays on your machine.
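A provider-agnostic setup usually boils down to one small config object. The field names below are my own illustration, not the tool's actual schema:

```typescript
// Illustrative config shape - field names are assumptions for this sketch.
type Provider = "anthropic" | "openai" | "github-models" | "gemini" | "ollama";

interface ContextifyConfig {
  provider: Provider;
  model: string;
  apiKey?: string;  // not needed for Ollama, which runs locally
  baseUrl?: string; // e.g. a local Ollama endpoint
}

// A fully local setup: no key, nothing leaves the machine.
const localConfig: ContextifyConfig = {
  provider: "ollama",
  model: "llama3",
  baseUrl: "http://localhost:11434",
};
```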

Why Colocation Matters

The .context.md file sits next to the component it describes, following the same convention as .test.js, .module.css, and .stories.js files. This matters because AI tools with file system access - Claude Code, Cursor, Copilot Workspace - can discover these files automatically.

No plugin to install. No API to integrate. No configuration file to maintain. The AI tool reads PaymentForm.tsx, checks the same directory for PaymentForm.context.md, and parses the YAML. It now knows the props, state, dependencies, render conditions, and business rules before generating a single line of code.
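The lookup itself is trivial, which is the point of the convention. A sketch of how any tool with file access might map a component path to its context file (a hypothetical helper, not the actual discovery logic of any particular editor):

```typescript
import { join, dirname, basename, extname } from "path";

// Map a component file to the context file expected beside it, following
// the same colocation convention as .test.tsx and .module.css files.
function contextPathFor(componentPath: string): string {
  const ext = extname(componentPath);        // ".tsx"
  const name = basename(componentPath, ext); // "PaymentForm"
  return join(dirname(componentPath), `${name}.context.md`);
}
```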

This is the same reason CSS modules won: convention over configuration.

The Commit Flow

Here's what happens when you run git commit:

  1. Pre-commit hook fires
  2. contextify-ai lists staged component files
  3. For each file, runs smart-diff against existing context
  4. If structural change detected, prompts developer for intent
  5. Sends code + diff + intent to configured LLM
  6. Generates or updates .context.md with dual-section format
  7. Auto-stages the context file
  8. Tags commit message: [context: generated], [context: updated], or [context: no-change]

For commits touching multiple components, LLM calls run in parallel (default concurrency: 3). A typical 5-component commit adds about 7 seconds of overhead.
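The parallel step can be sketched as a small worker pool. This is a generic concurrency limiter, not contextify-ai's actual implementation:

```typescript
// Run an async function over items with at most `limit` calls in flight.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    // Single-threaded JS: reading and bumping `next` with no await
    // in between is race-free.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

With a limit of 3, a 5-component commit issues three LLM calls immediately and starts the remaining two as workers free up, which is where the roughly 7-second overhead figure comes from.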

What This Means for Teams

Onboarding. A new developer opening a component directory gets immediate context without hunting through wikis, Slack, or asking the person who wrote it two years ago.

Code review. Reviewers see the .context.md diff alongside the code diff. If the business rules section changed, they know the commit has behavioral implications, not just cosmetic ones.

AI-assisted development. Every AI tool that can read files now has structured access to business logic, design decisions, and edge cases. The quality of AI-generated code goes up because the tool has context it never had before.

Audit trail. Commit tags make it trivial to track documentation freshness. git log --grep="\[context:" (the bracket must be escaped, since --grep takes a regex) shows you exactly when and how context files were created or updated.

Limitations I'm Honest About

  • LLMs can hallucinate business rules. The intent verification helps but doesn't eliminate this. Always review generated context files.
  • Merge conflicts. Two developers changing the same component on different branches will get merge conflicts in the .context.md file. Post-merge regeneration is usually cleaner than manual resolution.
  • JavaScript/TypeScript only for now. The Babel-based AST extraction limits the tool to the JS ecosystem. Other languages would need their own parser integrations.
  • Commit-time latency. Even with parallelization and smart-diff skipping, there's overhead. Teams that commit very frequently may want to configure the tool to run on specific branches only.

What's Next

  • VS Code extension for inline context previews and manual regeneration
  • MCP server integration so AI tools can query context through a standardized protocol instead of file system reads
  • CI/CD validation to block merges when context files are stale
  • Multi-language support starting with Python and Go

Try It

The project is open source. Install it, configure your LLM provider, and run your first commit. The context files generate themselves.

```
npm install contextify-ai --save-dev
npx contextify-ai init
```

Support This Initiative

If this tool solves a problem you've dealt with, consider supporting the project:

  • Star the repo on GitHub (AlthafPattan/contextify-ai) - it helps others discover the tool
  • Contribute - open issues, submit PRs, or suggest features
  • Share this article with your team or developer community

Every star, fork, and contribution helps keep this project alive and growing.
