If you've ever heard "I don't have enough context about your project" from an AI assistant, you know the frustration. Your AI writes perfectly reasonable code that would work great—in a completely different codebase with different conventions, dependencies, and patterns.
The problem isn't the AI. It's that you're not giving it enough to work with.
I used to paste in 3-4 files and hope for the best. Now I routinely stuff 10x more context into my prompts, and the difference is night and day. Here's my workflow.
## The Context Triangle
Good AI context comes from three sources:
1. **Project context** — Architecture, conventions, patterns
2. **Code context** — Relevant files, imports, dependencies
3. **Task context** — What you're actually trying to accomplish
Most devs only give #3 and maybe a bit of #2. That's why the AI hallucinates.
## Step 1: Create a Context Manifest
Create a CONTEXT.md in your project root. This is your project's cheat sheet for AI:
```markdown
# Project Context for AI Assistants

## Project Overview
- Type: [microservice/monolith/library/frontend]
- Main language: [TypeScript/Python/etc]
- Framework: [Next.js/FastAPI/etc]

## Architecture
- [Brief description of how things fit together]
- [Key directories and their purposes]

## Coding Conventions
- File naming: [kebab-case/PascalCase/etc]
- Component structure: [describe pattern]
- State management: [how you handle state]
- Error handling: [your approach]

## Important Patterns
- [Pattern 1]: [when to use it]
- [Pattern 2]: [when to use it]

## Gotchas
- [Thing that breaks if you change it]
- [Legacy code you can't touch]
- [Performance considerations]
```

You write this once, then reference it in every prompt. It's like giving your AI a project onboarding document.
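To make the template concrete, here's what a filled-in version might look like. Every value below is invented for a hypothetical Next.js app; yours will differ:

```markdown
# Project Context for AI Assistants

## Project Overview
- Type: frontend
- Main language: TypeScript
- Framework: Next.js

## Coding Conventions
- File naming: kebab-case
- State management: React context + hooks, no Redux
- Error handling: throw typed AppError, caught at route boundaries

## Gotchas
- src/lib/legacy-auth.ts is frozen until the SSO migration lands
```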
## Step 2: Use a Context Dumper Script
Instead of manually pasting files, use a script that dumps relevant context in a structured way:
```bash
#!/bin/bash
# dump-context.sh

echo "# Project Context"
cat CONTEXT.md

echo -e "\n# Relevant Files"
echo "## File: src/types/user.ts"
cat src/types/user.ts

echo -e "\n## File: src/services/auth.ts"
cat src/services/auth.ts

echo -e "\n# Package Dependencies (relevant)"
grep -A 20 '"dependencies"' package.json
```
Run this before complex tasks, and pipe the output into your AI prompt. You get consistent, comprehensive context every time.
## Step 3: Smart File Selection
Don't dump everything. Prioritize:

- **Types/interfaces** — These define your data structures
- **Utility functions** — Shared logic the AI should reuse
- **Similar existing implementations** — Show, don't tell
- **Configuration files** — Build tools, linters, formatters

Skip:

- Generated files (node_modules, build output)
- Tests (unless you're writing tests)
- Massive config files (grab only relevant sections)
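If you want to automate that triage, a small helper can rank candidate paths before you dump them. This is a sketch; the regex patterns and the idea of scoring by directory name are my own heuristics, not a standard:

```typescript
// Rank candidate file paths by how useful they are as AI context.
// Higher score = dump first; negative score = skip entirely.
function contextPriority(path: string): number {
  if (/node_modules|dist|build/.test(path)) return -1; // generated, skip
  if (/\.d\.ts$|types?\//.test(path)) return 3;        // types first
  if (/utils?\/|helpers?\//.test(path)) return 2;      // shared logic
  if (/tsconfig|eslint|\.(config|rc)\./.test(path)) return 1; // config
  return 0;                                            // everything else
}

function selectContextFiles(paths: string[]): string[] {
  return paths
    .filter((p) => contextPriority(p) >= 0)
    .sort((a, b) => contextPriority(b) - contextPriority(a));
}
```

Feed the sorted list to your dumper script and you'll never paste `node_modules` junk by accident.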
## Step 4: Structured Prompting
Organize your context so the AI can parse it:
```
<project_context>
[Paste CONTEXT.md or project overview]
</project_context>

<types>
[Paste type definitions]
</types>

<examples>
[Paste similar code that works]
</examples>

<task>
[What you want, with constraints]
</task>
```
XML-style tags work surprisingly well. Most AI assistants are trained to recognize structured context.
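If you assemble prompts programmatically, the same structure is trivial to generate. A minimal sketch; the tag names are my own convention, and nothing about them is required by any particular model:

```typescript
// Wrap each prompt section in an XML-style tag so the AI can
// tell project context, types, examples, and the task apart.
function buildPrompt(sections: Record<string, string>): string {
  return Object.entries(sections)
    .map(([tag, body]) => `<${tag}>\n${body}\n</${tag}>`)
    .join('\n\n');
}

// Usage: buildPrompt({ types: typeDefs, task: 'Add validateEmail' })
```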
## Step 5: The "Context Budget" Mindset
AI models have token limits. Budget them the way you'd budget money:
- **High-value context**: Types, conventions, patterns (30%)
- **Code examples**: Relevant implementations (40%)
- **Task description**: Clear requirements (20%)
- **Constraints**: What NOT to do (10%)
If you're hitting limits, strip out verbose comments and logs before pasting code. Keep the signal, lose the noise.
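Those percentages can be turned into rough character counts. The sketch below assumes roughly 4 characters per token, a common rule of thumb for English text, not an exact tokenizer count:

```typescript
// Split a model's context window according to the budget above.
// The shares mirror the percentages in this post; ~4 chars/token
// is a heuristic, not real tokenizer math.
const BUDGET = { conventions: 0.3, examples: 0.4, task: 0.2, constraints: 0.1 };

function charBudget(tokenLimit: number): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [part, share] of Object.entries(BUDGET)) {
    out[part] = Math.floor(tokenLimit * share) * 4; // tokens -> approx chars
  }
  return out;
}
```

Knowing that code examples get ~1,600 characters out of a 1,000-token slice makes it much easier to decide which file to trim first.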
## Real-World Example
Before (what I used to do):
"Add a function to validate email addresses in my user service"
After (what I do now):
```
From CONTEXT.md: We use a microservice architecture with TypeScript.
Validation happens in the service layer. We return custom ValidationErrors.

interface ValidationError {
  field: string;
  message: string;
}

interface User {
  email: string;
  // ... other fields
}

function validateUsername(username: string): ValidationError[] {
  const errors: ValidationError[] = [];
  if (!username || username.length < 3) {
    errors.push({ field: 'username', message: 'Too short' });
  }
  return errors;
}

Create a validateEmail function following the same pattern as validateUsername.
Return ValidationError[] with field: 'email'.
Check for: empty, invalid format, already taken (call checkEmailExists).
```
The result? Code that actually integrates with my existing patterns instead of fighting them.
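For reference, here's roughly the output I'd hope to get back, sketched by hand. Since checkEmailExists lives in my service layer, I've stubbed it as an injected callback here so the example stays self-contained:

```typescript
interface ValidationError {
  field: string;
  message: string;
}

// Same shape as validateUsername: collect errors, return the array.
// `emailTaken` stands in for the real checkEmailExists call.
function validateEmail(
  email: string,
  emailTaken: (e: string) => boolean
): ValidationError[] {
  const errors: ValidationError[] = [];
  if (!email) {
    errors.push({ field: 'email', message: 'Email is required' });
  } else if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push({ field: 'email', message: 'Invalid email format' });
  } else if (emailTaken(email)) {
    errors.push({ field: 'email', message: 'Email already taken' });
  }
  return errors;
}
```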
## Tools That Help
Some tools make this easier:
- **Cursor IDE**: Has a `@codebase` feature that automatically includes relevant files
- **Cline**: Good at understanding project structure when you point it at the right directory
- **Aider**: Lets you add files to context with `/add` and shows token usage
But honestly, the bash script + CONTEXT.md combo works everywhere and costs nothing.
## The Tradeoffs
This isn't magic. Downsides:
- **Setup time**: Takes 10-15 minutes to create your CONTEXT.md
- **Token costs**: More context = more tokens (but fewer iterations)
- **Maintenance**: Keep CONTEXT.md updated as your project evolves
For me, the time saved on back-and-forth with the AI pays for itself within the first few complex tasks.
## Your Turn
Start with a minimal CONTEXT.md for your current project. Add to it as you notice patterns or conventions you want the AI to follow.
Within a week, you'll have a comprehensive context document that makes every AI interaction more productive.
And honestly? That "I don't know your codebase" response will become a rare annoyance instead of a daily blocker.
What's your approach to context management? Drop a comment if you've found techniques that work better—I'm always looking to improve this workflow.
Tags: #ai #productivity #tools #workflow #developerexperience