Over the past year, I’ve seen many software engineers and teams dive into AI adoption, only to face disappointment when the hype delivers little more than autocomplete tricks or basic unit test generation. The outputs often ignore standards, architecture, and conventions, creating more work instead of reducing it. Rather than abandoning this inevitable shift, we should learn from those who succeed.
What’s missing for those who struggle with AI adoption? It’s the ability to guide the model with context. Many tools allow you to automatically inject tailored instructions for each task, shaping the output for success. In this post, I’ll demonstrate how to do this with GitHub Copilot and Claude Code, setting you on the path to becoming an AI-native software engineer.
Why instructions change everything
- They make implicit team rules explicit and version-controlled.
- They steer the agent’s defaults: style, architecture, naming, testing, commit conventions, security guardrails.
- They raise consistency across the team and reduce review friction.
- They open the door to more AI automation.
When instructions live in-repo, the agent reads them like any engineer would—then generates code that fits your standards by default.
Instruction file types: tool-specific vs. universal
Different AI tools support different instruction file conventions, but the underlying principles remain the same: provide context, enforce standards, and guide behavior.
GitHub Copilot uses `.github/copilot-instructions.md` for repository-wide instructions that apply to all requests. For more granular control, you can create path-specific instructions using `.github/instructions/*.instructions.md` files, each declaring an `applyTo` glob pattern in its frontmatter to target specific paths. This makes them highly flexible for monorepos and polyglot projects.
Claude Code looks for CLAUDE.md in your project root. This single file contains all Claude-specific instructions, though you can organize complex guidance using markdown sections and headers.
Universal approach: Many teams adopt AGENTS.md as a tool-agnostic convention. Place an AGENTS.md file at the root or nested in directories, and any agent tool can consume it. This works well when you use multiple AI tools or want a single source of truth that isn't vendor-locked.
The choice depends on your team's needs:

- Use `.github/copilot-instructions.md` for simple, repository-wide Copilot guidance.
- Use `.github/instructions/*.instructions.md` when you need fine-grained, path-specific control.
- Use `AGENTS.md` for simplicity and portability across different AI tools.
Now let's explore how to architect these instructions for maximum effectiveness.
Architecture: layering context so it stays relevant and cheap
Think of context in layers:
- Global repository instructions: High-level standards that apply everywhere (language style, testing baseline, security posture, commit messages, PR checklist).
- Local, nested instructions: Directory-scoped `AGENTS.md` files or GitHub Copilot instruction files that apply to specific domains (e.g., `backend/api` vs. `frontend/ui`). These are loaded only when relevant, saving tokens and maintenance.
- Task-specific prompt files: Small, reusable prompt templates for common jobs (e.g., "add endpoint", "migrate tests", "write changelog"). They eliminate boilerplate prompting and keep users on the happy path.
The result: less noise, lower cost, more accurate outputs.
In Practice
Imagine a typical full-stack application: a React frontend talking to a Node.js API, each with distinct patterns. The frontend team enforces accessibility standards and component testing. The backend team requires validated payloads, structured error responses, and integration tests. Without instructions, an AI agent might generate a backend endpoint that returns raw errors or a frontend component missing ARIA labels — creating review friction and rework. Instruction files ensure the agent knows these domain-specific rules and conventions upfront.
File structure examples
GitHub Copilot approach with .github/ directory:
```
repo/
├── .github/
│   ├── copilot-instructions.md    # Repository-wide Copilot instructions
│   ├── instructions/              # Path-specific instruction files with frontmatter scopes
│   │   ├── backend.instructions.md
│   │   ├── frontend.instructions.md
│   │   └── security.instructions.md
│   └── prompts/                   # Task-specific prompt files (optional)
├── backend/
└── frontend/
```
Universal AGENTS.md approach (tool-agnostic):
```
repo/
├── AGENTS.md              # Repository-wide instructions
├── backend/
│   ├── AGENTS.md          # Backend-specific instructions
│   └── api/
│       └── AGENTS.md      # API-specific instructions
└── frontend/
    └── AGENTS.md          # Frontend-specific instructions
```
Example instruction files
Repository-wide instructions (applies to entire codebase):
```markdown
# Repository-wide Instructions

## Commits
- Use Conventional Commits (feat:, fix:, docs:, refactor:, test:, chore:)
- Reference issue numbers in commit body (e.g., "Relates to #123")
- Keep subject line under 72 characters

## Testing
- Require tests for all new features and bug fixes
- Maintain minimum 80% code coverage
- Run full test suite before submitting PR

## Code Review
- Keep PRs focused and under 400 lines when possible
- Self-review checklist: tests pass, no secrets, lint clean
- Tag at least one reviewer familiar with the domain

## Security
- Never commit secrets, API keys, or credentials
- Use environment variables for configuration
- Run security scanners in CI pipeline
- Validate and sanitize all external inputs

## Documentation
- Update README when adding new features or changing setup
- Document complex algorithms inline
- Keep API documentation in sync with code changes

## Writing Style
- Always wrap filenames in backticks (e.g., `config.json`, `AGENTS.md`, `.github/copilot-instructions.md`)
- Use clear, descriptive variable and function names
- Write comments for non-obvious logic, not for self-explanatory code
```
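Rules like the commit conventions above are mechanical enough to lint in plain code. A rough sketch of such a check — the helper name and regex are my own, not part of any tool:

```typescript
// Hypothetical checker for the commit rules above: a Conventional Commits
// type prefix (optionally with a scope) and a subject of at most 72 characters.
const COMMIT_TYPES = ["feat", "fix", "docs", "refactor", "test", "chore"];

function isValidSubject(subject: string): boolean {
  // Matches "type: message" or "type(scope): message".
  const match = subject.match(/^([a-z]+)(\([\w-]+\))?: \S.*$/);
  if (!match) return false;
  return COMMIT_TYPES.includes(match[1]) && subject.length <= 72;
}
```

The same rule the agent follows can then also gate commits in CI, so humans and agents are held to one standard.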
Backend-specific instructions:
```markdown
---
applyTo: "backend/**"
description: Backend API standards
---

# Backend API Standards

## Goals
- Add/modify API endpoints with proper validation, telemetry, and tests.

## Code style
- Follow repo ESLint/Prettier. Prefer functional handlers; avoid singletons.

## HTTP/Errors
- Use RFC 7807 application/problem+json. Map domain errors to 4xx with stable codes.

## Testing
- For each handler, add integration tests hitting the router; seed minimal fixtures.
```
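The "stable codes" rule above boils down to a single mapping from domain errors to HTTP problems. A hypothetical sketch — the error classes and code strings are illustrative, not from any real codebase:

```typescript
// Hypothetical domain errors and the stable-code mapping the standards require.
class NotFoundError extends Error {}
class ValidationError extends Error {}

type HttpProblem = { status: number; code: string; detail: string };

function toProblem(err: Error): HttpProblem {
  if (err instanceof NotFoundError) {
    return { status: 404, code: "resource_not_found", detail: err.message };
  }
  if (err instanceof ValidationError) {
    return { status: 400, code: "invalid_request", detail: err.message };
  }
  // Unknown errors stay 5xx and hide internals from the client.
  return { status: 500, code: "internal_error", detail: "Unexpected error" };
}
```

Centralizing the mapping keeps error codes stable across endpoints, which is exactly what an agent can't guess without an instruction file.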
Frontend-specific instructions:
```markdown
---
applyTo: "frontend/**"
description: Frontend engineering standards
---

# Frontend Engineering Standards

- Use React + Vite; prefer function components and hooks.
- Enforce a11y: labels, roles, keyboard navigation.
- Co-locate component tests next to components with *.spec.tsx.
```
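Part of the a11y rule is checkable in plain code too. A toy sketch of the "every control needs an accessible name" idea — this is not a real linter, and the prop shape is an assumption:

```typescript
// Hypothetical check mirroring the a11y rule: an interactive element needs
// an accessible name, from either visible text or an aria-label.
type ControlProps = { "aria-label"?: string; children?: string };

function hasAccessibleName(props: ControlProps): boolean {
  const label = props["aria-label"]?.trim();
  const text = props.children?.trim();
  return Boolean(label || text);
}
```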
Key takeaways
- Keep each instruction file concise; multiple files may apply simultaneously.
- Scope narrowly by paths/languages to avoid conflicts and reduce tokens.
- Instruction merging behavior: Both GitHub Copilot and `AGENTS.md` approaches merge path-specific/scoped instructions with repository-wide files. All applicable instruction files are combined and sent to the model together, enabling layered context that becomes more specific as you navigate deeper into the codebase.
- Frontmatter scoping (shown above) is specific to GitHub Copilot's `.github/instructions/` files.
- `AGENTS.md` files use directory proximity for scoping instead of frontmatter.
- Both approaches achieve the same goal: context layering that keeps instructions relevant and maintainable.
Task automation: prompt files and sub-agents
Beyond static instructions, both GitHub Copilot and Claude Code support structured, reusable task templates that guide common workflows. These eliminate repetitive prompting and ensure consistent outputs across your team.
Prompt files (GitHub Copilot) are parameterized markdown templates stored in a `.github/prompts/` directory. They define inputs, steps, and constraints for recurring tasks like "add API endpoint" or "migrate tests". Users invoke them from the IDE or CLI, supply the parameters, and get guided, standardized results.
Sub-agents (Claude Code) work similarly but can be more dynamic. Each sub-agent focuses on a specific responsibility (e.g., Security Review, API Synthesizer) with its own instruction set. Claude can either route to the appropriate sub-agent automatically based on your task and the sub-agent's description, or you can invoke a specific sub-agent explicitly by name in your prompts.
Both approaches serve the same purpose: turning common tasks into repeatable, version-controlled workflows.
Prompt files (GitHub Copilot)
Create a `.github/prompts/` directory (see VS Code Copilot Prompt Files) with reusable task prompts.

Prompt files can be stored in two locations:

- Workspace-level: `.github/prompts/` (available in the current workspace only)
- User-level: VS Code profile folder (available across all workspaces, syncable via Settings Sync)
```
.github/prompts/
├── add-endpoint.prompt.md
├── migrate-tests.prompt.md
└── write-changelog.prompt.md
```
Example prompt file with frontmatter:
```markdown
---
description: Create an API endpoint with validation, tests, and docs
mode: agent
tools: ['githubRepo', 'search/codebase']
model: Claude Sonnet 4
---

Use the repository's Backend instructions.
Add ${input:method} ${input:route} with validation and router-level tests.
```
Usage: In VS Code, use the / command in chat to invoke prompts:
```
/add-endpoint method=POST route=/api/users
```
The prompt file parameters are filled automatically, and Copilot generates code following the template's instructions.
Sub-agents (Claude Code)
See Claude Code: Sub-agents for detailed configuration. Each sub-agent can have its own focused instruction set and optional scope restrictions.
Just like prompt files, sub-agents can be stored in two locations:
- Project-level: `.claude/agents/` (available in the current project only)
- User-level: `~/.claude/agents/` (available across all projects)
Example sub-agent configuration:
```markdown
---
name: add-endpoint
description: Create API endpoints with validation, tests, and documentation. Use proactively after modifying backend code.
tools: Read, Edit, Bash, Grep, Glob
model: inherit
---

Use the repository's Backend instructions.
Add endpoints with validation and router-level tests.
```
Usage: Invoke a sub-agent explicitly by naming it in your prompt:

```
Use the add-endpoint subagent to create a POST /api/users endpoint
```
Or let Claude route automatically by describing the task without specifying a sub-agent. Claude will select the appropriate sub-agent based on context and task requirements.
Instruction writing and refinement
Start with generic, widely available best-practice instructions
- Use markdown headers for emphasis and agent guidance.
- Keep instructions concise and to the point.
- Goal: Establish a baseline and understand prompting techniques.
TIP Get inspiration from the awesome-copilot repository.
Meta prompting → model-driven refinement
- Use your AI tool to generate/refine instructions based on your repository's context.
- Ask the model to remove ambiguity and keep instructions concise.
- Goal: Generate prompts that are understandable by the AI agent.
- Example: Generate me Copilot instructions for my blog post content based on the style you find in the current posts inside /content/post*.md. Validate for ambiguity and keep the instructions concise.
TIP Use Visual Studio Code's built-in Generate Workspace Instructions File command to get an out-of-the-box prompt that generates the instructions file from your repository's context.
Meta + Feedback → Self-analysis & future adjustment
- After generating output, ask the model to explain deviations from guidelines and adjust its approach for future tasks by modifying the instructions.
- Goal: Build a feedback loop for continuous improvement.
- Example: Go through our conversation and iterations we made and implement/adjust instructions to avoid these mistakes in the future.
Making agentic AI work for you
The difference between autocomplete on steroids and genuinely transformative AI assistance isn't the model — it's the context you provide. Teams who succeed with AI agents don't just adopt the tools; they codify their implicit knowledge into explicit, version-controlled instructions. They start by inventorying their domain rules and standards, then layer them strategically: repository-wide baselines, domain-specific guidance, and task-focused prompts. They keep these instructions close to the code, maintain them like any other asset, and refine them through feedback loops.
This isn't optional groundwork for some future state of AI development. It's the necessary first step that transforms AI from a novelty into a reliable engineering partner. Without it, you're asking agents to guess your standards. With it, you're building a foundation for automation that scales across your team and compounds over time.