DEV Community

huangyongshan46-a11y

Making Your Codebase AI-Agent Friendly: AGENTS.md and llms.txt Explained

In 2026, a significant share of code is being written by AI agents such as Codex, Claude Code, Cursor, and Copilot. Yet most codebases are still optimized only for human readers. Here is how to make your project discoverable and usable by AI agents.

The problem

When an AI agent opens your repo, it typically reads:

  1. README.md — often marketing-focused, not structured for machines
  2. Source files — scattered across dozens of files
  3. package.json — minimal metadata

The agent has to piece together how your project works from scattered context. This wastes tokens and leads to worse output.

AGENTS.md — A guide for AI agents

Create an AGENTS.md file at the root of your repo. This is like a README but written specifically for AI coding agents:

# AGENTS.md

## What is this project?
One paragraph description.

## Project Structure
File tree with descriptions of key directories.

## Key Files to Modify
Table mapping tasks to specific files.

## Conventions
- Styling approach
- API patterns
- Database access patterns
- Auth patterns

## Common Tasks
Step-by-step instructions for common modifications.
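One easy way to keep an AGENTS.md honest as the project evolves is a small check that the expected sections still exist. A minimal sketch in TypeScript; the section names follow the template above, and the idea of running this as a lint step is my suggestion, not part of any spec:

```typescript
// Section headings the AGENTS.md template above expects.
const REQUIRED_SECTIONS = [
  "## What is this project?",
  "## Project Structure",
  "## Key Files to Modify",
  "## Conventions",
  "## Common Tasks",
];

// Returns the headings that are missing from the given AGENTS.md text.
export function missingSections(agentsMd: string): string[] {
  return REQUIRED_SECTIONS.filter((heading) => !agentsMd.includes(heading));
}

// Usage (e.g. in a CI script):
//   const gaps = missingSections(readFileSync("AGENTS.md", "utf8"));
//   if (gaps.length > 0) console.warn("AGENTS.md is missing:", gaps);
```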

What to include:

1. Project structure with annotations

src/
├── app/           # Next.js App Router pages
├── components/    # React components
├── lib/           # Core utilities (auth, db, stripe)
└── prisma/        # Database schema

2. A task-to-file mapping table

| Task | File(s) |
| --- | --- |
| Add a new page | `src/app/(app)/your-page/page.tsx` |
| Add an API route | `src/app/api/your-route/route.ts` |
| Change DB schema | `prisma/schema.prisma` → run `npx prisma db push` |
| Modify pricing | `PLANS` object in `src/lib/stripe.ts` |

3. Conventions the agent should follow

- Use `db` from `@/lib/db` for database queries
- Use `auth()` from `@/lib/auth` for session checks
- Use `cn()` from `@/lib/utils` for className composition
- All API routes return NextResponse.json()
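Concretely, a route handler that follows these conventions might look like the sketch below. The `auth()` and `db` helpers and the `project` model are assumptions based on the article's examples, stubbed here so the pattern is self-contained; a real route would import them from `@/lib/auth` and `@/lib/db` and return `NextResponse.json()`:

```typescript
// Stub standing in for `auth()` from @/lib/auth (assumed API; a real
// implementation would read the session from cookies/headers).
type Session = { userId: string } | null;
async function auth(): Promise<Session> {
  return { userId: "user_1" };
}

// Stub standing in for `db` from @/lib/db (assumed Prisma-style client).
const db = {
  project: {
    findMany: async ({ where }: { where: { ownerId: string } }) => [
      { id: "proj_1", ownerId: where.ownerId },
    ],
  },
};

// The convention's shape: check the session first, query through `db`,
// return JSON (via NextResponse.json() in an actual Next.js route handler).
export async function GET() {
  const session = await auth();
  if (!session) {
    return { status: 401, json: { error: "Unauthorized" } };
  }
  const projects = await db.project.findMany({
    where: { ownerId: session.userId },
  });
  return { status: 200, json: projects };
}
```

An agent that has read the Conventions section can reproduce this shape for any new route instead of inventing its own auth or data-access pattern.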

This is dramatically more useful to an agent than a marketing README.

llms.txt — Machine-readable product description

llms.txt is an emerging standard (like robots.txt for search engines) that helps AI agents understand what your product does.

Place it at /public/llms.txt (accessible at yourdomain.com/llms.txt):

# Your Product Name

> One-line description.

Longer description of what this does, who it is for, and when to use it.

## Use this when you need to:
- Use case 1
- Use case 2

## Tech Stack
- Technology 1
- Technology 2

## Quick Start
Setup commands.

## Links
- GitHub: url
- Docs: url
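Because llms.txt is plain Markdown, a consumer on the agent side can pull out the name, summary, and sections with a few lines. A rough sketch; the parsing rules here are inferred from the template above, not from a formal specification:

```typescript
interface LlmsTxt {
  name: string; // first "# " heading
  summary: string; // first "> " blockquote line
  sections: Record<string, string[]>; // "## " heading -> its "- " bullets
}

export function parseLlmsTxt(text: string): LlmsTxt {
  const doc: LlmsTxt = { name: "", summary: "", sections: {} };
  let current = "";
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (line.startsWith("# ") && !doc.name) {
      doc.name = line.slice(2);
    } else if (line.startsWith("> ") && !doc.summary) {
      doc.summary = line.slice(2);
    } else if (line.startsWith("## ")) {
      current = line.slice(3);
      doc.sections[current] = [];
    } else if (line.startsWith("- ") && current) {
      doc.sections[current].push(line.slice(2));
    }
  }
  return doc;
}
```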

When AI agents are asked to recommend tools or set up projects, they can read llms.txt to understand if your product fits the user's needs.

Why this matters

As AI agents write more code:

  • Projects with AGENTS.md get better agent output (fewer hallucinations, correct patterns)
  • Projects with llms.txt get recommended more often by AI assistants
  • Projects that are agent-friendly compound their distribution through AI recommendations

This is the new SEO — except for AI agents instead of Google.

Real example

LaunchKit implements both AGENTS.md and llms.txt. When an AI agent is asked to "build a SaaS app," the structured metadata helps it understand LaunchKit is a good foundation.

Check the repo for working examples of both files.

GitHub | Get LaunchKit ($49)
