
Is your repo ready for the AI Agents revolution? Checklist

Intro

AI is probably the biggest buzzword around this year and last — it’s not just developers talking about it, it’s everyone! We’re seeing huge shifts in the programming world thanks to personal assistants packed with knowledge, tools that help us code, and autonomous agents. This whole thing is a mix of risks and cool opportunities. On one side, you’ve got the AI fans who are super excited about boosted productivity, getting more creative, and having millions of agents handle all the boring stuff for us. On the other, the cautious folks are raising valid concerns about data privacy, accuracy, and how this all affects our mental health. But here’s the deal: this change is happening, and we can’t hit the brakes. The best way to get ready is to jump in and be a part of it. So, I really recommend you try out, practice, and just play around with these new technologies. Keep an open mind, but also stay smart about the risks. That’s definitely what I’m aiming for!

For some time now, I’ve been exploring various AI tools, frameworks, models, and approaches, primarily focusing on existing (brownfield) codebases rather than entirely new experiments. My goal is straightforward: to find ways to make my work easier and more enjoyable. I don’t expect AI to write all my code or solve complex architectural challenges. Instead, I want it to handle the tedious, repetitive tasks, such as writing tests, fixing minor issues, or migrating outdated technologies. I ran experiments across different projects — in PHP, Node, React, and Java, encompassing both legacy and newer systems. Some were successful; others were not, and I plan to share more about these experiences in future articles. However, through this process, I recognized a critical omission in all the courses, tutorials, and articles I encountered: a simple, practical checklist for preparing a codebase to work effectively with AI agents. This checklist is what I’m sharing today. While you can just check the checklist below, I highly recommend reading the full article to fully understand the context and customize it to your specific requirements.

Disclaimer: Please note that I am a software engineer, not an AI specialist. The AI landscape is evolving rapidly, and I acknowledge that I may have missed certain developments or information. My focus is primarily on integrations with GitHub, Cursor, and Copilot, as these are the tools I use most often. While this checklist should be generally applicable, users of different toolchains may need to make minor adaptations. This article and the following tips are based on my personal experiences and may not universally apply. I encourage discussion and feedback — if you disagree with any points or feel something is missing, my DMs are open. I anticipate that this article will require updates over time, but I believe it offers immediate value in its current form.

The high-level goal

This checklist serves as an initial roadmap, not a definitive final product. You are encouraged to customize it to fit your project’s specific needs and development practices. However, it offers a simple guide to immediate steps for making AI agents more effective in your project. The structure is outlined below (in markdown so you can easily reuse it in your own projects):

# AI Readiness checklist

## Repository Hygiene

### Source Control & Ignore Rules
- [ ] Repository is integrated with Git (GitHub / GitLab / Bitbucket)
- [ ] `.gitignore` is added
- [ ] `.cursorignore` is added
- [ ] No secrets are hardcoded
- [ ] `.env` file is used and secrets are injected securely

### Automatic Linting & Formatting
- [ ] One formatter is configured and enabled
- [ ] Auto-format on save is enabled
- [ ] One linter is configured and added to the repository
- [ ] Linter runs in the CI pipeline

### IDE Setup & Extensions
- [ ] Modern IDE is used (Cursor / Antigravity / IntelliJ + Windsurf, etc.)
- [ ] Common shortcuts are configured (open chat, select lines, inject files, etc.)

### Standard Repository Commands
The repository provides a **single obvious command** for each action:
- [ ] Start the application
- [ ] Run all tests
- [ ] Run unit tests only
- [ ] Run e2e tests only
- [ ] Run the linter

## Testing Safety Net

### Unit and e2e rules
- [ ] Unit tests cover core functionality
- [ ] Test coverage is at least ~70%
- [ ] E2E tests cover critical user flows

### CI / CD & Code Review
- [ ] CI runs before every merge
- [ ] Linting, tests, and formatting are enforced in CI
- [ ] Code review is required before merge
- [ ] Merge Request (MR) template with a manual checklist exists (e.g. accessibility)
- [ ] (Optional) Automated AI-based code reviews are enabled (e.g. Copilot)
- [ ] (Optional) Automated repo checks are enabled (e.g. Jules agents)


## Grounding Documents

### Requirements & Intent
- [ ] High-level product specification exists in `.ai/requirements`
- [ ] Each new feature has a PRD in `.ai/requirements`
- [ ] (Optional) Out-of-scope features are documented in `.ai/requirements`

### High-Level Technical Documentation
- [ ] `architecture.md` exists in `.ai/docs`
- [ ] `tech-stack.md` exists in `.ai/docs`

### AI Rules & Guardrails
- [ ] `AGENTS.md` with high-level AI rules exists in the repository root
- [ ] Each framework/technology has its own rules file in `.cursor/rules`
- [ ] Cognitive-load reduction rules (e.g. 3x3 rule) are documented in `.cursor/rules`


## MCP Integrations

- [ ] Task management MCP (Jira / Asana, etc.) is added to `mcp.json`
- [ ] Design tooling MCP (Figma / Subframe, etc.) is added to `mcp.json`
- [ ] Database tooling MCP is added to `mcp.json`
- [ ] Browser tooling MCP (e.g. Chrome DevTools) is added to `mcp.json`

Let’s talk about those sections and rules in more detail.

Repository hygiene

Source control & ignore rules

Before you start throwing new AI tools at your codebase, you’ve gotta make sure your repository has the right basic setup. The name of the game is safety and making sure you can easily track and roll back any changes. Seriously, these simple points are crucial:

  • Your Version Control Setup:

    • Git is Key: Your code needs to be fully hooked up with Git. Also, make sure your setup (whether it’s your terminal, your IDE plugin, or whatever else) is super comfortable and efficient for you to use.
    • Hosting Platform: You need a platform like GitHub, GitLab, or Bitbucket. Personally, I highly recommend GitHub — over the last few months it has gained some great features and integrations not available on other platforms, especially once you start bringing AI agents into the mix.
  • Essential “Don’t Look Here” Files:

    • .gitignore: Non-negotiable. Use this to stop tracking files that are unnecessary or sensitive, like your .env file with all your credentials.
    • .cursorignore (or similar): Add this file to tell tools like semantic search, code completion (like Tab), agents, and inline edits to totally ignore certain content. Files already in .gitignore are automatically skipped, but you can:
      • Overrule It: Use the ! symbol if you do need an agent to see a file that Git is ignoring.
      • Speed Things Up: Add extra rules to cut out irrelevant parts of the repo. It helps the indexing go faster.

Quick Tip: If you use Cursor, you can set up global ignore patterns in your user settings so you don’t have to configure it for every single project.
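As a rough sketch, a .cursorignore file might look like this (every path and pattern below is hypothetical — adapt them to whatever is actually large or irrelevant in your own repo):

```text
# .cursorignore — hypothetical example, adjust to your project
# Keep bulky or irrelevant content out of AI indexing
dist/
coverage/
*.min.js
assets/videos/

# Re-include a git-ignored file the agent SHOULD see (note the !)
!docs/generated-api.md
```

The `!` negation mirrors .gitignore syntax, so the file should feel familiar to anyone who has written ignore rules before.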

  • Secret Management (Seriously, Stop Hardcoding):

No Hardcoded Secrets: This is a basic security rule, but it’s even more vital now that Large Language Models (LLMs) are learning from and analyzing our code. Back in the day, early LLM tools like Copilot would sometimes propose actual secret values based purely on common variable names — a huge security risk! Keep your secrets out of the repository so they can never be indexed or suggested to anyone else.
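To illustrate the idea, here is a minimal Node.js sketch of reading a secret from the environment instead of hardcoding it. The variable name `DATABASE_URL` and the helper function are my own invention for the example, not something prescribed by any tool:

```javascript
// Minimal sketch: fail fast when a required secret is missing from the
// environment, instead of falling back to a hardcoded value.
function getRequiredEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// In a real project the value would come from a .env file loaded at startup
// (e.g. via dotenv) or be injected securely by your CI/CD platform.
process.env.DATABASE_URL = "postgres://localhost:5432/app"; // simulated here
console.log(getRequiredEnv("DATABASE_URL"));
```

Failing loudly at startup is usually preferable to a silent fallback: a missing secret surfaces immediately instead of producing confusing behavior later.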

Automatic linting and formatting

Setting up the right tools isn’t strictly required, but man, does it make a difference for the developers! It seriously improves the quality of life and saves a ton of time arguing during code reviews or when new folks join the team (you know, like that never-ending tabs vs. spaces war). You totally need a formatter. Just make sure it’s set up to match your project’s style. If it auto-formats when you save, developers won’t even have to think about it — it just happens! For trickier style and quality issues, a Linter is a must-have. Get it configured, and make sure there’s one simple command to run it. And the last piece? Your CI/CD pipeline needs to lock these rules down. Any pull requests that mess up the formatting or fail the linting checks should be stopped dead before they can merge into the main branch.
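For a JavaScript/TypeScript project, one common setup (just an example — ESLint and Prettier are my assumed tool choices here, and the script names are conventions, not requirements) looks like this in package.json:

```json
{
  "scripts": {
    "lint": "eslint .",
    "format": "prettier --write ."
  }
}
```

Pairing that with `"editor.formatOnSave": true` in your Cursor/VS Code settings makes formatting invisible to developers: nobody thinks about it, it just happens on save.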

IDE setup + extensions

Investing time in mastering your IDE is a crucial, yet often overlooked, step — much like familiarizing yourself with a new phone, car, or air fryer. While coding in Notepad is technically possible (I still remember teachers teaching us to code this way back at school), it’s not an efficient use of your time. Use a modern IDE specifically designed for AI tooling. Tools like Cursor IDE and Antigravity are excellent options that offer significant functionality right out of the box with minimal setup. If you prefer your existing IDE (such as IntelliJ), at least install and learn a contemporary, AI-focused plugin like Windsurf.

Knowing the most common shortcuts will drastically improve your efficiency. For example, in Cursor:

  • Cmd r then Cmd s: Opens the command palette
  • Cmd i: Toggles the side panel
  • Cmd e: Toggles the agent layout
  • Cmd Shift L (with code selected): Selects the highlighted code as context for the chat
  • @: Selects a file as input
  • /: Runs a shortcut command within the chat

Sharpen Your Axe: Think of preparing your workspace as a reasonable investment to maximize productivity.

The repository provides a single obvious command for each action

Having too many options, like when you’re shopping or picking a restaurant, can actually freeze you up, turning simple decisions into a major time sink. You see the same problem in a ton of software projects: too many settings and options. They’re great for the “power users,” sure, but they’re a total headache for the average person who just needs the standard setup. To make sure your code repository is easy for an AI Agent to use and welcoming to new developers, you’ve got to keep it from getting crazy complicated. It’s fine to keep the deep configuration options for the advanced folks, but you absolutely must provide a simple, single-command way to run the most common, basic stuff. This straightforward approach will be a huge win for both Large Language Models (LLMs) and junior engineers just getting started with your code.

The Must-Have One-Command Tasks:

  • Start the whole app
  • Run every test
  • Run just the unit tests
  • Run only the end-to-end (e2e) tests
  • Run the linter

These commands should be front-and-center in your README and included in the usual spots — like the scripts section in the package.json for any TypeScript/JavaScript project. Cutting down on confusion is key to a smooth start for everyone: new engineers and AI tools.
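For a TypeScript/JavaScript repo, the five must-have tasks could map onto package.json scripts like this (the tool names — Vitest, Playwright, ESLint — are placeholder assumptions; swap in whatever your project actually uses):

```json
{
  "scripts": {
    "start": "node src/index.js",
    "test": "npm run test:unit && npm run test:e2e",
    "test:unit": "vitest run",
    "test:e2e": "playwright test",
    "lint": "eslint ."
  }
}
```

The point isn’t these specific tools — it’s that `npm start`, `npm test`, and `npm run lint` work without anyone (human or agent) having to read three wiki pages first.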

Testing safety net

To get your repository ready for AI agents, after the initial setup, you should really focus on building a solid testing safety net so you don’t accidentally break your application’s logic. Trust me, you really want to throw down comprehensive tests before you dive into any major refactoring or logic changes to guard your code. This safety net keeps unexpected failures from popping up, especially those caused by “improvements” or changes from your shiny new AI agents. Don’t just cross your fingers; never assume code is too simple to fail, or that developers will manually check things after every agent run. Make sure you get super-fast feedback — tests are the shortest loop you can get, letting the agent quickly fix itself if it slips up and introduces a bug. Keep that code quality high by running these tests in your CI/CD pipeline. This is a must-do to stop buggy code from sneaking into production. Plus, well-written test cases are actually awesome documentation for both your human engineers and the LLMs.

While automating is key, let’s be real — you can’t automate everything. For those tricky bits, set up a clear manual process instead of just relying on “everyone knows this” or a gut feeling. Make code reviews mandatory by requiring approval from another team member (you can set this up in GitHub). Give them review templates with checklists for things you can’t automate (like more advanced accessibility checks). Automate all the grunt work you can to maximize efficiency so your human reviewers can focus their brainpower on the critical, nuanced stuff, not nitpicking a missing comma. Oh, and definitely use AI pre-checks! Tools like Copilot’s automatic code review or Jules’ scheduled regular checks are an easy, cheap way to get an extra set of eyes, which is super helpful, especially for smaller teams.
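To make “CI runs before every merge” concrete, here’s a minimal GitHub Actions sketch. The Node version and script names are assumptions based on a typical JS project; adapt them to your stack:

```yaml
# .github/workflows/ci.yml — runs on every pull request before merge
name: CI
on: [pull_request]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test
```

Combine this with branch protection rules requiring the `checks` job (plus at least one human approval) before merging, and buggy agent output gets stopped at the door.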

Grounding documents


AI agents are quickly moving past their initial “junior engineer” phase — they’re getting way smarter and more capable. The thing is, their training is based on tons of global data, so they don’t know the ins and outs of your specific app or tech stack yet. To connect their general expertise with your specific world, you need to introduce grounding documents.

Requirements & Intent

If you’re wondering where to put documents about your product’s logic, the .ai/requirements folder, usually right at the top of your repository, is a great spot. It’s not a super-official standard, but it works well for keeping a high-level product spec. This spec should clearly explain what the app can do, and just as important, what’s definitely not going to be included (out-of-scope). Seriously, listing those excluded things, especially when you have to cut scope because of time or resources, has been super helpful.

For those of you into spec-driven development, this is also the perfect home for your Product Requirement Documents (PRDs). PRDs should get into the nitty-gritty of every feature and function. They basically turn the product manager’s big-picture idea into actual code and keep the documentation handy. PRDs shine brightest in “greenfield” projects (brand new repositories). But if you’re dealing with an existing, “brownfield” project, a mixed approach is the way to go: keep a high-level doc for all the stuff that’s already solid, then use good PRD practices for new things, like fresh features or big code migrations to ditch old systems. This hybrid strategy works pretty well for me (at least for now 😛).
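One possible PRD skeleton for a file living in `.ai/requirements` — the section names are my suggestion, not a standard, so reshape them freely:

```markdown
# PRD: <feature name>

## Problem
What user problem does this solve, and for whom?

## Proposed solution
High-level description of the behavior and UX.

## Acceptance criteria
- [ ] ...

## Out of scope
Things we explicitly will NOT build (and why).
```

Keeping the “Out of scope” section in the same file means agents see the exclusions every time they read the requirements, instead of rediscovering cut features on their own.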

High-level tech documentation

These two files I propose adding to the repo are basically the high-level, technical rundown of what the project is all about.

architecture.md

First up is architecture.md. Keep this one short, like max two pages. It sketches out the project’s architecture, and you can toss in links to more detailed stuff if needed. Right now, I stick it in the .ai/docs folder at the root of the repo, but honestly, I don’t know if that’s standard practice or not. Why is architecture.md useful? It’s the main reference point for AI agents when they’re tackling huge jobs, like moving everything over to a new technology. It’s super helpful for getting new engineers up to speed fast. And it’s essential for showing exactly how data moves through the whole application.
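A possible skeleton for architecture.md (the components listed are placeholders, not a recommendation for your stack):

```markdown
# Architecture

## Overview
One paragraph: what the system is and its main moving parts.

## Components
- Web client (React): renders the UI, calls the API
- API service (Node): business logic, auth
- Database (PostgreSQL): persistent storage

## Data flow
How a typical request travels through the system, end to end.

## Links
Pointers to deeper design docs, ADRs, and diagrams.
```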

tech-stack.md

The second document I think we should have is tech-stack.md. This is just a list of all the tools and frameworks the app uses. It stops AI agents from “hallucinating” or randomly adding stuff we don’t need (like bringing in a new unit testing library) by nudging them to use what’s already there. It helps make sure new code follows our chosen solutions, especially during migrations, although we can also manage this with those cursor rules files (more on those later).
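And a matching sketch for tech-stack.md — every tool named here is a placeholder; the useful part is the explicit “use this, not that” framing that keeps agents from importing a second test runner:

```markdown
# Tech stack

- Language: TypeScript 5.x
- Framework: React 18 (functional components only)
- Unit testing: Vitest (do NOT add other unit-test libraries)
- E2E testing: Playwright
- Styling: internal design system (see packages/ui)
- Deprecated (do not extend): legacy jQuery widgets in /legacy
```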

AI rules and guardrails

While the previous sections are useful in standard development too, this one is strictly about AI. There are special techniques that improve accuracy, reduce the time needed for changes, help with context-window limits, and ensure that AI-generated code fits your best practices and conventions.

agents.md

The first of these is an AGENTS.md file, which serves as a readme specifically for AI agents. This is an open, simple format designed to guide coding agents, with various examples available on its dedicated webpage. You should place this file in the root directory of your repository. For monorepos, it should also be included in the root of each subproject. The AGENTS.md file should contain essential information such as:

  • Build and test commands.
  • Code style guidelines.
  • Testing instructions.
  • A high-level overview of the project.

My additional tip: don’t overfocus on making this file perfect on the first run. Start with a working version and improve it gradually — you can always update the file later.
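A small AGENTS.md sketch to get started with (commands and rules here are illustrative; fill in whatever actually applies to your repo):

```markdown
# AGENTS.md

## Commands
- Start: `npm start`
- All tests: `npm test`
- Lint: `npm run lint`

## Code style
- TypeScript strict mode; avoid `any`.
- Prefer small, pure functions.

## Testing
- Every bug fix ships with a regression test.
- Run the unit tests before proposing a change.

## Project overview
Short description of what the app does and where the core logic lives.
```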

Cursor rules

I understand that not everyone finds “Cursor rules” appealing, but I personally consider them quite valuable. Cursor rules are concise markdown files, ideally under 500 lines, located in the .cursor/rules folder. They provide agents with additional context about specific frameworks, technologies, or areas; you can also add good and bad examples for a given rule there. For example, a rule could mandate the use of functional components in React or enforce your internal design system. Another rule might define what should typically be covered in unit tests and which mocking strategies to employ. You should continuously refine these files. After extended sessions with an agent, prompt it to improve the rules, adding more context relevant to your objectives. The agent finds and applies these rules automatically, so you don’t need to attach them manually.
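As a sketch, a React rule file in .cursor/rules could look roughly like this (Cursor rule files support a frontmatter header with fields like `description` and `globs`; the rules and examples below are hypothetical):

```markdown
---
description: React component conventions
globs: src/**/*.tsx
alwaysApply: false
---

- Use functional components with hooks; never class components.
- Use components from our internal design system instead of raw HTML inputs.

Good:
  const SaveButton = () => <Button variant="primary">Save</Button>;

Bad:
  class SaveButton extends React.Component { /* ... */ }
```

The good/bad pairs are worth the extra lines: concrete examples steer the model more reliably than abstract instructions alone.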

Cognitive load rules

This section was added to my personal workflows after completing the 10xDevs course by Przeprogramowani (hello, Przemek and Marcin!). I realized that while I was focused on improving AI performance, I neglected the importance of managing my own cognitive load, overfocusing on helping machines instead. Dealing with detailed change tracking, debugging, or reviewing hundreds of lines of code is exhausting. You can significantly ease this burden by establishing rules for the AI agent on when it should pause and ask for your review. This approach allows you to understand changes and adjust the code more frequently. A practical example is the 3x3 approach: instruct the agent to implement a maximum of three steps from the plan, briefly summarize the work done, and then propose the next three steps. While Cursor’s built-in agents have recently started doing something similar, it still lacks predictability. To tailor this interaction to your needs, you should define a specific rule for it. These rule files can contain various other tips for collaboration. Simply consider your preferred way of communicating with the agent and ask the agent to follow those preferences in the rules.
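The 3x3 approach described above can be captured in a short rule file — this wording is just my own sketch of how such a rule might read:

```markdown
---
description: Pacing rules to reduce reviewer cognitive load (3x3)
alwaysApply: true
---

- Implement at most 3 steps from the plan, then STOP.
- Briefly summarize what was changed and why.
- Propose the next 3 steps and wait for confirmation before continuing.
- Never refactor unrelated code without asking first.
```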

MCPs

The Model Context Protocol (MCP) is an open-source standard designed to connect AI applications with external systems, often likened to the simplicity of a USB-C connection — a “one-fits-all” solution that eliminates the need for special converters. You can learn more here: https://modelcontextprotocol.io/docs/getting-started/intro. Its primary benefit is enabling AI applications to connect to data sources and perform tasks, essentially giving the Large Language Model (LLM) “hands.” This means the AI can move beyond just suggesting or proposing actions to actually carrying them out and learning from the results. This capability leads to:

  • Improved accuracy due to better context for the LLM.
  • Reduced development time.
  • Simplified workflows, especially when multiple tools are involved.

Getting started is very easy, typically requiring only a few lines in your mcp.json file.

While this area is rapidly evolving, a general approach is recommended. In my view, you will likely need MCP integration for:

  • Task management tools
  • Design tooling
  • Database tooling
  • Browser tooling

Your specific needs may vary — you might require fewer or different tools, especially if you specialize in a certain area. How to find good MCP candidates? Validate your existing working flows. Ask yourself: which tools do you use most often? What common tasks do you perform? What actions currently require you to stop the agent’s work, perform an action manually, and then paste the results back into the agent window to continue? These manual steps are excellent candidates for MCP integration. Given current trends, it’s highly likely that “there should be an MCP for that!”
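A bare-bones mcp.json sketch, showing the general shape of the configuration (the server names are labels you choose; `some-jira-mcp-server` is a placeholder, and you should check each vendor’s documentation for the actual package and arguments):

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "some-jira-mcp-server"]
    },
    "browser": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}
```

Once an entry is in place, the agent can discover the server’s tools on its own — no extra glue code needed on your side.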

Summary

This article is long, personal, and opinionated. However, I hope it helps you build a “framework” to maximize the benefits of using AI agents in your project. You absolutely don’t have to agree with everything here, so feel free to read through, think critically, and adjust these ideas to fit what your project actually needs.

“Plans are worthless, but planning is everything.”
~Dwight D. Eisenhower

As Eisenhower’s line suggests, take some time to prep before you jump into “vibe-coding” and just throw random changes into the repo. A little foresight goes a long way toward avoiding huge messes, bad changes, and flat-out wrong calls. Your code, your teammates, and future-you will seriously appreciate the effort.

P.S. If you got something good out of this, please think about sharing it or dropping a clap/comment below to help other people find it. Thanks!

P.S.2 All pictures used in the article were generated using ChatGPT.
