The first monorepo I worked in had 12 services, 4 shared libraries, 3 frontend apps, and a tooling directory that nobody understood. My first week, I spent four hours hunting for the right place to add a new shared utility. I added it in the wrong package. The CI build broke. A staff engineer rewrote my PR with a polite comment that said "monorepos take time to learn." That comment is technically true. It is also a graceful way of saying "you wasted a day because you did not understand the layout."
Six months later I run an 80,000-file monorepo as a solo founder. I add new packages, refactor across boundaries, and ship multi-package changes with confidence. The thing that changed was not my memory. It was my workflow. Claude Code reads the dependency graph, plans changes that respect package boundaries, and catches violations before CI does. Here is the system.
Why Monorepos Are Hard
A regular repo has one source tree. You can hold its shape in your head. You know roughly where things live. You can grep your way to anything important.
A monorepo has many source trees that share a build system, a dependency graph, and a set of conventions that vary by package. The cognitive load is not linear. A monorepo with 50 packages is not 50 times harder than a single package. It is more like 500 times harder, because every change has to consider which packages depend on what, what the build implications are, and which conventions apply where.
The classic monorepo failure modes are familiar to anyone who has worked in one:
- Adding code in the wrong package
- Importing across boundaries that should be internal
- Breaking the build of a package you did not touch
- Missing a follow-up change in a downstream package
- Triggering 40 minutes of CI because you touched a root config file
Each of these is solvable with discipline. The problem is that discipline is expensive. You have to remember which packages depend on which, which boundaries are enforced and which are convention only, and which root files trigger global rebuilds. Most engineers do not remember. They guess and they apologize.
Monorepos do not ask for talent. They ask for cognitive bandwidth most engineers do not have. AI is the bandwidth multiplier.
The Map Phase
Every monorepo workflow starts with a map. Before I touch any code, I generate a dependency map of the repo and store it in a markdown file that becomes context for every subsequent change.
The map skill walks the package manifests, builds a dependency graph, and produces a markdown summary with the following sections:
- Package list with one-line descriptions
- Dependency layers showing which packages depend on which
- Boundary rules extracted from package configs and import patterns
- Hotspots showing packages that change most often
- Stable cores showing packages that almost never change
The output is roughly 100 lines for a 50-package monorepo. I commit this map to a docs folder and regenerate it weekly. Every Claude Code session starts by reading the map.
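The walk itself is simple enough to sketch. Here is a minimal, hypothetical Python version of the graph-and-layers part, assuming a layout where each package keeps a `package.json` manifest under `packages/`; the boundary-rules, hotspot, and stable-core sections are omitted:

```python
import json
from pathlib import Path

def load_manifests(root: str) -> dict[str, dict]:
    """Read every packages/*/package.json under the repo root."""
    return {
        m["name"]: m
        for m in (json.loads(p.read_text())
                  for p in Path(root).glob("packages/*/package.json"))
    }

def build_dep_map(manifests: dict[str, dict]) -> str:
    """Render a markdown dependency map: package list, then layers."""
    internal = set(manifests)
    # Keep only edges to packages that live in this repo.
    deps = {
        name: sorted(d for d in m.get("dependencies", {}) if d in internal)
        for name, m in manifests.items()
    }
    # Layer 0 is the leaf packages; layer N depends only on earlier layers.
    layers: list[list[str]] = []
    placed: set[str] = set()
    while len(placed) < len(deps):
        layer = sorted(n for n in deps if n not in placed
                       and all(d in placed for d in deps[n]))
        if not layer:
            raise ValueError("dependency cycle detected")
        placed.update(layer)
        layers.append(layer)

    lines = ["# Dependency map", "", "## Packages"]
    for name, m in sorted(manifests.items()):
        lines.append(f"- `{name}`: {m.get('description', '(no description)')}")
    lines += ["", "## Layers (leaf first)"]
    for i, layer in enumerate(layers):
        lines.append(f"- Layer {i}: " + ", ".join(f"`{n}`" for n in layer))
    return "\n".join(lines)
```

The layering is the part Claude Code leans on most: it makes "who depends on whom" readable at a glance instead of requiring a graph traversal per question.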
The map is the single highest leverage artifact in monorepo work. With the map in context, Claude Code can reason about cross-package implications without me having to explain them every time. Without the map, every session starts from zero and Claude Code makes the same mistakes a new hire would make.
The Locate Skill
Once I have a map, the next high-frequency task is locating where a change should go. New utility, new component, new service - which package owns it?
The locate skill takes a description of the change and the dependency map, and recommends the right package with reasoning. The reasoning matters. Anyone can pick a random package. The locate skill explains why a particular package is right, which boundary rules it respects, and which other packages would need to be updated as a result.
A typical locate query looks like this:
"I need to add a function that parses a date string with timezone support. Where does it go?"
The skill might respond:
"It belongs in `packages/utils-date`. Reasoning: this package already exports date helpers, has timezone support via `luxon`, and is depended on by 12 packages, including the three places that currently parse dates inline. Adding it here lets you delete the inline parsers in those three places. Boundary rules respected: `utils-date` is a leaf package with no internal dependencies, so adding logic here cannot create cycles."
Compare that to a guess. The locate skill saves me from a wrong placement that would cost a follow-up PR to fix.
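Under the hood, a locate query is just the dependency map and the change description assembled into a prompt for Claude Code. A hypothetical sketch, where the `docs/dependency-map.md` path and the prompt wording are my assumptions rather than the exact skill file:

```python
from pathlib import Path

LOCATE_PROMPT = """\
You are a monorepo navigator. Using the dependency map below, recommend
the single package where this change belongs. Explain your reasoning:
why this package, which boundary rules it respects, and which other
packages would need follow-up changes.

## Dependency map
{dep_map}

## Proposed change
{change}
"""

def build_locate_prompt(change: str,
                        map_path: str = "docs/dependency-map.md") -> str:
    # The map file is the weekly-regenerated artifact from the map skill;
    # the docs/ location is an assumed convention.
    dep_map = Path(map_path).read_text()
    return LOCATE_PROMPT.format(dep_map=dep_map, change=change)
```

The important design choice is that the map is injected verbatim: the model reasons over the committed artifact, not over whatever it can infer from the working tree.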
The Boundary Check
Monorepo boundaries are usually documented in package configs but enforced unevenly. Some boundaries are hard, enforced at build time. Some are soft, enforced by code review. Some are conventions that everyone violates.
The boundary check skill takes a diff and verifies that every import and every cross-package change respects the boundary rules. The skill flags three categories:
- Hard violations - imports that would break the build
- Soft violations - imports that violate conventions but build fine
- Boundary stretching - changes that are technically allowed but indicate a design problem
I run the boundary check on every PR before pushing. The skill catches about one violation per week, almost always a soft violation that would have made it through CI but generated a comment in code review. Catching it before review saves a day of round-trip.
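The mechanical part of the check, classifying cross-package imports pulled out of a diff, can be sketched like this. The two rules shown (deep imports into another package's `src/` are hard violations; undeclared dependencies are soft) are illustrative assumptions, and the third category, boundary stretching, is deliberately absent: that one needs judgment, which is where Claude Code rather than a rule does the work.

```python
from dataclasses import dataclass

@dataclass
class Violation:
    package: str
    imported: str
    severity: str  # "hard" or "soft"
    reason: str

def check_boundaries(imports: list[tuple[str, str]],
                     allowed_deps: dict[str, set[str]]) -> list[Violation]:
    """Classify cross-package imports extracted from a diff.

    `imports` holds (importing_package, import_path) pairs;
    `allowed_deps` maps a package to the internal packages it may
    depend on, derived from the dependency map.
    """
    violations = []
    for pkg, path in imports:
        target, _, rest = path.partition("/")
        if rest.startswith("src"):
            # Reaching past a package's public entry point breaks the build
            # under strict exports, so treat it as a hard violation.
            violations.append(Violation(pkg, path, "hard",
                f"deep import into {target}'s internals"))
        elif target not in allowed_deps.get(pkg, set()):
            # Builds fine today, but the dependency is undeclared.
            violations.append(Violation(pkg, path, "soft",
                f"{pkg} does not declare a dependency on {target}"))
    return violations
```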
The Cross-Package Refactor
The hardest monorepo task is refactoring across packages. Renaming a function in a shared library means updating every package that uses it. Splitting a package into two means updating every importer. Moving a utility from one package to another means coordinating the move with all dependents.
Without tooling, cross-package refactors take days and usually leave one or two packages broken. With Claude Code and the dependency map, the same refactor takes hours.
The cross-package refactor skill takes a description of the refactor, the dependency map, and the target packages. It produces:
- A list of every file that needs to change
- The order of the changes (leaf packages first, then dependents)
- The exact diff for each file
- A list of packages that need to be rebuilt and tested
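The ordering step, leaf packages first and then their dependents, is a topological sort over the touched subgraph. A minimal sketch using Python's stdlib `graphlib`:

```python
from graphlib import TopologicalSorter

def refactor_order(deps: dict[str, set[str]],
                   touched: set[str]) -> list[str]:
    """Order touched packages so each package's internal dependencies
    are updated before the package itself (leaf packages first).

    `deps` maps package -> internal packages it depends on, taken
    from the dependency map.
    """
    # Restrict the graph to the packages this refactor touches.
    sub = {p: deps.get(p, set()) & touched for p in touched}
    return list(TopologicalSorter(sub).static_order())
```

Running the stages in this order is what makes "test after each package" meaningful: by the time a dependent is touched, everything underneath it is already green.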
I run the refactor in stages. The skill produces the diffs. I review and apply them one package at a time. After each package I run its tests. If they pass, I move on. If they fail, I diagnose and fix before continuing.
The staged approach is critical. Trying to land a 30-package refactor as one PR is how you end up with three weeks of merge conflicts. Landing it package by package keeps the diffs small and reviewable.
The CI Cost Skill
Every monorepo has a CI cost problem. Touching a root config file triggers a rebuild of every package. Touching a leaf package only rebuilds that one. Most engineers do not know which files trigger which rebuilds, so they make conservative assumptions and run full builds when they do not need to.
The CI cost skill takes a diff and predicts which packages CI will rebuild. It uses the dependency graph plus the CI config to produce an estimated build time and a list of affected packages. If the cost looks wrong, the skill suggests how to scope the change to reduce it.
I run the CI cost skill before every push. About once a week it catches a change that would have triggered a 40-minute build that I could have avoided by scoping the diff differently. Over the course of a year that adds up to dozens of hours saved.
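The prediction itself is a reverse-dependency closure seeded by the packages that own the changed files, plus the conservative rule that an unowned file (a root config) rebuilds everything. A hypothetical sketch, leaving out the per-package build-time estimates, which would just sum known timings over the affected set:

```python
def affected_packages(changed_files: list[str],
                      file_owner: dict[str, str],
                      reverse_deps: dict[str, set[str]],
                      all_packages: set[str]) -> set[str]:
    """Predict which packages CI will rebuild for a diff.

    `file_owner` maps a path prefix (e.g. "packages/utils-cache/") to
    its package; `reverse_deps` maps package -> direct dependents.
    """
    seeds = set()
    for f in changed_files:
        owner = next((pkg for prefix, pkg in file_owner.items()
                      if f.startswith(prefix)), None)
        if owner is None:
            return set(all_packages)  # root config file: full rebuild
        seeds.add(owner)
    # Walk dependents transitively: anything downstream rebuilds too.
    affected, stack = set(seeds), list(seeds)
    while stack:
        for dep in reverse_deps.get(stack.pop(), set()):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected
```

The early return is the interesting part: it encodes exactly the "touch a root config, trigger everything" failure mode, which makes the 40-minute surprises predictable before the push instead of after.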
The Skill Stack in Action
A typical monorepo task runs through the skills like this. Imagine I want to add caching to a database query helper.
- Map - I read the latest dependency map (already in context from last week)
- Locate - I ask where caching logic belongs. The skill recommends `packages/db-cache` (existing package) or `packages/utils-cache` (also existing). It explains why `db-cache` is wrong (it is database-specific) and `utils-cache` is right (it is generic and already used by 8 packages).
- Implement - I write the caching logic in `utils-cache`, with Claude Code generating the initial implementation against the package conventions.
- Boundary check - I run the boundary check on the diff. It passes.
- CI cost - I check the build cost. About 12 packages will rebuild, total estimated CI time 8 minutes.
- Push - I push and let CI confirm.
Total time from idea to push: about 90 minutes. Without the skills, the same task would have taken half a day, with at least one wrong-package mistake along the way.
The compound effect of small skills is what makes monorepos tractable. Each skill is small. The stack is unstoppable.
What I Got Wrong Early
Three mistakes I made in my first month with this workflow that cost me real time.
First, I tried to put too much logic into the locate skill. I wanted the skill to answer queries like "what should this whole feature look like?" The skill is good at locating one piece. It is bad at designing whole features. Designing features is a planning task that needs human judgment first and Claude Code as a sounding board second.
Second, I forgot to regenerate the dependency map. After three weeks I was using a stale map that was missing four new packages. Claude Code kept recommending the wrong packages because the map was wrong. Now the map regenerates as a weekly cron task and gets committed automatically.
Third, I trusted the boundary check too much. The skill catches obvious violations but not subtle architectural drift. I had a package slowly accumulating responsibilities that did not belong, and the boundary check rated it green every time because every individual change was small. The lesson: skills catch local problems, humans catch global problems. Both are needed.
FAQ
How big does a repo need to be before this workflow is worth it?
Around 10 packages. Below that you can hold the structure in your head. Above that the cognitive load starts to dominate.
What about Bazel monorepos?
Same workflow, different tooling layer. Replace package manifests with BUILD files in the map skill. Everything else translates.
How do I handle multi-language monorepos?
The map skill needs language-aware parsers for each language. Most modern monorepos have one or two dominant languages and a long tail. Cover the dominant languages and let the long tail be manual.
Does this work for the Linux kernel?
Probably not. The Linux kernel has its own contribution model and conventions that do not map cleanly to this workflow. The workflow is designed for application monorepos, not OS-scale codebases.
The Bigger Picture
Monorepos are how most large engineering organizations actually build software. The size and complexity make them inaccessible to anyone who is not already inside. New hires take months to become productive. External contributors are nearly impossible to onboard. The cognitive cost is real, and it filters who gets to participate in the work.
Claude Code does not eliminate the cost. It distributes it. The dependency map captures what would otherwise live only in senior engineers' heads. The locate skill turns tribal knowledge into a documented decision process. The boundary check turns informal rules into automated checks. The result is that newer engineers can ship monorepo changes that look like they came from senior engineers, because the senior engineering knowledge is encoded in the skills.
This is the deeper pattern. AI is not replacing engineers. It is replacing the unwritten manuals that engineers used to spend years internalizing. The teams that win are the ones that document their conventions as skills, share them across the team, and use the freed-up bandwidth to do work that was previously impossible.
If you want to see the actual skill files I use, my full Claude Code setup is documented at nextools.hashnode.dev. The map skill, the locate skill, the boundary check, and the CI cost skill are all there. Steal them, adapt them to your monorepo, and ship more.
The cost of monorepo work is collapsing. The teams that act on this first will compound the advantage. Start with the map. Build out from there.