Nithin Bharadwaj
**7 Essential Monorepo Patterns That Transformed Our Chaotic Multi-Repository Nightmare into Controlled Development Workflow**


I remember when our codebase started to sprawl. We had a main application, a few micro-frontends, several shared libraries, and a separate admin panel. Each lived in its own repository. Syncing changes across them felt like herding cats. A simple update to a shared button component meant publishing a package, updating dependencies in five different places, and hoping nothing broke in between. We spent more time managing dependencies and version numbers than building features. That's when we moved everything into a single repository—a monorepo. It wasn't a magic bullet, but with the right tooling patterns, it transformed our workflow from chaotic to controlled. Let me walk you through the practical patterns that made it work.

The first and most critical pattern is setting up a proper workspace configuration. Think of your monorepo as an apartment building. The building itself is the repository, and each apartment is a separate project—an app or a package. The building manager needs to know which units exist and what rules they share. In code, this is defined in your root package.json. Using a workspace-enabled package manager like pnpm, Yarn, or npm, you declare where your projects live. This tells the package manager, "Hey, look inside these folders for other package.json files." It creates a map of your codebase.

Here’s what that looks like. In the root of your repository, your package.json might be very simple.

```json
{
  "name": "company-monorepo",
  "private": true,
  "scripts": {
    "build": "turbo run build"
  },
  "workspaces": ["apps/*", "packages/*"]
}
```

This small piece of configuration says all applications live in an apps folder and all shared packages in a packages folder. This structure is a convention, not a rule, but it’s a helpful one. The "private": true is important—it prevents you from accidentally publishing the whole monorepo to a public registry. One caveat: the workspaces field is how npm and Yarn discover projects; pnpm reads the same globs from a separate pnpm-workspace.yaml file instead. Once this is set up, a single install from the root resolves dependencies for every project inside those folders. It’s smart; the package manager links projects that depend on each other internally, so you don’t need to publish a package to test a change locally. You just update the shared code, and the app using it sees the change immediately.
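If you standardize on pnpm, the workspace globs go in a pnpm-workspace.yaml file next to the root package.json; a minimal version mirrors the configuration above:

```yaml
# pnpm-workspace.yaml: tells pnpm where to find workspace projects
packages:
  - "apps/*"
  - "packages/*"
```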

Once your projects are defined, you need a way to run tasks across them efficiently. You don’t want to manually cd into each folder and run npm run build. This is where task orchestration comes in. Tools like Turborepo and Nx are built for this. They understand the dependencies between your projects. If app-web depends on shared-components, they know to build the components library first. Even better, they cache the results. If you build shared-components and nothing in it has changed, the tool can skip building it again and just use the cached output from last time. This saves minutes, even hours, in large codebases.

You configure this behavior in a pipeline. In Turborepo, this is done in a turbo.json file at the root.

```json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", "build/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": []
    },
    "lint": {
      "outputs": []
    }
  }
}
```

Let me break this down. The pipeline defines tasks you might run, like build, test, and lint. For the build task, "dependsOn": ["^build"] is a special instruction: the caret means "before building this project, run the build task in all of its workspace dependencies." The outputs tell Turborepo which files the task produces, so it knows what to cache and restore. For test, the dependsOn is ["build"] without a caret, meaning a project's tests wait only for that project's own build, which in turn pulls in its dependencies' builds through the caret rule. This graph-based execution is the engine of a fast monorepo.
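The caret rule amounts to a topological sort of the project graph. Here’s a toy sketch of the idea in plain JavaScript (this is not Turborepo’s actual implementation, and the project names are made up):

```javascript
// Toy model of "^build": a project is built only after all of its
// workspace dependencies have been built (depth-first topological sort).
function buildOrder(graph) {
  const order = [];
  const done = new Set();
  const inProgress = new Set();

  function visit(project) {
    if (done.has(project)) return;
    if (inProgress.has(project)) {
      throw new Error(`Circular dependency involving ${project}`);
    }
    inProgress.add(project);
    for (const dep of graph[project] || []) visit(dep); // dependencies first
    inProgress.delete(project);
    done.add(project);
    order.push(project);
  }

  Object.keys(graph).forEach(visit);
  return order;
}

// app-web depends on shared-components, which depends on shared-utils
const projects = {
  'app-web': ['shared-components'],
  'shared-components': ['shared-utils'],
  'shared-utils': [],
};

console.log(buildOrder(projects));
// shared-utils is built first, then shared-components, then app-web
```

A real tool layers caching on top of this: before running a node in the graph, it hashes the project’s inputs and skips the work if the hash matches a previous run.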

Managing dependencies in a monorepo has its own challenges. The goal is consistency and efficiency. You want to avoid having ten different versions of React floating around. Shared dependency management is the pattern that addresses this. Workspace-aware package managers use a technique called hoisting. Common dependencies are installed once in a root node_modules folder. This saves disk space and ensures everyone is, for the most part, on the same page.

The command line becomes your control center. Instead of navigating to a specific project, you use filters.

```bash
# Install all dependencies for all projects
pnpm install

# Add React to only the 'app-web' project
pnpm add react --filter=app-web

# Run the build script only for the 'shared-components' package
pnpm --filter=shared-components build
```

This is incredibly powerful. It means you can update a dependency in one place, or run a command across a subset of projects, without tedious manual work. It also prevents "works on my machine" problems because the dependency tree is declared and managed centrally.

As your monorepo grows, creating new projects or adding standard components should be fast and consistent. You don’t want every team inventing their own folder structure. Code generation automates this. Tools like Nx and Plop allow you to create blueprints, or generators. Need a new shared utility library? Run a command, and it creates the folder, the package.json, the TypeScript config, the test setup, and a basic README—all following your company's best practices.

Here’s a simplified look at what a generator script might do.

```typescript
// A generator for a new shared library, using the Nx devkit API
import {
  Tree,
  generateFiles,
  addProjectConfiguration,
  installPackagesTask,
} from '@nx/devkit';
import * as path from 'path';

export default function (tree: Tree, schema: { name: string }) {
  const libName = schema.name;

  // Copy template files from a predefined folder
  generateFiles(tree, path.join(__dirname, 'template-files'), `packages/${libName}`, {
    libName,
    timestamp: new Date().toISOString(),
  });

  // Register the new project in the workspace configuration
  addProjectConfiguration(tree, libName, {
    root: `packages/${libName}`,
    projectType: 'library',
    sourceRoot: `packages/${libName}/src`,
  });

  // Return a callback that runs after the files are written, to install dependencies
  return () => {
    console.log(`Library ${libName} created successfully.`);
    installPackagesTask(tree);
  };
}
```

You would run this with a command like nx g lib data-utils. In seconds, you have a perfectly configured new package. This ensures uniformity and saves developers from copying and pasting from old projects, which often copies old mistakes as well.

In a big monorepo, it’s easy for dependencies to become a tangled mess. An app might import directly from a deeply nested utility, or two packages might depend on each other, creating a loop that can break your build. Cross-project reference validation is a guardrail against this. It lets you define rules about what can depend on what. You can say, "Apps can only import from shared packages, not from other apps," or "This core package cannot import anything from the UI layer."

Nx calls these "module boundaries." You assign tags to each project (in its project.json) and then define constraints that the @nx/enforce-module-boundaries ESLint rule enforces.

```json
// .eslintrc.json (root), enforced by the @nx/enforce-module-boundaries rule
{
  "overrides": [
    {
      "files": ["*.ts", "*.tsx", "*.js", "*.jsx"],
      "rules": {
        "@nx/enforce-module-boundaries": [
          "error",
          {
            "depConstraints": [
              {
                "sourceTag": "scope:app",
                "onlyDependOnLibsWithTags": ["scope:shared", "type:ui", "type:data"]
              },
              {
                "sourceTag": "type:data",
                "notDependOnLibsWithTags": ["type:ui"]
              }
            ]
          }
        ]
      }
    }
  ]
}
```

In this example, anything tagged scope:app (like your applications) can only depend on libraries tagged as scope:shared, type:ui, or type:data. Furthermore, libraries tagged type:data (like database clients) are forbidden from depending on type:ui (like React components). This enforces a clean architecture. If a developer tries to import a button component into a database client, the tool will throw an error during a lint or build step. It turns architectural guidelines into enforceable rules.
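The tags themselves live in each project's own configuration. A minimal sketch, assuming a standard Nx layout with one project.json per project (the project names are illustrative):

```json
// apps/app-web/project.json
{ "name": "app-web", "tags": ["scope:app"] }

// packages/db-client/project.json
{ "name": "db-client", "tags": ["scope:shared", "type:data"] }
```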

All this development work leads to shipping code. A unified CI/CD pipeline is what makes the monorepo sustainable at scale. The key insight is that when someone changes a file, you don’t need to test and build every single project in the repository. You only need to work on the project that changed and any project that depends on it. Your CI system needs to understand the workspace graph.

A GitHub Actions workflow can leverage this. The --filter flag, combined with ...[origin/main], is a common way to say "run this only for projects changed since the main branch."

```yaml
name: Monorepo CI
on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # Full history is needed to compare against main

      - name: Setup pnpm
        uses: pnpm/action-setup@v2

      - name: Install dependencies
        run: pnpm install

      - name: Build Affected Projects
        run: pnpm --filter="...[origin/main]" run build

      - name: Test Affected Projects
        run: pnpm --filter="...[origin/main]" run test
```

Note that pnpm's --filter flag goes before the run command; anything placed after the script name is passed to the script itself.

This workflow is efficient. If a developer opens a pull request that only touches a documentation file in one package, the build and test steps for all other packages will be skipped, saving compute time and getting feedback faster. For deployment, you can extend this further: only deploy the apps whose source code or dependencies have changed.
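A sketch of that deploy step, assuming each deployable app defines its own deploy script (the script name here is hypothetical):

```bash
# Run the deploy script for every project affected since main;
# --if-present skips projects that don't define a deploy script
pnpm --filter="...[origin/main]" run --if-present deploy
```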

The final pattern is what holds the developer experience together: shared tooling configuration. Consistency in code style, formatting, and type-checking is non-negotiable in a large team. You achieve this by centralizing configurations for ESLint, Prettier, TypeScript, and Jest. Each individual project then extends from these shared configs.

Take TypeScript as an example. You create a base tsconfig.json in a shared location, perhaps in a packages/tsconfig folder.

```json
// packages/tsconfig/base.json
{
  "$schema": "https://json.schemastore.org/tsconfig",
  "display": "Default",
  "compilerOptions": {
    "composite": false,
    "declaration": true,
    "declarationMap": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "inlineSources": false,
    "isolatedModules": true,
    "moduleResolution": "node",
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "preserveWatchOutput": true,
    "skipLibCheck": true,
    "strict": true
  },
  "exclude": ["node_modules", "dist"]
}
```

Then, in your application, your tsconfig.json becomes very simple.

```json
// apps/web/tsconfig.json
{
  "extends": "@my-repo/tsconfig/base.json",
  "compilerOptions": {
    "jsx": "react-jsx",
    "lib": ["dom", "dom.iterable", "esnext"],
    "module": "esnext",
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "references": [{ "path": "../shared-components" }]
}
```

The app-specific config only needs to define what’s different from the base, like enabling JSX for React or setting the output directory. The references field is a powerful TypeScript feature for monorepos; it tells the compiler about the local package dependencies, enabling correct cross-project type checking. You apply this same pattern to ESLint and Prettier. A single .prettierrc file in the root ensures every file in the repository is formatted identically. Your IDE, reading from the root, will apply the same rules whether you’re editing an app or a utility package.
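For example, a root .prettierrc can be as small as this (the specific values are illustrative, not prescriptive):

```json
{
  "singleQuote": true,
  "trailingComma": "all",
  "printWidth": 100
}
```

Because Prettier resolves its configuration by searching upward from each file, every project in the workspace picks up these settings automatically.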

Putting all these patterns together creates a development environment that is both scalable and pleasant to work in. You start with a clear workspace map. Task orchestration makes commands fast and reliable. Dependency management is centralized and efficient. Code generation enforces standards from the moment a project is born. Validation rules keep your architecture clean. CI/CD pipelines are smart and cost-effective. Shared configurations ensure consistent code quality. It’s a system where individual teams can move quickly without breaking the whole. The monorepo stops being a giant blob of code and becomes a well-organized city, with clear roads, shared utilities, and sensible zoning laws. It’s not about putting all your code in one place; it’s about using the right tools to manage the complexity that comes with it.
