
TypeScript at Scale: Why Your tsc Takes 90 Seconds and How to Fix It

The TypeScript codebase I inherited last year had a clean build time of 94 seconds. Incremental builds were 12 seconds on a good day. The editor would freeze for two or three seconds every time you hovered over a Zod schema. Nobody wrote new code without first opening their second monitor to scroll Twitter while the language server caught up.

It is now 11 seconds for a clean build, sub-second incremental, and the editor stays responsive. We did not move to Project Corsa. We did not switch to Bun. We did not split the repo. We deleted three patterns that were generating millions of redundant type instantiations and tightened a few tsconfig settings. The work took about a week.

Most TypeScript performance problems at scale are not "TypeScript is slow." They are "we are asking TypeScript to do something quadratic and it is doing it." This post is the diagnostic playbook for figuring out which thing your codebase is doing.


The First Question: Where Is the Time Going

Before tuning anything, get real numbers. The TypeScript compiler ships with two flags that turn the diagnostic question from "feels slow" into "spends 47% of its time in type checking step X."

npx tsc --extendedDiagnostics

The output gives you a breakdown: parse time, bind time, check time, emit time, total memory usage. If "Check time" dominates, your problem is in the type system. If "I/O Read time" or "Parse time" dominates, your problem is the size of what you are loading. These are very different problems with very different fixes.
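The output shape looks roughly like this (abridged; the numbers are illustrative):

Files:              1378
Types:            412883
Instantiations:  2914772
Memory used:     498210K
I/O Read time:     0.21s
Parse time:        1.35s
Bind time:         0.48s
Check time:        6.97s
Emit time:         0.00s
Total time:        9.42s

The Instantiations count is worth watching alongside the times: tens of millions on a mid-sized codebase usually means one of the patterns described later in this post.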

The next flag is more targeted:

npx tsc --generateTrace ./trace

This drops a Chrome trace file into ./trace. Open it in chrome://tracing or https://ui.perfetto.dev. You get a flame graph of every file the compiler checked, how long each took, and what types it instantiated.

The pattern to look for is single files that take seconds. Healthy code generates a flame graph where most files complete in under 100ms and the long tail tops out somewhere around 500ms. A file that takes 5 seconds is a file with a type the compiler is struggling with. A file that takes 30 seconds is the file generating most of your build pain, and finding it is most of the work.

@typescript/analyze-trace is the tool that reads the trace and tells you what is hot:

npx @typescript/analyze-trace ./trace

It surfaces the worst-offending files, the deepest type instantiations, and the most expensive type aliases. The output is sometimes opaque, but the file names it gives you are almost always the right places to look.


The Patterns That Actually Cost You

In every slow codebase I have looked at, the cost concentrates in a small number of patterns. The patterns are recognizable once you know what to look for.

Deeply Nested Generic Inference

This is the most common offender, and it almost always lives in code that wraps a library with a generic helper.

function withRetry<T extends (...args: any[]) => Promise<any>>(
  fn: T,
  options: RetryOptions
): (...args: Parameters<T>) => Promise<Awaited<ReturnType<T>>> {
  // ...
}

const fetchUser = withRetry(api.users.fetch, { retries: 3 });

Looks fine. The cost shows up when you wrap something whose signature is itself heavily generic. If api.users.fetch returns a Drizzle query result, or a tRPC procedure, or a Zod-inferred type, the compiler has to expand all of those generics every time the wrapper is instantiated. If withRetry is used in 200 places across your codebase, the compiler does that work 200 times in every type check.

The fix is rarely to delete the wrapper. It is to break the chain of inference at strategic points. Instead of inferring Awaited<ReturnType<T>> deep inside the type, accept a simpler input type and let the user spell it out at the call site, or use a type assertion to terminate the inference.
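A minimal sketch of the second option, with hypothetical api, User, and RetryOptions shapes standing in for the real ones. The assertion names the result type once, so the compiler stops re-expanding the wrapped library's generics at every use of fetchUser:

// Hypothetical shapes standing in for the real API surface.
type User = { id: string; name: string };
type RetryOptions = { retries: number };
declare const api: { users: { fetch: (id: string) => Promise<User> } };

declare function withRetry<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  options: RetryOptions
): (...args: A) => Promise<R>;

// The assertion terminates inference: callers see a plain function type,
// not a tower of Parameters<T> / Awaited<ReturnType<T>> expansions.
const fetchUser = withRetry(api.users.fetch, { retries: 3 }) as (
  id: string
) => Promise<User>;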

Conditional Type Recursion in Hot Paths

type DeepReadonly<T> = {
  readonly [K in keyof T]: T[K] extends object
    ? T[K] extends Function
      ? T[K]
      : DeepReadonly<T[K]>
    : T[K];
};

A DeepReadonly over a small interface is fine. A DeepReadonly applied to your top-level state type, which contains your database row types, which reference your domain types, which contain unions of all your enums, is a recursive type explosion. The compiler will work through it, sometimes. Sometimes it gives up and emits any, silently. Either way it is slow.

The default position for recursive utility types should be: do not. If you find yourself reaching for DeepPartial, DeepReadonly, DeepKeys, or anything that walks an arbitrary tree, ask whether you actually need the type to be deep. Most of the time you need it to be one or two levels deep, which is a much cheaper type to write explicitly.
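For the two-levels-deep case, a non-recursive version written explicitly gives the compiler nothing to expand. Readonly is a homomorphic mapped type, so primitive property types pass through unchanged:

// Two levels deep, no recursion: cheap to check, predictable to read.
type ShallowReadonly<T> = { readonly [K in keyof T]: Readonly<T[K]> };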

When you do need recursion, cap the depth:

// Numeric depth helper; extend the tuple for caps deeper than 4.
type Decrement<N extends number> = [never, 0, 1, 2, 3][N];

type DeepReadonly<T, Depth extends number = 4> = Depth extends 0
  ? T
  : { readonly [K in keyof T]: T[K] extends object ? DeepReadonly<T[K], Decrement<Depth>> : T[K] };

This gives you the safety of a finite recursion at the cost of writing a numeric depth helper. The compiler can always finish.

Massive Discriminated Unions

A union with eight variants is fast. A union with 200 variants generated from a Zod schema or a code generator is slow. Every time you narrow the union with a discriminator, the compiler has to consider every variant and prove which ones are eliminated.

type Event =
  | { type: 'user.created'; payload: UserCreated }
  | { type: 'user.updated'; payload: UserUpdated }
  // ... 198 more
  ;

function handle(event: Event) {
  switch (event.type) {
    case 'user.created': return handleUserCreated(event.payload);
    // ...
  }
}

The narrowing inside the switch is where time goes. The compiler proves at each case statement which variants of the union are still possible. With 200 variants, that proof gets expensive. If handle is called from many places, and each call site re-checks the union, you can pay this cost thousands of times in a single type check.

Two fixes that usually work: split the union at module boundaries so any single function only deals with a subset, or convert the union into a record type keyed by the discriminator and look up the handler dynamically. The latter sacrifices exhaustiveness checking, which you can get back with a satisfies clause:

const handlers = {
  'user.created': handleUserCreated,
  'user.updated': handleUserUpdated,
  // ...
} satisfies Record<Event['type'], (payload: any) => void>;

function handle(event: Event) {
  // The dynamic lookup loses the discriminant correlation, so the call
  // needs a cast; exhaustiveness is already guaranteed by the satisfies.
  return (handlers[event.type] as (payload: Event['payload']) => void)(event.payload);
}

The compiler still verifies completeness on the satisfies, but the lookup at the call site is constant-time, not a union narrowing.

as const Object Literals With Heavy Inference

export const routes = {
  users: {
    list: '/users',
    detail: '/users/:id',
    create: '/users',
    update: '/users/:id',
  },
  posts: {
    // ...
  },
} as const;

type RouteKey = `${keyof typeof routes}.${keyof typeof routes[keyof typeof routes]}`;

The as const keeps the literal types, which is what you want. The template literal type at the bottom is what is expensive. It generates the cartesian product of all top-level keys and all nested keys, and TypeScript materializes the full set during type checking. For a route table with 50 sections and 5 routes each, you have a 250-element string union that has to be computed every time something references RouteKey.

The fix is to keep the inferred type but stop computing the joined string union at the type level. If you need to enumerate all routes, generate the list at runtime from the object and accept that you pay a tiny startup cost. If you need it at compile time for autocompletion, narrow the scope of the type so it only covers one section at a time.
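A sketch of the runtime version, assuming the routes object above. The joined keys become plain strings, so nothing is materialized in the type system:

// Flatten the table once at startup; a 250-entry array is trivial at runtime.
const routeKeys: string[] = Object.entries(routes).flatMap(([section, entries]) =>
  Object.keys(entries).map((name) => `${section}.${name}`)
);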

Library-Caused Slowdown

Sometimes the slow file is not your code. It is node_modules/some-library/dist/index.d.ts. The trace will show this clearly. Common offenders historically have been older versions of typed-form libraries, validation libraries with very expressive types, and ORMs that try to type your entire schema.

The trace will tell you which library. The fix is usually one of: upgrade to a newer version that has fixed the issue, swap the library, or wrap the library at a thin module boundary so the heavy types do not leak into your call sites. The wrapping pattern works better than people expect: define a narrower internal type for the bits of the library you actually use, and import only that internal type from the rest of the codebase. The compiler stops re-checking the library's types every time you reference your internal type.
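The shape of that boundary, with hypothetical names throughout (heavy-orm stands in for whichever library the trace implicated):

import { ormClient } from 'heavy-orm'; // hypothetical slow-typed library

// Hand-written type for the fields we actually use. The rest of the codebase
// imports this, so referencing it never re-expands the library's generics.
export interface User {
  id: string;
  email: string;
}

export async function getUser(id: string): Promise<User | null> {
  const row = await ormClient.users.findById(id); // hypothetical call
  return row ? { id: row.id, email: row.email } : null;
}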


Project References, the Right Way

tsconfig project references are the thing everyone reaches for and rarely sets up correctly.

The promise of project references is that you split your codebase into smaller projects, each with its own tsconfig.json, and the compiler builds each project once and reuses the output. Incremental builds are dramatically faster because changing a leaf project does not invalidate the type checking of unaffected projects.

The catch is that project references require composite mode, which requires every referenced project to emit declaration files, which means every referenced project needs a real build output. This is fine for libraries. It is awkward for application code that historically just relied on tsc --noEmit for type checking and a separate bundler for output.

The setup that has worked for me on a Next.js + workspace setup:

apps/
  web/tsconfig.json
packages/
  domain/tsconfig.json
  database/tsconfig.json
  ui/tsconfig.json
tsconfig.base.json
tsconfig.json

The root tsconfig.json references each project:

{
  "files": [],
  "references": [
    { "path": "./packages/domain" },
    { "path": "./packages/database" },
    { "path": "./packages/ui" },
    { "path": "./apps/web" }
  ]
}

Each package has composite: true, declaration: true, and produces a .tsbuildinfo file. The first build is roughly the same speed as before. The second build is dramatically faster because unchanged packages are skipped entirely.
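A minimal per-package config, assuming the shared compiler options live in tsconfig.base.json (paths are illustrative):

{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "composite": true,
    "declaration": true,
    "outDir": "dist",
    "tsBuildInfoFile": "dist/.tsbuildinfo"
  },
  "include": ["src"]
}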

The mistake to avoid: do not split into projects until you have profiled and have a real reason. A small codebase with project references is slower than the same codebase without, because the overhead of the build orchestration outweighs the savings. The crossover point is usually somewhere around 50,000 lines of TypeScript or three to four logical domains that change independently.

For Astro, SvelteKit, and Next.js apps specifically, the project reference setup interacts with the framework's own type generation. Read the framework's docs before assuming the standard setup will work; they often have specific guidance.


Compiler Settings That Matter for Speed

A handful of tsconfig options have a direct performance impact. Most of the others do not, regardless of what online guides claim.

skipLibCheck: true. This is the single highest-impact setting for most codebases. It tells the compiler to skip type-checking declaration (.d.ts) files, which in practice means all of node_modules. The downside is that a broken type declaration in a dependency will not be caught at type-check time. The upside is that you stop doing redundant work for hundreds of dependencies. Almost every production codebase should have this on. Library authors who publish types should have it off in their own builds and on in their consumers' builds.

incremental: true with a tsBuildInfoFile. This caches the type-check graph between runs. Even on a single project (no references), this halves the time of subsequent runs because most files have not changed.

isolatedModules: true. Effectively required if a separate bundler handles your emit (which you almost certainly have in 2026 with Vite, Bun, esbuild, Turbopack, or any of the others). It forces you to write code that can be transpiled file-by-file without cross-file type information, which is exactly how those single-file transpilers operate. Slightly more restrictive, but it guarantees the bundler's per-file output agrees with what the type-aware compiler would produce.

moduleResolution: "bundler". The newer resolution mode introduced in TypeScript 5.0. Faster than node16 for most setups because it skips some of the legacy behavior. Use it if your bundler is newer than 2023.

noUncheckedIndexedAccess: true. Not a performance setting, but worth mentioning because people assume it slows things down. It does not. It changes the inferred type of array index access from T to T | undefined. Pure type-system change, no impact on check time.
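Put together, a speed-conscious baseline looks something like this (a sketch: the cache path is illustrative, and noEmit assumes a bundler handles output):

{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "bundler",
    "skipLibCheck": true,
    "incremental": true,
    "tsBuildInfoFile": "./node_modules/.cache/app.tsbuildinfo",
    "isolatedModules": true,
    "noUncheckedIndexedAccess": true,
    "noEmit": true
  }
}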

The compiler options that do not matter for speed despite the rumors: strict, noImplicitAny, strictNullChecks, exactOptionalPropertyTypes. Turning these off does not measurably speed up type checking. They affect what gets reported, not how much work the compiler does.


Editor Performance Is a Different Problem

The TypeScript language server is what your editor uses for autocomplete, hover info, go-to-definition, and inline errors. It runs the same compiler as tsc but with different priorities: it tries to give you fast partial answers rather than complete answers.

When the editor feels slow, the tsc benchmark does not always reflect it. The language server has its own performance characteristics. The diagnostic for editor performance is to run the TypeScript: Open TS Server log command in VS Code (or your editor's equivalent) and watch what the server is doing. You will see entries like:

Info 1234 [10:31:42.123] getQuickInfoAtPosition: 4823.4ms

A getQuickInfoAtPosition taking five seconds means the type at the position you hovered is genuinely that expensive to compute. The hot path in the compiler for hovers is type display, and large inferred types (especially from generic libraries) can blow up at display time even when type checking them is fast.

Two specific editor optimizations that help:

Memory limit: 8192 (or higher). The default language server memory limit is 3GB. Codebases with very rich types blow past this and the language server starts garbage collecting aggressively, which feels like lag. Bumping the limit in your editor settings is free if you have the RAM.

Disable inlay hints in the files where they are slow. Inlay hints (the inferred parameter types and return types shown in the editor) require the language server to compute every type for display. In files with heavy generics, this is the single most expensive operation. Most editors let you disable specific inlay hint categories. Turning off "All inlay hints" on a heavy file is a quality-of-life win even if you keep them on globally.
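In VS Code, both knobs live in settings.json; the inlay hint toggles shown here are the global category switches (other editors expose equivalents):

{
  "typescript.tsserver.maxTsServerMemory": 8192,
  "typescript.inlayHints.parameterTypes.enabled": false,
  "typescript.inlayHints.functionLikeReturnTypes.enabled": false
}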

If you are running Cursor, Zed, or any of the AI-augmented IDEs from the IDE comparison post, the language server runs the same way. The AI features are layered on top, but the underlying TypeScript performance is the language server's responsibility, and the same diagnostics apply.


What Project Corsa Changes, and What It Does Not

The Go-based TypeScript compiler (Project Corsa) is the largest single performance change to the language since it shipped. The headline numbers are real: 10x faster on most codebases, sometimes more on codebases that are I/O bound.

What it does not change is the type system. A codebase with quadratic type-instantiation patterns will still have quadratic type-instantiation patterns under Corsa. The 10x speedup compounds: a 90-second build becomes 9 seconds, but a 9-minute build becomes 54 seconds, which is still slow. If your codebase is generating millions of redundant type instantiations, fixing those patterns is still worth doing. Corsa makes the existing work faster; it does not make the work go away.

For most codebases, Corsa arrives as a drop-in replacement for tsc and the language server. The migration is small. The wins are large. It is worth adopting as soon as it is stable for your version of TypeScript. It is not worth waiting idle for if your build is slow today; the patterns described above will pay off both before and after Corsa lands.


A Concrete Diagnostic Loop

If your build is slow and you do not know why, here is the order of operations that almost always isolates the problem.

Start with npx tsc --extendedDiagnostics and capture the timings. Save the output. You will compare against this later.

Run npx tsc --generateTrace ./trace and npx @typescript/analyze-trace ./trace. The output will list the hottest files. Pick the top three.

Open each of the hot files. Look at the imports first. The expensive types usually come in through an import. Note any types from libraries that look complex (Zod, Drizzle, tRPC, anything with deep generics).

Search for usages of those types in the file. Find any place where a generic is being inferred deeply or a conditional type is being recursively expanded. These are your candidates for surgery.

Try the fixes one at a time. After each, re-run tsc --extendedDiagnostics and compare against the baseline. You want to see the check time drop. If it does not, revert and try the next thing.

The reason for one-at-a-time changes is that some "fixes" make things worse, and a batched change hides which one helped and which one hurt. The diagnostic is fast enough that the patience pays off.

Once the hot files are no longer hot, run the trace again. New hot files will surface as the previous ones fall down the list. Stop when the worst file is in a range you are happy with, usually 200ms or less for a single file.

The whole loop is a day or two of focused work for most codebases. The win is permanent unless someone reintroduces the same patterns, which is why a tsc --extendedDiagnostics check in CI as a regression guardrail is worth considering.
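A sketch of that guardrail as a small Node script (the 30-second budget is illustrative):

// check-budget.ts — fail CI when check time regresses past a budget.
import { execSync } from 'node:child_process';

const out = execSync('npx tsc --noEmit --extendedDiagnostics', { encoding: 'utf8' });
const match = out.match(/^Check time:\s+([\d.]+)s/m);
const checkSeconds = match ? Number(match[1]) : Infinity;

if (checkSeconds > 30) {
  console.error(`Check time ${checkSeconds}s exceeds the 30s budget`);
  process.exit(1);
}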


What I Would Tell You If You Asked

If you have a slow TypeScript codebase and limited time, the highest-leverage thing you can do is generate a trace and read it. Most teams skip this and try fixes blind. The fixes work some of the time, but the trace tells you exactly where to look, and the work after that is usually small.

The second highest-leverage thing is skipLibCheck: true, if you do not already have it. The savings are immediate. The downside is rarely material.

The third is to cap any recursive utility types you have introduced and to push deeply inferred generic helpers to terminate inference earlier. These are pattern-level changes, not config tweaks, and they require reading the trace to know which patterns matter for your codebase.

What I would not do: rewrite to a different language or framework hoping the performance will be better. Bun, Deno, and esbuild are faster at the bundling and parsing parts, but the type checking is still TypeScript's compiler doing TypeScript's compiler work. The gains from tooling come from building, not type-checking. You can ship faster builds with a faster bundler and still have a 90-second tsc because nothing about the bundler changed how the type system works.

The honest summary: TypeScript at scale is fast enough if you do not do the expensive things, and slow if you do. The expensive things are knowable and the fixes are not exotic. The work is figuring out which of them your codebase is doing, which is what the trace is for.

For the broader picture of where TypeScript is heading, the Project Corsa post covers what is coming. For a related performance angle on running TypeScript without a build step at all, the type-stripping post is useful. Both are about reducing the work the toolchain has to do. This post is about reducing the work the type system has to do, which is the part you control directly even before any new compiler ships.
