
Juan Torchia

Posted on • Originally published at juanchi.dev

TypeScript 7 beta benchmark: what the repo numbers confirmed for me — and what I still don't buy

I have a problem with compiler announcement posts: the numbers the official team cites live in lab conditions that look nothing like what you actually run in production. Microsoft says TypeScript 7 is "often 10x faster". Maybe. But on what kind of code? With what flags? On what hardware?

So I built typescript7-demo — a public lab with benchmarks anyone can reproduce, using real repos as test subjects, pinned commits, and two GitHub Actions workflows you can fork and run today.

This post is a summary of what I learned building that — not just running it.

The first trap: the package name

Let's start with something that would waste anyone's time. TypeScript 7 does not install as typescript@beta. The published package is @typescript/native-preview, and the binary you run is tsgo, not tsc. Meanwhile, TypeScript 6 is still typescript, but for side-by-side usage there's @typescript/typescript6, which exposes tsc6.

That's not a minor detail. If you install typescript@beta in April 2026, you'll probably get TypeScript 6 at some release candidate version. The repo's package.json makes this explicit:

// package.json: correct side-by-side installation
{
  "devDependencies": {
    "@typescript/native-preview": "^7.0.0-dev.20260421.2",
    "@typescript/typescript6": "^6.0.1",
    "typescript": "^6.0.3"
  },
  "scripts": {
    // tsc6 compiles with the classic JS compiler
    "typecheck:ts6": "tsc6 --noEmit",
    // tsgo is the native Go compiler
    "typecheck:ts7": "tsgo --noEmit"
  }
}

With that foundation, both compilers run against the same project, comparable, without interfering with each other.
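If you want to time the two yourself before pulling in the repo's scripts, the shape of the measurement is simple: spawn each binary against the same project and take wall-clock time. A minimal sketch — the `timeCommand` helper and the commented commands are my own, not code from the repo:

```typescript
import { spawnSync } from "node:child_process";

// Run a command once and return wall-clock milliseconds.
// Hypothetical helper: the repo's benchmark scripts do the same
// thing with warmups and multiple runs layered on top.
export function timeCommand(cmd: string[]): number {
  const start = Date.now();
  const result = spawnSync(cmd[0], cmd.slice(1), { stdio: "ignore" });
  if (result.error) throw result.error;
  return Date.now() - start;
}

// Example usage: time both compilers against the same tsconfig.
// const ts6 = timeCommand(["npx", "tsc6", "--noEmit"]);
// const ts7 = timeCommand(["npx", "tsgo", "--noEmit"]);
// console.log(`delta: ${(ts6 / ts7).toFixed(2)}x`);
```

A single run like this is only a sanity check — the repo's scripts exist precisely because one sample isn't a benchmark.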

The numbers I got

If you ask me what surprised me most, it's that the TypeScript 7 gains are not uniform. They depend dramatically on the kind of types you're using.

The data from site/data/history.json at the analyzed commit is as follows:

| Corpus | TS6 median | TS7 median | Delta |
| --- | --- | --- | --- |
| template-literal-stress | 44,009 ms | 17,097 ms | 2.57x |
| many-modules | 3,468 ms | 858 ms | 4.04x |
| project-references | 1,487 ms | 622 ms | 2.39x |
| type-fest v5.6.0 (real) | 125,026 ms | 76,685 ms | 1.63x |
| ts-pattern v5.9.0 (real) | 5,294 ms | 2,795 ms | 1.89x |
| ts-essentials v9.4.2 (real) | 1,369 ms | 1,164 ms | 1.18x |

Those are the outputs of benchmark-synthetic.mjs and benchmark-public-repos.mjs run locally. These aren't my claims — they're numbers from the JSON committed in the repo, reproducible.

What catches my attention: the synthetic many-modules corpus — 2,600 files chained with imports — hits a 4x improvement. But type-fest, which is exactly the kind of code where you'd expect the biggest impact (conditional, recursive, mapped, and template-literal types all together), comes in at 1.63x. That's not bad, but it's pretty far from the 10x in the announcement.

My read: the native gains are real, and they're largest where the bottleneck is I/O and module resolution. With deeply recursive types, the inference algorithm is still the same — it just runs in Go instead of Node. That explains the gap between 4x and 1.6x.
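To make that concrete, here's the kind of type that dominates a type-fest-style workload. This `Split` type is my own illustration, not code from the repo: the checker has to unfold one conditional per delimiter, and that recursion depth doesn't shrink just because the checker is compiled to native code.

```typescript
// A recursive template-literal type: the checker unfolds one
// conditional per "." in the input. Go makes each step faster;
// it does not reduce the number of steps.
type Split<S extends string, D extends string> =
  S extends `${infer Head}${D}${infer Rest}`
    ? [Head, ...Split<Rest, D>]
    : [S];

// Value-level counterpart, so the two shapes can be checked
// against each other at runtime.
export function split<S extends string, D extends string>(s: S, d: D): string[] {
  return s.split(d);
}

// The annotation forces the checker through the recursion above.
const parts: Split<"a.b.c", "."> = ["a", "b", "c"];
```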

How the benchmark is structured to be credible

This matters more than the numbers themselves: why should you trust these results over any other post that runs time tsc once?

benchmark-public-repos.mjs clones repos at specific commits and verifies the expected hash before running anything:

// scripts/benchmark-public-repos.mjs — integrity check before measuring
const projects = [
  {
    id: "type-fest",
    repo: "sindresorhus/type-fest",
    ref: "v5.6.0",
    // if the commit doesn't match, the benchmark fails before running
    expectedCommit: "a5491644b32160f804dd10d0b44dad461037f4c1",
    // the exact command, not an npm script that could change
    ts6: ["node", "--max-old-space-size=6144",
           "node_modules/@typescript/typescript6/bin/tsc6",
           "-p", "tsconfig.json", "--noEmit"],
    ts7: ["node",
           "node_modules/@typescript/native-preview/bin/tsgo.js",
           "-p", "tsconfig.json", "--noEmit"],
  },
  // ... more repos following the same pattern
];

The synthetic benchmark generates projects in .tmp/synthetic-corpus — you can inspect them after running. They're not a black box: they're real TypeScript files you can open and verify. And results come out as JSON first, from which the Markdown and the site are derived.
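A many-modules-style corpus is easy to sketch for yourself: N files where each imports the previous one, so the compiler has to walk the whole chain. This generator is my approximation of the idea, not the repo's script:

```typescript
import { mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Generate `count` chained modules in a temp directory:
// mod1 exports a constant, modN imports modN-1.
// Returns the paths of the generated files so they can be inspected.
export function generateChain(count: number): string[] {
  const dir = mkdtempSync(join(tmpdir(), "synthetic-corpus-"));
  const files: string[] = [];
  for (let i = 1; i <= count; i++) {
    const header = i > 1 ? `import { v${i - 1} } from "./mod${i - 1}";\n` : "";
    const body =
      i > 1
        ? `export const v${i}: number = v${i - 1} + 1;\n`
        : "export const v1: number = 1;\n";
    const file = join(dir, `mod${i}.ts`);
    writeFileSync(file, header + body);
    files.push(file);
  }
  return files;
}
```

Point either compiler at the output directory and you have a stress test whose every file you can open and read — the same transparency the repo's corpus offers.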

What it does not measure: editor latency, runtime performance, and bundlers (unless you configure it explicitly). The docs/benchmark-methodology.md says this plainly, which I think is honest.

The part that interested me most: migration friction

The benchmarks are the hook. But the real value for a team that has to make a decision right now is in the migration scanner.

scripts/scan-migration.mjs reads every tsconfig*.json in the project and reports what's going to break. The three errors I saw reported most often in real repos:

moduleResolution=node10 — removed in TypeScript 7. If you have this, you need to migrate to node16, nodenext, or bundler and verify that your package.json exports resolution still works the same way.

baseUrl — removed in TypeScript 7 preview builds. This feels like the most painful one in large repos, because baseUrl was the standard solution for absolute imports before paths became ergonomic. There are legacy projects with dozens of imports that depend on this.

moduleResolution=classic — incompatible with any modern path resolution. If you have this in 2026, you have a bigger problem than TypeScript 7.

The scanner also emits info for skipLibCheck (which can hide problems during migrations) and for the absence of isolatedDeclarations when declaration: true is active. That second one feels particularly useful to me because isolatedDeclarations is a clear direction from the TypeScript ecosystem, not just a TypeScript 7 feature.
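The core of such a scanner is small. Here's a sketch of the checks described above — my own simplification, which ignores things the real script has to handle, like extends chains and multiple tsconfig files:

```typescript
type Finding = { level: "error" | "info"; message: string };

// Inspect a parsed tsconfig's compilerOptions and report
// TypeScript 7 migration blockers, mirroring the checks above.
export function scanCompilerOptions(opts: Record<string, unknown>): Finding[] {
  const findings: Finding[] = [];
  const resolution = String(opts.moduleResolution ?? "").toLowerCase();
  if (resolution === "node10" || resolution === "node") {
    findings.push({
      level: "error",
      message: "moduleResolution=node10 is removed; migrate to node16, nodenext, or bundler",
    });
  }
  if (resolution === "classic") {
    findings.push({ level: "error", message: "moduleResolution=classic is unsupported" });
  }
  if (opts.baseUrl !== undefined) {
    findings.push({
      level: "error",
      message: "baseUrl is removed; migrate to relative or paths-based imports",
    });
  }
  if (opts.skipLibCheck === true) {
    findings.push({ level: "info", message: "skipLibCheck can hide problems during the migration" });
  }
  if (opts.declaration === true && opts.isolatedDeclarations !== true) {
    findings.push({ level: "info", message: "consider enabling isolatedDeclarations alongside declaration" });
  }
  return findings;
}
```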

The fixture fixtures/isolated-declarations/bad-export.ts demonstrates this concretely:

// fixtures/isolated-declarations/bad-export.ts
// This fails with isolatedDeclarations: true because
// the return type is not explicitly declared.
// tsgo will reject this; tsc6 with isolatedDeclarations will too.
export const getPostMetadata = async (slug: string) => {
  return {
    slug,
    title: "missing explicit return type",
  };
};

The test in test/tooling.test.mjs verifies that tsgo rejects that file with the correct error. It's a small but executable example.
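The fix is mechanical: annotate the return type so declaration emit never needs inference. My version of the corrected fixture would look something like this:

```typescript
// With an explicit return type, isolatedDeclarations can emit
// the .d.ts for this file without running type inference at all.
export const getPostMetadata = async (
  slug: string,
): Promise<{ slug: string; title: string }> => {
  return {
    slug,
    title: "explicit return type",
  };
};
```

Tedious across a large API surface, but each individual fix is this simple — which is why the scanner flagging it early is worth more than the flag itself.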

GitHub Actions without exposing private code

This was the design decision I thought about the most. If you want to test TypeScript 7 against your own repo without publishing the code, the repo generates a GitHub Actions workflow you can copy and run inside your private environment.

The typescript-7-open-source.yml workflow runs on every push to main and on PRs. The typescript-7-full-benchmark.yml is manual or weekly (Mondays at 10 UTC), and accepts inputs to control RUNS and WARMUPS. The artifacts — benchmark-results.json, migration-findings.json — are saved even if the job fails, which is exactly what you want when you're investigating why TypeScript 7 is rejecting something.

Both workflows have permissions: contents: read and nothing else. No writing to the repository, no extra tokens, no surprises.
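The load-bearing parts of such a workflow fit in a few lines. A sketch of the pattern — step names and paths here are illustrative, so check the repo's actual workflow files for the exact commands:

```yaml
permissions:
  contents: read  # read-only: no token that can write to the repo

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: node scripts/benchmark-synthetic.mjs
      - uses: actions/upload-artifact@v4
        if: always()  # keep results even when the job fails
        with:
          name: benchmark-results
          path: benchmark-results.json
```

The `if: always()` on the upload step is the detail that makes failed runs debuggable, and the empty write surface is what makes the workflow safe to drop into a private repo.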

What I don't buy about the current state

I'll be direct about a few things:

The synthetic benchmarks were run with runs: 1 and warmups: 0 in the results I have committed. The JSON says so: "runs": 1. benchmark-methodology.md explicitly states that medians are preferred when you have multiple samples, but the most visibly shared result (history.json) has exactly one sample per data point. For the synthetic corpus, that makes the 4x delta on many-modules a single-run observation on my Windows machine. Reproducible, yes. Statistically robust, not entirely.

The CI workflow uses runs: 3 and warmups: 1 by default, which is better. But for the numbers I committed locally, the "sanity check" caveat that benchmark-methodology.md itself gives to single runs applies.
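For anyone re-running with more samples, the aggregation is the standard one: discard warmups, then take the median of the remaining runs. A minimal sketch:

```typescript
// Median of a list of run durations in ms. With runs: 1 this
// degenerates to the single observation, which is exactly the
// caveat about the committed numbers.
export function median(samples: number[]): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```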

That doesn't invalidate the lab — it invalidates the certainty of specific numbers. The methodology, the design, the repos chosen, the migration scanner: all of that is still solid.

My position

TypeScript 7 is going to matter more than most of the compiler updates we've seen in the last five years. The native Go foundation isn't marketing: it changes the ceiling of what's computable before the feedback loop becomes unacceptable in large repos. For me that matters in contexts like juanchi.dev (which is a relatively small project) but especially in enterprise-scale codebases with many packages and project references — which is exactly the world I work in at Lakaut.

What I don't buy: that this is urgent today for most teams. TypeScript 7 is still in beta. The --checkers, --builders, --singleThreaded flags are preview behavior. baseUrl being removed is going to break a fair amount of legacy code. The right story is: build the lab now, run the migration scanner, identify your blockers, and don't migrate yet.

Using this repo to measure, yes. Doing npm install @typescript/native-preview in production this week, no.

If you run it against your own project and the numbers are different from mine, that's useful information. The .github/ISSUE_TEMPLATE/typescript-7-result.yml has exactly the format you need to report it.


