Gabriel Anhaia

Bun Test vs Vitest for TypeScript Library Authors in 2026


You've seen it. A library has two test scripts in the same
package.json. One says bun test. The other says vitest run.
The CI matrix runs both. The local dev loop runs whichever one
the maintainer typed last. The README does not mention either.
The reason there are two is that the library exports a function
that does some HTTP work, and at some point a maintainer got
tired of waiting for Vitest to spin up to re-check a one-line fix
in a small unit suite. They tried bun test, it felt noticeably
quicker, they kept it. Then a contributor opened a PR that
mocked an ESM dependency the way Vitest's docs describe. It
worked under Vitest. It would silently no-op under bun test in
some Bun versions where the mock helper the contributor reached
for was not yet implemented. The library shipped. A bug class
landed downstream. The maintainer added the second script.

This is the problem every library author has had to solve in
2026: pick a test runner fast enough to keep you in flow, and
make sure it tests the same thing your users will see when they
consume the package on Node, Bun, Deno, or a bundle. Bun
Test and Vitest both want to be the answer. They are good at
different things. The real difference is ecosystem surface
versus raw runtime fit; speed is downstream of both. Below: the
rule for picking, plus the same library tested both ways so you
can see what actually changes.

What each one is, in one paragraph

Bun Test is the test runner built
into the Bun runtime. It runs your *.test.ts files directly, in
Bun, with no separate compile step and no node-shaped wrapper. It
ships a Jest-compatible API surface (describe, it, expect,
mock) plus a vi alias for tests written against Vitest. There
is nothing to install — bun test is the command, the binary is
the runtime, and the binary already understands TypeScript.

Vitest is a Vite-powered test runner that
runs on Node (or on Bun, if you tell it to). It uses Vite's
transform pipeline for TypeScript and JSX, ships its own
vi.mock, snapshot, fixture, and concurrency primitives, and
plugs into Vite's HMR for sub-100ms test re-runs in watch mode.
The package is vitest, the config typically lives in
vitest.config.ts, and it works anywhere Node works.
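If Vitest is the pick, the config can stay small. A minimal
sketch using Vitest's documented defineConfig helper; the
include glob and environment value are illustrative choices for
a library, not requirements:

```typescript
// vitest.config.ts: a sketch. The field names come from Vitest's
// public config surface; the values are illustrative.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Pick up only library test files, not fixtures or examples.
    include: ["src/**/*.test.ts"],
    // "node" is the default environment; stated here for clarity.
    environment: "node",
  },
});
```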

Both run TypeScript out of the box. Both speak the
describe/it/expect shape. Both do snapshots, mocks, and
parallel files. The differences live below that surface.

Speed: where Bun's lead is real, and where it isn't

Bun's own benchmark page reports
a lead over Vitest and Jest on cold-start runs of pure-logic
suites, and that matches what most library authors I've talked to
report anecdotally: bun test feels faster on cold start, and
noticeably quicker for small unit suites where the runner spends
most of its time spinning up rather than running assertions. Your
numbers will vary with suite shape, transform cost, and how much
of the test body is async I/O the runner cannot speed up.

The first caveat is watch mode. Vitest's HMR re-runs only the
affected tests in tens of milliseconds on a small suite, because
the Vite dev server is keeping the module graph warm. Bun re-runs
the full file. On a 50-test file where you want to iterate one
assertion, the watch loop in Vitest stays under the cold-start
time of bun test, and the "slower" runner feels faster because
you are working at the edit-save-see boundary, not the cold-start
one.

The second caveat is what the test does. Bun's native TS, native
HTTP, and the lack of a separate transform make it pull ahead
hardest on compute-bound suites: pure functions, parsers,
algorithm tests, anything that does not block on the network or
on a database. Once the bottleneck is await db.query(...) or
await fetch(...), the runner is mostly waiting on a socket and
the gap collapses. Bun wins on cold-start and on pure code.
Vitest wins on watch loops and on suites where the ecosystem buys
you back integration time.

ESM mocking: the gap that bites you

This is the part that bit the maintainer in the opening story.

Vitest's vi.mock is the
de facto ESM mocking API in 2026. It hoists, it supports partial
mocks via vi.importActual, it has vi.spyOn for selective
replacements, it integrates with vi.mocked<T>(...) for typed
access, and there is a deep ecosystem of helpers built on it.
The mocking surface is what most third-party testing utilities
target.
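As a sketch of the partial-mock shape described above, assuming
a hypothetical ./logger module with log and flush exports; this
dialect only runs under Vitest, since it leans on
vi.importActual and vi.mocked:

```typescript
import { describe, expect, it, vi } from "vitest";

// Hypothetical module for illustration: assume ./logger exports
// log() and flush(). Only log() is replaced; flush() stays real.
vi.mock("./logger", async () => {
  const actual =
    await vi.importActual<typeof import("./logger")>("./logger");
  return { ...actual, log: vi.fn() };
});

import { flush, log } from "./logger";

describe("partial mock of ./logger", () => {
  it("records calls on the replaced export only", async () => {
    log("hello");
    // vi.mocked gives typed access to the mock's call list.
    expect(vi.mocked(log).mock.calls).toEqual([["hello"]]);
    // flush is the real implementation, untouched by the mock.
    await flush();
  });
});
```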

Bun Test's mocking covers
the core Jest API — mock(), mock.module(),
spyOn, mockReturnValue, mockImplementation — and exposes a
vi alias for the Vitest-compatible subset. Module mocks
maintain live bindings across ESM and CJS, which is good. The
gap is breadth. Vitest's helpers around partial-imports, hoisting
semantics, and the integrations with libraries like
@testing-library have a longer track record under Vitest than
under Bun's compatibility layer. Some helpers will work under
both; some will silently no-op under Bun if they reach for a
Vitest-only API.

For a library author shipping a package other people consume,
this matters. If your tests rely on vi.importActual to mock
half a module while keeping the other half real, validate that
the same test passes under Bun before you claim Bun support. The
compatibility is not a strict superset, and a green CI run on
Vitest does not guarantee a green run on Bun.

Same library, both runners

Here is a small library — one pure function, one HTTP function —
that runs the same test under both runners. Save as
src/index.ts:

export type Slug = string & { readonly __brand: "slug" };

export function slugify(input: string): Slug {
  const cleaned = input
    .toLowerCase()
    .normalize("NFKD")
    .replace(/[\u0300-\u036f]/g, "") // strip combining marks left by NFKD
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
  return cleaned as Slug;
}

export async function fetchTitle(
  url: string,
  fetchImpl: typeof fetch = fetch,
): Promise<string> {
  const res = await fetchImpl(url);
  const html = await res.text();
  const match = html.match(/<title>(.*?)<\/title>/i);
  return match?.[1]?.trim() ?? "";
}

The pure-logic test is identical under either runner except for
the import line; use the import that matches the runner
executing the file. Save as src/index.test.ts:

// Vitest:
import { describe, it, expect } from "vitest";
// Bun: replace the line above with
//   import { describe, it, expect } from "bun:test";

import { slugify, fetchTitle } from "./index";

describe("slugify", () => {
  it("lowercases and dashes", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips diacritics", () => {
    expect(slugify("Café au Lait")).toBe("cafe-au-lait");
  });

  it("collapses runs of separators", () => {
    expect(slugify("  --foo___bar--  ")).toBe("foo-bar");
  });
});

The HTTP test is where the runners diverge. The dependency
injection style above sidesteps the import system, which is
what you want as a library author. Your function takes its
fetch as a parameter. The test passes a fake. Neither runner's
mocking surface is on the critical path:

import { describe, it, expect } from "vitest";
import { fetchTitle } from "./index";

describe("fetchTitle", () => {
  it("returns the <title> contents", async () => {
    const fakeFetch: typeof fetch = async () =>
      new Response("<html><title>Hi</title></html>");

    const title = await fetchTitle("https://x.test", fakeFetch);
    expect(title).toBe("Hi");
  });
});

Same file, same assertions. Runs under vitest run and under
bun test. No mock-module call. No hoisting gotcha to worry
about.

When you do need module mocking — when the function under
test imports its dependency directly instead of taking it as a
parameter — write the test in the dialect of the runner you are
running it under, and keep the assertions about the call shape
runner-agnostic:

import { describe, it, expect, vi } from "vitest";

// This variant assumes an index.ts where fetchTitle imports
// doFetch from "./fetch-impl" internally, instead of taking
// fetch as a parameter as in the DI version above.
vi.mock("./fetch-impl", () => ({
  doFetch: vi.fn(async () => new Response("<title>Hi</title>")),
}));

// vi.mock is hoisted above this import, so the mocked module
// is in place by the time fetchTitle is loaded.
import { fetchTitle } from "./index";

describe("fetchTitle (mocked module)", () => {
  it("calls the mocked fetch", async () => {
    const out = await fetchTitle("https://x.test");
    expect(out).toBe("Hi");
  });
});

The Bun version of this test reaches for mock.module(...) and
has different hoisting semantics. If your library's CI matrix
covers both runners (and it should, if you advertise both), the
DI style above keeps the assertions portable and lets the
runner-specific paths live in one or two test files instead of
spreading across the suite.
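For comparison, here is a sketch of what that test might look
like in Bun's dialect, using mock.module from bun:test; it
assumes the same hypothetical index.ts variant that imports
doFetch from ./fetch-impl internally:

```typescript
import { describe, expect, it, mock } from "bun:test";

// Assumes an index.ts variant where fetchTitle imports doFetch
// from "./fetch-impl" instead of taking fetch as a parameter.
mock.module("./fetch-impl", () => ({
  doFetch: mock(async () => new Response("<title>Hi</title>")),
}));

// Unlike vi.mock, mock.module is not hoisted. Static imports are
// still evaluated first, but per Bun's documentation module
// mocks keep live bindings, so the override still applies.
import { fetchTitle } from "./index";

describe("fetchTitle (mocked module, Bun)", () => {
  it("calls the mocked fetch", async () => {
    expect(await fetchTitle("https://x.test")).toBe("Hi");
  });
});
```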

Snapshot testing, briefly

Both runners have snapshots. Vitest's toMatchSnapshot,
toMatchInlineSnapshot, and toMatchFileSnapshot all work, and
the diff output in watch mode is good. Bun's toMatchSnapshot
covers the common case and serializes to the same __snapshots__
directory shape. Inline snapshots work in both. The interop is
high enough that snapshot files written by one are usually
readable by the other, at least in my experience across recent
versions. Both expose expect.addSnapshotSerializer, but the
serializer registration lifecycle differs subtly between runners
and versions, so verify a custom serializer under both before
relying on it.
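If you do reach for a custom serializer, the plugin shape below
is the one Vitest documents for expect.addSnapshotSerializer;
the slug-matching heuristic here is purely illustrative:

```typescript
import { expect } from "vitest";

// Sketch of the serializer plugin shape from Vitest's docs:
// `test` decides which values this plugin handles, `serialize`
// renders them into the snapshot file.
expect.addSnapshotSerializer({
  test(val) {
    // Illustrative: match dash-separated lowercase slugs.
    return typeof val === "string" && /^[a-z0-9]+(-[a-z0-9]+)*$/.test(val);
  },
  serialize(val, config, indentation, depth, refs, printer) {
    return `Slug(${val})`;
  },
});
```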

For a library: pick the runner you write the most tests in, write
your snapshots there, and run the other runner in CI to catch
regressions where the second runner produces a different
serialized form. Different serialized forms across runners are a
sign your snapshot is testing the runner, not your code.

CI matrix: what to actually wire up

A library that says "works on Node, Bun, and Deno" needs the
matrix to back the claim. The shape that scales uses
matrix.include alone, so each entry pairs its runtime with the
right command:

strategy:
  matrix:
    include:
      - runtime: node-22
        cmd: pnpm vitest run
      - runtime: node-24
        cmd: pnpm vitest run
      - runtime: bun-latest
        cmd: bun test

Two Node LTS lines under Vitest, plus the same suite under
bun test. Add Deno if you ship a Deno entry point — Deno can
run Vitest via npm specifiers,
which keeps the assertions identical across all three. The Deno
DX is rougher than either Node or Bun, but the matrix coverage is
worth the friction for a library that advertises Deno support.
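Expanded into a full job, one possible shape; this sketch
assumes GitHub Actions with actions/setup-node and
oven-sh/setup-bun, and the pnpm bootstrap steps are
illustrative:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - runtime: node-22
            node: "22"
            cmd: pnpm vitest run
          - runtime: node-24
            node: "24"
            cmd: pnpm vitest run
          - runtime: bun-latest
            cmd: bun test
    steps:
      - uses: actions/checkout@v4
      # Node entries set matrix.node; Bun entries leave it unset.
      - if: ${{ matrix.node }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - if: ${{ matrix.node }}
        run: corepack enable && pnpm install
      - if: ${{ !matrix.node }}
        uses: oven-sh/setup-bun@v2
      - if: ${{ !matrix.node }}
        run: bun install
      - run: ${{ matrix.cmd }}
```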

The second runner buys you bug coverage. Speed isn't the point.
The Vitest run is the source of truth for the assertion
semantics. The Bun run catches the cases where Bun's runtime
behaves differently: header handling on fetch, timer
resolution, or a module-resolution quirk. If your library has
zero of those, the Bun job is a green checkbox. If it has any,
you find out before a downstream user files an issue at 2am.

The decision rule

For a TypeScript library author in 2026, the rule is:

  1. Default to Vitest as the primary runner. The ecosystem surface, the watch-mode loop, the IDE integration, and the stability of the mocking API across versions are worth more than the cold-start speedup on a per-edit basis. Vitest's place in the Vite-aligned toolchain is the centre of gravity the rest of the JS ecosystem is currently building around.
  2. Add bun test as a second matrix entry if your library advertises Bun support. The job exists to catch runtime-divergence bugs, not to be fast. Because it is fast anyway, the matrix is cheap.
  3. Flip the order if your library is itself a Bun-first project (a Bun plugin, a Bun-runtime adapter, a CLI shipped as bun build --compile). In that case bun test is the primary loop and Vitest is the cross-check.
  4. Write tests against dependencies as parameters wherever you can. The mocking-API gap between the two runners disappears when your function takes its collaborators as arguments instead of importing them. Library code wants this refactor for testability anyway. The runner choice is the bonus.

Run the matrix. Find out before your users do.


If this was useful

Library authoring across runtimes — pick a transform, pick a
target list, decide what your matrix actually proves — is the
ground TypeScript in Production covers end to end. The build
chapter is the long version of "how does the same tsc
invocation produce a package that imports cleanly under Node,
Bun, and a bundler"; the testing chapter is the long version of
this post, including the parts on coverage tooling, fixture
patterns, and how to keep snapshot files runner-portable.

If you want the ground floor before the production layer,
TypeScript Essentials is the entry point. The TypeScript Type
System is the deep dive into the type-system machinery that
makes typed mocks (vi.mocked<T>, MockInstance<F>) compose
without as any.

The five-book set:

  • TypeScript Essentials — From Working Developer to Confident TS, Across Node, Bun, Deno, and the Browser — entry point: amazon.com/dp/B0GZB7QRW3
  • The TypeScript Type System — From Generics to DSL-Level Types — deep dive: amazon.com/dp/B0GZB86QYW
  • Kotlin and Java to TypeScript — A Bridge for JVM Developers — bridge for JVM devs: amazon.com/dp/B0GZB2333H
  • PHP to TypeScript — A Bridge for Modern PHP 8+ Developers — bridge for PHP devs: amazon.com/dp/B0GZBD5HMF
  • TypeScript in Production — Tooling, Build, and Library Authoring Across Runtimes — production layer: amazon.com/dp/B0GZB7F471

All five books ship in ebook, paperback, and hardcover.

The TypeScript Library — the 5-book collection
