
Oskar Lebuda

I built Vercube because benchmarks don’t lie

Let's start with the obvious: yes, this is another Node.js framework.

I know. You know. Somewhere, a counter just overflowed and a backend developer quietly closed a tab.

Most of the time, that's the right reaction. You already have NestJS, Fastify, maybe a couple of internal helpers. Adding one more framework usually sounds pointless.

Vercube didn't start as:

We need a new framework.

It started as:

Why does every framework that feels good to work with get slow - and every fast one feel like you're fighting it?

This article isn't an external review. I'm the author of Vercube. It's a story about why I built it, which assumptions I stopped accepting, and why the benchmark results were the moment I knew this wasn't just a side project anymore.


The trade-off I didn't want to accept

For a long time, backend development felt like a forced choice.

On one side, minimal frameworks: fast, predictable, efficient - but structurally thin. You end up rebuilding patterns, conventions, and discipline in every project.

On the other side, full OOP frameworks like NestJS or Ts.ED: decorators, dependency injection, clear structure, code that still makes sense after a year. But you pay for it - in build time, startup cost, and runtime overhead.

Vercube exists because I didn't want to keep choosing between those two worlds.


Where the idea really came from

The motivation was simple and very practical:

I wanted backend code that stays readable long after the first sprint.

OOP still does a great job here. Clear responsibilities. Explicit dependencies. Code that reads like a system instead of a script. Decorators help express intent without adding noise.

The problem isn't OOP itself - it's how most Node.js frameworks implement it.

Most decorator-based frameworks rely heavily on runtime reflection.
reflect-metadata becomes a global dependency. Types are inspected at runtime. Containers infer dependencies dynamically. Metadata gets scanned and merged while the app is already running.

All of that has a cost - and that cost shows up exactly where you don't want it: in build time, cold starts, latency, and throughput.

In many ways, Vercube is what projects like routing-controllers were aiming to be - but rebuilt for modern TypeScript, modern runtimes, and performance as a first-class concern.

So I asked a simple question:

What happens if you remove all of that?


A different take on decorators

In Vercube, decorators are not a runtime trick. They describe structure without inspecting types, scanning metadata, or depending on global reflection APIs. They don't make decisions at runtime.

This keeps the framework predictable and avoids the usual reflection-heavy model entirely.
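To make that concrete, here's a minimal sketch of the idea - reflection-free decorators that record plain data at class-definition time. This is an illustrative example, not Vercube's actual API: `Controller`, `routeRegistry`, and `UsersController` are all hypothetical names, and it uses standard TC39 decorators (TypeScript 5+, no `experimentalDecorators` flag, no `reflect-metadata`).

```typescript
// Hypothetical sketch: the decorator runs once when the class is defined and
// writes plain data into a module-level registry. Nothing inspects types,
// scans metadata, or touches global reflection APIs.

type RouteEntry = { path: string; ctor: new () => unknown };
const routeRegistry: RouteEntry[] = [];

// TC39-style class decorator factory
function Controller(path: string) {
  return function <T extends new () => unknown>(ctor: T, _context: unknown): T {
    routeRegistry.push({ path, ctor }); // structure recorded, nothing inferred
    return ctor;
  };
}

@Controller("/users")
class UsersController {
  list(): string[] {
    return ["alice", "bob"];
  }
}

// At startup the framework reads the registry once.
// Per-request code never goes near it.
const entry = routeRegistry.find((r) => r.path === "/users")!;
const controller = new entry.ctor() as UsersController;
console.log(controller.list());
```

The decorator here is just a function that runs once and stores a value - which is why there's nothing left to pay for at request time.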

Vercube is built on top of srvx and native Request and Response interfaces. That makes it runtime-agnostic by design - the same application model works on Node.js, Bun, and Deno without adapters or compatibility layers.
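Here's what "runtime-agnostic by design" looks like in practice - a handler written purely against the web-standard `Request` and `Response` interfaces (available globally in Node.js 18+, Bun, and Deno). This is a hand-rolled sketch of the shape, not Vercube's actual handler signature:

```typescript
// A plain handler on web-standard interfaces: no Node-specific req/res
// objects, no framework imports, no compatibility layer.

async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);
  if (url.pathname === "/health") {
    return Response.json({ ok: true });
  }
  return new Response("Not found", { status: 404 });
}

// The same function can be served by Node (via srvx), Bun.serve, or
// Deno.serve, because all three speak Request/Response natively.
const res = await handler(new Request("http://localhost/health"));
console.log(res.status);
```

Because the handler only depends on standards, "porting" between runtimes means changing the server entry point, not the application code.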

The result looks familiar if you've used classic OOP frameworks - but behaves very differently under the hood. There's no magic.

If a dependency exists, it's because you registered it.
If something is resolved, it's because you explicitly allowed it.

That simplicity becomes crucial once we look at performance.


The container as the actual core

At the heart of Vercube is a very small IoC container - nothing more. It's not trying to be a platform or a meta-framework. It registers and resolves dependencies, and that's it.

There are no hidden scopes, no proxy chains, no request-time dependency graphs. The container does its work during setup. After that, request handling is as direct as possible.

It might sound boring - and that's intentional.
The less the container tries to be clever, the easier it is to reason about both behavior and cost.
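To show how little such a container has to do, here's a toy version - a hypothetical sketch, not Vercube's implementation. Everything is registered explicitly, instances are resolved once during setup, and request handlers simply close over the already-resolved objects:

```typescript
// A deliberately boring container: a Map of factories, a Map of instances.
// No scopes, no proxies, no request-time graph walking.

class Container {
  private factories = new Map<string, () => unknown>();
  private instances = new Map<string, unknown>();

  register<T>(token: string, factory: () => T): void {
    this.factories.set(token, factory);
  }

  resolve<T>(token: string): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`Unregistered token: ${token}`);
      this.instances.set(token, factory()); // resolved once, at setup
    }
    return this.instances.get(token) as T;
  }
}

// Setup phase: explicit registration, explicit resolution.
const container = new Container();
container.register("logger", () => ({ log: (m: string) => m.toUpperCase() }));
const logger = container.resolve<{ log(m: string): string }>("logger");

// Request phase: plain method calls, the container is no longer involved.
console.log(logger.log("ready"));
```

If a dependency isn't registered, resolution fails loudly at startup - not three requests into production.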


Benchmarks: where ideas meet numbers

At some point, design ideas stop being interesting on their own. Numbers take over.

These benchmarks are not about "winning" against every framework. They're about showing the real cost of different architectural choices.

All benchmarks were run on the same machine, using identical endpoints, identical configuration, and identical load. Raw data and methodology are public - you can find them on GitHub: vercube/benchmarks.


Build time

Build time directly affects developer experience and CI pipelines - yet it's rarely discussed.

*(Chart: Vercube build time benchmark)*

That means Vercube builds ~4.6× faster than NestJS in the same setup.
Even compared to other decorator-based frameworks, Vercube consistently stays ahead.

This difference comes from removing reflection, metadata scanning, and complex bootstrap logic. Vercube uses Rolldown - a blazing-fast bundler that complements this simplified architecture perfectly.


Cold start time

Cold start matters in serverless environments, autoscaling setups, and frequent restarts.

*(Chart: Vercube cold start time benchmark)*

Vercube cold starts ~35% faster than NestJS and over 3× faster than Ts.ED.
That gap is the cost of runtime introspection and heavy initialization.


Throughput (requests per second)

Throughput is the metric everyone expects.

*(Chart: Vercube requests-per-second benchmark)*

That's roughly 16% higher throughput than NestJS, while still using decorators and dependency injection.

The goal here isn't to dominate throughput charts - it's to stay competitive without giving up structure.


Latency distribution (p95)

Average latency hides problems. p95 shows real behavior under load.

*(Chart: Vercube p95 latency benchmark)*

Vercube stays stable under load and avoids long-tail latency spikes common in heavier frameworks.


It's not just about runtime

Most frameworks talk about runtime performance.

Vercube cares just as much about everything that happens before the first request: build time, cold start, startup memory.

In modern systems - CI-heavy workflows, serverless deployments, short-lived instances - these costs add up quickly. The benchmarks show that Vercube performs well across the entire lifecycle, not just during request handling.


Why the numbers look like this

There's no single trick behind the results.

Performance comes from removing entire classes of overhead:

  • no runtime reflection
  • no metadata scanning
  • no request-time dependency resolution
  • no abstraction layers sitting on every call

Everything expensive happens once. What runs per request is plain JavaScript.

That also means fewer surprises. When something behaves a certain way, it's usually obvious why.
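A small sketch of that "expensive once, plain JavaScript per request" split - illustrative only, not Vercube internals:

```typescript
// Startup: compile routes into a plain lookup table, once.
type Handler = () => string;

function compileRoutes(entries: Array<[string, Handler]>): Record<string, Handler> {
  const table: Record<string, Handler> = Object.create(null);
  for (const [path, handler] of entries) table[path] = handler;
  return table;
}

const routes = compileRoutes([
  ["/ping", () => "pong"],
  ["/version", () => "1.0.0"],
]);

// Per request: a property lookup and a direct call.
// No reflection, no container, no abstraction layers.
function dispatch(path: string): string {
  const handler = routes[path];
  return handler ? handler() : "404";
}

console.log(dispatch("/ping")); // → "pong"
console.log(dispatch("/nope")); // → "404"
```

All the bookkeeping lives in `compileRoutes`, which runs once; `dispatch` is the kind of code a JavaScript engine optimizes trivially.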


OOP doesn't have to mean legacy

Many developers associate decorators with slow, over-engineered systems.

That's a tooling problem - not a paradigm problem.

Vercube shows that you can keep OOP, keep decorators, keep clean structure - and still hit numbers usually reserved for much more minimal frameworks.


Closing thoughts

I didn't build Vercube to prove a point about frameworks.

I built it because I wanted to enjoy writing backend code again - without paying for that comfort in build time, startup time, or runtime performance.

The benchmarks matter because they remove opinions from the discussion. You don't have to like the design or agree with the philosophy.

You can just look at the numbers.

And those numbers show that OOP - when done carefully - still has a place in modern backend development.

Top comments (2)

Konrad Rojecki

Looks interesting. I'll give it a try in the next project :)

Oskar Lebuda

Thanks! 🙏