Marcus Kohlberg for Encore

Encore.ts — 9x faster than Express.js & 3x faster than Bun + Zod

A couple of months ago we released Encore.ts — an Open Source backend framework for TypeScript.

Since there are already a lot of frameworks out there, we wanted to share some of the outlier design decisions we've made and how they lead to remarkable performance numbers.

Performance benchmarks

We benchmarked Encore.ts, Bun, Fastify, and Express, both with and without schema validation.

For schema validation we used Zod where possible. In the case of Fastify we used Ajv as the officially supported schema validation library.

For each benchmark we took the best result of five runs. Each run was performed by making as many requests as possible with 150 concurrent workers, over 10s. The load generation was performed with oha, a Rust and Tokio-based HTTP load testing tool.

Enough talk, let's see the numbers!

Encore.ts handles 9x more requests/sec than Express.js

Requests per second

Encore.ts has 80% less response latency than Express.js

Response latency

(Check out the benchmark code on GitHub.)

Notably, Encore.ts achieves this while maintaining 100% compatibility with Node.js.

How is this possible? From our testing we've identified three major sources of performance, all related to how Encore.ts works under the hood.

Boost #1: Putting an event loop in your event loop

Node.js runs JavaScript code using a single-threaded event loop. Despite its single-threaded nature this is quite scalable in practice, since it uses non-blocking I/O operations and the underlying V8 JavaScript engine (that also powers Chrome) is extremely optimized.

But you know what's faster than a single-threaded event loop? A multi-threaded one.

Encore.ts consists of two parts:

  1. A TypeScript SDK that you use when writing backends using Encore.ts.

  2. A high-performance runtime, with a multi-threaded, asynchronous event loop written in Rust (using Tokio and Hyper).

The Encore Runtime handles all I/O, such as accepting and processing incoming HTTP requests. It runs as a completely independent event loop that utilizes as many threads as the underlying hardware supports.

Once a request has been fully processed and decoded, it gets handed over to the Node.js event loop; the Encore Runtime then takes the response from the API handler and writes it back to the client.

(Before you say it: Yes, we put an event loop in your event loop, so you can event-loop while you event-loop.)

Diagram
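For contrast, here's a minimal sketch of the standard Node.js model, where accepting the connection, reading the body, and parsing JSON all happen on the single JS event loop (the port and handler are illustrative):

import { createServer } from "node:http";

// Plain Node.js: every step below runs on the one JS event loop thread.
createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
        // Parsing competes with your business logic for the same thread.
        const payload = body.length > 0 ? JSON.parse(body) : null;
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ ok: true, received: payload }));
    });
}).listen(3000);

With Encore.ts, that accept/read/parse work happens on the multi-threaded Rust event loop instead, and the JS side only sees the already-decoded request.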

Boost #2: Precomputing request schemas

Encore.ts, as the name suggests, is designed from the ground up for TypeScript. But you can't actually run TypeScript: it first has to be compiled to JavaScript, by stripping all the type information. This means run-time type safety is much harder to achieve, which makes it difficult to do things like validating incoming requests, leading to solutions like Zod becoming popular for defining API schemas at runtime instead.
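For context, here is a minimal sketch of that runtime-schema approach using Zod (the schema and payload are illustrative):

import { z } from "zod";

// The schema is declared as a runtime value, because TypeScript's own types
// are erased during compilation.
const BlogPost = z.object({
    id:    z.number(),
    title: z.string(),
    body:  z.string(),
    likes: z.number(),
});
type BlogPost = z.infer<typeof BlogPost>;

// Validation runs explicitly, per request, on the JS event loop.
const rawBody = '{"id":1,"title":"Hello","body":"...","likes":0}';
const post: BlogPost = BlogPost.parse(JSON.parse(rawBody));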

Encore.ts works differently. With Encore, you define type-safe APIs using native TypeScript types:

import { api } from "encore.dev/api";

interface BlogPost {
    id:    number;
    title: string;
    body:  string;
    likes: number;
}

export const getBlogPost = api(
    { method: "GET", path: "/blog/:id", expose: true },
    async ({ id }: { id: number }): Promise<BlogPost> => {
        // ...
    },
);

Encore.ts then parses the source code to understand the request and response schema that each API endpoint expects, including things like HTTP headers, query parameters, and so on. The schemas are then processed, optimized, and stored as a Protobuf file.

When the Encore Runtime starts up, it reads this Protobuf file and pre-computes a request decoder and response encoder, optimized for each API endpoint, using the exact type definition each endpoint expects. In fact, Encore.ts even handles request validation directly in Rust, ensuring invalid requests never even touch the JS layer, which mitigates many denial-of-service attacks.
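As a rough mental model, precomputing means building a specialized decoder per endpoint once at startup, instead of re-interpreting a schema on every request. This is a conceptual sketch in TypeScript only; Encore does this in Rust, driven by the generated Protobuf schema:

// Conceptual illustration, not Encore's actual internals.
type FieldKind = "number" | "string" | "boolean";

interface FieldSpec {
    name: string;
    kind: FieldKind;
}

// Built once, at startup, per endpoint.
function compileDecoder(fields: FieldSpec[]) {
    return (raw: Record<string, unknown>) => {
        for (const f of fields) {
            if (typeof raw[f.name] !== f.kind) {
                throw new Error(`invalid request: "${f.name}" must be a ${f.kind}`);
            }
        }
        return raw;
    };
}

// Reused for every request to GET /blog/:id.
const decodeGetBlogPost = compileDecoder([{ name: "id", kind: "number" }]);
decodeGetBlogPost({ id: 123 }); // ok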

Encore’s understanding of the request schema also proves beneficial from a performance perspective. JavaScript runtimes like Deno and Bun use a similar architecture to that of Encore's Rust-based runtime (in fact, Deno also uses Rust+Tokio+Hyper), but lack Encore’s understanding of the request schema. As a result, they need to hand over the un-processed HTTP requests to the single-threaded JavaScript engine for execution.

Encore.ts, on the other hand, handles much more of the request processing inside Rust, and only hands over the decoded request objects. By handling much more of the request life-cycle in multi-threaded Rust, the JavaScript event-loop is freed up to focus on executing application business logic instead of parsing HTTP requests, yielding an even greater performance boost.

Boost #3: Infrastructure Integrations

Careful readers might have noticed a trend: the key to performance is to off-load as much work from the single-threaded JavaScript event-loop as possible.

We've already looked at how Encore.ts off-loads most of the request/response lifecycle to Rust. So what more is there to do?

Well, backend applications are like sandwiches. You have the crusty top-layer, where you handle incoming requests. In the center you have your delicious toppings (that is, your business logic, of course). At the bottom you have your crusty data access layer, where you query databases, call other API endpoints, and so on.

We can't do much about the business logic — we want to write that in TypeScript, after all! — but there's not much point in having all the data access operations hogging our JS event-loop. If we moved those to Rust we'd further free up the event loop to be able to focus on executing our application code.

So that's what we did.

With Encore.ts, you can declare infrastructure resources directly in your source code.

For example, to define a Pub/Sub topic:

import { Topic } from "encore.dev/pubsub";

interface UserSignupEvent {
    userID: string;
    email:  string;
}

export const UserSignups = new Topic<UserSignupEvent>("user-signups", {
    deliveryGuarantee: "at-least-once",
});

// To publish:
await UserSignups.publish({ userID: "123", email: "hello@example.com" });

"So which Pub/Sub technology does it use?"
— All of them!

The Encore Rust runtime includes implementations for most common Pub/Sub technologies, including AWS SQS+SNS, GCP Pub/Sub, and NSQ, with more planned (Kafka, NATS, Azure Service Bus, etc.). You can specify the implementation on a per-resource basis in the runtime configuration when the application boots up, or let Encore's Cloud DevOps automation handle it for you.
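On the consuming side, a subscription is declared the same way. Here is a sketch based on the Encore.ts Pub/Sub API, assuming the UserSignups topic and UserSignupEvent interface from the example above (the subscription name and handler body are illustrative):

import { Subscription } from "encore.dev/pubsub";

// The handler runs in Node.js; receiving, retries, and acknowledgements are
// handled by the Rust runtime against whichever implementation is configured.
const _ = new Subscription(UserSignups, "send-welcome-email", {
    handler: async (event: UserSignupEvent) => {
        // e.g. send a welcome email to event.email
    },
});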

Beyond Pub/Sub, Encore.ts includes infrastructure integrations for PostgreSQL databases, Secrets, Cron Jobs, and more.
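For example, a database is declared the same way. This sketch assumes the SQLDatabase API described in the Encore.ts docs; the database name, migrations path, and query are illustrative:

import { SQLDatabase } from "encore.dev/storage/sqldb";

// Declaring the resource in code; Encore provisions and connects it.
const db = new SQLDatabase("blog", { migrations: "./migrations" });

// The query itself is executed via the Rust runtime's connection pool.
const row = await db.queryRow`SELECT title FROM blog_post WHERE id = ${1}`;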

All of these infrastructure integrations are implemented in the Encore.ts Rust Runtime.

This means that as soon as you call .publish(), the payload is handed over to Rust, which takes care of publishing the message, retrying if necessary, and so on. The same goes for database queries, subscribing to Pub/Sub messages, and more.

The end result is that with Encore.ts, virtually all non-business-logic is off-loaded from the JS event loop.

Diagram

In essence, with Encore.ts you get a truly multi-threaded backend "for free", while still being able to write all your business logic in TypeScript.

Conclusion

Whether or not this performance is important depends on your use case. If you're building a tiny hobby project, it's largely academic. But if you're shipping a production backend to the cloud, it can have a pretty large impact.

Lower latency has a direct impact on user experience. To state the obvious: A faster backend means a snappier frontend, which means happier users.

Higher throughput means you can serve the same number of users with fewer servers, which directly corresponds to lower cloud bills. Or, conversely, you can serve more users with the same number of servers, ensuring you can scale further without encountering performance bottlenecks.

While we're biased, we think Encore offers a pretty excellent, best-of-all-worlds solution for building high-performance backends in TypeScript. It's fast, it's type-safe, and it's compatible with the entire Node.js ecosystem.

And it's all Open Source, so you can check out the code and contribute on GitHub.

Or just give it a try and let us know what you think!

Top comments (38)

NicolaBelliardi

The speed comparisons are compelling, but I’m curious about real-world use cases. Has anyone implemented EncoreTS in a production environment? How did it perform under heavy load, and were there any significant challenges?

Marcus Kohlberg

Lots of teams are using Encore in prod, here are some: encore.dev/showcase

Melroy van den Berg

But then benchmark Encore with DB calls and some logic, against for example Fastify with plugins like CORS, helmet, JWT, MySQL, Swagger, and under-pressure.

See: github.com/fastify/demo

Marcus Kohlberg

Yeah more benchmarks are coming!

James

Also, are there any memory leak issues? I heard Bun has memory leaks, but Bun has amazing startup time.

Dustin Deus

Encore is literally a transparent proxy in front of your application. This is a powerful concept. You can compare it with AWS Load Balancer + AWS Lambda or any function runtime. After processing the request, Encore will make an optimized RPC call to your application. Blocking tasks like parsing the request, compression, or validation are done at the proxy level in parallel, which allows your application to do more in the single-threaded environment. A downside that isn't mentioned here is that you have to fully opt into the framework. You won't be able to work with Node.js HTTP Request/Response objects or Web standards to leverage community plugins and frameworks. You have to wait until Encore provides all these as part of the core, or you have to develop your own modules from scratch.

The benchmarks are the best example of comparing apples and oranges. You shouldn't compare HTTP frameworks when your solution offloads work to a proxy and communicates via RPC. The real benefits of Encore are the benefits a transparent proxy/RPC protocol can enable, like automatic tracing, pre/post processing, client generation, and extending your platform with features without building them twice, like crons, webhooks, Pub/Sub handling, etc.

In comparison, Fastify is performing quite well here. Run twice as many instances, and you can handle the same load without opting into a full-blown solution. In practice, you still have to deal with a single-threaded JavaScript process. The performance benefits of Encore can become negligible if you don't take care of your application logic. For this purpose, Encore could provide built-in process spawning and load balancing.

Marcus Kohlberg

Thanks Dustin! I'll just add that Encore does support exposing the "raw" HTTP request/response. Not sure what plugin/framework you think is unsupported?

Agree it can get a little apples/oranges; Encore operates across a bigger share of the stack than most other tools, so making a 1:1 comparison is tricky. That's why we also included examples using e.g. Zod for validation, to show what the performance difference is when the scope is closer to 1:1.

In our benchmarks against Fastify, we've found Encore to be much more performant in various use cases. We're working on the full report to be published soon!

Raí B. Toffoletto

Encore looks amazing! Especially for putting two of my preferred languages together. But a question: I get its performance boost if you have an 8c/16t VPS running your backend, but in the age of cloud/k8s, is this gain real? I mean, now we run stuff in pods of 1c/1t and scale up pods when something is overloaded. I get that Rust is probably better on a single core than anything else, but is it bottlenecked by the lack of threads?

John Nyingi

It's interesting that Encore uses multi-threading, yet Bun is single-threaded. Bun offers something called workers as an alternative to multi-threading.

Rasmus Schultz

Node supports workers. (Most JS runtimes do.)

PrinceCEE

This looks really promising. I'll check it out soon.

Mohammed Saaduddin Ansari

Can we use Encore.ts with SvelteKit? If yes, then how? Is there any example?

Marcus Kohlberg

Yeah Encore is frontend agnostic. Don't have any pre-made examples right now though. But give it a go and ask any questions in encore.dev/discord

Juan Pablo Olvera Sánchez

Awesome work, guys! I'm curious to know if you will also do some benchmarks against Nest.

Marcus Kohlberg

Thanks! Yeah we can probably make that happen!

mon-jai

Looks good at first glance, but it seems that some of its core functionality is tied to the proprietary platform offering (such as cloud deployments).

Part of the package installed on our machine appears to be closed source, as I couldn't find the source code for the local dashboard.

Correct me if I am wrong.

Marcus Kohlberg

Hey @monjai, there are no runtime dependencies on Encore's platform, and everything you need to self-host is open source and documented.

You're right that the local dashboard isn't fully open source yet, this is actively being worked on and will be part of the open source project very soon.

Randall

This obviously took a lot of work and it's pretty cool how you pushed as much as you could down into native code. Personally, though, I haven't encountered much need for anything faster than Express. Express is already fast enough that real-world response time is dominated much more by latency between the user and the server, then latency between the server and external APIs, then latency between the server and databases, than by the performance of application code. And a well-optimized application on any modern framework should be bringing in enough money per instance for the business that the cost of adding more instances is pretty trivial.

tareksalem

I feel from the documentation that Encore was built as a development platform, and its purpose is quite different from what's introduced in this article. I see you are now trying to introduce it as an alternative to Express and other Node.js HTTP libraries, but it's bigger than that, and also harder than that. Are there details on how this runs on normal infrastructure rather than the cloud? What setup, libraries, and tools need to be installed on the OS to run it on a production server? I think it's more of a new environment than a Node.js framework. Great work for sure, but I noticed something: you still need to make the platform support more and more tools and libraries that are already out there. For example, regarding event-driven systems, it's not just GCloud Pub/Sub and AWS; there are tons of pub/sub and queuing tools used in production. How can someone use, for example, RabbitMQ or AMQ and other tools? What if I am using another DB like MongoDB, CockroachDB, or others? Do you need to build drivers for those tools? If yes, is there a way to use the drivers that already exist, so we keep the learning curve simple for new users?

Marcus Kohlberg

Thanks Tarek, great comments. Some thoughts:

  • Encore was designed to solve challenges related to building cloud backends, hence the focus of the platform's capabilities is on tackling cloud-related hurdles rather than trying to make it simpler to use any type of infrastructure.
  • The framework is independent of the platform, so you can self-host your application if you wish; this is documented here: encore.dev/docs/how-to/self-host
  • You can use any library, queueing infra, or database you wish. There's no magic preventing you from setting up and using that as you normally would for a Node.js application. However, Pub/Sub and Postgres are what Encore has native support for and can automatically orchestrate for you.

Rene Kootstra

And when you need a database call, suddenly these performance characteristics don't mean much anymore, because the database tends to become the bottleneck.

Marcus Kohlberg

Yeah, db calls are definitely a bottleneck, but in our real-world testing the uplift is still very meaningful across the entirety of an application.

Imamuzzaki Abu Salam

https://www.techempower.com/benchmarks/#hw=ph&test=composite&section=data-r22

I hope you're enlisted here soon, dear Encore.ts.

Mr. Binary Sniper

I am a Deno fanboy. Just wondering, is there any way to use it with Deno?

Marcus Kohlberg

Coming soon!

Kravets

Why do you compare it with one of the slowest validators?

Marcus Kohlberg

We picked it because it's one of the most commonly used so the information would be most relevant to people.

timsar2

How do you configure Angular SSR to use Encore instead of Express?