
Maxim Krendel


The TypeScript Performance Lie: How V8 Actually Runs Your Code

The most common question in interviews with JS developers is: "What is TypeScript and why is it needed?" The classic answer: "It's a superset of JS that adds static typing and speeds up development." And that's absolutely true. When a code base grows so large that it can no longer be managed by a single team, strict contracts save the project from drowning in dozens of unpredictable undefineds and runtime errors.

A pencil sketch of a job interview: an interviewer and a candidate sitting at a desk, both with cube-shaped heads

Engineers who come from statically typed languages (such as C++, Java, or Rust) will logically continue this thought: "Static typing also speeds up code execution. After all, the compiler knows the size of the data and the types of variables in advance and can immediately generate optimized machine code."

This sounds perfectly logical. And it's even partially true, because in some cases the JS engine does generate machine code. But acceleration isn't as simple as it seems. Let's take a look at what exactly happens under the hood of TS and how much it "speeds up" our code.

Why Aren't Your Objects Hash Maps?

Let's start with the basics by comparing objects in JS with structs in C. At first glance, a JS object is a hash table where you can put anything you want whenever you want, unlike a C struct, which is a strictly laid-out, contiguous piece of memory. If that were literally true, JS would run very slowly. That's why V8 converts our objects into C-like structures on the fly.

A pencil sketch of a figure with a cube-shaped head walking through a maze. Open boxes with gears are located in the dead ends of the maze

The advantage of structs is extremely simple: since the fields are laid out predictably, a property access turns into an offset added to the object's base address. If we compare the steps of the algorithm, access by offset requires a single direct memory read. By contrast, to get the value of obj.x out of a hash table, V8 would need to:

  • Calculate the hash of the string property x
  • Calculate the bucket index
  • Go to the bucket in memory
  • Check the key for a match

For a more low-level comparison (and for those who are most interested), here is what the simplified assembly code looks like when accessing by offset:

CMP [obj_ptr], EXPECTED_HIDDEN_CLASS_PTR   ; compare the object's hidden-class pointer
JNE deoptimize                             ; bail out if the shape doesn't match

MOV RAX, [obj_ptr + 16]                    ; read the property at its fixed offset


In this pseudocode, we first make sure that the object's form matches the expected one, and then instantly retrieve the value into the RAX processor register.

On the other hand, here is how many instructions it takes JUST TO PREPARE for the hash-table lookup:

MOV RCX, [obj_ptr + HASH_OFFSET]    ; 1. Load the pre-calculated hash from memory
MOV RDX, [table_ptr + SIZE_OFFSET]  ; 2. Load the current hash table capacity
DEC RDX                             ; 3. Calculate (table_size - 1) for the mask
AND RCX, RDX                        ; 4. Apply bitwise AND to find the bucket index
MOV RBX, [table_ptr + RCX * 8]      ; 5. Fetch the bucket pointer from memory
TEST RBX, RBX                       ; 6. Check if the bucket is empty
JZ not_found                        ; 7. Conditional jump...


And we haven't even started comparing the keys themselves character by character yet!

Briefly about Hidden Classes

Obviously, getting a value by offset is far more efficient than a hash-table lookup. The only question left is: how does JS turn chaotic objects into ordered structures?

For this, V8 uses the concept of Hidden Classes. Instead of creating a heavy hash table for each object, the engine forms a special "blueprint" in memory that records the properties and their exact offsets. Consider the example const user = { id: 1, name: "User" }. In V8's memory it looks like this:

  1. Header (16 bytes): Object metadata and pointer to Hidden Class.
  2. id (offset 16): Immediately after the header. Occupies one 4-byte slot (with V8's pointer compression, in-object fields are 32-bit).
  3. name (offset 20): After the header and the id field. Takes up another 4-byte slot.

Thanks to this "blueprint," the engine doesn't need to guess where the data is located. To read user.name, it only needs to know the start address of the object and add 20 bytes to it. The concept of Hidden Classes is extensive, but what matters most for us right now is this: objects with the same shape share the same blueprint.
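The effect of key order is visible even from plain JavaScript. A minimal sketch (the %HaveSameMap check is a V8 intrinsic that needs Node's --allow-natives-syntax flag, so it is left as a comment):

```javascript
// Two objects with the same keys but different insertion order.
// To V8 these are two different hidden classes, because the
// transition chain is built property by property, in order.
const user1 = { id: 1, name: "User" };
const user2 = { name: "User", id: 1 };

// Insertion order is observable from JS and mirrors the shape transitions:
console.log(Object.keys(user1)); // [ 'id', 'name' ]
console.log(Object.keys(user2)); // [ 'name', 'id' ]

// With node --allow-natives-syntax you could verify the shapes directly:
// %HaveSameMap(user1, user2)  // false — same keys, different blueprint
```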

What's under the hood of this machine?

One of the most important applications of the Hidden Classes concept occurs when objects are used as function arguments. This is one of those cases where V8's optimization tools are turned on to the maximum.

A pencil sketch of a mechanic with a cube-shaped head repairing a complex car engine with a wrench in hand

In practice, V8 classifies call sites and property accesses as monomorphic/polymorphic/megamorphic, depending on how many different object shapes were encountered there:

  • Monomorphic: adapted to work with only one data type, i.e., one Hidden Class.
  • Polymorphic: adapted to several specific forms, usually up to 4.
  • Megamorphic: work with an unlimited number of different structures.
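Here is a small sketch of how a single call site drifts through those three states; the shapes and the loop are illustrative, not a benchmark:

```javascript
function getA(obj) {
    return obj.a;
}

// Monomorphic: every call sees the same shape {a}.
getA({ a: 1 });
getA({ a: 2 });

// Polymorphic: the same call site now sees a few distinct shapes.
getA({ a: 3, b: 0 });
getA({ a: 4, c: 0 });

// Megamorphic: manufacture a new shape on every iteration, so the
// inline cache overflows and falls back to a generic property lookup.
let total = 0;
for (let i = 0; i < 10; i++) {
    const obj = { a: i };
    obj["key" + i] = true; // each distinct key set = a new hidden class
    total += getA(obj);
}
console.log(total); // 45
```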

People assume the interpreter doesn't think at all: it just translates each line into machine code and executes it immediately. In reality, V8 is much smarter and has three tiers of code execution. As an example, take a function greetings(user) that returns user.name, and suppose we call it many, many times. Three processes will run inside V8:

  1. Ignition interpreter, which not only executes code line by line, but also monitors data types and collects runtime statistics. What happens if our function is called many times? In engine terminology, it becomes "warm" or "hot," and then it's time to move on to one of the following steps.
  2. Maglev, a mid-tier compiler introduced in recent versions of the engine, kicks in when the function is already "warm" but not yet "hot": a cautious bettor, so to speak. Maglev bets that the function will keep receiving the same Hidden Class and generates machine code based on that, but avoids aggressive optimizations. This lets it compile faster and not waste much time in cases where the bet doesn't pay off.
  3. The TurboFan compiler is a more daring bettor in V8. It goes all in and tries to apply the maximum number of optimizations to make the function run as fast as possible. Importantly, it shines on complex functions where Maglev's capabilities are no longer enough. The result is code that runs very fast, but if the bet doesn't pay off, we not only waste resources on useless background compilation but also get an expensive synchronous fall back to the interpreter.

A pencil sketch of two figures with cube-shaped heads playing poker. One is examining notes through a magnifying glass, while the other goes all-in, pushing a mountain of chips with

V8 uses an execution profile (how many times the code was executed, what types/shapes were encountered) and, based on thresholds, can send hot sections to Maglev and even hotter ones to TurboFan. Compilation can happen in parallel while the current version of the code keeps running: at the moment TurboFan is still compiling its heavy version, the optimized code from Maglev is already executing at full speed. Moreover, TurboFan is not always necessary: if there are no complex calculations in the function, V8 decides that Maglev's speed is sufficient and skips further optimization. And all of this works perfectly as long as V8 keeps guessing the Hidden Classes of function arguments correctly...
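You can watch this pipeline with your own eyes by running a hot function under V8's tracing flags. A minimal sketch (the flags below are V8 options passed through Node; the exact log output varies by V8 version):

```javascript
// Run with: node --trace-opt --trace-deopt hot.js
// --trace-opt prints when V8 marks a function for optimized
// recompilation; --trace-deopt prints when a bailout happens.
function hot(user) {
    return user.name; // monomorphic access: same shape on every call
}

let last = "";
for (let i = 0; i < 100000; i++) {
    // Always the same key order, so one hidden class for every argument.
    last = hot({ id: i, name: "User" });
}
console.log(last); // "User"
```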

The bet didn't pay off, but the establishment is still winning

Since TurboFan and Maglev are constantly placing bets, what happens when the bet doesn't come in? What happens if the function accepts the object {id: 1, name: "User"} 1000 times, and on the 1001st time it receives anotherUser = {name: "User", id: 1}? Although the keys are the same, their order has changed, which means we are dealing with a different Hidden Class.

The machine code generated by TurboFan and Maglev always checks the hidden class to make sure it matches the one the function is "used to." When the class doesn't match, the deoptimization or bailout phase kicks in. How does this happen?

When a different Hidden Class arrives at the function, the engine needs to do two things:

  1. Stop executing native code
  2. Return to the interpreter so that the JS code doesn't notice the trick. To do this, the engine has to map the current physical state of the processor back onto the virtual state of Ignition.

To accomplish the second task, TurboFan emits deoptimization tables alongside the optimized machine code. This is a map that says, "At this point in the machine code, the value in the RBX register corresponds to the variable x in the Ignition bytecode." During state restoration, the engine consults these tables, takes the raw values from the processor registers, and packs them back into full-fledged JavaScript objects or values on the heap so that Ignition can work with them.

A pencil sketch of a figure with a cube-shaped head frantically sorting through parts in an overflowing chest and sacks filled with gears

Next comes an even more complex stage. Native machine code has its own Stack Frame format, while the Ignition interpreter uses a completely different virtual stack format, and V8 literally "rewrites" the call stack on the fly. It destroys the native frame of the optimized function and constructs an interpreter frame in its place. Only after this does the code execution continue.

Most importantly, just one call with a different Hidden Class will cause subsequent calls to run in slow mode until Ignition has accumulated enough statistics to kick off Maglev and TurboFan again. An attentive reader will object: "But we also have polymorphic functions. What happens to them?" Indeed, a function can be polymorphic, but for that it must be called many times with two, three, or four different Hidden Classes. Beyond that, the call site slides into megamorphism.

What does TypeScript have to do with this, and how to write code correctly?

When they say in interviews that TypeScript does not speed up code at runtime, it's true. Types don't exist for the V8 engine; after compilation, only bare JavaScript remains. And although TypeScript imposes type discipline on us, it is important to understand its limitation: TS checks content, not form. From the TS compiler's point of view, the objects {a: 1, b: 2} and {b: 2, a: 1} are identical to the interface {a: number, b: number}. But for the V8 engine, these are two different data structures with different Hidden Classes. And for V8 to stay on the fast path, it is also important to follow its rules for working with memory under the hood:

  1. Don't change the structure of objects on the fly; initialize all attributes at once. If an object must have a field, declare it immediately upon creation, even if it will be null for now. Remember: TypeScript will forgive you for changing the order of keys when creating an object, but V8 will immediately create a new Hidden Class and reset the optimizations. If you create an empty object const user = {} and then keep adding properties throughout the code, V8 suffers: each new property means creating a new Hidden Class on the fly and updating the object's hidden-class pointer.
  2. Forget about delete obj.x. Deleting a property not only breaks the current hidden class, but also puts the object into slow Dictionary Mode. In this mode, access to properties is usually noticeably slower and less optimized.
  3. Initialize properties in strictly the same order. If there is a factory or constructor, fields must be created step by step in the same sequence. If in one part of the program the object is assembled as obj.a = 1; obj.b = 2;, and in another as obj.b = 2; obj.a = 1; — in memory these are two completely different structures, and the optimizer will stumble again.
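Put together, the three rules look like this in code. The makeUserBad and makeUserGood factories below are illustrative names contrasting a shape-hostile and a shape-friendly version of the same object:

```javascript
// Shape-hostile: properties appear one by one, and delete pushes
// the object into slow dictionary mode.
function makeUserBad(id, name) {
    const user = {};   // hidden class #1: {}
    user.name = name;  // hidden class #2: {name}
    user.id = id;      // hidden class #3: {name, id}
    delete user.name;  // breaks the shape, may fall into dictionary mode
    user.name = name;  // re-added, but the fast path is already lost
    return user;
}

// Shape-friendly: every field declared up front, always in the same
// order; "missing" values are null instead of deleted keys.
function makeUserGood(id, name) {
    return {
        id: id,
        name: name,
        email: null, // reserve the slot even if the value is unknown yet
    };
}

const a = makeUserGood(1, "Ada");
const b = makeUserGood(2, "Bob");
// a and b share one hidden class; the property order is identical:
console.log(Object.keys(a)); // [ 'id', 'name', 'email' ]
```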

Essentially, all these rules boil down to one simple paradigm: write JavaScript as if you were working with static memory structures in C or C++.

A pencil sketch of a strict inspector with glasses looking at a perfectly aligned formation of identical figures with cube-shaped heads

The V8 engine spends enormous computational resources precisely to transform the chaos of dynamic typing into predictable memory blocks that the processor can access with hard-coded fixed offsets. TypeScript provides an excellent platform and syntax for maintaining this rigor, but the real speed under the hood comes from our discipline in working with memory.

What was all this about?

"Why do I need to know all this? What difference does it make if a class is monomorphic or polymorphic? So the processor will execute 20 more commands during deoptimization, so what? Modern hardware executes billions of instructions per second, I won't even notice the difference."

To prove that this "insignificant difference" is actually significant, let's take a look at the benchmark. You can analyze and test it in its entirety here; in this article, we will only analyze pieces of code.

Let's start with initialization and create four objects: two with the same Hidden Class and two with different Hidden Classes. Note that obj3's Hidden Class differs from obj1 and obj2 even though the set of attributes is the same.

const obj1 = { a: 1, b: 2 };
const obj2 = { a: 3, b: 4 };
const obj3 = { b: 2, a: 1 };
const obj4 = { a: 1, b: 2, c: 3 };

Next, let's warm up our Ignition interpreter. Let's create a simple addition function and run objects with the same Hidden Class through it 10,000 times. This will force V8 to collect statistics (Type Feedback) and "bet" that the function will always receive objects of this particular form. Then we will forcefully send it to the TurboFan compiler.

// Reads properties by memory offset (when optimized).
function calculate(obj) {
    return obj.a + obj.b;
}
// 3. Warm-up Phase
for (let i = 0; i < 10000; i++) {
    calculate(obj1);
    calculate(obj2);
}

// Force TurboFan to optimize the function into machine code on the next call.
// (%-intrinsics require running Node/d8 with --allow-natives-syntax.)
%OptimizeFunctionOnNextCall(calculate);
calculate(obj1); // This call triggers the actual JIT compilation.

Now let's run two cycles of 10 million iterations. In the first (monomorphic) cycle, we will only pass objects with the expected hidden class. In the second (polymorphic) cycle, we will start feeding the function objects with a different memory structure. We will accumulate the results in the sumMonomorphic and sumPolymorphic variables so that the compiler does not cut out our code entirely as unnecessary.

const ITERATIONS = 10_000_000;
let sumMonomorphic = 0; // Accumulator to prevent Dead Code Elimination (DCE)

const startMono = performance.now();
for (let i = 0; i < ITERATIONS; i++) {
    // Alternating between objects with the SAME memory layout
    sumMonomorphic += calculate(i % 2 === 0 ? obj1 : obj2);
}
const endMono = performance.now();
console.log(`[+] Monomorphic execution (C-like offsets): ${(endMono - startMono).toFixed(2)} ms`);

// 5. Polymorphic Execution (Deoptimization) Benchmark
// The function is currently optimized for the {a, b} shape.
// Passing objects with different shapes will cause cache misses and force V8 to deoptimize.
let sumPolymorphic = 0;

const startPoly = performance.now();
for (let i = 0; i < ITERATIONS; i++) {
    // Alternating between objects with DIFFERENT memory layouts
    if (i % 3 === 0) sumPolymorphic += calculate(obj1);
    else if (i % 3 === 1) sumPolymorphic += calculate(obj3);
    else sumPolymorphic += calculate(obj4);
}
const endPoly = performance.now();
console.log(`[-] Polymorphic execution (V8 deoptimization): ${(endPoly - startPoly).toFixed(2)} ms`);

// Output the sums to ensure the compiler doesn't optimize the loops away completely
console.log(`\n(DCE prevention output: ${sumMonomorphic}, ${sumPolymorphic})`);

If you run this code, the numbers will depend on the power of your processor, but the ratio will always be approximately as follows:

[+] Monomorphic execution (C-like offsets): ~13.32 ms
[-] Polymorphic execution (V8 deoptimization): ~22.73 ms

The difference is nearly twofold, on absolutely identical arithmetic.

In the first case, the processor simply flew through the compiled machine code, reading properties from memory using predefined hard offsets. In the second case, TurboFan failed the hidden class check. The engine urgently reset the processor's prediction pipeline, stopped the execution of native code, and started the Bailout process—unpacking the registers back into the virtual stack of the Ignition interpreter. The harmless "20 extra commands" turned into a loss of a profitable machine path: breaking the inline cache, returning to the interpreter/less optimized code, and worsening branch prediction and locality.

Think like a compiler

When we write in high-level languages, it's easy to succumb to the illusion of "infinite resources" and "smart tools" that will fix any problems. But, as we have seen, even in such a situation, we remain hostage to the processor architecture. It still needs predictability, data locality, and clear contracts.

TypeScript does not make your code faster on its own. Its real superpower is not in type checking in VS Code, but in the fact that it enforces discipline that is best suited to the internal optimizations of the engine. When you adhere to strict interfaces, you literally help TurboFan generate machine code that is close to C in terms of efficiency.

A pencil sketch of a relaxed figure with a cube-shaped head sitting in a comfortable armchair with a cup of tea, reading a book

"Writing performant code today, in the era of neural networks and LLMs" means "understanding where logic ends and hardware physics begins." Code is still a set of instructions for transistors. And it is within our power to make those instructions "fly down the dedicated lane" rather than "get stuck in traffic."
