Damilare Akinlaja

Building a JIT Compiler from Scratch: Part 0 - How Computers Run Your Code

This is the first part of a series on building a JIT compiler from scratch. I have been exploring the world of compilers and interpreters for many years, from theory to many failed attempts, as a hobbyist trying to satisfy my curiosity. I do not have a Computer Science background, even though programming was a mandatory course when I studied Applied Physics. Every advanced compiler topic I have learned came from personal reading, experimentation, and attempts.

Writing a language parser is the easy part; things get a bit dramatic once you start tree-walking your AST to interpret it. But I wanted to go beyond parsers and basic interpreters. My journey led me to the world of virtual machines, to building Cabasa and Wasp, two experimental WebAssembly runtimes, and now to developing Zyntax - a multi-paradigm, multi-tier JIT compilation and runtime framework.

For the curious cats, I am writing this blog series to demystify compiler and virtual machine development. I might oversimplify some definitions or use diagrams to explain concepts, but the major fun part is in the hands-on coding of our own JIT compiler in Rust.

But first, let's take a brief tour of compilers and runtimes.

A Tour of Interpreters, Compilers and Everything In Between

Every time you run a program, something bridges the gap between your code and the transistors that execute it. That bridge is the programming language runtime, a program designed to interpret or translate your code.

Code Is Just Text

When you write:

fn add(a: i32, b: i32) -> i32 {
    a + b
}

This text is meaningless to the CPU. The processor understands only binary-encoded instructions specific to its architecture. Someone or something must translate this code.

Since the invention of high-level programming languages, two main translation philosophies have emerged:

  1. Translate everything upfront (Compilation)
    • Convert entire program to machine code before running
    • Pay translation cost once
    • Run at full hardware speed
  2. Translate on demand (Interpretation)
    • Read and execute code line by line
    • Quick startup, no upfront wait to compile
    • Pay the translation cost repeatedly

This split, established in the 1950s, still defines how we think about language implementation today.

A Brief History of Language Execution

The Foundations - 1950s

1952 - A-0 System

"All I had to do was to write down a set of call numbers, let the computer find them on the tape, bring them over and do the additions. This was the first compiler."

Grace Brewster Murray Hopper, a computer pioneer and United States Navy rear admiral, created a system that translated symbolic mathematical code into machine language. Hopper had been collecting subroutines and transcribing them onto tape; each routine had a call number that the computer could look up, load, and execute.

1958 - FORTRAN (FORmula TRANslation)

FORTRAN was the first widely used high-level programming language with an optimizing compiler. It was first developed in 1954 by John Backus at IBM to simplify mathematical programming, and subsequent versions in 1958 introduced features that made code reusable.

1958 - LISP

Lisp was the first interpreted language. Created by John McCarthy at MIT, it is the second-oldest high-level language and was designed for symbolic computation. Lisp introduced the REPL (Read-Evaluate-Print Loop), a system that interactively interprets Lisp code.

The Divergence - 1960s-70s

Between the '60s and '70s, more high-level programming languages emerged for systems programming, numerical computation, and rapid prototyping.

Compiled Languages: COBOL, FORTRAN, C

  • Systems programming and numerical computation
  • Maximum performance, minimal runtime overhead
  • Slow edit-compile-run cycle

Interpreted Languages: BASIC, early Smalltalk

  • Education, rapid prototyping
  • Immediate feedback, slow execution
  • Often 10-100x slower than compiled code

An Optimized Interpretation: The Bytecode Virtual Machine - 1980s

Bytecode intermediate representations became prominent in this period, with Smalltalk-80 as the first consistently object-oriented language (after Simula, which introduced OO concepts). Unlike the trees used by classical interpreters, bytecode borrows the structure of machine code - compact opcodes and operands - but it is interpreted by a virtual processor rather than executed directly on the CPU. This makes it more efficient to interpret while remaining portable and higher-level than machine code.

Bytecode designs vary widely in abstraction level: some are quite close to machine code in structure, while others are essentially serialized abstract syntax trees.

Another benefit of the bytecode virtual machine (VM) is that programs become executable anywhere the VM can run. This let languages like UCSD Pascal (with its P-code) be written once and run anywhere, using the bytecode as a distribution format.

Just-In-Time Execution (JIT) - 1990s

In 1987, David Ungar and Randall Smith pioneered Just-In-Time compilation techniques while developing the Self programming language. Self was an object-oriented language based on prototypes instead of classes. This design choice posed a significant challenge to the efficiency of the runtime: every operation was a method invocation, even something as simple as a variable access, which made naive execution very expensive.

To work around Self's implementation bottleneck, the team experimented with several approaches that led to key innovations in Just-In-Time execution:

  • Adaptive optimization
  • Inline caching and polymorphic inline caches
  • Type feedback and on-the-fly recompilation
  • Achieved roughly half the speed of C, proving that a dynamic language could also be fast

The Self team's innovations eventually laid the foundation for modern JIT compilers in language implementations like the Java HotSpot Virtual Machine, and more recent ones like V8 (JavaScript), LuaJIT (Lua), and PyPy (Python).

JIT Changed Everything

Modern language implementations can now achieve best-in-class runtime performance without Ahead-Of-Time compilation to machine code. JIT opened the doors to more runtime innovation.

  1. Java (1995) / HotSpot (1999)
    • Bytecode + JIT hybrid architecture became mainstream
    • Hot-path/cold-path compilation philosophy for runtime optimization
    • JIT became industry standard
  2. JavaScript / V8
    • Early JavaScript implementations started as slow interpreters
    • V8's JIT made JavaScript one of the fastest dynamic language implementations
  3. Modern VMs combine everything
    • Interpreter for cold startup (start fast, optimize later)
    • Baseline JIT for quick compilation, with hot code promoted to an optimizing JIT for better performance

Execution Models Compared

Pure Interpretation

An illustration of a programming language interpreter

How it works:

  1. A parser reads the source text and translates it into an Abstract Syntax Tree (AST)
  2. The AST captures the structure of the program, which tells the interpreter what to execute
  3. In a loop, the interpreter walks the AST and evaluates the current statement or operation in the tree
  4. Repeat for the next statement (see the sketch below)
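
To make the loop concrete, here is a minimal tree-walking evaluator sketch in Rust, restricted to arithmetic. The Expr type and eval function are illustrative names, not part of any real interpreter.

// A minimal tree-walking interpreter for arithmetic expressions.
// The AST is an enum; evaluation is a recursive walk over it.
enum Expr {
    Number(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(expr: &Expr) -> f64 {
    match expr {
        Expr::Number(n) => *n,
        Expr::Add(lhs, rhs) => eval(lhs) + eval(rhs),
        Expr::Mul(lhs, rhs) => eval(lhs) * eval(rhs),
    }
}

fn main() {
    // Represents (3 + 4) * 2, as a parser might produce it.
    let ast = Expr::Mul(
        Box::new(Expr::Add(
            Box::new(Expr::Number(3.0)),
            Box::new(Expr::Number(4.0)),
        )),
        Box::new(Expr::Number(2.0)),
    );
    println!("{}", eval(&ast)); // prints 14
}

Every node visit goes through a match and a recursive call; that per-node dispatch is exactly the overhead described under "Why it's slow" below.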

Examples of pure interpretation include shell scripts and early implementations of BASIC, Python, and Ruby.

Characteristics:

Aspect            Assessment
Startup time      Instant
Execution speed   Slow (10-100x native)
Memory usage      Low
Debugging         Excellent (source available)
Portability       High (interpreter handles platform)

Why it's slow:

  • Parse/decode cost paid on every execution
  • Dispatch overhead
  • Little opportunity for optimization
  • No direct use of CPU's native instruction pipeline

Bytecode Interpretation

An illustration of a bytecode interpreter implementation
Bytecode gets its name from its structure: a sequence of bytes where each byte (or small group of bytes) encodes an instruction (or its operands). Think of it as machine code for a virtual computer: it doesn't execute on the native CPU hardware. This buys you portability (the same bytecode runs anywhere the VM runs) and efficiency (no re-parsing of source code on every execution).

How it works:

  1. Compile source to bytecode (once)
  2. The bytecode is a sequence of simple operations
  3. The interpreter loops: fetch opcode → dispatch to handler → execute (see the sketch below)
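
As a rough sketch (the opcodes and names here are invented for illustration, not taken from any real VM), a stack-based bytecode loop can be as small as this:

// A tiny stack-based bytecode interpreter: fetch, dispatch, execute.
#[derive(Clone, Copy)]
enum Op {
    Push(i64), // push a constant onto the stack
    Add,       // pop two values, push their sum
    Mul,       // pop two values, push their product
    Print,     // pop and print the top of the stack
}

fn run(program: &[Op]) {
    let mut stack: Vec<i64> = Vec::new();
    let mut pc = 0; // program counter
    while pc < program.len() {
        match program[pc] {
            Op::Push(n) => stack.push(n),
            Op::Add => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a * b);
            }
            Op::Print => println!("{}", stack.pop().unwrap()),
        }
        pc += 1;
    }
}

fn main() {
    // (3 + 4) * 2
    run(&[Op::Push(3), Op::Push(4), Op::Add, Op::Push(2), Op::Mul, Op::Print]);
}

A real VM would encode the instructions as raw bytes rather than a Rust enum, but the fetch → dispatch → execute shape is the same.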

Popular examples of bytecode-based programming languages: Ruby MRI, Lua, early versions of Java, Wren.

Types of Bytecode Interpreters

  1. Stack-based vs Register-based:
Stack-based (JVM, CPython, Wasm)     Register-based (Lua, Dalvik, LuaJIT)
─────────────────────────────────    ─────────────────────────────────────
push 3                               load r0, 3
push 4                               load r1, 4  
add          ; implicit operands     add r2, r0, r1   ; explicit operands
push 2                               load r3, 2
mul                                  mul r4, r2, r3

Smaller bytecode                     Fewer instructions
Simpler compiler                     Easier to optimize
More instructions executed           Maps better to real CPUs
  2. Fixed vs Variable Width (see the decoding sketch after the diagram):
Fixed width (simpler, faster decode):
┌────────┬────────┬────────┬────────┐
│ opcode │ operand│ operand│ operand│   Always 4 bytes
└────────┴────────┴────────┴────────┘

Variable width (compact, complex decode):
┌────────┐  or  ┌────────┬────────┐  or  ┌────────┬────────┬────────┐
│ opcode │      │ opcode │ operand│      │ opcode │  wide operand   │
└────────┘      └────────┴────────┘      └────────┴────────┴────────┘
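
As a sketch of the fixed-width case (the field layout below is invented for illustration), decoding never has to compute an instruction's length; it just slices the same byte positions every time:

// Decoding a fixed-width, 4-byte instruction: one opcode byte plus three operand bytes.
fn decode(instr: [u8; 4]) -> (u8, u8, u8, u8) {
    // Every instruction has the same layout, so no length computation is needed.
    (instr[0], instr[1], instr[2], instr[3])
}

fn main() {
    // Hypothetical encoding: opcode 0x03 = add, r2 <- r0 + r1
    let (opcode, dst, lhs, rhs) = decode([0x03, 2, 0, 1]);
    println!("opcode={opcode:#04x} dst=r{dst} lhs=r{lhs} rhs=r{rhs}");
}

Variable-width encodings trade this simplicity for compactness: the decoder must first inspect the opcode to know how many operand bytes follow.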

Characteristics:

Aspect            Assessment
Startup time      Fast (compile once)
Execution speed   Moderate (3-10x native)
Memory usage      Moderate
Debugging         Good (with source maps)
Portability       Excellent (bytecode is platform-independent)

Why it's faster than pure source interpretation:

  • Parsing done once, not per execution
  • Bytecode is compact (cache-friendly)
  • Simpler dispatch (opcode vs. AST node type)

Why it's still slow:

  • Dispatch overhead on every instruction
  • No native code generation
  • Can't use CPU's branch prediction effectively

Study Recommendation For Building Interpreters

If you are interested in building an interpreter, pure or bytecode-based, I highly recommend Robert Nystrom's free book Crafting Interpreters:

  1. A Tree-Walker Interpreter: https://craftinginterpreters.com/a-tree-walk-interpreter.html
  2. A Bytecode Virtual Machine: https://craftinginterpreters.com/contents.html#a-bytecode-virtual-machine

Ahead-Of-Time (AOT) Compilation

An illustration of an Ahead-Of-Time compiler architecture

AOT compilation transforms the source code into machine code once, before the program runs. Implementations have to balance aggressive optimization against compile time: more optimization usually means a longer wait before you can run the code. Most systems programming language implementations prefer AOT compilation for maximum performance; some give you the option to switch between JIT and AOT.

Common examples of AOT compiled languages are Rust, Haskell, C, C++, Go.

Characteristics:

Aspect            Assessment
Startup time      Instant (already compiled)
Execution speed   Fast (native)
Compile time      Slow (seconds to minutes)
Binary size       Depends on optimization
Portability       Low (recompile per platform)

Why it's fast:

  • No runtime interpretation overhead
  • Heavy optimization possible (compiler has time)
  • Direct use of CPU features

Limitations:

  • Must know everything at compile time
  • Can't optimize based on runtime behavior
  • Long edit-compile-run cycle

Just-In-Time Compilation

An illustration of a Just-In-Time compiler architecture
JIT typically gives you a large fraction of AOT performance while remaining more portable. If you are developing a programming language for long-running, fault-tolerant servers, JIT is often the best choice. Most modern JIT infrastructure starts on the cold path: when the code runs for the first time, it is interpreted so execution can begin quickly, while a profiler analyzes the hot paths - functions or operations that run frequently enough to deserve better optimization. Hot-path code is recompiled into machine code and executed on demand while the interpreter is still in the same process. Unlike AOT, the JIT architecture is a collaborative one, between the interpreter and the JIT executor.

Modern compiler backends support JIT execution, letting you generate and run machine code inside a running process without a separate AOT build step. Common examples are LLVM and Cranelift.

How it works:

  1. Start executing (interpret or baseline compile)
  2. Profile: track which code runs frequently, what types appear
  3. Hot code triggers JIT compilation - inside the running process
  4. Generated native code stored in executable memory - in the same address space
  5. Continue profiling, recompile if behavior changes (a simplified sketch follows below)
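
Real JITs emit machine code into executable memory, which is far beyond a single snippet. The sketch below only mimics the tier-up decision with a call counter and a Rust closure standing in for "compiled" code; the threshold, the CompiledFn type, and the function body are all invented for illustration.

// A toy model of tiered execution: "interpret" until a function becomes hot,
// then switch to a faster implementation standing in for JIT-compiled code.
type CompiledFn = Box<dyn Fn(i64) -> i64>;

struct Function {
    call_count: u32,
    compiled: Option<CompiledFn>,
}

const HOT_THRESHOLD: u32 = 1_000;

impl Function {
    fn new() -> Self {
        Function { call_count: 0, compiled: None }
    }

    fn call(&mut self, arg: i64) -> i64 {
        self.call_count += 1;

        // Fast path: use the "compiled" version once it exists.
        if let Some(compiled) = &self.compiled {
            return compiled(arg);
        }

        // Profiling decision: tier up once the function is hot.
        if self.call_count >= HOT_THRESHOLD {
            // A real JIT would generate machine code here and write it into
            // executable memory inside the same address space.
            self.compiled = Some(Box::new(|x| x * 2 + 1));
        }

        // Slow path: "interpret" the function body.
        arg * 2 + 1
    }
}

fn main() {
    let mut f = Function::new();
    let mut total = 0;
    for i in 0..10_000i64 {
        total += f.call(i);
    }
    println!("total = {total}, tiered up = {}", f.compiled.is_some());
}

The interesting part is not the arithmetic but the shape: profiling data gathered while running decides, at runtime, which execution strategy each function uses.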

Characteristics:

Aspect                Assessment
Startup time          Fast (interpret first)
Peak execution speed  Near-native
Warm-up time          Moderate (JIT needs a profile)
Memory usage          High (code + compiler in memory)
Portability           Good (bytecode portable, JIT per-platform)

Why it can match AOT in some scenarios:

  • Optimizes based on actual runtime behavior
  • Can specialize for observed types (see the inline-cache sketch after this list)
  • Can inline across module boundaries
  • Can deoptimize and reoptimize as behavior changes
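
Type specialization is usually driven by inline caches. The sketch below is only a conceptual model in plain Rust (the Value enum, AddSite, and the Kind tags are invented for illustration); a real JIT would bake the fast path into generated machine code rather than a match statement.

// A toy inline cache for an "add" call site: remember the last observed
// operand kind and take a fast path when it matches, fall back otherwise.
#[derive(Clone, Copy, PartialEq)]
enum Kind { Int, Float }

enum Value { Int(i64), Float(f64) }

fn kind(v: &Value) -> Kind {
    match v {
        Value::Int(_) => Kind::Int,
        Value::Float(_) => Kind::Float,
    }
}

struct AddSite {
    cached: Option<Kind>, // the kind this call site has seen before
}

impl AddSite {
    fn add(&mut self, a: &Value, b: &Value) -> Value {
        let k = kind(a);
        if self.cached == Some(k) && kind(b) == k {
            // Fast path: both operands match the specialized case.
            return match (a, b) {
                (Value::Int(x), Value::Int(y)) => Value::Int(x + y),
                (Value::Float(x), Value::Float(y)) => Value::Float(x + y),
                _ => unreachable!(),
            };
        }
        // Slow path: generic handling, then re-specialize the cache.
        self.cached = if kind(b) == k { Some(k) } else { None };
        match (a, b) {
            (Value::Int(x), Value::Int(y)) => Value::Int(x + y),
            (Value::Float(x), Value::Float(y)) => Value::Float(x + y),
            (Value::Int(x), Value::Float(y)) => Value::Float(*x as f64 + y),
            (Value::Float(x), Value::Int(y)) => Value::Float(x + *y as f64),
        }
    }
}

fn main() {
    let mut site = AddSite { cached: None };
    // The first call specializes the site for integers; later int adds hit the fast path.
    if let Value::Int(n) = site.add(&Value::Int(3), &Value::Int(4)) {
        println!("3 + 4 = {n}");
    }
}

When the observed kind changes, the cache is invalidated and rebuilt, which mirrors the deoptimize-and-reoptimize behavior listed above.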

Key Distinctions Between AOT and JIT Compilation

Illustration showcasing the key distinctions between AOT and JIT compilation

The fundamental tradeoff:

AOT                               JIT
Compile once, run forever         Compile repeatedly, run smarter
Optimizations are guesses         Optimizations are informed
No runtime compilation cost       Pays compilation cost during execution
Predictable memory usage          Memory includes compiler infrastructure
Must handle all possible cases    Can specialize for observed cases

Summary

Programming language compilers and runtimes have evolved over many decades alongside advances in computing. In smaller projects the differences between runtimes are not usually obvious, but in performance- and memory-critical environments we begin to see where each implementation shines and why. In upcoming posts in this series, we will discuss the anatomy of modern high-performance runtimes and the different optimization strategies, and then we will build a JIT compiler from scratch!

Further Reading

Implementations to study:

  • LuaJIT (brilliantly simple tracing JIT)
  • V8 (production JavaScript, open source)
  • GraalVM (polyglot, written in Java)


Top comments (1)

Olaoluwa Afolabi

Well written!

I can't help but imagine optimising slow code by hijacking its AST. I was writing something about that last year in Ruby and didn't finish it. The idea was that no language is intrinsically slow; the culprit is how it is compiled or interpreted. Ruby now performs better when run with its JIT flag.

It's highly informative and I'll be on the lookout for the rest of the series.

Thank you.