Building a compiler is often seen as one of the "final bosses" of computer science. It's complex, requires deep knowledge of architecture, and usually involves wrestling with C++.
But what if we could build a simple, modern, modular compiler infrastructure using Rust?
Meet Lamina.
What is Lamina?
Lamina is a general-purpose compiler infrastructure that I've been building from scratch. Think of it as a lightweight, Rust-native alternative to LLVM. It takes a readable intermediate representation (IR), optimizes it, and generates efficient machine code for multiple architectures.
I started this project to build a playground for experimenting with compiler optimizations and code-generation techniques.
Current Features
1. Multi-Target Support
Lamina aims to support as many targets as possible (since I own a few machines with different architectures and operating systems, it's practically mandatory). It currently supports code generation for:
- AArch64 (ARM64, Apple Silicon) - most stable
- x86_64 (Linux, macOS, Windows) - close to working end-to-end
- RISC-V (32-bit and 64-bit) - untested
- WebAssembly (Wasm32/64) - codebase migration in progress
2. Modern IR & MIR
Lamina uses a dual-layer representation system:
- Lamina Intermediate Representation: A high-level, readable intermediate representation similar to LLVM IR.
- LUMIR (Lamina Unified Machine Intermediate Representation): A lower-level representation designed for machine-specific optimizations and register allocation.
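To make the two-layer idea concrete, here is a minimal sketch of what such a lowering step can look like. The types and the `lower` pass below are hypothetical illustrations, not Lamina's actual API: the high-level IR talks about named SSA values, while the machine-level form talks about numbered virtual registers ready for register allocation.

```rust
use std::collections::HashMap;

// Hypothetical high-level IR: SSA-style named values, target-independent.
enum HirInst {
    Mul { dst: String, lhs: String, rhs: String }, // %dst = mul.i32 %lhs, %rhs
    Ret { value: String },                         // ret.i32 %value
}

// Hypothetical LUMIR-style machine IR: numbered virtual registers,
// ready for register allocation and target-specific rewriting.
#[derive(Debug, PartialEq)]
enum MirInst {
    Mul { dst: u32, lhs: u32, rhs: u32 },
    Ret { value: u32 },
}

// Map an SSA name to a virtual register, allocating one on first use.
fn vreg<'a>(name: &'a str, map: &mut HashMap<&'a str, u32>, next: &mut u32) -> u32 {
    *map.entry(name).or_insert_with(|| {
        let r = *next;
        *next += 1;
        r
    })
}

// Toy lowering pass: rewrite each instruction in terms of virtual registers.
fn lower(insts: &[HirInst]) -> Vec<MirInst> {
    let mut map = HashMap::new();
    let mut next = 0;
    insts
        .iter()
        .map(|inst| match inst {
            HirInst::Mul { dst, lhs, rhs } => {
                let l = vreg(lhs, &mut map, &mut next);
                let r = vreg(rhs, &mut map, &mut next);
                let d = vreg(dst, &mut map, &mut next);
                MirInst::Mul { dst: d, lhs: l, rhs: r }
            }
            HirInst::Ret { value } => MirInst::Ret {
                value: vreg(value, &mut map, &mut next),
            },
        })
        .collect()
}

fn main() {
    // %res = mul.i32 %n, %rec ; ret.i32 %res
    let hir = vec![
        HirInst::Mul { dst: "res".into(), lhs: "n".into(), rhs: "rec".into() },
        HirInst::Ret { value: "res".into() },
    ];
    println!("{:?}", lower(&hir));
}
```

The real LUMIR carries much more (calling conventions, machine-specific opcodes), but the shape of the transformation — names to registers, abstract ops to target-shaped ops — is the same.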
3. Optimization Pipeline
The compiler includes a few experimental optimizations:
- Constant Folding: Evaluating expressions at compile-time.
- Dead Code Elimination: Removing unused instructions.
- Strength Reduction: Replacing expensive operations with cheaper ones.
- Function Inlining: Reducing call overhead.
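All four passes boil down to rewriting the instruction stream. As a concrete illustration of the first and third, here is a toy version of constant folding and strength reduction on a miniature three-address IR — the enum and the pass are hypothetical sketches, not Lamina's real data structures:

```rust
// Toy three-address IR, for illustration only (not Lamina's actual types).
#[derive(Debug, Clone, PartialEq)]
enum Operand {
    Const(i32),
    Var(u32),
}

#[derive(Debug, Clone, PartialEq)]
enum Inst {
    Const(u32, i32),            // dst = constant
    Add(u32, Operand, Operand), // dst = a + b
    Mul(u32, Operand, Operand), // dst = a * b
    Shl(u32, Operand, u32),     // dst = a << k
}

// One rewrite pass: fold operations whose operands are all constants,
// then strength-reduce multiplications by a power of two into shifts.
fn optimize(code: &[Inst]) -> Vec<Inst> {
    code.iter()
        .cloned()
        .map(|inst| match inst {
            // Constant folding: both operands known at compile time.
            Inst::Add(d, Operand::Const(a), Operand::Const(b)) => Inst::Const(d, a + b),
            Inst::Mul(d, Operand::Const(a), Operand::Const(b)) => Inst::Const(d, a * b),
            // Strength reduction: x * 2^k  ->  x << k.
            Inst::Mul(d, x, Operand::Const(c)) if c > 0 && c & (c - 1) == 0 => {
                Inst::Shl(d, x, c.trailing_zeros())
            }
            other => other,
        })
        .collect()
}

fn main() {
    let code = vec![
        Inst::Mul(0, Operand::Const(3), Operand::Const(4)), // folds to Const(0, 12)
        Inst::Mul(1, Operand::Var(0), Operand::Const(8)),   // reduces to Shl(1, Var(0), 3)
    ];
    println!("{:?}", optimize(&code));
}
```

Dead code elimination and inlining follow the same pattern: walk the IR, decide per instruction (or per call site), and emit a cheaper equivalent.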
Performance (Apple M1 8GB)
[Chart: best case scenario benchmark]
[Chart: worst case scenario benchmark]
Usage
Here's a simple example of Lamina IR that computes the factorial of 5:
fn @factorial(i32 %n) -> i32 {
  entry:
    %cond = eq.i32 %n, 0
    br %cond, then, else
  then:
    ret.i32 1
  else:
    %sub = sub.i32 %n, 1
    %rec = call @factorial(%sub)
    %res = mul.i32 %n, %rec
    ret.i32 %res
}

fn @main() -> i32 {
  entry:
    %result = call @factorial(5)
    print %result
    ret.i32 0
}
or build the same function with the IR builder in Rust:
fn create_factorial_function(builder: &mut IRBuilder) {
builder
.function_with_params(
"factorial",
vec![FunctionParameter {
name: "n",
ty: Type::Primitive(PrimitiveType::I32),
annotations: vec![],
}],
Type::Primitive(PrimitiveType::I32),
)
// Entry block: check if n == 0
.cmp(CmpOp::Eq, "cond", PrimitiveType::I32, var("n"), ir_i32(0))
.branch(var("cond"), "factorial_then", "factorial_else");
// Then block: return 1
builder
.block("factorial_then")
.ret(Type::Primitive(PrimitiveType::I32), ir_i32(1));
// Else block: recursive case
builder
.block("factorial_else")
// Subtract 1 from n
.binary(
BinaryOp::Sub,
"sub",
PrimitiveType::I32,
var("n"),
ir_i32(1),
)
// Call factorial recursively
.call(Some("rec"), "factorial", vec![var("sub")])
// Multiply n by recursive result
.binary(
BinaryOp::Mul,
"res",
PrimitiveType::I32,
var("n"),
var("rec"),
)
// Return the result
.ret(Type::Primitive(PrimitiveType::I32), var("res"));
}
You can compile this directly to an executable:
lamina factorial.lamina
./factorial
or use the experimental MIR pipeline, which will soon become the default:
lamina --emit-mir-asm factorial.lamina
./factorial
Status and Feedback
Lamina is still early and very much a work in progress, but it already:
- Parses and lowers a custom IR
- Runs a few basic optimization passes
- Emits assembly/machine code for several architectures
- Can power a simple compiler frontend (see awesome-lamina)
If you’re into compilers, IR design, or codegen and want to poke at a Rust-native backend, I’d love feedback, issues, and nitpicks.
Let me know what you think in the comments below! Happy coding!

