Juan Denis
My 6-week-old language is matching Go's async scheduler

The first commit to Mapanare was March 8. It's April 21 now. Somewhere in between, the compiler started compiling itself, and the async benchmarks started landing within spitting distance of Go.

That's kind of the whole post. But if you want the fun details, keep reading.

What is this thing

Mapanare is a compiled language I've been building — first-class agents, signals, streams, tensors. Compiles to LLVM IR (and C, and WebAssembly, because why not). Named after a Venezuelan pit viper because I'm from there and it sounded cool.

The pitch, if you want one: Python was built for humans. Mapanare is what you'd build if you knew from day one that most code would be written with AI in the loop. Strict types, real concurrency primitives baked into the grammar, deterministic output.

Today I tagged v5.0.0.

The snake eats its tail

The self-hosted compiler is ~38,000 lines of Mapanare across 10 modules. It now compiles itself through a strict 3-stage fixed point:

```
stage1 (Python bootstrap)  →  compiles the self-hosted source  →  stage2.ll
stage2 (compiled from stage1)  →  compiles the same source  →  stage3.ll
stage2.ll == stage3.ll   ← byte-identical, md5 and all
```
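The check at the end of that loop is simple once the stages exist. A minimal Python sketch of the comparison (the file names come from the pipeline above; the helper functions are mine):

```python
import hashlib

def ll_digest(path: str) -> str:
    """md5 of an emitted LLVM IR file, read as raw bytes."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def is_fixed_point(stage2: str, stage3: str) -> bool:
    """True when the bootstrap has converged: stage2.ll == stage3.ll byte-for-byte."""
    return ll_digest(stage2) == ll_digest(stage3)
```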

You can build the whole thing from source with no Python anywhere in the pipeline:

```bash
bash scripts/build_from_seed.sh
./mnc hello.mn
```

The first time stage2.ll == stage3.ll printed in my terminal I sat there for a minute. La culebra se muerde la cola.

The numbers

I ran everything 10+ times with clock_gettime instrumentation that matches how Rust and Go measure themselves — no subprocess-spawn overhead sneaking into the numbers.
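For reference, this is the shape of that instrumentation, sketched in Python: a monotonic clock wrapped tightly around the workload itself, best-of-N, no process spawn in the measured region. (The harness name and structure here are mine, not the actual benchmark code.)

```python
import time

def bench(fn, runs: int = 10) -> float:
    """Time fn() in-process with CLOCK_MONOTONIC; return the best run in ms.

    Taking the minimum over repeated runs filters out scheduler noise,
    which is the same reason Rust's and Go's harnesses report a stable
    statistic rather than a single wall-clock sample.
    """
    best = float("inf")
    for _ in range(runs):
        t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
        fn()
        t1 = time.clock_gettime(time.CLOCK_MONOTONIC)
        best = min(best, (t1 - t0) * 1000.0)
    return best
```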

| | vs Python | vs Go (async) | vs Rust | vs C (gcc) |
| --- | --- | --- | --- | --- |
| Mapanare | ~50× faster (up to 168× on tight loops) | 0.84×–1.17× (competitive) | 1.17× | 0.96× |

The async story is the one I'm happiest about. Across the 5-benchmark suite (benchmarks/async/*.mn — sequential chain, fan-out, I/O-bound, mixed CPU/IO, backpressure), Mapanare lands at geomean ~0.84× of Go's throughput on a tuned machine, and actually beats Go on the I/O-bound workload. On less-tuned hardware Go pulls ahead by 15–20% on geomean. The point isn't that I beat Go — it's that a 6-week-old language is close enough that the comparison is interesting.

One fix was dumb and worth sharing. The scheduler was spinning up 31 OS threads on my 32-core machine to run a 2 ms benchmark. MAPANARE_ASYNC_THREADS=2 closed most of the gap. This is the kind of thing you only find when you actually measure stuff on real hardware.

The Python number is benchmark-dependent. On a recursive fib(35), Mapanare lands around 50× faster. On a tight numerical loop where LLVM can really stretch, it's 168× or more. Take the geomean, not the headline.
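"Take the geomean" is doing real work in that sentence, so here is the statistic spelled out (a small sketch; the numbers in the comment are the ones quoted above):

```python
import math

def geomean(ratios):
    """Geometric mean of per-benchmark speedup ratios.

    Unlike an arithmetic mean, one outlier can't drag the summary:
    the geomean of a 168x win and a 1x tie is ~13x, not ~84x.
    """
    assert ratios, "need at least one benchmark"
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```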

Compile your Python scripts to native

This one I'm a bit proud of. You can take a regular .py file and compile it to a native binary:

```bash
mapanare build your_script.py -o your_script
./your_script
```
| Script | Python 3 | Mapanare (native) | Speedup |
| --- | --- | --- | --- |
| numerical_compute (10M iters) | 2,557 ms | 10.7 ms | 239× |
| collatz_explorer (5M range) | 30,636 ms | 446.8 ms | 69× |
| prime_sieve (2M range) | 3,832 ms | 108.8 ms | 35× |
| fibonacci(40) | 8,220 ms | 193.7 ms | 42× |

It's not a wrapper or a JIT. The transpiler reads .py, emits Mapanare MIR, and the LLVM backend does what it does. Out comes a statically linked binary.

Obviously not every Python script compiles cleanly — anything leaning hard on duck typing or eval is going to fight you. But for numeric / algorithmic code? It just works.
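To make "numeric / algorithmic" concrete, this is the shape of script that goes through cleanly (a toy example of mine, not one from the benchmark suite): plain ints, lists, and loops, no eval, no duck typing, so a static transpiler can lower it without guessing types.

```python
def sieve(limit: int) -> int:
    """Count primes below `limit` with a classic sieve.

    Every value here has one obvious static type, which is exactly
    the property that makes this kind of code compile cleanly.
    """
    if limit < 2:
        return 0
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    i = 2
    while i * i < limit:
        if is_prime[i]:
            for j in range(i * i, limit, i):
                is_prime[j] = False
        i += 1
    return sum(is_prime)
```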

What 6 weeks of obsessive compiler work looks like

The v4 arc was 160 point releases. I measured everything. A few things that were fun:

Memory safety. Started with 47 sanitizer findings (valgrind + ASan). Ended with zero. The fixes were all in the emitter — ownership tracking for List and String copies, pthread_join before tearing down agents, enum metadata lifetimes. Nothing glamorous, just following the bug until it stopped biting.

Performance. Eight structured experiments with before/after numbers. Wins: unboxed enum payloads (1.77× on enum_match), realloc instead of calloc+memcpy+free for string growth (29.7% faster), right-sizing the coroutine thread pool (the one that unlocked beating Go). Dead ends too: LLVM noalias didn't help because aggregates pass by value, and four of my MIR optimizer passes are fully subsumed by -O2 so I just disabled them.
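The realloc win is the classic capacity-doubling argument: grow the allocation geometrically and amortize copies, instead of paying allocate-copy-free on every append. The same idea translated into Python terms (a sketch of the strategy, not the runtime's C code):

```python
class GrowBuf:
    """Byte buffer with geometric growth: O(1) amortized append.

    The runtime's version leans on realloc, which can often extend the
    block in place; the old calloc+memcpy+free path always paid for a
    full copy plus zeroing bytes that were about to be overwritten.
    """
    def __init__(self):
        self.data = bytearray(16)  # current capacity
        self.len = 0

    def append(self, chunk: bytes):
        need = self.len + len(chunk)
        if need > len(self.data):
            cap = len(self.data)
            while cap < need:
                cap *= 2                    # double, never grow by exact need
            new = bytearray(cap)            # stand-in for realloc
            new[: self.len] = self.data[: self.len]
            self.data = new
        self.data[self.len : need] = chunk
        self.len = need

    def bytes(self) -> bytes:
        return bytes(self.data[: self.len])
```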

Determinism. Compile the compiler with itself, compile it again, byte-identical. This is the thing I wanted from the start and it's been stable since v4.134.0.

The architecture, briefly

```
.mn source
  → Lexer
  → Parser (recursive descent, 13-level precedence)
  → AST
  → Semantic checker
  → MIR lowering
  → MIR optimizer (O0-O3)
  → Emitter
      ├→ LLVM IR (primary — native binaries)
      ├→ C source (gcc fallback)
      └→ WebAssembly (WAT/WASM — browser + WASI)
```

Three targets from one IR. The self-hosted compiler reimplements this entire pipeline in Mapanare. The Python bootstrap is only needed to plant the seed.
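"Three targets from one IR" is the standard split: everything through MIR is shared, and each backend is just a function from MIR to text. A toy Python sketch of that shape (all names here are mine, not the real emitter API):

```python
from typing import Callable, Dict, List

def lower(source: str) -> List[str]:
    """Stand-in for lex → parse → check → MIR: one op per token."""
    return [f"push {tok}" for tok in source.split()]

# One shared MIR in, target-specific text out.
EMITTERS: Dict[str, Callable[[List[str]], str]] = {
    "llvm": lambda mir: "\n".join(f"; {op}" for op in mir),
    "c":    lambda mir: "\n".join(f"/* {op} */" for op in mir),
    "wasm": lambda mir: "\n".join(f"(; {op} ;)" for op in mir),
}

def compile_to(source: str, target: str) -> str:
    mir = lower(source)           # shared front end + MIR
    return EMITTERS[target](mir)  # per-target back end
```

Adding a fourth target means adding one entry to the table; the front end never changes.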

Agents, because that's the actual reason I built this

```
agent Counter {
    state count: Int = 0
    on increment { count = count + 1 }
    on get_count -> Int { return count }
}

let c = spawn Counter()
c <- increment
let n = sync c.get_count
print(str(n))
```

Agents aren't a library, they're a language construct. Typed channels, compiler-checked message passing. The runtime is a C library with lock-free SPSC ring buffers and a cooperative scheduler for mobile. No event loop to misconfigure. No async/await color problem. spawn, <-, sync. Done.
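An SPSC ring buffer is a small amount of code once you see the indexing discipline. A Python sketch of the data-structure shape (the real runtime does this in C with atomics, which Python can't express, so take this as the idea, not the implementation):

```python
class SPSCRing:
    """Single-producer single-consumer ring buffer.

    `head` is written only by the consumer and `tail` only by the
    producer; each index has exactly one writer, which is why the C
    version needs atomics for visibility but no lock for exclusion.
    """
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.head = 0  # next slot to read  (consumer-owned)
        self.tail = 0  # next slot to write (producer-owned)

    def push(self, msg) -> bool:
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:        # full: one slot stays empty as sentinel
            return False
        self.buf[self.tail] = msg
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:  # empty
            return None
        msg = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return msg
```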

What's not there yet

Being honest: v5.0.0 ships a native compiler binary for Linux (mnc-linux-x64) and macOS ARM64 (mnc-darwin-arm64). Windows ships as a CLI bundle that runs through the Python bootstrap — functional, but the mapanare run path needs gcc or clang on PATH, and you'll see [dev mode] Using Python bootstrap compiler in the output. Native Windows (mnc-win-x64.exe) and macOS Intel land in v5.0.1–v5.0.3, which are already queued up. Tensor reshape, mutable views, and stepped slices are v5.x. The LSP does syntax highlighting and snippets; go-to-definition is in progress. Ecosystem packages (Dato for DataFrames, net/crawl, security tooling) are scaffolded but early.

If you're looking for a polished language with a book and a conference track, this isn't that. If you're looking for something weird and fast that someone clearly cares about, come hang out.

Try it

```bash
# Linux (native binary)
curl -fsSL https://mapanare.dev/install | bash

# macOS ARM64 (native binary)
curl -fsSL https://mapanare.dev/install | bash

# Windows — grab the CLI bundle from the v5.0.0 release.
# Runs in Python-bootstrap mode today; install MinGW or use WSL for
# `mapanare run`/`build`. Native Windows binary lands in v5.0.1.

# From source
git clone https://github.com/Mapanare-Research/Mapanare.git
cd Mapanare && bash scripts/build_from_seed.sh
```

```bash
mapanare run hello.mn        # compile + run
mapanare build hello.mn      # native binary
mapanare build script.py     # compile Python to native
```

MIT licensed. RFCs open. I'll be around.

GitHub: github.com/Mapanare-Research/Mapanare
Discord: discord.gg/5hpGBm3WXf
Website: mapanare.dev
