SMS (Simple Multiplatform Script) AOT Event Dispatch Beats C++, Kotlin Native, and JVM. Here's Why.

SMS is the scripting language at the heart of Forge, an open source UI framework built on one conviction: architecture is a moral choice.
Bad performance on a low-end device is not a technical inconvenience. It harms real people.
Forge exists to do no harm.


The Setup

In a previous benchmark we compared SMS against C++, C#, and Kotlin JVM.
Kotlin JVM won the compute round thanks to its JIT. This time we added Kotlin Native: AOT-compiled, no JVM, no GC, the fairest opponent yet.

But then something unexpected happened in the event dispatch benchmark.


The Contenders

| Runtime | Compile mode |
| --- | --- |
| SMS | LLVM IR, no optimizer (sms_compile) |
| C++ | clang -O0 |
| C# | .NET Release, Optimize=false, JIT |
| Kotlin JVM | standard kotlinc + java, JIT |
| Kotlin Native | konanc, AOT, no -opt |

All five contenders implement the exact same workload:
`fib(36) + lcgChain(42, 2_000_000) + nestedMod(800×800)`.
Seven runs each, median reported. Machine: Apple M2.


Part 1 — Compute

| Runtime | Median (µs) | vs SMS |
| --- | --- | --- |
| SMS → LLVM IR (no -O) | 70 967 | baseline |
| C++ (clang -O0) | 84 764 | SMS 1.19× faster |
| C# .NET (warm, JIT) | 97 083 | SMS 1.37× faster |
| Kotlin Native (AOT, no -opt) | 49 933 | 1.42× faster than SMS |
| Kotlin JVM (warm, JIT) | 64 326 | 1.10× faster than SMS |

Kotlin Native wins the compute round — even without the optimizer.
Why? Kotlin's compiler performs significant IR-level optimizations (inlining, devirtualization) before LLVM touches the code. SMS generates more conservative LLVM IR. SMS still beats unoptimized C++ and JIT-warmed C#, but Kotlin Native is ahead.

No surprise there. We called it before running.


Part 2 — Event Dispatch

This is where the story changes. But first — full transparency.

Our first run of Part 2 showed SMS losing badly. The benchmark was calling sms_native_session_invoke(), the interpreter path. We were measuring the wrong thing.
SMS has an AOT compiler. We just weren't using it for event dispatch.

We accepted the challenge immediately: fix it. Add --emit-obj to sms_compile,
link the compiled handler directly, measure the real thing.

The benchmark: 100 000 calls to `on bench.tick()` (1 000 warmup).
Handler body: `counter = counter + 1; return counter`.

Kotlin JVM / Kotlin Native approach

val dispatch = HashMap<String, EventHandler>()
dispatch["bench.tick"] = EventHandler { counter += 1L; counter }
// ...
dispatch["bench.tick"]!!.invoke()

Every call hashes the key string, walks the bucket chain, finds the handler, calls it.

C++ approach

std::unordered_map<std::string, std::function<int64_t()>> dispatch;
dispatch["bench.tick"] = [&]() { return ++counter; };
// ...
dispatch["bench.tick"]();

Same story. Hash, lookup, call.

SMS AOT approach

var counter = 0

on bench.tick() {
    counter = counter + 1
    return counter
}

sms_compile sees on bench.tick() and emits:

define i64 @sms_on_bench_tick() {
  ; ... counter increment ...
}

The event name is gone at compile time. The caller links directly against sms_on_bench_tick. No map. No hash. No lookup. Just a call.

Results

| Runtime | Per event (ns) | Total 100k (µs) |
| --- | --- | --- |
| SMS AOT | 4.6 | 459 |
| Kotlin Native HashMap (AOT) | 8.4 | 843 |
| Kotlin JVM HashMap (JIT) | 47.7 | 4 767 |
| C++ unordered_map | 393.3 | 39 331 |
| SMS interpreter | 7 219.0 | 721 902 |

SMS AOT is 85× faster than C++ and 1.8× faster than Kotlin Native.


Why This Matters for UI

Not all events are equal.

  • Button clicked → dispatch → handler: waits for the user; the user is the bottleneck.
  • Scroll → dispatch → handler: fires continuously, on every finger move.
  • Animation tick → dispatch → handler: 60× per second, no waiting.

The first one? Never a performance problem. The last two? That's where frameworks die.

I see this every day on my Android Go device. Not slow handlers — missing handlers.
The framework is so busy dispatching that scroll and touch events simply get dropped.
The UI freezes. Taps are ignored. The user taps again. Nothing.
That is not a UX problem. That is an architecture problem.

In a JVM-based framework, every tick allocates a lambda, goes through a HashMap, and eventually triggers GC. The GC pauses exactly when you need smoothness most, mid-animation.

SMS AOT has no GC because it has nothing to collect. No heap allocations in handlers, no closures, no reference counting. And now: no dispatch overhead either.

60fps = 16.6 ms per frame.
SMS AOT dispatch = 4.6 ns.
That leaves 16 599 995 ns for the actual render logic.

Worth repeating: on button.clicked() is never a bottleneck, because it waits for the user.
The real bottleneck is the events that fire without waiting:
scroll, resize, animation tick. Those are the ones that need to be free.
And now they are.


The Architectural Insight

Events should be a compile-time concept, not a runtime concept.

When you write on button.clicked() in SMS, you are not registering a callback.
You are naming a native symbol. The framework calls it directly. The "dispatch" is just the CPU's branch predictor doing its job.

This is what we mean by use-case centered design: UI is event-driven, so the language itself should make events free.


What's Next

SMS does not have an optimizer yet. Kotlin Native won the compute benchmark without one because its frontend does more work before handing off to LLVM.
That gap will close.

But the event dispatch result is structural. Even with a perfect optimizer, a HashMap lookup cannot be faster than a direct call. SMS AOT made the right architectural choice, and the numbers show it.


Run It Yourself

git clone https://github.com/CrowdWare/Forge4D
cd Forge4D/samples/bench_mac
./install_kotlin_native.sh   # one-time
./run_bench.sh

Results land in results.md. All source code for every variant is in the same folder.


CrowdWare — building tools for people, not corporations.
Forge is open source. SMS is open source. The benchmark is open source.
