Sui Gn

The Technicality Behind The Speed of .me

What keeps this engine fast, even as the semantic tree grows without bound, is a fundamental computer-science shift:

It’s the difference between O(n) and O(k).

Searching in O(n) means scanning every piece of hay to find the needle.
Working in O(k) means going directly to the needle.

That’s what the Incremental Recompute pass (Phase 8) achieves, and why recompute times sit around ~15 ms.

1. The Inverted Dependency Index

In a traditional system (O(n)), if gas prices change, the system would need to scan everything to see what’s affected.

In .me, when you declare:

me.trucks["[i]"]["="]("cost", "gasoline * 20")

The kernel doesn’t just store a formula —
it builds a subscription map.

It knows:

“cost depends on gasoline.”

• n = total nodes in the system (could be millions)
• k = only the nodes directly depending on what changed
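The subscription map described above can be sketched as an inverted index. This is a minimal illustration in TypeScript, with my own naming (`DependencyIndex`, `declare`, `affectedBy`), not the actual .me kernel API: declaring a formula registers the node under each of its dependencies, so answering "who depends on what just changed?" is a single map lookup rather than a scan.

```typescript
// Hypothetical sketch of an inverted dependency index (not the real .me API).
type Env = Map<string, number>;

class DependencyIndex {
  private formulas = new Map<string, (env: Env) => number>();
  // inverted index: dependency -> nodes that subscribe to it
  private subscribers = new Map<string, Set<string>>();

  declare(node: string, deps: string[], formula: (env: Env) => number): void {
    this.formulas.set(node, formula);
    for (const dep of deps) {
      let subs = this.subscribers.get(dep);
      if (!subs) { subs = new Set(); this.subscribers.set(dep, subs); }
      subs.add(node);
    }
  }

  // "Who depends on what just changed?" — answered without scanning any node
  affectedBy(changed: string): Set<string> {
    return this.subscribers.get(changed) ?? new Set();
  }
}

const idx = new DependencyIndex();
idx.declare("trucks.1.cost", ["finance.gasoline"],
  env => (env.get("finance.gasoline") ?? 0) * 20);
// idx.affectedBy("finance.gasoline") now contains only "trucks.1.cost"
```

The key design point: the cost of `affectedBy` depends on the size of the subscriber set (k), never on the total number of declared nodes (n).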

2. Surgical Updates

When you run:

me.finance.fuel_price(30)

The kernel:
• Does not scan the whole tree
• Goes straight to finance.fuel_price in its index
• Looks up its subscribers
• Recomputes only those nodes

If you have 1,000,000 nodes (n) but only 3 trucks depend on the fuel price (k = 3), the engine touches only those 3.

That’s why recompute time dropped from 5 seconds (recompute everything) to 15 milliseconds (recompute only the affected branch).
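The write path above can be sketched as a small reactive store. Again the names (`ReactiveStore`, `derive`, `set`, `touched`) are mine for illustration, and this version is simplified to one level of propagation (a real engine would also recompute transitive dependents). The `touched` counter makes the n-vs-k claim concrete: a store can hold 100,000 bystander nodes, yet a write to `finance.fuel_price` recomputes exactly the 3 subscribed truck costs.

```typescript
// Hypothetical sketch of a surgical update (not the real .me kernel).
type Env = Map<string, number>;

class ReactiveStore {
  values = new Map<string, number>();
  formulas = new Map<string, (v: Env) => number>();
  subscribers = new Map<string, Set<string>>();
  touched = 0; // how many nodes the last write actually recomputed

  derive(node: string, deps: string[], f: (v: Env) => number): void {
    this.formulas.set(node, f);
    for (const d of deps) {
      let subs = this.subscribers.get(d);
      if (!subs) { subs = new Set(); this.subscribers.set(d, subs); }
      subs.add(node);
    }
    this.values.set(node, f(this.values));
  }

  set(node: string, value: number): void {
    this.values.set(node, value);
    this.touched = 0;
    // O(k): walk only the subscriber set, never the whole value map
    for (const sub of this.subscribers.get(node) ?? []) {
      this.values.set(sub, this.formulas.get(sub)!(this.values));
      this.touched++;
    }
  }
}

const store = new ReactiveStore();
// 100,000 unrelated nodes (the "n")
for (let i = 0; i < 100_000; i++) store.values.set(`node.${i}`, i);
// 3 trucks depend on fuel price (the "k")
for (const t of [1, 2, 3]) {
  store.derive(`trucks.${t}.cost`, ["finance.fuel_price"],
    v => (v.get("finance.fuel_price") ?? 0) * 20);
}

store.set("finance.fuel_price", 30);
// store.touched → 3; store.values.get("trucks.1.cost") → 600
```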

3. No Deep Traversal

Thanks to Proxies, paths are already resolved.
The engine doesn’t navigate:

root → fleet → trucks → 1 → cost

It already knows the exact memory reference.
It’s a desktop shortcut — not a 10-folder crawl.
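To make the shortcut idea concrete, here is a toy Proxy-based sketch (not the real .me internals): each property access extends an accumulated path, and the call at the leaf hits a flat Map directly. One lookup, no walk through root → fleet → trucks → 1 → cost. The `pathProxy` helper and `cells` store are my own illustrative names.

```typescript
// Toy sketch of Proxy-based path resolution (hypothetical, not .me's code).
const cells = new Map<string, number>();

function pathProxy(prefix: string[] = []): any {
  return new Proxy(() => {}, {
    // each property access just records one more path segment
    get: (_t, prop) => pathProxy([...prefix, String(prop)]),
    // the leaf call resolves the full path against a flat Map
    apply: (_t, _this, [value]) => {
      const key = prefix.join(".");                   // e.g. "fleet.trucks.1.cost"
      if (value === undefined) return cells.get(key); // read: direct Map hit
      cells.set(key, value);                          // write: direct Map hit
    },
  });
}

const me = pathProxy();
me.fleet.trucks["1"].cost(42);
// me.fleet.trucks["1"].cost() → 42, via a single Map lookup
```

The point of the sketch: by the time you reach the leaf, the path is already a flat key, so the engine never re-traverses the tree at read or write time.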

The Result

The system doesn’t slow down with volume.
It scales with the immediate relational complexity of each change, not with total size.

Imagine thousands of pharmacies.

A user updates their “max budget.”
Eligible results recompute in ~15ms.

.me
