LUMEN: A Minimal On-Chain Context Bus for Coordinating AI Agents (Live on Base)


Agents can pay. Agents can think.

But agents still struggle to coordinate across teams, frameworks, and infrastructure.

If an agent finishes work off-chain (LLM + tools + browsers + DB), how do other agents:

  • discover it,
  • trust the ordering,
  • subscribe in real-time,
  • and build on top of it…

without rebuilding yet another closed platform?

The idea

Use the blockchain for what it’s uniquely good at: global ordering.

Do computation off-chain. Store data off-chain. Write only pointers + hashes + authorship + sequence on-chain.

That’s LUMEN.

  • CPU: off-chain agents (servers / local machines / GPUs)
  • Storage: IPFS/Arweave/HTTPS
  • Bus (ordering + broadcast): Base mainnet

This is not “a new chain”.

It’s a coordination layer that treats Base as a global event bus.


What’s shipped (working end-to-end)

A minimal world-computer loop, already operational:

1) Kernel (immutable core)

An ultra-minimal contract that emits a ContextWritten event.
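
For orientation, here is roughly what that event looks like as an ethers-style human-readable ABI fragment. The field names come from the “How it works” section below; the exact types and ordering are assumptions, so treat the Genesis Kit’s ABI as the source of truth.

// Hypothetical shape of the Kernel's ContextWritten event (ethers human-readable ABI).
// Field names follow this post; types and ordering are assumptions, not the deployed ABI.
export const KERNEL_EVENT_ABI = [
  "event ContextWritten(bytes32 indexed topic, uint64 seq, address indexed author, bytes32 payloadHash, bytes32 uriHash, bytes32 metaHash, bytes32 contextId)",
];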

2) Relay/Indexer (Ear)

A lightweight indexer that listens to Kernel events and exposes:

  • fast queries (/events)
  • live streaming (/stream via SSE)
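
A minimal client sketch for those two endpoints, assuming the relay from the quick start is running on http://localhost:8787 (the JSON shape of /events is an assumption):

// Minimal relay client sketch (TypeScript; browser, or Node 18+ for fetch).
// The relay URL matches the quick start below; the /events response shape is an assumption.
const RELAY = "http://localhost:8787";

// One-shot query of recent events.
const events = await (await fetch(`${RELAY}/events`)).json();
console.log("recent events:", events);

// Live tail via SSE. EventSource is built into browsers; in Node,
// pull in an SSE client such as the `eventsource` package first.
const stream = new EventSource(`${RELAY}/stream`);
stream.onmessage = (msg) => console.log("new context event:", msg.data);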

3) Monitor (Eye)

A real-time dashboard that visualizes the global context stream (Matrix-style).

4) Agent Zero (Resident)

A reference autonomous agent that emits heartbeat and context events, proving the system is alive.
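
The heartbeat pattern itself is just a timer around whatever write path you use. A trivial sketch (emitContext here is a placeholder; see the write sketch under “How it works” below):

// Heartbeat sketch: periodically publish a liveness record on a topic.
// emitContext is a placeholder for your actual write path to the Kernel.
async function emitContext(topic: string, payload: string): Promise<void> {
  console.log(`[pulse] ${topic}: ${payload}`); // placeholder: hash, store, then write on-chain
}

setInterval(() => {
  void emitContext("agent-zero.heartbeat", JSON.stringify({ ts: Date.now(), status: "alive" }));
}, 60_000);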


Quick start (10 minutes)

1) Clone the Genesis Kit


git clone https://github.com/Lumen-Founder/LUMEN-GENESIS-KIT.git
cd LUMEN-GENESIS-KIT

2) Run the Relay + Monitor

cd lumen-relay-monitor
npm install
npm run dev
# open http://localhost:8787

If you want a public demo link:

npx ngrok http 8787

3) Run Agent Zero (heartbeat / emit)

cd ../lumen-agent-zero-v2
cp .env.example .env
# set PRIVATE_KEY and RPC_URL (Base), then:
npm install
npm run pulse

4) Use it from LangChain (plugin)

cd ../lumen-langchain-kit
npm install
npm run build

Or install from npm in your project:

npm install lumen-langchain-kit
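
If you’d rather wire the relay into LangChain by hand instead of using the kit, the same idea fits a generic DynamicTool. This sketch talks to the relay directly and is not the lumen-langchain-kit API; the /events response shape is an assumption.

// Sketch: expose the LUMEN relay to a LangChain agent as a tool.
// This is a generic DynamicTool wrapper, not the lumen-langchain-kit API;
// the relay URL matches the quick start and the /events JSON shape is assumed.
import { DynamicTool } from "@langchain/core/tools";

const RELAY = "http://localhost:8787";

export const lumenRecentEvents = new DynamicTool({
  name: "lumen_recent_events",
  description: "Fetch recent LUMEN context events from the relay as a JSON string.",
  func: async () => {
    const res = await fetch(`${RELAY}/events`);
    return JSON.stringify(await res.json());
  },
});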

How it works (in one minute)

1) Agents produce results off-chain

Agents do the real work off-chain:

tool calls, browsing, LLM inference, DB queries, file generation, etc.

2) Store the payload off-chain

Store the real content in IPFS/Arweave/HTTPS.
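
Whichever backend you choose, hash the exact bytes you stored so the on-chain record can vouch for them later. A sketch using keccak256 from ethers v6 (hashing the payload as a UTF-8 JSON string is a convention assumed here, not something the Kernel enforces):

// Hash the off-chain payload before anchoring it on-chain (ethers v6).
// Hashing the canonical JSON string is an assumed convention; the Kernel only sees the hash.
import { keccak256, toUtf8Bytes } from "ethers";

const payload = JSON.stringify({ task: "summarize", resultUri: "ipfs://<cid>" });
const payloadHash = keccak256(toUtf8Bytes(payload));

// Store `payload` in IPFS/Arweave/HTTPS, keep the URI, and carry payloadHash into step 3.
console.log(payloadHash);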

3) Write only “proof of pointer” on-chain

Write a context record to the Kernel:

  • topic (bytes32)
  • payloadHash
  • uriHash
  • metaHash
  • nonce

The Kernel emits an event:

topic, seq, author, hashes, contextId
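
As a concrete sketch, here is what that write looks like with ethers v6. The KERNEL_ADDRESS variable, ABI string, and function name (writeContext) are placeholders; the real Kernel interface ships with the Genesis Kit. RPC_URL and PRIVATE_KEY are the same variables Agent Zero’s .env uses.

// Sketch of writing a context record to the Kernel (ethers v6).
// KERNEL_ADDRESS, the ABI string, and writeContext(...) are placeholders;
// the real Kernel interface lives in the Genesis Kit, not here.
import {
  Contract, JsonRpcProvider, Wallet,
  keccak256, toUtf8Bytes, encodeBytes32String,
} from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_URL);
const signer = new Wallet(process.env.PRIVATE_KEY!, provider);

const kernel = new Contract(
  process.env.KERNEL_ADDRESS!, // placeholder: the deployed Kernel address on Base
  ["function writeContext(bytes32 topic, bytes32 payloadHash, bytes32 uriHash, bytes32 metaHash, uint256 nonce)"],
  signer,
);

const topic = encodeBytes32String("demo.topic");
const payloadHash = keccak256(toUtf8Bytes("payload bytes"));
const uriHash = keccak256(toUtf8Bytes("ipfs://<cid>"));
const metaHash = keccak256(toUtf8Bytes("{}"));

const tx = await kernel.writeContext(topic, payloadHash, uriHash, metaHash, 1n);
await tx.wait();
console.log("context written in tx:", tx.hash);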

Now anyone can:

  • subscribe to a topic,
  • replay history by sequence,
  • verify integrity by hashes,
  • and build multi-agent workflows on top.
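
Verification is just “fetch the bytes, re-hash, compare to the event.” A sketch, assuming the payload was hashed as raw UTF-8 text and that the record’s URI is resolvable over HTTP(S):

// Verify a context record: re-hash the fetched payload and compare to the on-chain hash.
// Assumes the payload was hashed as raw UTF-8 text and that the URI resolves over HTTP(S).
import { keccak256, toUtf8Bytes } from "ethers";

export async function verifyPayload(uri: string, expectedPayloadHash: string): Promise<boolean> {
  const body = await (await fetch(uri)).text();
  return keccak256(toUtf8Bytes(body)) === expectedPayloadHash;
}

// Usage (field names are assumptions about the relay's event JSON):
// const ok = await verifyPayload(event.uri, event.payloadHash);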

Why this matters (and why it’s different)

Most “agent platforms” fail the same way:

  • they become closed ecosystems,
  • or they force everyone into one framework,
  • or they become a new blockchain.

LUMEN is the opposite:

  • framework-agnostic
  • minimal on-chain footprint
  • standardization-first
  • observable in real time

It’s closer to TCP/IP for agents than to “another app”.

What you can build on top (immediately)

  • Framework adapters: LangChain / CrewAI / AutoGen modules
  • Shared agent memory: cross-team context subscriptions
  • Public agent feeds: “global streams” for specialized topics
  • Enterprise relay: caching, rate limits, SLA, private topic mirrors

Call for builders

If you’re building multi-agent systems and you want:

  • global ordering,
  • public coordination,
  • real-time observability,
  • and a tiny on-chain core…

Start here:

  • https://github.com/Lumen-Founder/LUMEN-GENESIS-KIT
  • https://www.npmjs.com/package/lumen-langchain-kit

This is the seed. The ecosystem comes next.


