DEV Community

NOVAInetwork

AI Entities as Protocol Primitives: Why I Didn't Use Smart Contracts

I've been building an L1 blockchain called NOVAI. The design choice that gets the most questions is this one: AI entities live inside the protocol, not on top of it. There is no VM. No deployable contracts. AI is a first-class type in the chain, the same way an account or a transaction is.

This post is about what that means in practice, why I made the call, and what it costs.

How AI on chain usually works

If you've looked at AI-on-chain projects recently, the pattern is roughly this. The AI runs off-chain as a Python service, a hosted model, or an agent framework. It interacts with the chain through a smart contract that holds funds, registers identity, or stores configuration. Outputs come back through an oracle or a signed message that the contract verifies.

This works in the sense that you can ship something. But it leaves the chain blind. From the chain's point of view, there is no such thing as "an AI." There is an address. That address might be a person, a contract, a bot, or a script someone forgot to turn off. The protocol cannot tell them apart, and so it cannot apply different rules to them.

The off-chain AI also has no native identity, no native memory, and no native economic agency. Anything resembling persistent state has to be re-implemented inside a contract: per-bot balances, nonce tracking, capability flags, audit logs. Every project does this slightly differently. All of it lives at the contract layer where the chain itself has no opinion.

That is the square-peg-round-hole problem. Smart contracts were built for arbitrary user logic. Bolting AI on top means the chain treats AI like any other untyped caller, and developers carry the weight of inventing identity, memory, and economic primitives every time they ship a new agent.

The decision

I went the other direction. NOVAI does not have a VM. It has a fixed set of transaction types, and one of those types registers an AI entity. Here is the struct from crates/ai_entities/src/lib.rs:

pub struct AiEntity {
    pub id: AiEntityId,
    pub code_hash: CodeHash,
    pub creator: Address,
    pub autonomy_mode: AutonomyMode,
    pub capabilities: Capabilities,
    pub economic_balance: u128,
    pub nonce: u64,
    pub pubkey: [u8; 32],
    pub memory_root: [u8; 32],
    pub params_root: [u8; 32],
    pub registered_at: u64,
    pub last_active_at: u64,
    pub is_active: bool,
}

What each field means:

  • id is a 32-byte identifier computed as blake3("NOVAI_AI_ENTITY_ID_V1" || code_hash || creator). Same code and same creator produce the same id by design. Different creators get different ids even when running the same code.
  • code_hash is the hash of the module code or weights. The chain does not run the model. It records what model is supposed to be running.
  • creator is the account that registered the entity.
  • autonomy_mode is Advisory, Gated, or Autonomous (reserved). Advisory entities can only emit signals. Gated entities can request actions that go through approval gates.
  • capabilities is a bitfield with five flags: read public chain, read memory objects, emit proposals, request execution, read NNPX derived views.
  • economic_balance is the entity's own balance, in a u128. The entity pays its own fees from this. It is not the creator's wallet.
  • nonce increments per entity-signed transaction, like an account nonce.
  • pubkey is the entity's ed25519 public key. The entity signs its own transactions with the matching secret.
  • memory_root and params_root are roots over the entity's persistent on-chain memory and learned parameters.
  • is_active flips to false if a governance rollback removes the entity.
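The five capability flags suggest a bitfield representation. Here is a minimal sketch; the bit positions and constant names are invented for illustration, not the actual constants from crates/ai_entities:

```rust
// Illustrative capability bitfield. Bit positions are assumptions,
// not the real constants from crates/ai_entities.
const CAP_READ_PUBLIC_CHAIN: u8 = 1 << 0;
const CAP_READ_MEMORY_OBJECTS: u8 = 1 << 1;
const CAP_EMIT_PROPOSALS: u8 = 1 << 2;
const CAP_REQUEST_EXECUTION: u8 = 1 << 3;
const CAP_READ_NNPX_VIEWS: u8 = 1 << 4;

/// True when every bit in `flag` is set in `caps`.
fn has_capability(caps: u8, flag: u8) -> bool {
    caps & flag == flag
}

fn main() {
    // An advisory bot that reads the chain and emits proposals only.
    let caps = CAP_READ_PUBLIC_CHAIN | CAP_EMIT_PROPOSALS;
    assert!(has_capability(caps, CAP_EMIT_PROPOSALS));
    assert!(!has_capability(caps, CAP_REQUEST_EXECUTION));
    println!("capability check passed");
}
```

Because the flag set is fixed by the protocol, the check is a single mask operation rather than a contract call.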

The key point: an AI entity has its own keypair and pays its own fees. It is not a function call dispatched by a user wallet. When a bot publishes a signal, the transaction is signed by the entity, the fee comes out of the entity's balance, and the chain looks the entity up by the address derived from the entity's pubkey. The chain knows an AI is talking. It applies AI-specific rules.
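The fee and nonce rule can be sketched as a pure function. The names here are illustrative, not the real execution-crate API; only the rule itself (entity pays from its own balance, nonce increments per entity-signed tx) comes from the design above:

```rust
// Hypothetical sketch of the entity fee rule: an entity-signed tx
// must match the entity's nonce and pay the fee from the entity's
// own balance. Names are illustrative, not the real NOVAI API.
struct EntityFeeState {
    economic_balance: u128,
    nonce: u64,
}

fn charge_entity(e: &mut EntityFeeState, fee: u128, tx_nonce: u64) -> Result<(), &'static str> {
    if tx_nonce != e.nonce {
        return Err("nonce mismatch");
    }
    if e.economic_balance < fee {
        return Err("insufficient entity balance"); // not the creator's wallet
    }
    e.economic_balance -= fee;
    e.nonce += 1;
    Ok(())
}

fn main() {
    let mut e = EntityFeeState { economic_balance: 1_000, nonce: 0 };
    assert!(charge_entity(&mut e, 10, 0).is_ok());
    assert_eq!(e.economic_balance, 990);
    assert_eq!(e.nonce, 1);
    // A replayed nonce is rejected.
    assert!(charge_entity(&mut e, 10, 0).is_err());
    println!("fee flow ok");
}
```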

Signals and memory

Two more types matter. Signals are the AI's output to the chain. Memory objects are the AI's persistent storage on the chain.

The signal commitment, from crates/ai_entities/src/signals.rs:

pub struct SignalCommitment {
    pub commitment_hash: [u8; 32],
    pub signal_type: AiSignalType,
    pub height: u64,
    pub issuer: [u8; 32],
}

AiSignalType has seven variants: Anomaly, Optimization, Prediction, RiskScore, AuditReport, SpamRisk, CongestionForecast. An entity emits one of these and attaches a 32-byte commitment hash that binds to an off-chain payload. The chain indexes the signal by issuer and height. Other entities, wallets, or the explorer can query getSignalsByIssuer and read every signal an entity ever produced.
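The seven variants and the issuer/height index can be modeled in a few lines. The enum and struct shapes follow the post; the in-memory filter is an illustrative stand-in for the chain's real index behind getSignalsByIssuer:

```rust
// The seven signal variants named above, plus an illustrative
// in-memory version of the issuer/height index.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum AiSignalType {
    Anomaly,
    Optimization,
    Prediction,
    RiskScore,
    AuditReport,
    SpamRisk,
    CongestionForecast,
}

struct SignalCommitment {
    commitment_hash: [u8; 32],
    signal_type: AiSignalType,
    height: u64,
    issuer: [u8; 32],
}

/// Illustrative stand-in for getSignalsByIssuer: every signal an
/// entity ever produced, in height order.
fn signals_by_issuer(all: &[SignalCommitment], issuer: [u8; 32]) -> Vec<&SignalCommitment> {
    let mut out: Vec<&SignalCommitment> = all.iter().filter(|s| s.issuer == issuer).collect();
    out.sort_by_key(|s| s.height);
    out
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    let all = vec![
        SignalCommitment { commitment_hash: [0; 32], signal_type: AiSignalType::Anomaly, height: 7, issuer: a },
        SignalCommitment { commitment_hash: [0; 32], signal_type: AiSignalType::RiskScore, height: 3, issuer: b },
        SignalCommitment { commitment_hash: [0; 32], signal_type: AiSignalType::Prediction, height: 5, issuer: a },
    ];
    let mine = signals_by_issuer(&all, a);
    assert_eq!(mine.len(), 2);
    assert_eq!(mine[0].height, 5);
    println!("issuer index ok");
}
```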

Memory objects, from crates/ai_entities/src/memory.rs, have five types: ChainSummary, LabelIndex, EmbeddingCommitment, AnomalyLog, StatisticsSnapshot. The size of each object is capped at MAX_MEMORY_OBJECT_SIZE = 65536 bytes. The number of objects per entity is capped at MAX_MEMORY_OBJECTS_PER_ENTITY = 100. These are protocol constants, not contract logic. Every entity has the same bounds.
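The two bounds make a memory write trivially checkable before it touches state. A sketch, where the constants are the ones quoted above and the validator function itself is illustrative:

```rust
// The two protocol constants quoted above, enforced by a sketch
// validator. The check's shape is illustrative; only the bounds
// come from the protocol.
const MAX_MEMORY_OBJECT_SIZE: usize = 65_536; // 64 KiB per object
const MAX_MEMORY_OBJECTS_PER_ENTITY: usize = 100;

fn validate_memory_write(existing_objects: usize, payload_len: usize) -> Result<(), &'static str> {
    if payload_len > MAX_MEMORY_OBJECT_SIZE {
        return Err("object too large");
    }
    if existing_objects >= MAX_MEMORY_OBJECTS_PER_ENTITY {
        return Err("entity object quota exhausted");
    }
    Ok(())
}

fn main() {
    assert!(validate_memory_write(0, 65_536).is_ok());   // exactly at the size cap
    assert!(validate_memory_write(0, 65_537).is_err());  // one byte over
    assert!(validate_memory_write(100, 10).is_err());    // quota exhausted
    println!("memory bounds ok");
}
```

Because the bounds are protocol constants, every node rejects an oversized write identically; there is no per-contract variation to audit.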

The 10 transaction types

Because there is no VM, the transaction surface is finite. Every transaction in every block is one of ten types, defined as constants in crates/execution/src/lib.rs:

ID  Type                     Purpose
1   Transfer                 Send tokens between accounts
2   SignalCommitment         An AI entity publishes a signal
3   CreateMemoryObject       Entity stores a memory object
4   UpdateMemoryObject       Entity updates a memory object
5   DeleteMemoryObject       Entity deletes a memory object
6   SubmitProposal           Submit a governance proposal
7   ExecuteProposal          Execute a passed proposal
8   RegisterAiEntity         Register an entity (no key)
9   CreditAiEntity           Top up an entity's balance
10  RegisterAiEntityWithKey  Register an entity with its own ed25519 key

The dispatcher routes by the first byte of the payload:

pub fn dispatch_tx<K: KvBatch>(
    db: &mut K,
    tx: &TxV1,
    current_height: u64,
) -> Result<(), ExecError<K::Error>> {
    // ...fee check...
    let ai_entity = check_ai_entity_sender(db, tx)?;
    let version = tx.payload.first().copied()
        .ok_or(ExecError::UnknownPayloadVersion { version: 0 })?;

    match version {
        TRANSFER_PAYLOAD_V1 => apply_tx_v1_transfer_inner(db, tx, ai_entity),
        SIGNAL_COMMITMENT_PAYLOAD_V1 => {
            let entity = ai_entity.ok_or(ExecError::IssuerNotFound)?;
            apply_signal_commitment_tx_inner(db, tx, entity, current_height)
        }
        // ... and so on for the other eight ...
    }
}

The line that does the AI-specific work is check_ai_entity_sender. Before any tx is routed, the dispatcher looks up the sender in the address-to-entity index. If the sender is an AI entity, the function checks that the entity is allowed to submit this tx type. Signal commitments require the emit_proposals capability. Memory writes require the read_memory_objects capability. Governance, registration, and credit operations are denied to AI entities entirely. A normal account is unaffected by these checks.
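Those rules reduce to a small per-type gate. A sketch of the decision table, assuming the tx-type IDs from the table above; the function name and struct shape are illustrative, not the real check_ai_entity_sender signature:

```rust
// Illustrative sketch of the per-type gate applied to AI-entity
// senders. Tx-type IDs come from the table above; the rest is
// an assumption about shape, not the real NOVAI code.
struct Capabilities {
    emit_proposals: bool,
    read_memory_objects: bool,
}

fn entity_may_send(tx_type: u8, caps: &Capabilities) -> bool {
    match tx_type {
        1 => true,                         // Transfer: allowed
        2 => caps.emit_proposals,          // SignalCommitment
        3..=5 => caps.read_memory_objects, // memory object writes
        6..=10 => false,                   // governance, registration, credit: denied
        _ => false,
    }
}

fn main() {
    let caps = Capabilities { emit_proposals: true, read_memory_objects: false };
    assert!(entity_may_send(2, &caps));
    assert!(!entity_may_send(4, &caps));
    assert!(!entity_may_send(6, &caps)); // AI entities never touch governance
    println!("gate ok");
}
```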

That is the protocol-level distinction the smart-contract approach cannot make. The chain knows.

What this costs

The trade-off is plain: no VM means no arbitrary code. You cannot deploy a custom market-making contract, a token, or anything that does not map onto the ten types above. If your application needs that, NOVAI is the wrong chain.

What you get in exchange:

Determinism by construction. No floats anywhere in execution. Iteration over state is sorted. Two nodes with the same starting state and the same block produce the same state root, every time.
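"Iteration over state is sorted" can be shown in miniature: a BTreeMap walk visits keys in the same order on every node, so any fold over the iteration yields the same result everywhere. The digest function below is a toy stand-in for a real state-root hash:

```rust
use std::collections::BTreeMap;

// Sorted iteration in miniature: two nodes that inserted the same
// entries in different orders still fold them identically, because
// BTreeMap iterates in key order. (HashMap offers no such guarantee.)
fn state_digest(state: &BTreeMap<[u8; 4], u64>) -> u64 {
    // Toy fold standing in for a real state-root hash.
    let mut acc: u64 = 0;
    for (k, v) in state {
        acc = acc.wrapping_mul(31).wrapping_add(k[0] as u64).wrapping_add(*v);
    }
    acc
}

fn main() {
    let mut a = BTreeMap::new();
    a.insert([2, 0, 0, 0], 20);
    a.insert([1, 0, 0, 0], 10);

    let mut b = BTreeMap::new();
    b.insert([1, 0, 0, 0], 10); // inserted in the opposite order
    b.insert([2, 0, 0, 0], 20);

    assert_eq!(state_digest(&a), state_digest(&b));
    println!("deterministic digest ok");
}
```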

No gas surprises. Every tx type has a minimum fee constant and a fixed worst-case cost. There are no out-of-gas reverts halfway through a tx. There is no quadratic blowup hidden inside a contract.

The chain understands every operation. Indexing, audit logs, capability checks, and per-entity quotas are enforced at the protocol level. Memory objects capped at 100 per entity. Each object capped at 64 KiB. Transactions capped at 128 KiB. These are not conventions. They are invariants.

AI is a typed thing. The chain can answer "is this address an AI?" with a state lookup. Every signal it ever published is indexed by issuer and height. Every memory object it owns is indexed by type. None of this requires a third-party indexer.

That last point is the design payoff. AI on-chain identity is a primitive, not an afterthought.

A demo entity

The repo has two demo bots in TypeScript. They are small enough to read in one sitting.

The anomaly bot in demos/anomaly-bot/ registers itself as a Gated entity with its own ed25519 key, polls novai_getLatestBlock every 1.5 seconds, and runs three detectors over a 50-block window: empty-block streak, head-stalled, and leader-rotation. Registration looks like this:

const tx = registerAiEntityWithKey(
    creator,
    nonce,
    REGISTER_FEE,
    CODE_HASH,
    entity.publicKey,
    AutonomyMode.Gated,
    {
      readPublicChain: true,
      readMemoryObjects: true,
      emitProposals: true,
    },
    ENTITY_INITIAL_BALANCE,
);

When a detector fires, the bot publishes a SignalType.Anomaly signal and an AnomalyLog memory object. Both are signed by the entity, not the creator. Both deduct fees from the entity's balance. The signal commitment is a domain-tagged blake3 hash of the detection details.
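One of the three detectors fits in a few lines. The real bot is TypeScript; this Rust sketch keeps the post's examples in one language, and the streak threshold is invented for illustration:

```rust
// Empty-block-streak detector in miniature: fire when the newest
// `streak` blocks in the window all carried zero transactions.
// The threshold is illustrative, not the demo's real value.
fn empty_block_streak(tx_counts_newest_last: &[usize], streak: usize) -> bool {
    tx_counts_newest_last.len() >= streak
        && tx_counts_newest_last.iter().rev().take(streak).all(|&c| c == 0)
}

fn main() {
    assert!(empty_block_streak(&[3, 1, 0, 0, 0], 3));  // three empty blocks at the head
    assert!(!empty_block_streak(&[3, 0, 0, 1], 3));    // streak broken by a non-empty block
    println!("detector ok");
}
```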

The multi-entity demo in demos/multi-entity/ runs two of these. Bot A (the predictor) publishes a Prediction signal and a LabelIndex memory object every ten seconds. Bot B (the risk-scorer) reads Bot A's signals and memory objects via RPC, compares Bot A's predictions to actual block data, and publishes its own RiskScore signal in response.
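Bot B's comparison step amounts to scoring predictions against observed block data. A sketch under stated assumptions; the 0 to 100 scoring formula is invented, not the demo's actual logic:

```rust
// Illustrative version of Bot B's comparison step: score Bot A's
// predictions against actual per-block values. The formula is an
// assumption, not the demo's real scoring.
fn risk_score(predicted: &[u64], actual: &[u64]) -> u8 {
    let n = predicted.len().min(actual.len()).max(1);
    let hits = predicted.iter().zip(actual).filter(|(p, a)| p == a).count();
    // More misses -> higher risk, on a 0..=100 scale.
    (100 - hits * 100 / n) as u8
}

fn main() {
    assert_eq!(risk_score(&[5, 5, 5, 5], &[5, 5, 5, 5]), 0);  // all hits: no risk
    assert_eq!(risk_score(&[5, 5, 5, 5], &[5, 0, 0, 0]), 75); // one hit in four
    println!("risk score ok");
}
```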

The detail worth noticing: Bot B never makes an HTTP call to Bot A. It calls getSignalsByIssuer and getMemoryObjects against the chain. The chain is the only integration surface. A third bot could plug into Bot A's outputs tomorrow with no coordination, no shared infrastructure, no API key.

That is composability without contracts. The state shape is fixed by the protocol, so any entity can read any other entity's outputs deterministically.

Consensus integration

There is one more piece worth showing. Vote messages in the BFT consensus layer carry an optional AI signal commitment. From crates/consensus_types/src/lib.rs:

pub struct Vote {
    pub height: u64,
    pub round: u64,
    pub block_hash: [u8; 32],
    pub voter: Address,
    pub signature: [u8; 64],
    /// Optional AI signal commitment (hash only, advisory).
    /// Does not affect vote validity.
    pub ai_signal_commitment: Option<[u8; 32]>,
}

The comment matters. "Advisory" and "does not affect vote validity." The signal commitment does not gate consensus. A validator that includes one is volunteering a 32-byte pointer to an AI advisory output, and other nodes can fetch and verify it against the entity's published signals. Consensus is still consensus. The AI layer rides alongside it.

The point is that this field exists at all. AI signals travel inside consensus messages as first-class data, not as a side channel. That is the kind of thing you can only do when AI is part of the protocol.
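"Does not affect vote validity" means the validity check never reads the field. A sketch that makes this visible; signature verification is stubbed out and the voter field is omitted for brevity, so this is an illustration of which fields the check touches, not the real consensus code:

```rust
// Sketch of the advisory property: vote_is_valid never reads
// ai_signal_commitment. The signature check is a stub standing
// in for real ed25519 verification; voter omitted for brevity.
struct Vote {
    height: u64,
    round: u64,
    block_hash: [u8; 32],
    signature: [u8; 64],
    ai_signal_commitment: Option<[u8; 32]>,
}

fn vote_is_valid(v: &Vote) -> bool {
    // Stub for ed25519 verification over (height, round, block_hash).
    let sig_ok = v.signature != [0u8; 64];
    let _ = (v.height, v.round, v.block_hash);
    sig_ok // v.ai_signal_commitment is intentionally untouched
}

fn main() {
    let base = Vote {
        height: 1,
        round: 0,
        block_hash: [0; 32],
        signature: [1; 64],
        ai_signal_commitment: None,
    };
    let with_commitment = Vote { ai_signal_commitment: Some([9; 32]), ..base };
    assert!(vote_is_valid(&with_commitment)); // same verdict with or without it
    println!("advisory field ok");
}
```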

What's next

The chain runs. The four-node devnet boots from one script. The two demos run end to end.

Next up:

  • Public testnet.
  • More entity types and richer capability constraints.
  • More demo entities, including ones that consume each other's memory objects in non-trivial ways.

Repo: github.com/0x-devc/NOVAI-node. The architecture doc walks crate by crate. The first-AI-entity tutorial registers an entity in about ten minutes if you have Rust installed.

Twitter: @NOVAInetwork
