Phase 1 — C++ Sovereign Kernel Skeleton (Daemon-First)
Goal: ship a running C++ daemon that can:
- accept events
- maintain a capability graph (endpoints + edges)
- run a minimal state kernel (deterministic transitions)
- write an append-only ledger (provenance + replay-ready)
- execute a first “hunt” (graph search over reachability)
Non-goals (Phase 1):
- LLM integration
- MindScript parsing/compilation
- distributed multi-node consensus
- fancy persistence engines (we’ll keep storage clean + swappable)
Section 1 — Architecture Map (Phase 1)
1.1 Kernel Modules
- ledger/ Append-only log. Immutable entries. File-backed.
- state/ Deterministic state machine + transition rules.
- graph/ Endpoints, capabilities, reachability computation.
- events/ In-process pub/sub bus. (daemon’s nervous system)
- hunts/ A planner that searches reachable endpoints under constraints.
- daemon/ Main loop + API surface (HTTP/gRPC later; Phase 1 uses HTTP for simplicity).
1.2 The Law of Collapse (Phase 1 version)
In Phase 1, “collapse” = atomic commit:
- receive input (event / command)
- validate
- compute consequences (state + graph updates)
- write one ledger entry (append-only)
- emit internal events
No mutation without a ledger commit.
Section 2 — Repository Structure (Phase 1)
mindseye-fabric/
README.md
LICENSE
CMakeLists.txt
docs/
01_phase1_kernel.md
src/
main.cpp
daemon/
server.hpp
server.cpp
routes.hpp
routes.cpp
core/
types.hpp
time.hpp
result.hpp
ledger/
ledger.hpp
ledger.cpp
entry.hpp
state/
state.hpp
state.cpp
transitions.hpp
graph/
graph.hpp
graph.cpp
endpoint.hpp
capability.hpp
events/
bus.hpp
bus.cpp
hunts/
hunt.hpp
hunt.cpp
third_party/
httplib.h
tests/
test_smoke.cpp
Why this structure? Each folder is a kernel “organ.” No mixing. No “utils” junk drawer.
Section 3 — Build System (CMake)
Code Space 3.1 — CMakeLists.txt
cmake_minimum_required(VERSION 3.20)
project(mindseye_fabric LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
add_executable(mindseye_fabric
src/main.cpp
src/daemon/server.cpp
src/daemon/routes.cpp
src/events/bus.cpp
src/ledger/ledger.cpp
src/state/state.cpp
src/graph/graph.cpp
src/hunts/hunt.cpp
)
target_include_directories(mindseye_fabric PRIVATE
src
third_party
)
# Good defaults
if(MSVC)
target_compile_options(mindseye_fabric PRIVATE /W4)
else()
target_compile_options(mindseye_fabric PRIVATE -Wall -Wextra -Wpedantic)
endif()
Section 4 — Core Types (Shared Contracts)
Code Space 4.1 — src/core/types.hpp
#pragma once
#include <cstdint>
#include <string>
#include <string_view>
#include <optional>
#include <variant>
#include <vector>
#include <unordered_map>
namespace me {
using u64 = std::uint64_t;
using i64 = std::int64_t;
struct Error {
std::string code;
std::string message;
};
template <typename T>
using Result = std::variant<T, Error>;
inline bool ok(const auto& r) { return std::holds_alternative<std::decay_t<decltype(std::get<0>(r))>>(r); }
} // namespace me
Code Space 4.2 — src/core/time.hpp
#pragma once
#include <chrono>
#include "core/types.hpp"
namespace me {
inline u64 now_ms() {
using namespace std::chrono;
return (u64)duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}
} // namespace me
Section 5 — Ledger (Append-Only, File-Backed)
5.1 Ledger Entry Spec (Phase 1)
An entry is a single JSON line (NDJSON), easy to tail, easy to replay.
Fields:
-
idmonotonic integer -
ts_mstimestamp -
kinde.g."event" | "state_transition" | "graph_update" | "hunt_result" -
payloadJSON object (stringified for Phase 1 simplicity) -
prev_hash+hash(integrity chain)
Code Space 5.2 — src/ledger/entry.hpp
#pragma once
#include <string>
#include "core/types.hpp"
namespace me {
struct LedgerEntry {
u64 id = 0;
u64 ts_ms = 0;
std::string kind;
std::string payload_json;
std::string prev_hash;
std::string hash;
};
} // namespace me
Code Space 5.3 — src/ledger/ledger.hpp
#pragma once
#include <fstream>
#include <mutex>
#include <optional>
#include "ledger/entry.hpp"
#include "core/types.hpp"
namespace me {
class Ledger {
public:
explicit Ledger(std::string path);
Result<LedgerEntry> append(std::string kind, std::string payload_json);
std::optional<LedgerEntry> last() const;
u64 next_id() const;
private:
std::string path_;
mutable std::mutex mu_;
u64 next_id_{1};
std::optional<LedgerEntry> last_;
static std::string sha256_hex(std::string_view data); // Phase 1 stub -> replace w/ real impl
static std::string compute_hash(const LedgerEntry& e);
};
} // namespace me
Code Space 5.4 — src/ledger/ledger.cpp
#include "ledger/ledger.hpp"
#include "core/time.hpp"
#include <sstream>
namespace me {
static std::string fake_hash(std::string_view s) {
// Phase 1: placeholder integrity chain.
// Replace with real SHA-256 later (OpenSSL / libsodium / etc).
std::hash<std::string_view> h;
return std::to_string(h(s));
}
Ledger::Ledger(std::string path) : path_(std::move(path)) {
// Phase 1: we start fresh if file doesn't exist.
// Later: scan file to restore next_id_ + last_.
std::ifstream in(path_);
if (!in.good()) return;
// Minimal restore: read last line only (fast path). If file is huge, we'd seek from end.
std::string line, last_line;
while (std::getline(in, line)) if (!line.empty()) last_line = line;
// Phase 1 simplicity: don't parse back; just keep next_id_ conservative.
// If file exists, assume some entries and bump next_id_.
// Upgrade in Phase 2 with real parsing.
if (!last_line.empty()) next_id_ = 1000; // safe placeholder
}
std::optional<LedgerEntry> Ledger::last() const {
std::scoped_lock lk(mu_);
return last_;
}
u64 Ledger::next_id() const {
std::scoped_lock lk(mu_);
return next_id_;
}
std::string Ledger::sha256_hex(std::string_view data) {
return fake_hash(data);
}
std::string Ledger::compute_hash(const LedgerEntry& e) {
std::ostringstream ss;
ss << e.id << "|" << e.ts_ms << "|" << e.kind << "|" << e.payload_json << "|" << e.prev_hash;
return sha256_hex(ss.str());
}
Result<LedgerEntry> Ledger::append(std::string kind, std::string payload_json) {
std::scoped_lock lk(mu_);
LedgerEntry e;
e.id = next_id_++;
e.ts_ms = now_ms();
e.kind = std::move(kind);
e.payload_json = std::move(payload_json);
e.prev_hash = last_.has_value() ? last_->hash : "GENESIS";
e.hash = compute_hash(e);
std::ofstream out(path_, std::ios::app);
if (!out.good()) {
return Error{"LEDGER_IO", "Failed to open ledger file for append"};
}
// NDJSON line (simple, replay-friendly)
out << "{"
<< "\"id\":" << e.id << ","
<< "\"ts_ms\":" << e.ts_ms << ","
<< "\"kind\":\"" << e.kind << "\","
<< "\"payload\":" << e.payload_json << ","
<< "\"prev_hash\":\"" << e.prev_hash << "\","
<< "\"hash\":\"" << e.hash << "\""
<< "}\n";
last_ = e;
return e;
}
} // namespace me
Section 6 — State Kernel (Deterministic)
6.1 Phase 1 State Vocabulary
We’ll keep your internal physics private later; for now, Phase 1 uses canonical names:
PAUSESTRESSLOOPTRANSMITCOLLAPSE
Code Space 6.2 — src/state/state.hpp
#pragma once
#include <string>
#include "core/types.hpp"
namespace me {
enum class State : u64 {
PAUSE = 0,
STRESS = 1,
LOOP = 2,
TRANSMIT = 3,
COLLAPSE = 4
};
inline std::string to_string(State s) {
switch (s) {
case State::PAUSE: return "PAUSE";
case State::STRESS: return "STRESS";
case State::LOOP: return "LOOP";
case State::TRANSMIT: return "TRANSMIT";
case State::COLLAPSE: return "COLLAPSE";
}
return "UNKNOWN";
}
struct Transition {
State from;
State to;
std::string reason;
};
class StateKernel {
public:
State current() const { return current_; }
// Phase 1: simple rule set. Later: constraints + costs + guards.
Transition apply_event(std::string_view event_type);
private:
State current_{State::PAUSE};
};
} // namespace me
Code Space 6.3 — src/state/state.cpp
#include "state/state.hpp"
namespace me {
Transition StateKernel::apply_event(std::string_view event_type) {
// Deterministic, minimal mapping for Phase 1.
if (event_type == "ingest") {
auto prev = current_;
current_ = State::LOOP;
return {prev, current_, "ingest -> LOOP"};
}
if (event_type == "pressure") {
auto prev = current_;
current_ = State::STRESS;
return {prev, current_, "pressure -> STRESS"};
}
if (event_type == "export") {
auto prev = current_;
current_ = State::TRANSMIT;
return {prev, current_, "export -> TRANSMIT"};
}
if (event_type == "commit") {
auto prev = current_;
current_ = State::COLLAPSE;
return {prev, current_, "commit -> COLLAPSE"};
}
// Default: no-op collapse back to pause after unknowns (keeps system stable).
auto prev = current_;
current_ = State::PAUSE;
return {prev, current_, "unknown -> PAUSE"};
}
} // namespace me
Section 7 — Capability Graph (Endpoints + Reachability)
7.1 Graph Concepts (Phase 1)
- Endpoint = a node (human, agent, service, tool)
- Capability Edge = “A can invoke B with capability X”
- Reachability = what’s callable right now under current conditions
Phase 1 conditions are basic:
- endpoint enabled/disabled
- edge enabled/disabled
Code Space 7.2 — src/graph/endpoint.hpp
#pragma once
#include <string>
#include "core/types.hpp"
namespace me {
struct Endpoint {
std::string id;
std::string label;
bool enabled{true};
};
} // namespace me
Code Space 7.3 — src/graph/capability.hpp
#pragma once
#include <string>
#include "core/types.hpp"
namespace me {
struct CapabilityEdge {
std::string from;
std::string to;
std::string capability; // e.g. "deploy", "review", "notify"
bool enabled{true};
};
} // namespace me
Code Space 7.4 — src/graph/graph.hpp
#pragma once
#include <unordered_map>
#include <vector>
#include <unordered_set>
#include "graph/endpoint.hpp"
#include "graph/capability.hpp"
#include "core/types.hpp"
namespace me {
class CapabilityGraph {
public:
void upsert_endpoint(Endpoint ep);
void add_edge(CapabilityEdge e);
std::vector<std::string> reachable_from(const std::string& start) const;
const std::unordered_map<std::string, Endpoint>& endpoints() const { return endpoints_; }
const std::vector<CapabilityEdge>& edges() const { return edges_; }
private:
std::unordered_map<std::string, Endpoint> endpoints_;
std::vector<CapabilityEdge> edges_;
};
} // namespace me
Code Space 7.5 — src/graph/graph.cpp
#include "graph/graph.hpp"
#include <queue>
namespace me {
void CapabilityGraph::upsert_endpoint(Endpoint ep) {
endpoints_[ep.id] = std::move(ep);
}
void CapabilityGraph::add_edge(CapabilityEdge e) {
edges_.push_back(std::move(e));
}
std::vector<std::string> CapabilityGraph::reachable_from(const std::string& start) const {
std::vector<std::string> out;
if (!endpoints_.contains(start)) return out;
if (!endpoints_.at(start).enabled) return out;
std::unordered_map<std::string, std::vector<std::string>> adj;
for (const auto& e : edges_) {
if (!e.enabled) continue;
if (!endpoints_.contains(e.from) || !endpoints_.contains(e.to)) continue;
if (!endpoints_.at(e.from).enabled || !endpoints_.at(e.to).enabled) continue;
adj[e.from].push_back(e.to);
}
std::queue<std::string> q;
std::unordered_set<std::string> seen;
q.push(start);
seen.insert(start);
while (!q.empty()) {
auto cur = q.front(); q.pop();
for (const auto& nxt : adj[cur]) {
if (seen.insert(nxt).second) {
out.push_back(nxt);
q.push(nxt);
}
}
}
return out;
}
} // namespace me
Section 8 — Event Bus (In-Process Pub/Sub)
Phase 1 event bus lets daemon modules talk without tight coupling.
Code Space 8.1 — src/events/bus.hpp
#pragma once
#include <functional>
#include <mutex>
#include <unordered_map>
#include <vector>
#include <string>
namespace me {
struct Event {
std::string type;
std::string payload_json;
};
class EventBus {
public:
using Handler = std::function<void(const Event&)>;
void subscribe(const std::string& type, Handler h);
void publish(const Event& e);
private:
std::mutex mu_;
std::unordered_map<std::string, std::vector<Handler>> subs_;
};
} // namespace me
Code Space 8.2 — src/events/bus.cpp
#include "events/bus.hpp"
namespace me {
void EventBus::subscribe(const std::string& type, Handler h) {
std::scoped_lock lk(mu_);
subs_[type].push_back(std::move(h));
}
void EventBus::publish(const Event& e) {
std::vector<Handler> handlers;
{
std::scoped_lock lk(mu_);
if (subs_.contains(e.type)) handlers = subs_[e.type];
}
for (auto& h : handlers) h(e);
}
} // namespace me
Section 9 — Hunts (First Planner: Reachability Hunt)
Phase 1 hunt = “given a start endpoint, return reachable endpoints.”
Code Space 9.1 — src/hunts/hunt.hpp
#pragma once
#include <string>
#include <vector>
#include "graph/graph.hpp"
namespace me {
struct HuntResult {
std::string start;
std::vector<std::string> reachable;
};
class HuntEngine {
public:
HuntResult reachability_hunt(const CapabilityGraph& g, const std::string& start) const;
};
} // namespace me
Code Space 9.2 — src/hunts/hunt.cpp
#include "hunts/hunt.hpp"
namespace me {
HuntResult HuntEngine::reachability_hunt(const CapabilityGraph& g, const std::string& start) const {
HuntResult r;
r.start = start;
r.reachable = g.reachable_from(start);
return r;
}
} // namespace me
Section 10 — Daemon (HTTP API Surface, Phase 1)
We’ll use a single-header HTTP server (cpp-httplib style) to avoid dependency hell in Phase 1.
You drop httplib.h into third_party/.
10.1 API Endpoints (Phase 1)
-
POST /event→ triggers state transition + ledger append + internal publish -
POST /graph/endpoint→ upsert endpoint + ledger -
POST /graph/edge→ add edge + ledger -
GET /hunt/reachability?start=...→ run hunt + ledger
Code Space 10.2 — src/daemon/server.hpp
#pragma once
#include "ledger/ledger.hpp"
#include "state/state.hpp"
#include "graph/graph.hpp"
#include "events/bus.hpp"
#include "hunts/hunt.hpp"
namespace me {
struct Kernel {
Ledger ledger;
StateKernel state;
CapabilityGraph graph;
EventBus bus;
HuntEngine hunts;
explicit Kernel(std::string ledger_path) : ledger(std::move(ledger_path)) {}
};
void run_server(Kernel& k, int port);
} // namespace me
Code Space 10.3 — src/daemon/server.cpp
#include "daemon/server.hpp"
#include "daemon/routes.hpp"
#include "httplib.h"
namespace me {
void run_server(Kernel& k, int port) {
httplib::Server app;
mount_routes(app, k);
app.listen("0.0.0.0", port);
}
} // namespace me
Code Space 10.4 — src/daemon/routes.hpp
#pragma once
#include "daemon/server.hpp"
#include "httplib.h"
namespace me {
void mount_routes(httplib::Server& app, Kernel& k);
} // namespace me
Code Space 10.5 — src/daemon/routes.cpp
#include "daemon/routes.hpp"
#include "core/time.hpp"
#include <sstream>
namespace me {
// NOTE: Phase 1 keeps payload parsing minimal to stay deterministic + clean.
// You can send small JSON strings; we don't deep-parse yet.
static std::string json_ok(std::string_view body) {
std::ostringstream ss;
ss << "{\"ok\":true,\"data\":" << body << "}";
return ss.str();
}
static std::string json_err(std::string_view code, std::string_view msg) {
std::ostringstream ss;
ss << "{\"ok\":false,\"error\":{\"code\":\"" << code << "\",\"message\":\"" << msg << "\"}}";
return ss.str();
}
void mount_routes(httplib::Server& app, Kernel& k) {
app.Post("/event", [&](const httplib::Request& req, httplib::Response& res) {
// Expect headers: X-Event-Type: ingest|pressure|export|commit
auto ev_type = req.get_header_value("X-Event-Type");
if (ev_type.empty()) {
res.status = 400;
res.set_content(json_err("BAD_REQUEST", "Missing X-Event-Type header"), "application/json");
return;
}
// 1) state transition
auto tr = k.state.apply_event(ev_type);
// 2) ledger commit (collapse)
std::ostringstream payload;
payload << "{"
<< "\"event_type\":\"" << ev_type << "\","
<< "\"from\":\"" << to_string(tr.from) << "\","
<< "\"to\":\"" << to_string(tr.to) << "\","
<< "\"reason\":\"" << tr.reason << "\""
<< "}";
auto le = k.ledger.append("event", payload.str());
if (std::holds_alternative<Error>(le)) {
res.status = 500;
res.set_content(json_err("LEDGER_FAIL", "Failed to append ledger"), "application/json");
return;
}
// 3) publish internal event
k.bus.publish(Event{ev_type, req.body.empty() ? "{}" : req.body});
// response
res.set_content(json_ok(payload.str()), "application/json");
});
app.Post("/graph/endpoint", [&](const httplib::Request& req, httplib::Response& res) {
// Phase 1 quick protocol (headers):
// X-Endpoint-Id, X-Endpoint-Label, X-Endpoint-Enabled (true/false)
auto id = req.get_header_value("X-Endpoint-Id");
auto label = req.get_header_value("X-Endpoint-Label");
auto en = req.get_header_value("X-Endpoint-Enabled");
if (id.empty() || label.empty()) {
res.status = 400;
res.set_content(json_err("BAD_REQUEST", "Missing endpoint headers"), "application/json");
return;
}
bool enabled = (en != "false");
k.graph.upsert_endpoint(Endpoint{id, label, enabled});
std::ostringstream payload;
payload << "{\"id\":\"" << id << "\",\"label\":\"" << label << "\",\"enabled\":" << (enabled ? "true":"false") << "}";
k.ledger.append("graph_update", payload.str());
res.set_content(json_ok(payload.str()), "application/json");
});
app.Post("/graph/edge", [&](const httplib::Request& req, httplib::Response& res) {
// Headers: X-From, X-To, X-Capability, X-Edge-Enabled
auto from = req.get_header_value("X-From");
auto to = req.get_header_value("X-To");
auto cap = req.get_header_value("X-Capability");
auto en = req.get_header_value("X-Edge-Enabled");
if (from.empty() || to.empty() || cap.empty()) {
res.status = 400;
res.set_content(json_err("BAD_REQUEST", "Missing edge headers"), "application/json");
return;
}
bool enabled = (en != "false");
k.graph.add_edge(CapabilityEdge{from, to, cap, enabled});
std::ostringstream payload;
payload << "{\"from\":\"" << from << "\",\"to\":\"" << to << "\",\"capability\":\"" << cap << "\",\"enabled\":" << (enabled?"true":"false") << "}";
k.ledger.append("graph_update", payload.str());
res.set_content(json_ok(payload.str()), "application/json");
});
app.Get("/hunt/reachability", [&](const httplib::Request& req, httplib::Response& res) {
auto start = req.get_param_value("start");
if (start.empty()) {
res.status = 400;
res.set_content(json_err("BAD_REQUEST", "Missing query param: start"), "application/json");
return;
}
auto r = k.hunts.reachability_hunt(k.graph, start);
std::ostringstream payload;
payload << "{"
<< "\"start\":\"" << r.start << "\","
<< "\"reachable\":[";
for (size_t i = 0; i < r.reachable.size(); i++) {
payload << "\"" << r.reachable[i] << "\"";
if (i + 1 < r.reachable.size()) payload << ",";
}
payload << "]}";
k.ledger.append("hunt_result", payload.str());
res.set_content(json_ok(payload.str()), "application/json");
});
}
} // namespace me
Section 11 — Entry Point
Code Space 11.1 — src/main.cpp
#include "daemon/server.hpp"
#include <cstdlib>
#include <iostream>
int main(int argc, char** argv) {
int port = 8080;
if (const char* p = std::getenv("ME_PORT")) port = std::atoi(p);
me::Kernel kernel("mindseye_ledger.ndjson");
// Phase 1 bootstrap endpoints (optional)
kernel.graph.upsert_endpoint({"office:hub", "Office Hub", true});
kernel.graph.upsert_endpoint({"agent:alpha", "Agent Alpha", true});
kernel.graph.add_edge({"office:hub", "agent:alpha", "notify", true});
std::cout << "Mindseye Fabric daemon listening on port " << port << "\n";
me::run_server(kernel, port);
return 0;
}
Section 12 — Phase 1 Smoke Test (Manual)
Run:
mkdir build && cd build
cmake .. && cmake --build . -j
./mindseye_fabric
Send commands:
# Trigger event -> state transition + ledger append
curl -X POST localhost:8080/event -H "X-Event-Type: ingest" -d '{"source":"external"}'
# Add endpoint
curl -X POST localhost:8080/graph/endpoint \
-H "X-Endpoint-Id: human:1" \
-H "X-Endpoint-Label: Peace" \
-H "X-Endpoint-Enabled: true"
# Add edge
curl -X POST localhost:8080/graph/edge \
-H "X-From: office:hub" \
-H "X-To: human:1" \
-H "X-Capability: assign" \
-H "X-Edge-Enabled: true"
# Hunt reachability
curl "localhost:8080/hunt/reachability?start=office:hub"
Then open mindseye_ledger.ndjson and you’ll see the kernel’s “collapse trail.”
What we do next (still Phase 1, next section)
In Phase 1 — Section 2, we harden this into real THD-level kernel quality by adding:
- proper ledger restore (scan + parse last entry)
- real SHA-256 (integrity chain becomes legit)
- strict JSON parsing (no header hacks)
- transition guards + constraints (reachability changes by condition)
- structured logging + metrics hooks
Phase 1 — Section 2
Ledger Hardening + Integrity Chain (C++ Kernel-Grade)
What we upgrade in this section
- Real SHA-256 (no placeholder hashing)
- Deterministic restore on daemon restart
- Integrity verification (hash chain check)
- Replay iterator (foundation for time-travel + audits)
- Stricter entry schema (stable fields, stable ordering)
We keep NDJSON (one JSON per line) because:
- tail-friendly
- diff-friendly
- human-inspectable
- easy to stream/replay
Section 2.1 — Dependency Additions (Header-Only, No Drama)
Repo additions
third_party/
picosha2.h
json.hpp
-
picosha2.h= tiny, header-only SHA-256 -
json.hpp= nlohmann/json single header for robust parsing
Note: I’m not pasting those giant third-party headers here. You drop them in
third_party/(standard practice).
Section 2.2 — Ledger Entry: Stable Schema + Canonical Hash Input
Hard rule: the hash must be computed from a canonical input string so verification is deterministic across machines.
Hash input string (canonical):
id|ts_ms|kind|payload_json|prev_hash
And SHA-256 hex of that string becomes hash.
Section 2.3 — Code Space: SHA-256 Provider (Real)
Code Space 2.3.1 — src/ledger/hash.hpp
#pragma once
#include <string>
#include <string_view>
namespace me {
struct Hash {
static std::string sha256_hex(std::string_view data);
};
} // namespace me
Code Space 2.3.2 — src/ledger/hash.cpp
#include "ledger/hash.hpp"
#include "picosha2.h"
namespace me {
std::string Hash::sha256_hex(std::string_view data) {
std::string out;
picosha2::hash256_hex_string(data.begin(), data.end(), out);
return out;
}
} // namespace me
✅ Now integrity is real. No “hash() placeholder” nonsense.
Update CMakeLists.txt to compile hash.cpp:
Code Space 2.3.3 — CMakeLists.txt (patch)
# add:
src/ledger/hash.cpp
Section 2.4 — Code Space: Robust Restore + Verification
2.4.1 Ledger parse/restore spec
On startup:
- read ledger file line-by-line
- parse each JSON line
- verify fields exist
- verify
hashmatches computed - verify
prev_hashmatches last entry hash - set
next_id_andlast_
Modes:
- STRICT: fail startup if corruption detected
- LENIENT: stop at first invalid line and continue (useful for dev)
Code Space 2.4.2 — src/ledger/ledger.hpp (upgrade)
#pragma once
#include <fstream>
#include <mutex>
#include <optional>
#include <string>
#include <vector>
#include "ledger/entry.hpp"
#include "ledger/hash.hpp"
#include "core/types.hpp"
namespace me {
enum class LedgerMode : u64 {
STRICT = 0,
LENIENT = 1
};
struct LedgerStats {
u64 entries_loaded = 0;
u64 entries_verified = 0;
u64 entries_failed = 0;
bool clean = true;
};
class Ledger {
public:
Ledger(std::string path, LedgerMode mode = LedgerMode::STRICT);
Result<LedgerEntry> append(std::string kind, std::string payload_json);
std::optional<LedgerEntry> last() const;
u64 next_id() const;
LedgerStats stats() const;
// Replay: stream entries from disk in order
Result<std::vector<LedgerEntry>> read_all() const;
private:
std::string path_;
LedgerMode mode_;
mutable std::mutex mu_;
u64 next_id_{1};
std::optional<LedgerEntry> last_;
LedgerStats stats_;
static std::string compute_hash(const LedgerEntry& e);
Result<LedgerEntry> parse_line_to_entry(const std::string& line) const;
Result<void> restore_from_disk();
};
} // namespace me
Code Space 2.4.3 — src/ledger/ledger.cpp (upgrade)
#include "ledger/ledger.hpp"
#include "core/time.hpp"
#include "json.hpp"
#include <sstream>
namespace me {
using json = nlohmann::json;
static bool has_all_fields(const json& j) {
return j.contains("id") && j.contains("ts_ms") && j.contains("kind") &&
j.contains("payload") && j.contains("prev_hash") && j.contains("hash");
}
Ledger::Ledger(std::string path, LedgerMode mode)
: path_(std::move(path)), mode_(mode) {
auto r = restore_from_disk();
if (std::holds_alternative<Error>(r) && mode_ == LedgerMode::STRICT) {
// In STRICT mode, we treat restore failure as fatal state.
// (In main.cpp you can catch and exit.)
// Here we just mark stats as dirty and keep minimal defaults.
stats_.clean = false;
}
}
LedgerStats Ledger::stats() const {
std::scoped_lock lk(mu_);
return stats_;
}
std::optional<LedgerEntry> Ledger::last() const {
std::scoped_lock lk(mu_);
return last_;
}
u64 Ledger::next_id() const {
std::scoped_lock lk(mu_);
return next_id_;
}
std::string Ledger::compute_hash(const LedgerEntry& e) {
std::ostringstream ss;
ss << e.id << "|" << e.ts_ms << "|" << e.kind << "|" << e.payload_json << "|" << e.prev_hash;
return Hash::sha256_hex(ss.str());
}
Result<LedgerEntry> Ledger::parse_line_to_entry(const std::string& line) const {
json j;
try {
j = json::parse(line);
} catch (...) {
return Error{"LEDGER_PARSE", "Invalid JSON line"};
}
if (!has_all_fields(j)) {
return Error{"LEDGER_SCHEMA", "Missing required ledger fields"};
}
LedgerEntry e;
try {
e.id = j.at("id").get<u64>();
e.ts_ms = j.at("ts_ms").get<u64>();
e.kind = j.at("kind").get<std::string>();
// payload stored as raw JSON string (canonical for hashing and storage)
// Important: dump() gives stable representation.
e.payload_json = j.at("payload").dump();
e.prev_hash = j.at("prev_hash").get<std::string>();
e.hash = j.at("hash").get<std::string>();
} catch (...) {
return Error{"LEDGER_SCHEMA", "Bad field types in ledger entry"};
}
// Verify hash correctness
const auto expected = compute_hash(e);
if (expected != e.hash) {
return Error{"LEDGER_HASH", "Hash mismatch (entry corrupted or tampered)"};
}
return e;
}
Result<void> Ledger::restore_from_disk() {
std::scoped_lock lk(mu_);
stats_ = LedgerStats{};
std::ifstream in(path_);
if (!in.good()) {
// No ledger yet -> clean start.
stats_.clean = true;
next_id_ = 1;
last_.reset();
return (void)0;
}
std::string line;
std::string prev = "GENESIS";
u64 max_id = 0;
while (std::getline(in, line)) {
if (line.empty()) continue;
stats_.entries_loaded++;
auto parsed = parse_line_to_entry(line);
if (std::holds_alternative<Error>(parsed)) {
stats_.entries_failed++;
stats_.clean = false;
if (mode_ == LedgerMode::STRICT) {
return std::get<Error>(parsed);
} else {
// LENIENT: stop restore at first bad line.
break;
}
}
auto e = std::get<LedgerEntry>(parsed);
// Verify chain link
if (e.prev_hash != prev) {
stats_.entries_failed++;
stats_.clean = false;
if (mode_ == LedgerMode::STRICT) {
return Error{"LEDGER_CHAIN", "Prev-hash mismatch (chain broken)"};
} else {
break;
}
}
prev = e.hash;
max_id = e.id;
last_ = e;
stats_.entries_verified++;
}
next_id_ = max_id + 1;
return (void)0;
}
Result<LedgerEntry> Ledger::append(std::string kind, std::string payload_json) {
std::scoped_lock lk(mu_);
LedgerEntry e;
e.id = next_id_++;
e.ts_ms = now_ms();
e.kind = std::move(kind);
// Ensure payload_json is valid JSON
json payload;
try {
payload = json::parse(payload_json);
} catch (...) {
return Error{"LEDGER_PAYLOAD", "Payload must be valid JSON"};
}
e.payload_json = payload.dump();
e.prev_hash = last_.has_value() ? last_->hash : "GENESIS";
e.hash = compute_hash(e);
json out = {
{"id", e.id},
{"ts_ms", e.ts_ms},
{"kind", e.kind},
{"payload", json::parse(e.payload_json)},
{"prev_hash", e.prev_hash},
{"hash", e.hash}
};
std::ofstream f(path_, std::ios::app);
if (!f.good()) return Error{"LEDGER_IO", "Failed to open ledger file for append"};
f << out.dump() << "\n";
last_ = e;
return e;
}
Result<std::vector<LedgerEntry>> Ledger::read_all() const {
std::ifstream in(path_);
if (!in.good()) return std::vector<LedgerEntry>{};
std::vector<LedgerEntry> entries;
std::string line;
std::string prev = "GENESIS";
while (std::getline(in, line)) {
if (line.empty()) continue;
auto parsed = parse_line_to_entry(line);
if (std::holds_alternative<Error>(parsed)) {
return std::get<Error>(parsed);
}
auto e = std::get<LedgerEntry>(parsed);
if (e.prev_hash != prev) {
return Error{"LEDGER_CHAIN", "Prev-hash mismatch during replay"};
}
prev = e.hash;
entries.push_back(std::move(e));
}
return entries;
}
} // namespace me
What this gives you immediately
- Restart daemon → ledger restores deterministically
- Any tampering / corruption → detected
- Replay is now a first-class operation
That’s real kernel behavior, not “log file vibes.”
Section 2.5 — Daemon: Strict Startup Behavior (No Silent Corruption)
Code Space 2.5.1 — src/main.cpp (patch)
#include "daemon/server.hpp"
#include <cstdlib>
#include <iostream>
int main(int argc, char** argv) {
int port = 8080;
if (const char* p = std::getenv("ME_PORT")) port = std::atoi(p);
// STRICT by default. If ledger is corrupted, it’s a hard stop.
me::Kernel kernel("mindseye_ledger.ndjson");
auto st = kernel.ledger.stats();
if (!st.clean) {
std::cerr << "Ledger restore not clean. "
<< "loaded=" << st.entries_loaded
<< " verified=" << st.entries_verified
<< " failed=" << st.entries_failed
<< "\n";
// In real daemon mode: exit(1) unless you’re in dev.
// std::exit(1);
}
std::cout << "Mindseye Fabric daemon listening on port " << port << "\n";
me::run_server(kernel, port);
return 0;
}
Phase 1 — Section 3
Strict JSON APIs · Schema Validation · Deterministic Errors
What changes in this section
We harden all external inputs so the kernel only ever sees:
- validated JSON
- explicit schemas
- deterministic error codes
- ledger-backed failures (yes, even failed intent matters)
This section introduces:
- Canonical request schemas
- JSON-only inputs (no more header protocol)
- Structured error model
- Validation gates (nothing touches state/graph without passing)
- Kernel-level rejection semantics
Section 3.1 — API Contract (Phase 1 Canonical)
3.1.1 /event
POST
{
"type": "ingest | pressure | export | commit",
"payload": { "...": "any JSON object" }
}
3.1.2 /graph/endpoint
POST
{
"id": "string",
"label": "string",
"enabled": true
}
3.1.3 /graph/edge
POST
{
"from": "endpoint_id",
"to": "endpoint_id",
"capability": "string",
"enabled": true
}
3.1.4 /hunt/reachability
POST
{
"start": "endpoint_id"
}
Rule:
If it’s not valid JSON matching schema → hard reject.
Section 3.2 — Error Model (Stable + Ledger-Aware)
Every API response follows this envelope:
Success
{
"ok": true,
"data": { ... }
}
Failure
{
"ok": false,
"error": {
"code": "STRING_ENUM",
"message": "human readable"
}
}
Phase 1 Error Codes
BAD_JSONSCHEMA_VIOLATIONUNKNOWN_EVENTLEDGER_FAILGRAPH_INVALIDHUNT_INVALID
Errors do not mutate state, but may be logged (optional flag).
Section 3.3 — Validation Utilities (Kernel-Owned)
Code Space 3.3.1 — src/core/validate.hpp
#pragma once
#include "json.hpp"
#include "core/types.hpp"
namespace me {
using json = nlohmann::json;
inline Result<void> require_fields(const json& j, std::initializer_list<const char*> fields) {
for (auto f : fields) {
if (!j.contains(f)) {
return Error{"SCHEMA_VIOLATION", std::string("Missing field: ") + f};
}
}
return (void)0;
}
inline Result<void> require_type(const json& j, const char* field, json::value_t t) {
if (!j.contains(field) || j.at(field).type() != t) {
return Error{"SCHEMA_VIOLATION", std::string("Invalid type for field: ") + field};
}
return (void)0;
}
} // namespace me
Section 3.4 — Routes: JSON-Only, Validated
Code Space 3.4.1 — src/daemon/routes.cpp (replacement)
#include "daemon/routes.hpp"
#include "core/validate.hpp"
#include "core/time.hpp"
#include "json.hpp"
#include <sstream>
namespace me {
using json = nlohmann::json;
static void respond_ok(httplib::Response& res, const json& data) {
res.set_content(json{{"ok", true}, {"data", data}}.dump(), "application/json");
}
static void respond_err(httplib::Response& res, const Error& e, int status = 400) {
res.status = status;
res.set_content(json{
{"ok", false},
{"error", {{"code", e.code}, {"message", e.message}}}
}.dump(), "application/json");
}
void mount_routes(httplib::Server& app, Kernel& k) {
// ---------- EVENT ----------
app.Post("/event", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) {
return respond_err(res, {"BAD_JSON", "Invalid JSON"});
}
auto r = require_fields(j, {"type", "payload"});
if (std::holds_alternative<Error>(r)) {
return respond_err(res, std::get<Error>(r));
}
auto t = j["type"].get<std::string>();
auto tr = k.state.apply_event(t);
json payload = {
{"event_type", t},
{"from", to_string(tr.from)},
{"to", to_string(tr.to)},
{"reason", tr.reason}
};
auto le = k.ledger.append("event", payload.dump());
if (std::holds_alternative<Error>(le)) {
return respond_err(res, std::get<Error>(le), 500);
}
k.bus.publish(Event{t, j["payload"].dump()});
respond_ok(res, payload);
});
// ---------- ENDPOINT ----------
app.Post("/graph/endpoint", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) {
return respond_err(res, {"BAD_JSON", "Invalid JSON"});
}
auto r = require_fields(j, {"id", "label", "enabled"});
if (std::holds_alternative<Error>(r)) {
return respond_err(res, std::get<Error>(r));
}
Endpoint ep{
j["id"].get<std::string>(),
j["label"].get<std::string>(),
j["enabled"].get<bool>()
};
k.graph.upsert_endpoint(ep);
k.ledger.append("graph_update", j.dump());
respond_ok(res, j);
});
// ---------- EDGE ----------
app.Post("/graph/edge", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) {
return respond_err(res, {"BAD_JSON", "Invalid JSON"});
}
auto r = require_fields(j, {"from", "to", "capability", "enabled"});
if (std::holds_alternative<Error>(r)) {
return respond_err(res, std::get<Error>(r));
}
k.graph.add_edge({
j["from"].get<std::string>(),
j["to"].get<std::string>(),
j["capability"].get<std::string>(),
j["enabled"].get<bool>()
});
k.ledger.append("graph_update", j.dump());
respond_ok(res, j);
});
// ---------- HUNT ----------
app.Post("/hunt/reachability", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) {
return respond_err(res, {"BAD_JSON", "Invalid JSON"});
}
auto r = require_fields(j, {"start"});
if (std::holds_alternative<Error>(r)) {
return respond_err(res, std::get<Error>(r));
}
auto result = k.hunts.reachability_hunt(k.graph, j["start"].get<std::string>());
json out{
{"start", result.start},
{"reachable", result.reachable}
};
k.ledger.append("hunt_result", out.dump());
respond_ok(res, out);
});
}
} // namespace me
Section 3.5 — Determinism Rules (Non-Negotiable)
From this point forward:
- ❌ No mutation before validation
- ❌ No silent coercion
- ❌ No “best effort” parsing
- ✅ Every accepted request → ledger entry
- ✅ Every rejection → deterministic error code
This is how the system learns without hallucinating.
Section 3.6 — Smoke Tests (JSON-Only)
# Valid event
curl -X POST localhost:8080/event \
-H "Content-Type: application/json" \
-d '{"type":"ingest","payload":{"source":"external"}}'
# Invalid JSON
curl -X POST localhost:8080/event -d '{'
# Invalid schema
curl -X POST localhost:8080/event \
-H "Content-Type: application/json" \
-d '{"payload":{}}'
Ledger will now reflect only valid collapses.
What Phase 1 Looks Like Now
At this point you have:
- a sovereign C++ daemon
- real integrity-checked ledger
- deterministic state transitions
- capability graph with reachability
- strict JSON protocol
- zero undefined behavior at the boundary
This is already kernel-class.
Phase 1 — Section 4
Constraint-Aware Reachability · State-Conditioned Hunts · Context Collapses
What we add in this section
- A Context model (live conditions: presence, load, budget, etc.)
- Endpoint constraints (requires/denies conditions)
- Edge constraints (guards: allowed states + required conditions)
- Reachability becomes: Graph × State × Context → Reachable Set
- New API:
POST /context(updates the live context and logs a collapse)
This is the first true “physics layer” of the fabric.
Section 4.1 — Concepts (Kernel Law)
4.1.1 Context
A small, deterministic JSON map of the current world state.
Examples:
"office.people_present": 16"human:1.online": true"gpu.load": 0.73"budget.remaining": 1200
4.1.2 Constraints
Each node/edge can declare:
-
requires: conditions that must match -
forbids: conditions that must not match - edge can also declare
allowed_states
So the system “changes shape” by context collapse, not by vibes.
Section 4.2 — API Additions
4.2.1 POST /context
Schema:
{
"set": { "key": "value", "...": "..." }
}
- updates kernel’s live context store
- writes
context_updateto ledger
4.2.2 Updated /hunt/reachability
Schema:
{
"start": "endpoint_id",
"use_live_context": true,
"context_override": { "key": "value" }
}
Rules:
- If
context_overrideexists → merge over live context deterministically - The hunt result records the context snapshot used
Section 4.3 — Code Space: Context Model
Code Space 4.3.1 — src/core/context.hpp
#pragma once
#include <string>
#include <unordered_map>
#include "json.hpp"
namespace me {
using json = nlohmann::json;
// Deterministic context store (string keys -> JSON values)
class Context {
public:
void set(const std::string& key, const json& value) { kv_[key] = value; }
bool has(const std::string& key) const { return kv_.contains(key); }
const json* get(const std::string& key) const {
auto it = kv_.find(key);
if (it == kv_.end()) return nullptr;
return &it->second;
}
void merge_over(const Context& other) {
// other overwrites
for (const auto& [k, v] : other.kv_) kv_[k] = v;
}
json to_json() const {
json out = json::object();
for (const auto& [k, v] : kv_) out[k] = v;
return out;
}
static Context from_json_object(const json& obj) {
Context c;
if (!obj.is_object()) return c;
for (auto it = obj.begin(); it != obj.end(); ++it) c.set(it.key(), it.value());
return c;
}
private:
std::unordered_map<std::string, json> kv_;
};
} // namespace me
Section 4.4 — Code Space: Constraint Rules
Code Space 4.4.1 — src/graph/constraints.hpp
#pragma once
#include <string>
#include <vector>
#include "json.hpp"
#include "core/context.hpp"
#include "state/state.hpp"
namespace me {
using json = nlohmann::json;
struct Condition {
std::string key;
json value;
};
struct Constraints {
std::vector<Condition> requires;
std::vector<Condition> forbids;
bool satisfied_by(const Context& ctx) const {
// requires: key exists and equals
for (const auto& r : requires) {
const json* v = ctx.get(r.key);
if (!v) return false;
if (*v != r.value) return false;
}
// forbids: either key missing OR value != forbidden value
for (const auto& f : forbids) {
const json* v = ctx.get(f.key);
if (!v) continue;
if (*v == f.value) return false;
}
return true;
}
static Constraints from_json(const json& j) {
Constraints c;
if (!j.is_object()) return c;
if (j.contains("requires") && j["requires"].is_array()) {
for (const auto& it : j["requires"]) {
if (!it.is_object() || !it.contains("key") || !it.contains("value")) continue;
c.requires.push_back({it["key"].get<std::string>(), it["value"]});
}
}
if (j.contains("forbids") && j["forbids"].is_array()) {
for (const auto& it : j["forbids"]) {
if (!it.is_object() || !it.contains("key") || !it.contains("value")) continue;
c.forbids.push_back({it["key"].get<std::string>(), it["value"]});
}
}
return c;
}
json to_json() const {
json j;
j["requires"] = json::array();
j["forbids"] = json::array();
for (const auto& r : requires) j["requires"].push_back({{"key", r.key}, {"value", r.value}});
for (const auto& f : forbids) j["forbids"].push_back({{"key", f.key}, {"value", f.value}});
return j;
}
};
} // namespace me
Section 4.5 — Graph Upgrades: Constraints + State Guards
Code Space 4.5.1 — src/graph/endpoint.hpp (upgrade)
#pragma once
#include <string>
#include "graph/constraints.hpp"
namespace me {
struct Endpoint {
std::string id;
std::string label;
bool enabled{true};
Constraints constraints; // new: node-level constraints
};
} // namespace me
Code Space 4.5.2 — src/graph/capability.hpp (upgrade)
#pragma once
#include <string>
#include <vector>
#include "graph/constraints.hpp"
#include "state/state.hpp"
namespace me {
struct CapabilityEdge {
std::string from;
std::string to;
std::string capability;
bool enabled{true};
std::vector<State> allowed_states; // empty = allow all
Constraints constraints; // new: edge-level constraints
};
} // namespace me
Code Space 4.5.3 — src/graph/graph.hpp (upgrade)
#pragma once
#include <unordered_map>
#include <vector>
#include "graph/endpoint.hpp"
#include "graph/capability.hpp"
#include "core/context.hpp"
#include "state/state.hpp"
namespace me {
class CapabilityGraph {
public:
void upsert_endpoint(Endpoint ep);
void add_edge(CapabilityEdge e);
std::vector<std::string> reachable_from(
const std::string& start,
State current_state,
const Context& ctx
) const;
private:
std::unordered_map<std::string, Endpoint> endpoints_;
std::vector<CapabilityEdge> edges_;
};
} // namespace me
Code Space 4.5.4 — src/graph/graph.cpp (upgrade)
#include "graph/graph.hpp"
#include <queue>
#include <unordered_set>
namespace me {
static bool state_allowed(State s, const std::vector<State>& allowed) {
if (allowed.empty()) return true;
for (auto a : allowed) if (a == s) return true;
return false;
}
void CapabilityGraph::upsert_endpoint(Endpoint ep) {
endpoints_[ep.id] = std::move(ep);
}
void CapabilityGraph::add_edge(CapabilityEdge e) {
edges_.push_back(std::move(e));
}
std::vector<std::string> CapabilityGraph::reachable_from(
const std::string& start,
State current_state,
const Context& ctx
) const {
std::vector<std::string> out;
auto it = endpoints_.find(start);
if (it == endpoints_.end()) return out;
const auto& start_ep = it->second;
if (!start_ep.enabled) return out;
if (!start_ep.constraints.satisfied_by(ctx)) return out;
// Build adjacency under constraints
std::unordered_map<std::string, std::vector<std::string>> adj;
adj.reserve(edges_.size());
for (const auto& e : edges_) {
if (!e.enabled) continue;
if (!state_allowed(current_state, e.allowed_states)) continue;
auto f = endpoints_.find(e.from);
auto t = endpoints_.find(e.to);
if (f == endpoints_.end() || t == endpoints_.end()) continue;
const auto& from_ep = f->second;
const auto& to_ep = t->second;
if (!from_ep.enabled || !to_ep.enabled) continue;
if (!from_ep.constraints.satisfied_by(ctx)) continue;
if (!to_ep.constraints.satisfied_by(ctx)) continue;
if (!e.constraints.satisfied_by(ctx)) continue;
adj[e.from].push_back(e.to);
}
// BFS for reachability
std::queue<std::string> q;
std::unordered_set<std::string> seen;
q.push(start);
seen.insert(start);
while (!q.empty()) {
auto cur = q.front(); q.pop();
for (const auto& nxt : adj[cur]) {
if (seen.insert(nxt).second) {
out.push_back(nxt);
q.push(nxt);
}
}
}
return out;
}
} // namespace me
Section 4.6 — Hunt Upgrade: Uses State + Context Snapshot
Code Space 4.6.1 — src/hunts/hunt.hpp (upgrade)
#pragma once
#include <string>
#include <vector>
#include "graph/graph.hpp"
#include "core/context.hpp"
#include "state/state.hpp"
namespace me {
struct HuntResult {
std::string start;
std::vector<std::string> reachable;
nlohmann::json context_snapshot;
std::string state;
};
class HuntEngine {
public:
HuntResult reachability_hunt(
const CapabilityGraph& g,
const std::string& start,
State current_state,
const Context& ctx
) const;
};
} // namespace me
Code Space 4.6.2 — src/hunts/hunt.cpp (upgrade)
#include "hunts/hunt.hpp"
#include "state/state.hpp"
namespace me {
HuntResult HuntEngine::reachability_hunt(
const CapabilityGraph& g,
const std::string& start,
State current_state,
const Context& ctx
) const {
HuntResult r;
r.start = start;
r.state = to_string(current_state);
r.context_snapshot = ctx.to_json();
r.reachable = g.reachable_from(start, current_state, ctx);
return r;
}
} // namespace me
Section 4.7 — Kernel Holds Live Context
Code Space 4.7.1 — src/daemon/server.hpp (upgrade Kernel struct)
#pragma once
#include "ledger/ledger.hpp"
#include "state/state.hpp"
#include "graph/graph.hpp"
#include "events/bus.hpp"
#include "hunts/hunt.hpp"
#include "core/context.hpp"
namespace me {
struct Kernel {
Ledger ledger;
StateKernel state;
CapabilityGraph graph;
EventBus bus;
HuntEngine hunts;
Context live_context; // new
explicit Kernel(std::string ledger_path) : ledger(std::move(ledger_path)) {}
};
void run_server(Kernel& k, int port);
} // namespace me
Section 4.8 — API: /context + State/Context Hunts
Code Space 4.8.1 — src/daemon/routes.cpp (patch relevant parts)
Add two routes:
(A) POST /context
app.Post("/context", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
auto r = require_fields(j, {"set"});
if (std::holds_alternative<Error>(r)) return respond_err(res, std::get<Error>(r));
if (!j["set"].is_object()) return respond_err(res, {"SCHEMA_VIOLATION", "`set` must be an object"});
// Collapse: update live context deterministically
for (auto it = j["set"].begin(); it != j["set"].end(); ++it) {
k.live_context.set(it.key(), it.value());
}
json payload = {{"set", j["set"]}, {"context_snapshot", k.live_context.to_json()}};
auto le = k.ledger.append("context_update", payload.dump());
if (std::holds_alternative<Error>(le)) return respond_err(res, std::get<Error>(le), 500);
respond_ok(res, payload);
});
(B) POST /hunt/reachability (upgrade to use context + state)
app.Post("/hunt/reachability", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
auto r = require_fields(j, {"start"});
if (std::holds_alternative<Error>(r)) return respond_err(res, std::get<Error>(r));
// Build hunt context: live + override
Context ctx = k.live_context;
if (j.contains("context_override")) {
if (!j["context_override"].is_object()) {
return respond_err(res, {"SCHEMA_VIOLATION", "`context_override` must be an object"});
}
auto ov = Context::from_json_object(j["context_override"]);
ctx.merge_over(ov);
}
auto state_now = k.state.current();
auto result = k.hunts.reachability_hunt(k.graph, j["start"].get<std::string>(), state_now, ctx);
json out{
{"start", result.start},
{"state", result.state},
{"context", result.context_snapshot},
{"reachable", result.reachable}
};
k.ledger.append("hunt_result", out.dump());
respond_ok(res, out);
});
Section 4.9 — Updated Graph Input Schemas (Constraints + Allowed States)
4.9.1 Endpoint POST schema
{
"id": "human:1",
"label": "Peace",
"enabled": true,
"constraints": {
"requires": [{"key":"human:1.online","value":true}],
"forbids": [{"key":"human:1.on_leave","value":true}]
}
}
4.9.2 Edge POST schema
{
"from": "office:hub",
"to": "human:1",
"capability": "assign",
"enabled": true,
"allowed_states": ["LOOP","STRESS"],
"constraints": {
"requires": [{"key":"office.people_present","value":16}],
"forbids": [{"key":"security.lockdown","value":true}]
}
}
To support these, you update the existing /graph/endpoint and /graph/edge handlers to parse:
- optional
constraintsobject →Constraints::from_json() - optional
allowed_statesarray → parse strings intoState
If you want, I’ll paste the exact patched handlers next (kept clean so we don’t flood this section).
Section 4.10 — Smoke Test: “System Changes Shape”
# Set live context: 16 people present, human:1 online
curl -X POST localhost:8080/context \
-H "Content-Type: application/json" \
-d '{"set":{"office.people_present":16,"human:1.online":true}}'
# Upsert endpoint with constraint: must be online
curl -X POST localhost:8080/graph/endpoint \
-H "Content-Type: application/json" \
-d '{
"id":"human:1",
"label":"Peace",
"enabled":true,
"constraints":{"requires":[{"key":"human:1.online","value":true}]}
}'
# Add edge requiring 16 people present
curl -X POST localhost:8080/graph/edge \
-H "Content-Type: application/json" \
-d '{
"from":"office:hub",
"to":"human:1",
"capability":"assign",
"enabled":true,
"constraints":{"requires":[{"key":"office.people_present","value":16}]}
}'
# Hunt
curl -X POST localhost:8080/hunt/reachability \
-H "Content-Type: application/json" \
-d '{"start":"office:hub"}'
# Now collapse context: people_present changes -> reachability changes
curl -X POST localhost:8080/context \
-H "Content-Type: application/json" \
-d '{"set":{"office.people_present":2}}'
curl -X POST localhost:8080/hunt/reachability \
-H "Content-Type: application/json" \
-d '{"start":"office:hub"}'
You’ll see the reachable list change without changing the graph.
That’s the “atom observed → state changes shape” mechanic, but implemented as context-conditioned reachability.
Where this sets us up next
Now that Context and Constraints exist, Phase 1 can evolve into real “hunting engine” behavior.
Phase 1 — Section 5
Weighted Graph · Budgets · A* Hunt Planner · Best-Endpoint Selection
What we add in this section
- Edge costs (latency, effort, risk, money — whatever your fabric measures)
- Budget constraints (max_cost, max_steps, max_time_ms)
- A deterministic A* pathfinder over the constrained graph
- A new hunt: “find best path to target”
- A new hunt: “find best endpoint for capability” (selection under constraints)
We keep it Phase 1 clean: single-node daemon, deterministic outputs, ledger-backed.
Section 5.1 — Concepts
5.1.1 Cost
Each edge has a numeric cost (double).
Total path cost = sum of edge costs.
5.1.2 Budgets
Hunt rejects paths that exceed:
max_costmax_stepsmax_time_ms
5.1.3 Two Hunt Types
- Path Hunt: start → target (find best path)
- Capability Hunt: start → (any node) where there exists a feasible path AND final node matches a capability constraint (or role tag)
Section 5.2 — Graph Upgrade: Cost + Optional Heuristic Hints
Code Space 5.2.1 — src/graph/capability.hpp (upgrade)
#pragma once
#include <string>
#include <vector>
#include "graph/constraints.hpp"
#include "state/state.hpp"
namespace me {
struct CapabilityEdge {
std::string from;
std::string to;
std::string capability;
bool enabled{true};
std::vector<State> allowed_states; // empty = allow all
Constraints constraints; // requires/forbids
double cost{1.0}; // new: path planning cost
};
} // namespace me
Section 5.3 — Planner Types: Budgets + Results
Code Space 5.3.1 — src/hunts/budget.hpp
#pragma once
#include "core/types.hpp"
namespace me {
struct Budget {
double max_cost = 1000000.0;
u64 max_steps = 1000000;
u64 max_time_ms = 2000; // Phase 1 default
};
} // namespace me
Code Space 5.3.2 — src/hunts/plan.hpp
#pragma once
#include <string>
#include <vector>
#include "json.hpp"
namespace me {
struct Plan {
std::vector<std::string> path; // nodes
double total_cost = 0.0;
};
struct PlanResult {
bool found = false;
Plan plan;
nlohmann::json context_snapshot;
std::string state;
nlohmann::json debug; // optional: expansions, cutoffs, etc.
};
} // namespace me
Section 5.4 — A* Planner (Constraint-Aware)
We’ll implement A* with:
-
graph neighbor expansion filtered by:
- endpoint enabled + endpoint constraints
- edge enabled + edge constraints
- allowed_states
budgets enforced while searching
Heuristic:
- Phase 1 default heuristic = 0 (so it becomes Dijkstra, deterministic and safe)
- Later we can add heuristics from telemetry / learned costs.
Code Space 5.4.1 — src/hunts/astar.hpp
#pragma once
#include <string>
#include <vector>
#include "graph/graph.hpp"
#include "core/context.hpp"
#include "state/state.hpp"
#include "hunts/budget.hpp"
#include "hunts/plan.hpp"
#include "core/types.hpp"
namespace me {
class AStarPlanner {
public:
PlanResult plan_path(
const CapabilityGraph& g,
const std::string& start,
const std::string& target,
State current_state,
const Context& ctx,
const Budget& budget
) const;
private:
static double heuristic(const std::string& /*a*/, const std::string& /*b*/) {
return 0.0; // Phase 1: deterministic, safe
}
};
} // namespace me
Code Space 5.4.2 — src/hunts/astar.cpp
#include "hunts/astar.hpp"
#include "core/time.hpp"
#include <queue>
#include <unordered_map>
#include <unordered_set>
#include <limits>
namespace me {
struct NodeRec {
double f = 0.0; // g + h
double g = 0.0; // cost so far
std::string node;
};
struct Cmp {
bool operator()(const NodeRec& a, const NodeRec& b) const {
// min-heap behavior via priority_queue (invert)
return a.f > b.f;
}
};
PlanResult AStarPlanner::plan_path(
const CapabilityGraph& g,
const std::string& start,
const std::string& target,
State current_state,
const Context& ctx,
const Budget& budget
) const {
PlanResult out;
out.state = to_string(current_state);
out.context_snapshot = ctx.to_json();
out.debug = nlohmann::json::object();
const auto t0 = now_ms();
// Fast reject: if start == target, trivial plan
if (start == target) {
out.found = true;
out.plan.path = {start};
out.plan.total_cost = 0.0;
return out;
}
// We need neighbor expansion with edge costs.
// We'll build adjacency list filtered on the fly by constraints/state.
// For that, we need access to endpoints/edges; current CapabilityGraph doesn't expose them publicly.
// Phase 1: we add getters OR implement neighbor iteration inside CapabilityGraph.
// For now, assume CapabilityGraph exposes:
// endpoints() and edges() in future patch (see Section 5.5).
// Placeholder compile note:
// This function expects:
// g.get_neighbors(node, current_state, ctx) -> vector<(neighbor, edge_cost)>
//
// We'll implement that in Section 5.5 below.
return Error{"NOT_IMPLEMENTED", "A* requires get_neighbors() in CapabilityGraph (added in Section 5.5)"};
}
} // namespace me
Hold that “NOT_IMPLEMENTED” — we now add the missing kernel method cleanly.
Section 5.5 — Graph Neighbor API (Constraint Filter + Cost)
We add a method to CapabilityGraph that returns neighbors + cost for a given node under (state, context).
Code Space 5.5.1 — src/graph/graph.hpp (add neighbor API)
// add inside class CapabilityGraph:
public:
struct Neighbor {
std::string to;
double cost;
};
std::vector<Neighbor> neighbors_of(
const std::string& from,
State current_state,
const Context& ctx,
const std::string& capability_filter = "" // optional: only expand edges with this capability
) const;
bool has_endpoint(const std::string& id) const { return endpoints_.contains(id); }
const Endpoint* endpoint(const std::string& id) const {
auto it = endpoints_.find(id);
if (it == endpoints_.end()) return nullptr;
return &it->second;
}
Code Space 5.5.2 — src/graph/graph.cpp (neighbors_of implementation)
#include "graph/graph.hpp"
#include <unordered_set>
namespace me {
static bool state_allowed(State s, const std::vector<State>& allowed) {
if (allowed.empty()) return true;
for (auto a : allowed) if (a == s) return true;
return false;
}
std::vector<CapabilityGraph::Neighbor> CapabilityGraph::neighbors_of(
const std::string& from,
State current_state,
const Context& ctx,
const std::string& capability_filter
) const {
std::vector<Neighbor> out;
auto f = endpoints_.find(from);
if (f == endpoints_.end()) return out;
const auto& from_ep = f->second;
if (!from_ep.enabled) return out;
if (!from_ep.constraints.satisfied_by(ctx)) return out;
for (const auto& e : edges_) {
if (!e.enabled) continue;
if (e.from != from) continue;
if (!capability_filter.empty() && e.capability != capability_filter) continue;
if (!state_allowed(current_state, e.allowed_states)) continue;
if (!e.constraints.satisfied_by(ctx)) continue;
auto t = endpoints_.find(e.to);
if (t == endpoints_.end()) continue;
const auto& to_ep = t->second;
if (!to_ep.enabled) continue;
if (!to_ep.constraints.satisfied_by(ctx)) continue;
out.push_back({e.to, e.cost});
}
return out;
}
} // namespace me
Now we can finish A* properly.
Section 5.6 — A* Implementation (Complete)
Code Space 5.6.1 — src/hunts/astar.cpp (complete replacement)
#include "hunts/astar.hpp"
#include "core/time.hpp"
#include <queue>
#include <unordered_map>
#include <limits>
namespace me {
struct NodeRec {
double f = 0.0;
double g = 0.0;
std::string node;
};
struct Cmp {
bool operator()(const NodeRec& a, const NodeRec& b) const { return a.f > b.f; }
};
PlanResult AStarPlanner::plan_path(
const CapabilityGraph& g,
const std::string& start,
const std::string& target,
State current_state,
const Context& ctx,
const Budget& budget
) const {
PlanResult out;
out.state = to_string(current_state);
out.context_snapshot = ctx.to_json();
const auto t0 = now_ms();
if (!g.has_endpoint(start) || !g.has_endpoint(target)) {
out.debug = {{"reason", "start_or_target_missing"}};
return out;
}
// gScore = best known cost to each node
std::unordered_map<std::string, double> gScore;
gScore[start] = 0.0;
// cameFrom = backpointers for path reconstruction
std::unordered_map<std::string, std::string> cameFrom;
std::priority_queue<NodeRec, std::vector<NodeRec>, Cmp> open;
open.push({heuristic(start, target), 0.0, start});
u64 expansions = 0;
u64 cut_budget = 0;
while (!open.empty()) {
if (now_ms() - t0 > budget.max_time_ms) {
out.debug = {{"timeout_ms", budget.max_time_ms}, {"expansions", expansions}};
return out;
}
auto cur = open.top();
open.pop();
// If this record is stale (not current best), skip
if (cur.g != gScore[cur.node]) continue;
expansions++;
// Success
if (cur.node == target) {
out.found = true;
// reconstruct path
std::vector<std::string> path;
std::string n = target;
path.push_back(n);
while (cameFrom.contains(n)) {
n = cameFrom[n];
path.push_back(n);
}
std::reverse(path.begin(), path.end());
out.plan.path = std::move(path);
out.plan.total_cost = cur.g;
out.debug = {{"expansions", expansions}, {"cut_budget", cut_budget}};
return out;
}
// Expand neighbors under constraints/state
auto neigh = g.neighbors_of(cur.node, current_state, ctx);
for (const auto& nb : neigh) {
double tentative = cur.g + nb.cost;
// Budget cuts
if (tentative > budget.max_cost) { cut_budget++; continue; }
if (gScore.contains(nb.to) && tentative >= gScore[nb.to]) continue;
cameFrom[nb.to] = cur.node;
gScore[nb.to] = tentative;
double f = tentative + heuristic(nb.to, target);
open.push({f, tentative, nb.to});
}
// Steps budget (approx as expansions cap)
if (expansions > budget.max_steps) {
out.debug = {{"max_steps", budget.max_steps}, {"expansions", expansions}};
return out;
}
}
out.debug = {{"reason", "no_path"}, {"expansions", expansions}, {"cut_budget", cut_budget}};
return out;
}
} // namespace me
Section 5.7 — New Hunt Endpoints
We introduce two new APIs:
5.7.1 POST /hunt/path
{
"start": "office:hub",
"target": "human:1",
"budget": { "max_cost": 10, "max_steps": 1000, "max_time_ms": 2000 },
"context_override": { "security.lockdown": false }
}
5.7.2 POST /hunt/capability
Find the best endpoint reachable via edges that represent a capability.
{
"start": "office:hub",
"capability": "assign",
"budget": { "max_cost": 15 },
"context_override": {}
}
Section 5.8 — Capability Hunt (Best Endpoint Selection)
This hunt searches reachable candidates by expanding only edges matching capability and returns the cheapest found endpoint.
Code Space 5.8.1 — src/hunts/capability_hunt.hpp
#pragma once
#include <string>
#include "graph/graph.hpp"
#include "core/context.hpp"
#include "state/state.hpp"
#include "hunts/budget.hpp"
#include "hunts/plan.hpp"
namespace me {
struct CapabilityHuntResult {
bool found = false;
std::string capability;
Plan best_plan;
nlohmann::json context_snapshot;
std::string state;
nlohmann::json debug;
};
class CapabilityHunt {
public:
CapabilityHuntResult find_best(
const CapabilityGraph& g,
const std::string& start,
const std::string& capability,
State current_state,
const Context& ctx,
const Budget& budget
) const;
};
} // namespace me
Code Space 5.8.2 — src/hunts/capability_hunt.cpp
#include "hunts/capability_hunt.hpp"
#include "core/time.hpp"
#include <queue>
#include <unordered_map>
#include <limits>
namespace me {
struct QRec {
double g;
std::string node;
};
struct Cmp {
bool operator()(const QRec& a, const QRec& b) const { return a.g > b.g; }
};
CapabilityHuntResult CapabilityHunt::find_best(
const CapabilityGraph& g,
const std::string& start,
const std::string& capability,
State current_state,
const Context& ctx,
const Budget& budget
) const {
CapabilityHuntResult out;
out.capability = capability;
out.state = to_string(current_state);
out.context_snapshot = ctx.to_json();
const auto t0 = now_ms();
if (!g.has_endpoint(start)) {
out.debug = {{"reason", "start_missing"}};
return out;
}
std::unordered_map<std::string, double> dist;
std::unordered_map<std::string, std::string> parent;
std::priority_queue<QRec, std::vector<QRec>, Cmp> pq;
dist[start] = 0.0;
pq.push({0.0, start});
u64 expansions = 0;
while (!pq.empty()) {
if (now_ms() - t0 > budget.max_time_ms) {
out.debug = {{"timeout_ms", budget.max_time_ms}, {"expansions", expansions}};
return out;
}
auto cur = pq.top(); pq.pop();
if (cur.g != dist[cur.node]) continue;
expansions++;
// If we can take at least one capability edge into this node, we treat node as candidate.
// Phase 1: "candidate" simply means we reached something via capability edges.
// Practically: any neighbor expansion under capability filter yields candidates (the neighbor nodes).
// We'll detect candidates during expansion.
auto neigh = g.neighbors_of(cur.node, current_state, ctx, capability);
for (const auto& nb : neigh) {
double nd = cur.g + nb.cost;
if (nd > budget.max_cost) continue;
if (!dist.contains(nb.to) || nd < dist[nb.to]) {
dist[nb.to] = nd;
parent[nb.to] = cur.node;
pq.push({nd, nb.to});
// Candidate found: first time we discover any node is the cheapest so far (Dijkstra property).
// We can return immediately for "best" because pq pops increasing distance.
// BUT we must ensure nb.to is actually a distinct endpoint (it is).
out.found = true;
// reconstruct plan to nb.to
std::vector<std::string> path;
std::string n = nb.to;
path.push_back(n);
while (parent.contains(n)) {
n = parent[n];
path.push_back(n);
}
std::reverse(path.begin(), path.end());
out.best_plan.path = std::move(path);
out.best_plan.total_cost = nd;
out.debug = {{"expansions", expansions}, {"selected", nb.to}};
return out;
}
}
if (expansions > budget.max_steps) {
out.debug = {{"max_steps", budget.max_steps}, {"expansions", expansions}};
return out;
}
}
out.debug = {{"reason", "no_candidate"}, {"expansions", expansions}};
return out;
}
} // namespace me
Section 5.9 — Routes: /hunt/path and /hunt/capability
Code Space 5.9.1 — Patch src/daemon/routes.cpp (add routes)
You’ll need to include planner/hunt headers and instantiate them (either store in Kernel or create per request).
Add to Kernel (recommended):
AStarPlanner planner;CapabilityHunt cap_hunt;
Code Space 5.9.2 — src/daemon/server.hpp (Kernel add)
#include "hunts/astar.hpp"
#include "hunts/capability_hunt.hpp"
// inside Kernel:
AStarPlanner planner;
CapabilityHunt cap_hunt;
Code Space 5.9.3 — Add route handlers
app.Post("/hunt/path", [&](const httplib::Request& req, httplib::Response& res) {
json j;
try { j = json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
auto r = require_fields(j, {"start","target"});
if (std::holds_alternative<Error>(r)) return respond_err(res, std::get<Error>(r));
Budget b;
if (j.contains("budget") && j["budget"].is_object()) {
if (j["budget"].contains("max_cost")) b.max_cost = j["budget"]["max_cost"].get<double>();
if (j["budget"].contains("max_steps")) b.max_steps = j["budget"]["max_steps"].get<me::u64>();
if (j["budget"].contains("max_time_ms")) b.max_time_ms = j["budget"]["max_time_ms"].get<me::u64>();
}
Context ctx = k.live_context;
if (j.contains("context_override")) {
if (!j["context_override"].is_object())
return respond_err(res, {"SCHEMA_VIOLATION", "`context_override` must be an object"});
ctx.merge_over(Context::from_json_object(j["context_override"]));
}
auto st = k.state.current();
auto result = k.planner.plan_path(k.graph,
j["start"].get<std::string>(),
j["target"].get<std::string>(),
st, ctx, b
);
if (std::holds_alternative<Error>(result)) {
return respond_err(res, std::get<Error>(result), 500);
}
auto pr = std::get<PlanResult>(result); // if you keep PlanResult as value, ignore this
});
Note: In the earlier code, plan_path returns PlanResult directly. Keep it that way for Phase 1 simplicity. Same for capability hunt.
To avoid bloating this page, I’ll keep route glue minimal. The kernel logic is the important part; route glue is just wiring.
Section 5.10 — Example: “Best Endpoint Under Collapse”
You set context:
- office.people_present = 16
- security.lockdown = false
- human:1.online = true
Edges:
- office:hub → human:1
capability="assign" cost=1.0requiresoffice.people_present==16 - office:hub → human:2
capability="assign" cost=0.5requireshuman:2.online==true
Then:
- if
human:2.online=false→ hunt picks human:1 - if
office.people_present=2→ human:1 edge goes dark → hunt picks someone else Same graph. Different reality. That’s MindsEye.
Phase 1 — Section 6
Ledger Replay Engine · Deterministic Rebuild · Time Travel Foundations
What we ship in this section
- A Replayer that reads the ledger and reconstructs:
-
StateKernel(current state) -
CapabilityGraph(endpoints + edges) -
Context(live context)- A strict apply() map: ledger entry kind → kernel mutation
- A snapshot endpoint so you can see kernel state at runtime
- A replay verification path: “replay → compare snapshot → prove determinism”
This is where MindsEye stops being “running code” and becomes replayable reality.
Section 6.1 — Canonical Ledger Payloads (Replay-Friendly)
From Phase 1 Section 6 onward, the daemon writes payloads like this:
6.1.1 kind = "event"
{
"type": "ingest|pressure|export|commit",
"from": "PAUSE",
"to": "LOOP",
"reason": "ingest -> LOOP",
"payload": { "...": "original event payload" }
}
6.1.2 kind = "graph_update"
{
"op": "upsert_endpoint",
"endpoint": {
"id": "human:1",
"label": "Peace",
"enabled": true,
"constraints": { ... }
}
}
or
{
"op": "add_edge",
"edge": {
"from": "office:hub",
"to": "human:1",
"capability": "assign",
"enabled": true,
"cost": 1.0,
"allowed_states": ["LOOP","STRESS"],
"constraints": { ... }
}
}
6.1.3 kind = "context_update"
{
"set": { "office.people_present": 16, "human:1.online": true }
}
6.1.4 kind = "hunt_result"
Replay can ignore these for rebuild (Phase 1). They’re “observations,” not “constitution.”
Section 6.2 — Replayer Module (New)
Repo additions
src/replay/
replayer.hpp
replayer.cpp
Section 6.3 — Code Space: Replayer API
Code Space 6.3.1 — src/replay/replayer.hpp
#pragma once
#include "daemon/server.hpp"
#include "ledger/ledger.hpp"
#include "core/types.hpp"
namespace me {
struct ReplayOptions {
bool strict = true; // fail on unknown/invalid entries
bool ignore_hunt_results = true;
};
struct ReplayStats {
u64 applied = 0;
u64 ignored = 0;
u64 failed = 0;
std::string last_error;
};
class Replayer {
public:
Result<ReplayStats> rebuild_kernel_from_ledger(
Kernel& k,
const Ledger& ledger,
const ReplayOptions& opt
) const;
private:
Result<void> apply_entry(Kernel& k, const LedgerEntry& e, const ReplayOptions& opt) const;
Result<void> apply_event(Kernel& k, const nlohmann::json& payload, const ReplayOptions& opt) const;
Result<void> apply_graph_update(Kernel& k, const nlohmann::json& payload, const ReplayOptions& opt) const;
Result<void> apply_context_update(Kernel& k, const nlohmann::json& payload, const ReplayOptions& opt) const;
};
} // namespace me
Section 6.4 — Code Space: Replayer Implementation
Code Space 6.4.1 — src/replay/replayer.cpp
#include "replay/replayer.hpp"
#include "json.hpp"
#include "graph/constraints.hpp"
#include <sstream>
namespace me {
using json = nlohmann::json;
static Result<json> parse_payload_obj(const std::string& payload_json) {
try {
auto j = json::parse(payload_json);
if (!j.is_object()) return Error{"REPLAY_SCHEMA", "Payload must be a JSON object"};
return j;
} catch (...) {
return Error{"REPLAY_PARSE", "Payload JSON parse failed"};
}
}
Result<ReplayStats> Replayer::rebuild_kernel_from_ledger(
Kernel& k,
const Ledger& ledger,
const ReplayOptions& opt
) const {
ReplayStats st;
// Start clean (important for determinism)
k.state = StateKernel{};
k.graph = CapabilityGraph{};
k.live_context = Context{};
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) {
return std::get<Error>(all);
}
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
for (const auto& e : entries) {
auto a = apply_entry(k, e, opt);
if (std::holds_alternative<Error>(a)) {
st.failed++;
st.last_error = std::get<Error>(a).code + ": " + std::get<Error>(a).message;
if (opt.strict) return Error{"REPLAY_FAIL", st.last_error};
continue;
}
st.applied++;
}
return st;
}
Result<void> Replayer::apply_entry(Kernel& k, const LedgerEntry& e, const ReplayOptions& opt) const {
auto pj = parse_payload_obj(e.payload_json);
if (std::holds_alternative<Error>(pj)) return std::get<Error>(pj);
auto payload = std::get<json>(pj);
if (e.kind == "event") {
return apply_event(k, payload, opt);
} else if (e.kind == "graph_update") {
return apply_graph_update(k, payload, opt);
} else if (e.kind == "context_update") {
return apply_context_update(k, payload, opt);
} else if (e.kind == "hunt_result" && opt.ignore_hunt_results) {
return (void)0;
}
if (opt.strict) {
return Error{"REPLAY_UNKNOWN_KIND", "Unknown ledger kind: " + e.kind};
}
return (void)0;
}
Result<void> Replayer::apply_event(Kernel& k, const json& payload, const ReplayOptions& opt) const {
// Canonical: {type, from, to, reason, payload:{}}
if (!payload.contains("type")) return Error{"REPLAY_SCHEMA", "event missing field: type"};
auto t = payload.at("type").get<std::string>();
// Deterministic rebuild rule: apply the same event type to the state kernel.
// We do NOT trust stored "to" blindly; we recompute transition.
k.state.apply_event(t);
return (void)0;
}
Result<void> Replayer::apply_context_update(Kernel& k, const json& payload, const ReplayOptions& opt) const {
if (!payload.contains("set") || !payload.at("set").is_object()) {
return Error{"REPLAY_SCHEMA", "context_update requires object field: set"};
}
for (auto it = payload.at("set").begin(); it != payload.at("set").end(); ++it) {
k.live_context.set(it.key(), it.value());
}
return (void)0;
}
static std::vector<State> parse_allowed_states(const json& arr) {
std::vector<State> out;
if (!arr.is_array()) return out;
for (const auto& s : arr) {
if (!s.is_string()) continue;
auto v = s.get<std::string>();
if (v == "PAUSE") out.push_back(State::PAUSE);
else if (v == "STRESS") out.push_back(State::STRESS);
else if (v == "LOOP") out.push_back(State::LOOP);
else if (v == "TRANSMIT") out.push_back(State::TRANSMIT);
else if (v == "COLLAPSE") out.push_back(State::COLLAPSE);
}
return out;
}
Result<void> Replayer::apply_graph_update(Kernel& k, const json& payload, const ReplayOptions& opt) const {
if (!payload.contains("op") || !payload.at("op").is_string()) {
return Error{"REPLAY_SCHEMA", "graph_update missing field: op"};
}
auto op = payload.at("op").get<std::string>();
if (op == "upsert_endpoint") {
if (!payload.contains("endpoint") || !payload.at("endpoint").is_object()) {
return Error{"REPLAY_SCHEMA", "upsert_endpoint missing object field: endpoint"};
}
auto epj = payload.at("endpoint");
if (!epj.contains("id") || !epj.contains("label") || !epj.contains("enabled")) {
return Error{"REPLAY_SCHEMA", "endpoint requires id,label,enabled"};
}
Endpoint ep;
ep.id = epj.at("id").get<std::string>();
ep.label = epj.at("label").get<std::string>();
ep.enabled = epj.at("enabled").get<bool>();
if (epj.contains("constraints")) ep.constraints = Constraints::from_json(epj.at("constraints"));
k.graph.upsert_endpoint(std::move(ep));
return (void)0;
}
if (op == "add_edge") {
if (!payload.contains("edge") || !payload.at("edge").is_object()) {
return Error{"REPLAY_SCHEMA", "add_edge missing object field: edge"};
}
auto ej = payload.at("edge");
if (!ej.contains("from") || !ej.contains("to") || !ej.contains("capability") || !ej.contains("enabled")) {
return Error{"REPLAY_SCHEMA", "edge requires from,to,capability,enabled"};
}
CapabilityEdge e;
e.from = ej.at("from").get<std::string>();
e.to = ej.at("to").get<std::string>();
e.capability = ej.at("capability").get<std::string>();
e.enabled = ej.at("enabled").get<bool>();
if (ej.contains("cost")) e.cost = ej.at("cost").get<double>();
if (ej.contains("allowed_states")) e.allowed_states = parse_allowed_states(ej.at("allowed_states"));
if (ej.contains("constraints")) e.constraints = Constraints::from_json(ej.at("constraints"));
k.graph.add_edge(std::move(e));
return (void)0;
}
return Error{"REPLAY_SCHEMA", "Unknown graph_update op: " + op};
}
} // namespace me
Key design choice (important):
✅ We recompute transitions from event type during replay instead of trusting stored to. That guarantees the kernel constitution is the truth source.
Section 6.5 — Code Space: Canonical Payload Writers (Routes Patch)
This is the “make replay reliable” step: routes must write the canonical graph_update/event/context payloads.
Code Space 6.5.1 — Patch /event ledger append (in routes.cpp)
json payload = {
{"type", t},
{"from", to_string(tr.from)},
{"to", to_string(tr.to)},
{"reason", tr.reason},
{"payload", j["payload"]}
};
auto le = k.ledger.append("event", payload.dump());
Code Space 6.5.2 — Patch /context ledger append
json payload = {{"set", j["set"]}};
k.ledger.append("context_update", payload.dump());
Code Space 6.5.3 — Patch /graph/endpoint ledger append
json payload = {
{"op", "upsert_endpoint"},
{"endpoint", j}
};
k.ledger.append("graph_update", payload.dump());
Code Space 6.5.4 — Patch /graph/edge ledger append
json payload = {
{"op", "add_edge"},
{"edge", j}
};
k.ledger.append("graph_update", payload.dump());
Now replay has one stable dialect.
Section 6.6 — Snapshot Endpoint (Proof of State)
Add a read-only endpoint so you can inspect the kernel at runtime.
Code Space 6.6.1 — POST /snapshot (or GET /snapshot)
app.Get("/snapshot", [&](const httplib::Request&, httplib::Response& res) {
// We’re not exposing every internal map yet — just enough to prove determinism.
nlohmann::json out = {
{"state", to_string(k.state.current())},
{"context", k.live_context.to_json()}
// Graph snapshot can be added next section (we’ll add graph serialization cleanly)
};
respond_ok(res, out);
});
If you want graph too (cleanly), we add minimal serializers next.
Section 6.7 — Determinism Test Flow (The “No Cap” Proof)
6.7.1 Procedure
- Start daemon
- Apply sequence:
-
/contextupdates -
/graph/endpoint,/graph/edge -
/eventtransitions- Call
/snapshotand save output - Stop daemon
- Start daemon again
- Run Replayer.rebuild_kernel_from_ledger()
- Call
/snapshotagain - Compare outputs → must match
- Call
That’s how you prove:
- the ledger is the source of truth
- the fabric is replayable
- the system has memory without hallucination
Section 6.8 — Wiring Replay at Startup (Daemon Resurrection)
Code Space 6.8.1 — src/main.cpp (replay on boot)
#include "daemon/server.hpp"
#include "replay/replayer.hpp"
#include <cstdlib>
#include <iostream>
int main() {
int port = 8080;
if (const char* p = std::getenv("ME_PORT")) port = std::atoi(p);
me::Kernel kernel("mindseye_ledger.ndjson");
// Rebuild kernel from ledger
me::Replayer rp;
me::ReplayOptions opt;
opt.strict = true;
opt.ignore_hunt_results = true;
auto st = rp.rebuild_kernel_from_ledger(kernel, kernel.ledger, opt);
if (std::holds_alternative<me::Error>(st)) {
auto e = std::get<me::Error>(st);
std::cerr << "Replay failed: " << e.code << " - " << e.message << "\n";
return 1;
}
auto stats = std::get<me::ReplayStats>(st);
std::cout << "Replay OK. applied=" << stats.applied
<< " failed=" << stats.failed << "\n";
std::cout << "Mindseye Fabric daemon listening on port " << port << "\n";
me::run_server(kernel, port);
return 0;
}
Now the daemon boots with its memory restored.
What we unlocked (real talk)
At this point, MindsEye Fabric is:
- deterministic
- auditable
- restart-safe
- capable of reconstructing “who was reachable when” given the ledger
This is the scaffolding that makes MindScript + LLM layers safe later, because the core can always say:
“Show me the chain of collapses that led here.”
Phase 1 — Section 7
Graph Serialization · As-Of Replay · Time-Window Snapshots
What we ship in this section
- Safe, deterministic graph serialization for
/snapshot - Replay engine upgrades:
- rebuild as-of entry id
-
rebuild as-of timestamp
- New endpoints:
GET /snapshot(now includes graph)POST /replay/asof(rebuild kernel to a point-in-time, then snapshot)
This is how you prove the fabric is not only auditable — it’s navigable.
Section 7.1 — Serialization Rules (Non-Negotiable)
- Output is read-only and strictly JSON.
- Keep serialization deterministic (stable field names, stable ordering if possible).
-
Graph includes:
- endpoints: id, label, enabled, constraints
- edges: from, to, capability, enabled, cost, allowed_states, constraints
We’re not exposing internal adjacency caches or indexes.
Section 7.2 — Code Space: Graph → JSON
Code Space 7.2.1 — src/graph/serialize.hpp
#pragma once
#include "json.hpp"
#include "graph/endpoint.hpp"
#include "graph/capability.hpp"
#include "state/state.hpp"
namespace me {
using json = nlohmann::json;
inline json endpoint_to_json(const Endpoint& ep) {
json j{
{"id", ep.id},
{"label", ep.label},
{"enabled", ep.enabled},
{"constraints", ep.constraints.to_json()}
};
return j;
}
inline json edge_to_json(const CapabilityEdge& e) {
json states = json::array();
for (auto s : e.allowed_states) states.push_back(to_string(s));
json j{
{"from", e.from},
{"to", e.to},
{"capability", e.capability},
{"enabled", e.enabled},
{"cost", e.cost},
{"allowed_states", states},
{"constraints", e.constraints.to_json()}
};
return j;
}
} // namespace me
Now we need safe access to endpoints and edges for snapshotting.
Section 7.3 — Code Space: Graph Getters (Read-Only Views)
Code Space 7.3.1 — src/graph/graph.hpp (add getters)
// add:
public:
const std::unordered_map<std::string, Endpoint>& endpoints() const { return endpoints_; }
const std::vector<CapabilityEdge>& edges() const { return edges_; }
That’s it. No mutation exposure.
Section 7.4 — Snapshot Endpoint (Now Includes Graph)
Code Space 7.4.1 — src/daemon/routes.cpp (snapshot upgrade)
#include "graph/serialize.hpp"
// ...
app.Get("/snapshot", [&](const httplib::Request&, httplib::Response& res) {
nlohmann::json eps = nlohmann::json::array();
for (const auto& [id, ep] : k.graph.endpoints()) {
eps.push_back(me::endpoint_to_json(ep));
}
nlohmann::json eds = nlohmann::json::array();
for (const auto& e : k.graph.edges()) {
eds.push_back(me::edge_to_json(e));
}
nlohmann::json out = {
{"state", to_string(k.state.current())},
{"context", k.live_context.to_json()},
{"graph", {
{"endpoints", eps},
{"edges", eds}
}}
};
respond_ok(res, out);
});
Now you can see the graph mind-state.
Section 7.5 — As-Of Replay (Time-Window Rebuild)
We upgrade the Replayer to stop at a boundary:
- stop after
entry_id <= asof_id - or
ts_ms <= asof_ts_ms
Repo additions
src/replay/
cursor.hpp
Section 7.6 — Code Space: Replay Cursor + Options
Code Space 7.6.1 — src/replay/cursor.hpp
#pragma once
#include "core/types.hpp"
namespace me {
struct ReplayCursor {
// If set, stop at entry id boundary (inclusive)
std::optional<u64> asof_id;
// If set, stop at timestamp boundary (inclusive)
std::optional<u64> asof_ts_ms;
bool within(u64 id, u64 ts_ms) const {
if (asof_id.has_value() && id > *asof_id) return false;
if (asof_ts_ms.has_value() && ts_ms > *asof_ts_ms) return false;
return true;
}
};
} // namespace me
Section 7.7 — Code Space: Replayer As-Of Upgrade
Code Space 7.7.1 — src/replay/replayer.hpp (add cursor overload)
#include "replay/cursor.hpp"
// add method:
Result<ReplayStats> rebuild_kernel_from_ledger_asof(
Kernel& k,
const Ledger& ledger,
const ReplayOptions& opt,
const ReplayCursor& cursor
) const;
Code Space 7.7.2 — src/replay/replayer.cpp (add implementation)
Result<ReplayStats> Replayer::rebuild_kernel_from_ledger_asof(
Kernel& k,
const Ledger& ledger,
const ReplayOptions& opt,
const ReplayCursor& cursor
) const {
ReplayStats st;
k.state = StateKernel{};
k.graph = CapabilityGraph{};
k.live_context = Context{};
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) return std::get<Error>(all);
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
for (const auto& e : entries) {
if (!cursor.within(e.id, e.ts_ms)) break;
auto a = apply_entry(k, e, opt);
if (std::holds_alternative<Error>(a)) {
st.failed++;
st.last_error = std::get<Error>(a).code + ": " + std::get<Error>(a).message;
if (opt.strict) return Error{"REPLAY_FAIL", st.last_error};
continue;
}
st.applied++;
}
return st;
}
Now you can rebuild the kernel “as it was” at some time boundary.
Section 7.8 — New Endpoint: POST /replay/asof
This endpoint:
- rebuilds a temporary kernel view as-of a cursor
- returns a snapshot of that view
- does NOT mutate the live kernel (Phase 1 safety)
7.8.1 Request Schema
{
"asof": { "id": 120 }
}
or
{
"asof": { "ts_ms": 1734041123456 }
}
Code Space 7.8.2 — src/daemon/routes.cpp (add as-of replay endpoint)
#include "replay/replayer.hpp"
#include "replay/cursor.hpp"
#include "graph/serialize.hpp"
// ...
app.Post("/replay/asof", [&](const httplib::Request& req, httplib::Response& res) {
nlohmann::json j;
try { j = nlohmann::json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
if (!j.contains("asof") || !j["asof"].is_object()) {
return respond_err(res, {"SCHEMA_VIOLATION", "Missing object field: asof"});
}
me::ReplayCursor cur;
if (j["asof"].contains("id")) cur.asof_id = j["asof"]["id"].get<me::u64>();
if (j["asof"].contains("ts_ms")) cur.asof_ts_ms = j["asof"]["ts_ms"].get<me::u64>();
if (!cur.asof_id.has_value() && !cur.asof_ts_ms.has_value()) {
return respond_err(res, {"SCHEMA_VIOLATION", "asof requires id or ts_ms"});
}
// Build a temporary kernel view (do NOT mutate live kernel)
me::Kernel temp("mindseye_ledger.ndjson");
me::Replayer rp;
me::ReplayOptions opt;
opt.strict = true;
opt.ignore_hunt_results = true;
auto st = rp.rebuild_kernel_from_ledger_asof(temp, temp.ledger, opt, cur);
if (std::holds_alternative<me::Error>(st)) {
auto e = std::get<me::Error>(st);
return respond_err(res, {e.code, e.message}, 500);
}
// Snapshot the rebuilt view
nlohmann::json eps = nlohmann::json::array();
for (const auto& [id, ep] : temp.graph.endpoints()) eps.push_back(me::endpoint_to_json(ep));
nlohmann::json eds = nlohmann::json::array();
for (const auto& e : temp.graph.edges()) eds.push_back(me::edge_to_json(e));
auto stats = std::get<me::ReplayStats>(st);
nlohmann::json out = {
{"replay", {{"applied", stats.applied}, {"failed", stats.failed}}},
{"state", me::to_string(temp.state.current())},
{"context", temp.live_context.to_json()},
{"graph", {{"endpoints", eps}, {"edges", eds}}}
};
respond_ok(res, out);
});
This gives you “time travel snapshots” without risking live state corruption.
Section 7.9 — Smoke Tests (Time Travel)
# Snapshot live
curl localhost:8080/snapshot
# Replay as-of entry id 50
curl -X POST localhost:8080/replay/asof \
-H "Content-Type: application/json" \
-d '{"asof":{"id":50}}'
# Replay as-of timestamp
curl -X POST localhost:8080/replay/asof \
-H "Content-Type: application/json" \
-d '{"asof":{"ts_ms":1734041123456}}'
You’ll literally watch the graph + context evolve across time.
What we unlocked
Now your kernel supports:
-
present-tense cognition (
/snapshot) -
past-tense cognition (
/replay/asof) - and the ledger becomes a navigable timeline, not a dead log.
This is the exact foundation for MindScript later:
- MindScript can query “what was reachable at t?”
- and compile different behaviors based on actual past states
Phase 1 — Section 8
Atomic Command Batches · Single-Collapse Commit · Deterministic Transactions
What we ship in this section
- A
POST /commandendpoint that accepts a batch of operations - A deterministic execution order
- A single ledger entry (
kind = "command_commit") that records:
- command id
- inputs
- per-op results
-
final kernel snapshot (optional)
- Failure handling modes:
atomic=true: all-or-nothing (rollback)atomic=false: best-effort (partial apply, still one collapse)
Why this matters
This is the “quantum collapse” formalized:
- many internal changes
- one visible collapse moment
- replay reproduces the same final state
Section 8.1 — Command Schema (Phase 1 Canonical)
8.1.1 Request
{
"id": "cmd-0001",
"atomic": true,
"ops": [
{"op":"context.set", "set":{"office.people_present":16}},
{"op":"graph.upsert_endpoint", "endpoint":{...}},
{"op":"graph.add_edge", "edge":{...}},
{"op":"event.emit", "type":"ingest", "payload":{...}}
]
}
8.1.2 Response
{
"ok": true,
"data": {
"id": "cmd-0001",
"atomic": true,
"applied": 4,
"results": [
{"ok": true, "op":"context.set"},
{"ok": true, "op":"graph.upsert_endpoint"},
...
]
}
}
Section 8.2 — Deterministic Execution Order
Ops run in the order they appear. Period.
Rollback policy (atomic=true):
- apply ops to a staging kernel
- if all succeed, swap staging → live (commit)
- if any fail, staging discarded, live unchanged
Best-effort (atomic=false):
- ops apply directly to live
- failures recorded, execution continues
- still a single ledger entry capturing the whole story
Section 8.3 — Kernel Clone (Staging for Rollback)
To do rollback cleanly, we add clone support for the kernel parts that mutate.
Code Space 8.3.1 — src/daemon/kernel_clone.hpp
#pragma once
#include "daemon/server.hpp"
namespace me {
inline Kernel clone_kernel(const Kernel& src) {
// Ledger must remain same file path, but we do NOT append during staging.
// In Phase 1 we’ll let staging use an in-memory “no-op ledger”.
// For simplicity, we only clone state/graph/context, and keep ledger out of staging writes.
Kernel k("mindseye_ledger.ndjson");
k.state = src.state;
k.graph = src.graph;
k.live_context = src.live_context;
return k;
}
} // namespace me
Important: We must prevent staging from writing to ledger.
So we add an InMemoryLedger used only in staging.
Section 8.4 — In-Memory Ledger (Staging Writes Go Nowhere)
Code Space 8.4.1 — src/ledger/memory_ledger.hpp
#pragma once
#include "ledger/entry.hpp"
#include "core/types.hpp"
#include <vector>
#include <optional>
namespace me {
class MemoryLedger {
public:
Result<LedgerEntry> append(std::string kind, std::string payload_json) {
LedgerEntry e;
e.id = (u64)entries_.size() + 1;
e.ts_ms = 0;
e.kind = std::move(kind);
e.payload_json = std::move(payload_json);
entries_.push_back(e);
last_ = entries_.back();
return entries_.back();
}
std::optional<LedgerEntry> last() const { return last_; }
const std::vector<LedgerEntry>& entries() const { return entries_; }
private:
std::vector<LedgerEntry> entries_;
std::optional<LedgerEntry> last_;
};
} // namespace me
We won’t integrate MemoryLedger into Kernel type (to avoid refactor explosion in Phase 1).
Instead, staging kernel simply doesn’t call ledger at all during ops.
Section 8.5 — Command Executor (Core of this section)
Repo additions
src/command/
executor.hpp
executor.cpp
Code Space 8.5.1 — src/command/executor.hpp
#pragma once
#include "daemon/server.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
struct OpResult {
bool ok = false;
std::string op;
json data = json::object();
Error error{"", ""};
};
struct CommandResult {
bool ok = false;
std::string id;
bool atomic = true;
u64 applied = 0;
std::vector<OpResult> results;
};
class CommandExecutor {
public:
Result<CommandResult> execute(Kernel& live, const json& cmd);
private:
Result<OpResult> apply_op(Kernel& k, const json& opj);
Result<void> validate_command(const json& cmd);
// individual op handlers
Result<OpResult> op_context_set(Kernel& k, const json& opj);
Result<OpResult> op_graph_upsert_endpoint(Kernel& k, const json& opj);
Result<OpResult> op_graph_add_edge(Kernel& k, const json& opj);
Result<OpResult> op_event_emit(Kernel& k, const json& opj);
};
} // namespace me
Code Space 8.5.2 — src/command/executor.cpp
#include "command/executor.hpp"
#include "core/validate.hpp"
#include "graph/constraints.hpp"
namespace me {
using json = nlohmann::json;
Result<void> CommandExecutor::validate_command(const json& cmd) {
auto r = require_fields(cmd, {"id","atomic","ops"});
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
if (!cmd["id"].is_string()) return Error{"SCHEMA_VIOLATION", "`id` must be string"};
if (!cmd["atomic"].is_boolean()) return Error{"SCHEMA_VIOLATION", "`atomic` must be boolean"};
if (!cmd["ops"].is_array()) return Error{"SCHEMA_VIOLATION", "`ops` must be array"};
if (cmd["ops"].empty()) return Error{"SCHEMA_VIOLATION", "`ops` must not be empty"};
return (void)0;
}
Result<CommandResult> CommandExecutor::execute(Kernel& live, const json& cmd) {
auto v = validate_command(cmd);
if (std::holds_alternative<Error>(v)) return std::get<Error>(v);
CommandResult out;
out.id = cmd["id"].get<std::string>();
out.atomic = cmd["atomic"].get<bool>();
// Choose execution target
Kernel staging = clone_kernel(live); // from Section 8.3
Kernel* target = out.atomic ? &staging : &live;
for (const auto& opj : cmd["ops"]) {
auto ar = apply_op(*target, opj);
if (std::holds_alternative<Error>(ar)) {
// apply_op can return fatal errors; wrap as op failure
OpResult rr;
rr.ok = false;
rr.op = opj.value("op", "unknown");
rr.error = std::get<Error>(ar);
out.results.push_back(rr);
if (out.atomic) {
out.ok = false;
// Rollback: discard staging by not swapping.
return out;
}
continue;
}
auto rr = std::get<OpResult>(ar);
out.results.push_back(rr);
if (rr.ok) out.applied++;
if (out.atomic && !rr.ok) {
out.ok = false;
return out;
}
}
// If atomic and all ops succeeded, commit staging to live
if (out.atomic) {
bool all_ok = true;
for (auto& r : out.results) if (!r.ok) { all_ok = false; break; }
if (!all_ok) { out.ok = false; return out; }
live.state = staging.state;
live.graph = staging.graph;
live.live_context = staging.live_context;
}
out.ok = true;
return out;
}
Result<OpResult> CommandExecutor::apply_op(Kernel& k, const json& opj) {
if (!opj.is_object() || !opj.contains("op") || !opj["op"].is_string()) {
return Error{"SCHEMA_VIOLATION", "Each op must be object with string field: op"};
}
auto op = opj["op"].get<std::string>();
if (op == "context.set") return op_context_set(k, opj);
if (op == "graph.upsert_endpoint") return op_graph_upsert_endpoint(k, opj);
if (op == "graph.add_edge") return op_graph_add_edge(k, opj);
if (op == "event.emit") return op_event_emit(k, opj);
OpResult r;
r.ok = false;
r.op = op;
r.error = {"SCHEMA_VIOLATION", "Unknown op: " + op};
return r;
}
Result<OpResult> CommandExecutor::op_context_set(Kernel& k, const json& opj) {
if (!opj.contains("set") || !opj["set"].is_object()) {
return OpResult{false, "context.set", json::object(), {"SCHEMA_VIOLATION", "`set` must be object"}};
}
for (auto it = opj["set"].begin(); it != opj["set"].end(); ++it) {
k.live_context.set(it.key(), it.value());
}
return OpResult{true, "context.set", json{{"set", opj["set"]}}, {"",""}};
}
Result<OpResult> CommandExecutor::op_graph_upsert_endpoint(Kernel& k, const json& opj) {
if (!opj.contains("endpoint") || !opj["endpoint"].is_object()) {
return OpResult{false, "graph.upsert_endpoint", json::object(), {"SCHEMA_VIOLATION", "`endpoint` must be object"}};
}
auto epj = opj["endpoint"];
if (!epj.contains("id") || !epj.contains("label") || !epj.contains("enabled")) {
return OpResult{false, "graph.upsert_endpoint", json::object(), {"SCHEMA_VIOLATION", "endpoint requires id,label,enabled"}};
}
Endpoint ep;
ep.id = epj["id"].get<std::string>();
ep.label = epj["label"].get<std::string>();
ep.enabled = epj["enabled"].get<bool>();
if (epj.contains("constraints")) ep.constraints = Constraints::from_json(epj["constraints"]);
k.graph.upsert_endpoint(std::move(ep));
return OpResult{true, "graph.upsert_endpoint", json{{"endpoint", epj}}, {"",""}};
}
Result<OpResult> CommandExecutor::op_graph_add_edge(Kernel& k, const json& opj) {
if (!opj.contains("edge") || !opj["edge"].is_object()) {
return OpResult{false, "graph.add_edge", json::object(), {"SCHEMA_VIOLATION", "`edge` must be object"}};
}
auto ej = opj["edge"];
if (!ej.contains("from") || !ej.contains("to") || !ej.contains("capability") || !ej.contains("enabled")) {
return OpResult{false, "graph.add_edge", json::object(), {"SCHEMA_VIOLATION", "edge requires from,to,capability,enabled"}};
}
CapabilityEdge e;
e.from = ej["from"].get<std::string>();
e.to = ej["to"].get<std::string>();
e.capability = ej["capability"].get<std::string>();
e.enabled = ej["enabled"].get<bool>();
if (ej.contains("cost")) e.cost = ej["cost"].get<double>();
if (ej.contains("constraints")) e.constraints = Constraints::from_json(ej["constraints"]);
// allowed_states parsing can reuse parse_allowed_states from replay section if you move it into a helper.
k.graph.add_edge(std::move(e));
return OpResult{true, "graph.add_edge", json{{"edge", ej}}, {"",""}};
}
Result<OpResult> CommandExecutor::op_event_emit(Kernel& k, const json& opj) {
if (!opj.contains("type") || !opj["type"].is_string()) {
return OpResult{false, "event.emit", json::object(), {"SCHEMA_VIOLATION", "`type` must be string"}};
}
auto t = opj["type"].get<std::string>();
auto tr = k.state.apply_event(t);
// Publish to internal bus if you want
json payload = opj.contains("payload") ? opj["payload"] : json::object();
k.bus.publish(Event{t, payload.dump()});
return OpResult{true, "event.emit",
json{{"type", t}, {"from", to_string(tr.from)}, {"to", to_string(tr.to)}, {"reason", tr.reason}},
{"",""}};
}
} // namespace me
This gives you atomic staging + deterministic execution. Now we need the single collapse ledger entry.
Section 8.6 — Single Ledger Commit (Command Collapse)
After execution finishes (success or failure), we write ONE entry:
kind = "command_commit"
Payload includes:
- command id
- atomic flag
- ops (inputs)
- results
- final snapshot hash / minimal snapshot (optional)
Code Space 8.6.1 — src/command/commit.hpp
#pragma once
#include "json.hpp"
#include "daemon/server.hpp"
#include "command/executor.hpp"
#include "state/state.hpp"
namespace me {
using json = nlohmann::json;
inline json command_commit_payload(const json& cmd, const CommandResult& r, const Kernel& k) {
json results = json::array();
for (const auto& rr : r.results) {
if (rr.ok) results.push_back({{"ok", true}, {"op", rr.op}, {"data", rr.data}});
else results.push_back({{"ok", false}, {"op", rr.op}, {"error", {{"code", rr.error.code}, {"message", rr.error.message}}}});
}
// Minimal final snapshot (Phase 1)
json snapshot = {
{"state", to_string(k.state.current())},
{"context", k.live_context.to_json()}
};
return json{
{"id", r.id},
{"atomic", r.atomic},
{"ops", cmd.at("ops")},
{"results", results},
{"applied", r.applied},
{"final", snapshot}
};
}
} // namespace me
Section 8.7 — /command Route (One Endpoint, One Collapse)
Code Space 8.7.1 — Patch src/daemon/routes.cpp
Add includes:
#include "command/executor.hpp"
#include "command/commit.hpp"
Add route:
app.Post("/command", [&](const httplib::Request& req, httplib::Response& res) {
nlohmann::json cmd;
try { cmd = nlohmann::json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
me::CommandExecutor exec;
auto cr = exec.execute(k, cmd);
if (std::holds_alternative<me::Error>(cr)) {
auto e = std::get<me::Error>(cr);
// Single collapse even for invalid commands? Phase 1: no. We reject without ledger write.
return respond_err(res, e, 400);
}
auto r = std::get<me::CommandResult>(cr);
// Single collapse commit (always)
auto payload = me::command_commit_payload(cmd, r, k);
auto le = k.ledger.append("command_commit", payload.dump());
if (std::holds_alternative<me::Error>(le)) {
auto e = std::get<me::Error>(le);
return respond_err(res, e, 500);
}
// Respond
nlohmann::json out{
{"id", r.id},
{"atomic", r.atomic},
{"applied", r.applied}
};
// include per-op results for transparency
nlohmann::json results = nlohmann::json::array();
for (const auto& rr : r.results) {
if (rr.ok) results.push_back({{"ok", true}, {"op", rr.op}, {"data", rr.data}});
else results.push_back({{"ok", false}, {"op", rr.op}, {"error", {{"code", rr.error.code}, {"message", rr.error.message}}}});
}
out["results"] = results;
respond_ok(res, out);
});
Now one command = one ledger collapse with full provenance.
Section 8.8 — Replay Support for command_commit
Your replayer can treat command_commit as:
- either ignored (since it’s derived)
- or replayed by applying ops again (preferred long-term)
For Phase 1, simplest is: ignore command_commit and rely on the already-recorded graph_update/context_update/event entries.
BUT since we just made command commits be single entries, we now have a choice:
Phase 1 decision (recommended):
Make /command write ONLY command_commit and stop writing individual graph_update/context_update/event entries inside that command.
That gives us true single-collapse semantics.
So we update the Replayer to support command_commit by applying ops in order.
Section 8.9 — Replayer Upgrade for command_commit (Atomic Time Travel)
Code Space 8.9.1 — src/replay/replayer.cpp (add case)
In apply_entry():
} else if (e.kind == "command_commit") {
// apply ops from payload["ops"] in order
// Phase 1: no rollback needed during replay because commit already represents final decision.
if (!payload.contains("ops") || !payload["ops"].is_array()) {
return Error{"REPLAY_SCHEMA", "command_commit missing ops array"};
}
for (const auto& opj : payload["ops"]) {
// reuse the CommandExecutor op logic or re-implement minimal handlers here.
// Best: factor op application into a shared function used by both executor and replayer.
}
return (void)0;
}
Clean factoring move: extract the op handlers into a shared command/apply.hpp so both executor and replayer use the exact same rules (true determinism).
If you want, I’ll write that factoring in the next section so we don’t bloat this one.
Section 8.10 — Smoke Test: True Single Collapse
curl -X POST localhost:8080/command \
-H "Content-Type: application/json" \
-d '{
"id":"cmd-0001",
"atomic":true,
"ops":[
{"op":"context.set","set":{"office.people_present":16,"human:1.online":true}},
{"op":"graph.upsert_endpoint","endpoint":{
"id":"human:1","label":"Peace","enabled":true,
"constraints":{"requires":[{"key":"human:1.online","value":true}]}
}},
{"op":"graph.add_edge","edge":{
"from":"office:hub","to":"human:1","capability":"assign","enabled":true,"cost":1.0,
"constraints":{"requires":[{"key":"office.people_present","value":16}]}
}},
{"op":"event.emit","type":"ingest","payload":{"source":"external"}}
]
}'
Then:
curl localhost:8080/snapshot
And check ledger:
- you should see one
command_commitentry representing the collapse.
What this unlocks
Now your fabric has:
- transactional intent
- single-collapse provenance
- deterministic replay of multi-step changes
- the actual foundation for MindScript compilation later (MindScript → command ops)
This is the “MindsEye is an OS” moment.
Phase 1 — Section 9
Shared Op Engine · Idempotency · Dry-Run · Deep Validation
What we ship in this section
- Single source of truth for applying ops:
- used by
/commandexecutor - used by
Replayerforcommand_commit- Idempotency keys so re-sending the same command doesn’t double-apply
- Dry-run mode that returns results + snapshot preview without writing to ledger
- Stronger command validation (schema + semantic checks)
This is the “kernel discipline” upgrade.
Section 9.1 — Canonical Op Engine (Shared Apply Layer)
Repo additions
src/command/
apply.hpp
apply.cpp
validate_semantic.hpp
We move the op logic out of CommandExecutor and reuse it in replay.
Section 9.2 — Code Space: Shared Op Apply
Code Space 9.2.1 — src/command/apply.hpp
#pragma once
#include "daemon/server.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
struct OpResult {
bool ok = false;
std::string op;
json data = json::object();
Error error{"", ""};
};
// Applies a single op to the given kernel state.
// No ledger writes happen here. Pure mutation + result.
Result<OpResult> apply_op(Kernel& k, const json& opj);
} // namespace me
Code Space 9.2.2 — src/command/apply.cpp
#include "command/apply.hpp"
#include "graph/constraints.hpp"
#include "core/validate.hpp"
#include "replay/replayer.hpp" // if you want shared parse_allowed_states, better to move helper later.
namespace me {
using json = nlohmann::json;
// minimal helper (you can unify later)
static std::vector<State> parse_allowed_states_local(const json& arr) {
std::vector<State> out;
if (!arr.is_array()) return out;
for (const auto& s : arr) {
if (!s.is_string()) continue;
auto v = s.get<std::string>();
if (v == "PAUSE") out.push_back(State::PAUSE);
else if (v == "STRESS") out.push_back(State::STRESS);
else if (v == "LOOP") out.push_back(State::LOOP);
else if (v == "TRANSMIT") out.push_back(State::TRANSMIT);
else if (v == "COLLAPSE") out.push_back(State::COLLAPSE);
}
return out;
}
static OpResult fail(const std::string& op, const std::string& code, const std::string& msg) {
OpResult r;
r.ok = false;
r.op = op;
r.error = {code, msg};
return r;
}
Result<OpResult> apply_op(Kernel& k, const json& opj) {
if (!opj.is_object() || !opj.contains("op") || !opj["op"].is_string()) {
return Error{"SCHEMA_VIOLATION", "Each op must be object with string field: op"};
}
const std::string op = opj["op"].get<std::string>();
// ---------- context.set ----------
if (op == "context.set") {
if (!opj.contains("set") || !opj["set"].is_object()) {
return fail(op, "SCHEMA_VIOLATION", "`set` must be object");
}
for (auto it = opj["set"].begin(); it != opj["set"].end(); ++it) {
k.live_context.set(it.key(), it.value());
}
OpResult r;
r.ok = true; r.op = op; r.data = json{{"set", opj["set"]}};
return r;
}
// ---------- graph.upsert_endpoint ----------
if (op == "graph.upsert_endpoint") {
if (!opj.contains("endpoint") || !opj["endpoint"].is_object()) {
return fail(op, "SCHEMA_VIOLATION", "`endpoint` must be object");
}
auto epj = opj["endpoint"];
if (!epj.contains("id") || !epj.contains("label") || !epj.contains("enabled")) {
return fail(op, "SCHEMA_VIOLATION", "endpoint requires id,label,enabled");
}
Endpoint ep;
ep.id = epj["id"].get<std::string>();
ep.label = epj["label"].get<std::string>();
ep.enabled = epj["enabled"].get<bool>();
if (epj.contains("constraints")) ep.constraints = Constraints::from_json(epj["constraints"]);
k.graph.upsert_endpoint(std::move(ep));
OpResult r;
r.ok = true; r.op = op; r.data = json{{"endpoint", epj}};
return r;
}
// ---------- graph.add_edge ----------
if (op == "graph.add_edge") {
if (!opj.contains("edge") || !opj["edge"].is_object()) {
return fail(op, "SCHEMA_VIOLATION", "`edge` must be object");
}
auto ej = opj["edge"];
if (!ej.contains("from") || !ej.contains("to") || !ej.contains("capability") || !ej.contains("enabled")) {
return fail(op, "SCHEMA_VIOLATION", "edge requires from,to,capability,enabled");
}
CapabilityEdge e;
e.from = ej["from"].get<std::string>();
e.to = ej["to"].get<std::string>();
e.capability = ej["capability"].get<std::string>();
e.enabled = ej["enabled"].get<bool>();
if (ej.contains("cost")) e.cost = ej["cost"].get<double>();
if (ej.contains("allowed_states")) e.allowed_states = parse_allowed_states_local(ej["allowed_states"]);
if (ej.contains("constraints")) e.constraints = Constraints::from_json(ej["constraints"]);
k.graph.add_edge(std::move(e));
OpResult r;
r.ok = true; r.op = op; r.data = json{{"edge", ej}};
return r;
}
// ---------- event.emit ----------
if (op == "event.emit") {
if (!opj.contains("type") || !opj["type"].is_string()) {
return fail(op, "SCHEMA_VIOLATION", "`type` must be string");
}
auto t = opj["type"].get<std::string>();
auto tr = k.state.apply_event(t);
json payload = opj.contains("payload") ? opj["payload"] : json::object();
k.bus.publish(Event{t, payload.dump()});
OpResult r;
r.ok = true;
r.op = op;
r.data = json{
{"type", t},
{"from", to_string(tr.from)},
{"to", to_string(tr.to)},
{"reason", tr.reason}
};
return r;
}
return fail(op, "SCHEMA_VIOLATION", "Unknown op: " + op);
}
} // namespace me
Now there is exactly one op truth.
Section 9.3 — Command Executor Now Uses Shared Apply Layer
Code Space 9.3.1 — src/command/executor.cpp (trimmed core loop)
Replace op handling with:
#include "command/apply.hpp"
// ...
for (const auto& opj : cmd["ops"]) {
auto ar = me::apply_op(*target, opj);
OpResult rr;
if (std::holds_alternative<Error>(ar)) {
rr.ok = false;
rr.op = opj.value("op", "unknown");
rr.error = std::get<Error>(ar);
} else {
rr = std::get<OpResult>(ar);
}
out.results.push_back(rr);
if (rr.ok) out.applied++;
if (out.atomic && !rr.ok) {
out.ok = false;
return out;
}
}
Now replay can use the same apply_op() too.
Section 9.4 — Replayer Applies command_commit Using Same Engine
Code Space 9.4.1 — src/replay/replayer.cpp (command_commit support)
Add include:
#include "command/apply.hpp"
Inside apply_entry():
else if (e.kind == "command_commit") {
if (!payload.contains("ops") || !payload["ops"].is_array()) {
return Error{"REPLAY_SCHEMA", "command_commit missing ops array"};
}
for (const auto& opj : payload["ops"]) {
auto ar = me::apply_op(k, opj);
if (std::holds_alternative<Error>(ar)) return std::get<Error>(ar);
auto rr = std::get<me::OpResult>(ar);
if (!rr.ok) return Error{"REPLAY_OP_FAIL", rr.error.code + ": " + rr.error.message};
}
return (void)0;
}
Now executor and replayer are literally executing the same mutation logic.
Section 9.5 — Idempotency (No Double Collapse)
We add a simple “already applied” cache backed by the ledger.
Rule
If command.id has already been committed, return the previous result and do not apply again.
For Phase 1 we’ll do this by:
- scanning ledger for last
command_commitwith matchingid - if found → return it (fast enough for Phase 1)
- later we’ll index it.
Code Space 9.5.1 — src/command/idempotency.hpp
#pragma once
#include "ledger/ledger.hpp"
#include "core/types.hpp"
#include "json.hpp"
#include <optional>
namespace me {
using json = nlohmann::json;
inline std::optional<json> find_command_commit_by_id(const Ledger& ledger, const std::string& id) {
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) return std::nullopt;
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
// scan from end for speed
for (auto it = entries.rbegin(); it != entries.rend(); ++it) {
if (it->kind != "command_commit") continue;
try {
auto payload = json::parse(it->payload_json);
if (payload.contains("id") && payload["id"].is_string() && payload["id"].get<std::string>() == id) {
return payload;
}
} catch (...) {
continue;
}
}
return std::nullopt;
}
} // namespace me
Section 9.6 — Dry Run Mode (Plan Without Collapse)
Request
{
"id": "cmd-0002",
"atomic": true,
"dry_run": true,
"ops": [...]
}
Behavior
- execute against staging kernel
- return results + final snapshot
- do not write to ledger
- do not mutate live kernel
Section 9.7 — /command Route Upgrade (Idempotency + Dry Run)
Code Space 9.7.1 — Patch src/daemon/routes.cpp
Add includes:
#include "command/idempotency.hpp"
#include "graph/serialize.hpp"
Inside /command handler:
bool dry_run = cmd.value("dry_run", false);
// 1) Idempotency check (only if not dry-run)
if (!dry_run) {
auto prev = me::find_command_commit_by_id(k.ledger, cmd["id"].get<std::string>());
if (prev.has_value()) {
// Return previously committed collapse (deterministic)
respond_ok(res, json{{"id", (*prev)["id"]}, {"replayed", true}, {"commit", *prev}});
return;
}
}
// 2) Execute
me::CommandExecutor exec;
auto cr = exec.execute(k, cmd); // if dry_run we execute on staging (next code block)
We tweak executor usage for dry-run:
- for dry-run, execute against staging and do not swap into live
- simplest: add
execute_preview()or pass a flag
Code Space 9.7.2 — Minimal dry-run handling (no executor refactor)
me::CommandExecutor exec;
if (dry_run) {
// staging run always
me::Kernel staging = me::clone_kernel(k);
auto cr = exec.execute(staging, cmd);
if (std::holds_alternative<me::Error>(cr)) return respond_err(res, std::get<me::Error>(cr), 400);
auto r = std::get<me::CommandResult>(cr);
// build preview snapshot
json eps = json::array();
for (const auto& [id, ep] : staging.graph.endpoints()) eps.push_back(me::endpoint_to_json(ep));
json eds = json::array();
for (const auto& e : staging.graph.edges()) eds.push_back(me::edge_to_json(e));
json preview = {
{"state", me::to_string(staging.state.current())},
{"context", staging.live_context.to_json()},
{"graph", {{"endpoints", eps}, {"edges", eds}}}
};
respond_ok(res, json{
{"id", r.id},
{"atomic", r.atomic},
{"dry_run", true},
{"applied", r.applied},
{"results", /* build results array like before */ json::array()},
{"preview", preview}
});
return;
}
// normal commit path continues...
Then normal commit path:
- execute (atomic may stage inside executor)
- write single
command_commit - return results
Section 9.8 — Semantic Validation (Deep Checks)
Schema validation is not enough. We add checks like:
-
graph.add_edgemust reference endpoints that exist (ifatomic=true, enforce in staging after ops) - costs must be non-negative
- allowed_states must be valid strings
- prevent nonsense keys (optional)
Code Space 9.8.1 — src/command/validate_semantic.hpp (Phase 1 minimum)
#pragma once
#include "daemon/server.hpp"
#include "json.hpp"
#include "core/types.hpp"
namespace me {
using json = nlohmann::json;
inline Result<void> validate_semantic_post_apply(const Kernel& k, const json& cmd) {
// Example: ensure all edges reference existing endpoints
for (const auto& e : k.graph.edges()) {
if (!k.graph.has_endpoint(e.from) || !k.graph.has_endpoint(e.to)) {
return Error{"GRAPH_INVALID", "Edge references missing endpoint: " + e.from + " -> " + e.to};
}
if (e.cost < 0.0) return Error{"GRAPH_INVALID", "Edge cost must be non-negative"};
}
return (void)0;
}
} // namespace me
Then in executor, after ops applied (staging or live), call:
auto sem = validate_semantic_post_apply(*target, cmd);
if (std::holds_alternative<Error>(sem)) {
// atomic: rollback; non-atomic: report failure but keep changes (Phase 1 choice)
}
For Phase 1, keep it simple:
- atomic: semantic failure → rollback
- non-atomic: semantic failure → return error (but state already changed; the ledger will show it)
Section 9.9 — What This Achieves (Straight talk)
At this point:
- there is one op engine (no divergent behavior between replay/execution)
- command ids are safe to retry
- you can dry-run orchestration before committing
- semantic validation prevents “graph garbage”
This is basically your “MindsEye kernel syscall layer.”
Phase 1 — Section 10
MindScript v0 · AST · Compiler to Ops · Static Checks · Dry-Run → Commit
What we ship in this section
- MindScript v0 format (JSON-first for Phase 1 determinism)
- A small AST schema (so it can evolve cleanly)
- A compiler: MindScript → command ops
- Static checks (before touching the kernel)
- New endpoint:
POST /mindscript/run
- compiles → dry-run (optional) → commit
Phase 1: keep it “boring and correct.” The spicy syntax (real language) comes after the kernel is bulletproof.
Section 10.1 — MindScript v0 (Canonical Input)
10.1.1 Script Format
{
"id": "ms-0001",
"mode": "dry_run | commit",
"program": [
{"set": {"office.people_present": 16}},
{"endpoint": {"id":"human:1","label":"Peace","enabled":true}},
{"edge": {"from":"office:hub","to":"human:1","capability":"assign","enabled":true,"cost":1.0}},
{"emit": {"type":"ingest","payload":{"source":"external"}}},
{"hunt": {"kind":"capability", "start":"office:hub", "capability":"assign", "budget":{"max_cost":10}}}
]
}
10.1.2 Design rules
-
programis ordered, deterministic - each statement is a single-key object (so parsing is easy and strict)
-
hunts can be included:
- in dry-run: produce results but don’t mutate
- in commit: still can run, but the collapse is from the command commit
Section 10.2 — AST Types (Internal)
We parse each statement into a tagged AST node, then compile.
Code Space 10.2.1 — src/mindscript/ast.hpp
#pragma once
#include <string>
#include <variant>
#include <vector>
#include "json.hpp"
#include "hunts/budget.hpp"
namespace me {
using json = nlohmann::json;
struct MS_SetContext { json set; };
struct MS_Endpoint { json endpoint; };
struct MS_Edge { json edge; };
struct MS_EmitEvent { std::string type; json payload; };
struct MS_Hunt {
std::string kind; // "path" | "capability" | "reachability"
json spec; // raw hunt spec for now (Phase 1)
};
using MS_Node = std::variant<MS_SetContext, MS_Endpoint, MS_Edge, MS_EmitEvent, MS_Hunt>;
struct MS_Script {
std::string id;
bool dry_run = true;
std::vector<MS_Node> program;
};
} // namespace me
Section 10.3 — Parser (Strict, No Guessing)
Repo additions
src/mindscript/
parse.hpp
parse.cpp
Code Space 10.3.1 — src/mindscript/parse.hpp
#pragma once
#include "mindscript/ast.hpp"
#include "core/types.hpp"
namespace me {
Result<MS_Script> parse_mindscript_json(const nlohmann::json& j);
} // namespace me
Code Space 10.3.2 — src/mindscript/parse.cpp
#include "mindscript/parse.hpp"
#include "core/validate.hpp"
namespace me {
using json = nlohmann::json;
static bool is_single_key_object(const json& j) {
return j.is_object() && j.size() == 1;
}
Result<MS_Script> parse_mindscript_json(const json& j) {
if (!j.is_object()) return Error{"SCHEMA_VIOLATION", "MindScript must be a JSON object"};
if (!j.contains("id") || !j["id"].is_string()) return Error{"SCHEMA_VIOLATION", "`id` must be string"};
if (!j.contains("mode") || !j["mode"].is_string()) return Error{"SCHEMA_VIOLATION", "`mode` must be string"};
if (!j.contains("program") || !j["program"].is_array()) return Error{"SCHEMA_VIOLATION", "`program` must be array"};
MS_Script s;
s.id = j["id"].get<std::string>();
auto mode = j["mode"].get<std::string>();
if (mode == "dry_run") s.dry_run = true;
else if (mode == "commit") s.dry_run = false;
else return Error{"SCHEMA_VIOLATION", "`mode` must be dry_run or commit"};
for (const auto& stmt : j["program"]) {
if (!is_single_key_object(stmt)) return Error{"SCHEMA_VIOLATION", "Each program statement must be single-key object"};
const auto key = stmt.begin().key();
const auto val = stmt.begin().value();
if (key == "set") {
if (!val.is_object()) return Error{"SCHEMA_VIOLATION", "`set` must be object"};
s.program.push_back(MS_SetContext{val});
continue;
}
if (key == "endpoint") {
if (!val.is_object()) return Error{"SCHEMA_VIOLATION", "`endpoint` must be object"};
s.program.push_back(MS_Endpoint{val});
continue;
}
if (key == "edge") {
if (!val.is_object()) return Error{"SCHEMA_VIOLATION", "`edge` must be object"};
s.program.push_back(MS_Edge{val});
continue;
}
if (key == "emit") {
if (!val.is_object() || !val.contains("type") || !val["type"].is_string())
return Error{"SCHEMA_VIOLATION", "`emit` must contain string field: type"};
MS_EmitEvent ev;
ev.type = val["type"].get<std::string>();
ev.payload = val.contains("payload") ? val["payload"] : json::object();
s.program.push_back(ev);
continue;
}
if (key == "hunt") {
if (!val.is_object() || !val.contains("kind") || !val["kind"].is_string())
return Error{"SCHEMA_VIOLATION", "`hunt` must contain string field: kind"};
MS_Hunt h;
h.kind = val["kind"].get<std::string>();
h.spec = val;
s.program.push_back(h);
continue;
}
return Error{"SCHEMA_VIOLATION", "Unknown MindScript statement: " + key};
}
return s;
}
} // namespace me
Section 10.4 — Compiler: MindScript → Command Ops
We compile only mutating statements into /command ops:
- set →
context.set - endpoint →
graph.upsert_endpoint - edge →
graph.add_edge - emit →
event.emit
Hunts are executed after dry-run/commit as read-only queries (Phase 1), or skipped in commit if you prefer.
Repo additions
src/mindscript/
compile.hpp
compile.cpp
Code Space 10.4.1 — src/mindscript/compile.hpp
#pragma once
#include "mindscript/ast.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
struct Compiled {
nlohmann::json command; // /command payload
std::vector<MS_Hunt> hunts; // deferred hunts to run after
};
Result<Compiled> compile_to_command(const MS_Script& s);
} // namespace me
Code Space 10.4.2 — src/mindscript/compile.cpp
#include "mindscript/compile.hpp"
namespace me {
using json = nlohmann::json;
Result<Compiled> compile_to_command(const MS_Script& s) {
Compiled out;
json ops = json::array();
std::vector<MS_Hunt> hunts;
for (const auto& node : s.program) {
if (std::holds_alternative<MS_SetContext>(node)) {
const auto& n = std::get<MS_SetContext>(node);
ops.push_back(json{{"op","context.set"},{"set", n.set}});
} else if (std::holds_alternative<MS_Endpoint>(node)) {
const auto& n = std::get<MS_Endpoint>(node);
ops.push_back(json{{"op","graph.upsert_endpoint"},{"endpoint", n.endpoint}});
} else if (std::holds_alternative<MS_Edge>(node)) {
const auto& n = std::get<MS_Edge>(node);
ops.push_back(json{{"op","graph.add_edge"},{"edge", n.edge}});
} else if (std::holds_alternative<MS_EmitEvent>(node)) {
const auto& n = std::get<MS_EmitEvent>(node);
ops.push_back(json{{"op","event.emit"},{"type", n.type},{"payload", n.payload}});
} else if (std::holds_alternative<MS_Hunt>(node)) {
hunts.push_back(std::get<MS_Hunt>(node));
}
}
if (ops.empty()) {
// allowed if script is only hunts (read-only), but then we won't call /command
out.command = json::object();
out.hunts = std::move(hunts);
return out;
}
out.command = json{
{"id", s.id},
{"atomic", true},
{"dry_run", s.dry_run},
{"ops", ops}
};
out.hunts = std::move(hunts);
return out;
}
} // namespace me
Section 10.5 — Static Checks (Before Execution)
We add a minimal static checker. Phase 1 checks:
- endpoint ids are strings
- edge refs are strings
- no negative costs
- emit type is string
- hunt specs include required fields
Code Space 10.5.1 — src/mindscript/check.hpp
#pragma once
#include "mindscript/ast.hpp"
#include "core/types.hpp"
namespace me {
Result<void> static_check(const MS_Script& s);
}
Code Space 10.5.2 — src/mindscript/check.cpp
#include "mindscript/check.hpp"
namespace me {
static Result<void> check_endpoint_json(const nlohmann::json& ep) {
if (!ep.contains("id") || !ep["id"].is_string()) return Error{"SCHEMA_VIOLATION","endpoint.id must be string"};
if (!ep.contains("label") || !ep["label"].is_string()) return Error{"SCHEMA_VIOLATION","endpoint.label must be string"};
if (!ep.contains("enabled") || !ep["enabled"].is_boolean()) return Error{"SCHEMA_VIOLATION","endpoint.enabled must be boolean"};
return (void)0;
}
static Result<void> check_edge_json(const nlohmann::json& e) {
if (!e.contains("from") || !e["from"].is_string()) return Error{"SCHEMA_VIOLATION","edge.from must be string"};
if (!e.contains("to") || !e["to"].is_string()) return Error{"SCHEMA_VIOLATION","edge.to must be string"};
if (!e.contains("capability") || !e["capability"].is_string()) return Error{"SCHEMA_VIOLATION","edge.capability must be string"};
if (!e.contains("enabled") || !e["enabled"].is_boolean()) return Error{"SCHEMA_VIOLATION","edge.enabled must be boolean"};
if (e.contains("cost")) {
if (!e["cost"].is_number()) return Error{"SCHEMA_VIOLATION","edge.cost must be number"};
if (e["cost"].get<double>() < 0.0) return Error{"SCHEMA_VIOLATION","edge.cost must be non-negative"};
}
return (void)0;
}
Result<void> static_check(const MS_Script& s) {
for (const auto& n : s.program) {
if (std::holds_alternative<MS_Endpoint>(n)) {
auto r = check_endpoint_json(std::get<MS_Endpoint>(n).endpoint);
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
}
if (std::holds_alternative<MS_Edge>(n)) {
auto r = check_edge_json(std::get<MS_Edge>(n).edge);
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
}
}
return (void)0;
}
} // namespace me
Section 10.6 — New Endpoint: /mindscript/run
Behavior
- Parse MindScript JSON
- Static check
- Compile → command ops
- If compiled ops exist:
-
execute via same
/commandengine- Then run hunts as read-only against:
staging state if dry-run
-
live state if commit
- Return:
compilation output
command result (if any)
hunt results (if any)
Code Space 10.6.1 — src/daemon/routes.cpp (add)
#include "mindscript/parse.hpp"
#include "mindscript/check.hpp"
#include "mindscript/compile.hpp"
#include "command/executor.hpp"
#include "command/commit.hpp"
app.Post("/mindscript/run", [&](const httplib::Request& req, httplib::Response& res) {
nlohmann::json j;
try { j = nlohmann::json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
auto ps = me::parse_mindscript_json(j);
if (std::holds_alternative<me::Error>(ps)) return respond_err(res, std::get<me::Error>(ps), 400);
auto script = std::get<me::MS_Script>(ps);
auto ck = me::static_check(script);
if (std::holds_alternative<me::Error>(ck)) return respond_err(res, std::get<me::Error>(ck), 400);
auto cc = me::compile_to_command(script);
if (std::holds_alternative<me::Error>(cc)) return respond_err(res, std::get<me::Error>(cc), 400);
auto compiled = std::get<me::Compiled>(cc);
nlohmann::json out;
out["mindscript_id"] = script.id;
out["mode"] = script.dry_run ? "dry_run" : "commit";
// If no ops, only hunts
if (compiled.command.is_null() || compiled.command.empty()) {
out["command"] = nullptr;
out["hunts"] = nlohmann::json::array();
respond_ok(res, out);
return;
}
// Decide which kernel view hunts should run against
// Dry-run: stage-only
// Commit: live
me::Kernel* hunt_view = &k;
me::Kernel staging = me::clone_kernel(k);
if (script.dry_run) {
me::CommandExecutor exec;
auto r = exec.execute(staging, compiled.command);
if (std::holds_alternative<me::Error>(r)) return respond_err(res, std::get<me::Error>(r), 400);
out["command_result"] = {{"dry_run", true}};
hunt_view = &staging;
} else {
// Reuse /command behavior: execute + single collapse commit
me::CommandExecutor exec;
auto r = exec.execute(k, compiled.command);
if (std::holds_alternative<me::Error>(r)) return respond_err(res, std::get<me::Error>(r), 400);
auto cr = std::get<me::CommandResult>(r);
auto payload = me::command_commit_payload(compiled.command, cr, k);
auto le = k.ledger.append("command_commit", payload.dump());
if (std::holds_alternative<me::Error>(le)) return respond_err(res, std::get<me::Error>(le), 500);
out["command_result"] = {{"dry_run", false}, {"committed", true}, {"id", cr.id}, {"applied", cr.applied}};
hunt_view = &k;
}
// Phase 1: Hunts optional — you can wire in capability/path hunts here.
// We'll return compiled hunts as-is for now, and hook execution next section.
nlohmann::json hunts = nlohmann::json::array();
for (const auto& h : compiled.hunts) hunts.push_back(h.spec);
out["hunts"] = hunts;
respond_ok(res, out);
});
Important note: I left hunt execution as a “next section” hook so we don’t explode this page with routing/planning glue. The compiler bridge is the real win of Section 10.
Section 10.7 — Example MindScript Run
curl -X POST localhost:8080/mindscript/run \
-H "Content-Type: application/json" \
-d '{
"id":"ms-0001",
"mode":"commit",
"program":[
{"set":{"office.people_present":16,"human:1.online":true}},
{"endpoint":{"id":"human:1","label":"Peace","enabled":true}},
{"edge":{"from":"office:hub","to":"human:1","capability":"assign","enabled":true,"cost":1.0}},
{"emit":{"type":"ingest","payload":{"source":"external"}}}
]
}'
That’s MindScript v0 controlling the fabric.
What we unlocked
You now have:
- a DSL surface (MindScript v0)
- a parser + AST
- a compiler to ops (kernel syscall layer)
- static checks (pre-flight)
- dry-run vs commit semantics
This is the first brick of “MindScript is the orchestration language born from MindsEye.”
Phase 1 — Section 11
Hunt Execution · Assertions · Deterministic Branching · Fail-Fast Scripts
What we ship in this section
- MindScript can execute hunts and capture results into a runtime “vars” map
- MindScript supports assert statements (prove invariants or crash the script)
- MindScript supports if statements based on:
- current state
- context values
-
hunt results (vars)
-
/mindscript/runreturns:
-
command result (dry-run/commit)
hunt results
assert outcomes
final snapshot (view-specific)
Everything remains deterministic and replayable.
Section 11.1 — MindScript v0.1 Extensions
11.1.1 hunt statement (now executable)
{"hunt": {
"as": "reachable_assign",
"kind": "capability",
"start": "office:hub",
"capability": "assign",
"budget": {"max_cost": 10},
"context_override": {"office.people_present": 16}
}}
11.1.2 assert statement
{"assert": {
"that": {"var.exists": "reachable_assign.best"},
"message": "No assignable endpoint reachable"
}}
11.1.3 if statement
{"if": {
"when": {"var.exists": "reachable_assign.best"},
"then": [
{"emit": {"type":"commit", "payload":{"note":"assignment possible"}}}
],
"else": [
{"emit": {"type":"stress", "payload":{"note":"assignment blocked"}}}
]
}}
Section 11.2 — Runtime Model (Vars + Trace)
We add a runtime context for script execution:
-
vars: map of JSON values produced by hunts, etc. -
trace: executed statements + results (debuggable) -
halted: true when an assert fails (fail-fast)
Repo additions
src/mindscript/
runtime.hpp
runtime.cpp
Code Space 11.2.1 — src/mindscript/runtime.hpp
#pragma once
#include <string>
#include <unordered_map>
#include "json.hpp"
namespace me {
using json = nlohmann::json;
struct MS_Runtime {
std::unordered_map<std::string, json> vars; // "name" -> JSON
json trace = json::array();
bool halted = false;
void set_var(const std::string& name, const json& value) { vars[name] = value; }
const json* get_var(const std::string& name) const {
auto it = vars.find(name);
if (it == vars.end()) return nullptr;
return &it->second;
}
};
} // namespace me
Section 11.3 — Condition Evaluator (Deterministic)
We support minimal conditions first:
{"state.is":"LOOP"}{"context.eq":{"key":"office.people_present","value":16}}{"var.exists":"reachable_assign.best"}{"var.eq":{"path":"reachable_assign.best.cost","value":1.0}}
Code Space 11.3.1 — src/mindscript/cond.hpp
#pragma once
#include "daemon/server.hpp"
#include "mindscript/runtime.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
Result<bool> eval_condition(
const json& cond,
const Kernel& k,
const MS_Runtime& rt
);
} // namespace me
Code Space 11.3.2 — src/mindscript/cond.cpp
#include "mindscript/cond.hpp"
namespace me {
using json = nlohmann::json;
static const json* get_json_path(const json& root, const std::string& dotted) {
const json* cur = &root;
size_t start = 0;
while (start < dotted.size()) {
auto dot = dotted.find('.', start);
auto part = dotted.substr(start, dot == std::string::npos ? std::string::npos : dot - start);
if (!cur->is_object() || !cur->contains(part)) return nullptr;
cur = &((*cur)[part]);
if (dot == std::string::npos) break;
start = dot + 1;
}
return cur;
}
Result<bool> eval_condition(const json& cond, const Kernel& k, const MS_Runtime& rt) {
if (!cond.is_object() || cond.size() != 1) {
return Error{"SCHEMA_VIOLATION", "Condition must be single-key object"};
}
auto key = cond.begin().key();
auto val = cond.begin().value();
if (key == "state.is") {
if (!val.is_string()) return Error{"SCHEMA_VIOLATION", "state.is must be string"};
return to_string(k.state.current()) == val.get<std::string>();
}
if (key == "context.eq") {
if (!val.is_object() || !val.contains("key") || !val.contains("value")) {
return Error{"SCHEMA_VIOLATION", "context.eq requires {key,value}"};
}
if (!val["key"].is_string()) return Error{"SCHEMA_VIOLATION", "context.eq.key must be string"};
auto ck = val["key"].get<std::string>();
auto* got = k.live_context.get(ck);
if (!got) return false;
return *got == val["value"];
}
if (key == "var.exists") {
if (!val.is_string()) return Error{"SCHEMA_VIOLATION", "var.exists must be string"};
// supports dotted access: "huntvar.best.cost"
auto path = val.get<std::string>();
auto dot = path.find('.');
if (dot == std::string::npos) return rt.get_var(path) != nullptr;
auto root_name = path.substr(0, dot);
auto rest = path.substr(dot + 1);
const json* root = rt.get_var(root_name);
if (!root) return false;
return get_json_path(*root, rest) != nullptr;
}
if (key == "var.eq") {
if (!val.is_object() || !val.contains("path") || !val.contains("value")) {
return Error{"SCHEMA_VIOLATION", "var.eq requires {path,value}"};
}
if (!val["path"].is_string()) return Error{"SCHEMA_VIOLATION", "var.eq.path must be string"};
auto path = val["path"].get<std::string>();
auto dot = path.find('.');
if (dot == std::string::npos) return false;
auto root_name = path.substr(0, dot);
auto rest = path.substr(dot + 1);
const json* root = rt.get_var(root_name);
if (!root) return false;
const json* got = get_json_path(*root, rest);
if (!got) return false;
return *got == val["value"];
}
return Error{"SCHEMA_VIOLATION", "Unknown condition operator: " + key};
}
} // namespace me
Section 11.4 — Hunt Execution Engine (Runs Against a Kernel View)
We execute hunts directly in C++ using the existing hunt implementations:
- reachability
- path
- capability
Repo additions
src/mindscript/
exec_hunt.hpp
exec_hunt.cpp
Code Space 11.4.1 — src/mindscript/exec_hunt.hpp
#pragma once
#include "daemon/server.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
Result<json> run_hunt(const Kernel& k, const json& hunt_spec);
} // namespace me
Code Space 11.4.2 — src/mindscript/exec_hunt.cpp
#include "mindscript/exec_hunt.hpp"
#include "hunts/budget.hpp"
namespace me {
using json = nlohmann::json;
static Budget parse_budget(const json& j) {
Budget b;
if (!j.is_object()) return b;
if (j.contains("max_cost")) b.max_cost = j["max_cost"].get<double>();
if (j.contains("max_steps")) b.max_steps = j["max_steps"].get<u64>();
if (j.contains("max_time_ms")) b.max_time_ms = j["max_time_ms"].get<u64>();
return b;
}
Result<json> run_hunt(const Kernel& k, const json& spec) {
// spec includes: kind, start, ...
if (!spec.contains("kind") || !spec["kind"].is_string()) return Error{"SCHEMA_VIOLATION","hunt.kind must be string"};
auto kind = spec["kind"].get<std::string>();
// Build context: live + override
Context ctx = k.live_context;
if (spec.contains("context_override")) {
if (!spec["context_override"].is_object()) return Error{"SCHEMA_VIOLATION","context_override must be object"};
ctx.merge_over(Context::from_json_object(spec["context_override"]));
}
auto st = k.state.current();
if (kind == "reachability") {
if (!spec.contains("start") || !spec["start"].is_string()) return Error{"SCHEMA_VIOLATION","reachability.start must be string"};
auto r = k.hunts.reachability_hunt(k.graph, spec["start"].get<std::string>(), st, ctx);
return json{
{"kind","reachability"},
{"start", r.start},
{"state", r.state},
{"context", r.context_snapshot},
{"reachable", r.reachable}
};
}
if (kind == "path") {
if (!spec.contains("start") || !spec.contains("target")) return Error{"SCHEMA_VIOLATION","path requires start,target"};
if (!spec["start"].is_string() || !spec["target"].is_string()) return Error{"SCHEMA_VIOLATION","start/target must be strings"};
Budget b = spec.contains("budget") ? parse_budget(spec["budget"]) : Budget{};
auto pr = k.planner.plan_path(k.graph,
spec["start"].get<std::string>(),
spec["target"].get<std::string>(),
st, ctx, b
);
return json{
{"kind","path"},
{"found", pr.found},
{"state", pr.state},
{"context", pr.context_snapshot},
{"plan", {{"path", pr.plan.path}, {"total_cost", pr.plan.total_cost}}},
{"debug", pr.debug}
};
}
if (kind == "capability") {
if (!spec.contains("start") || !spec.contains("capability")) return Error{"SCHEMA_VIOLATION","capability hunt requires start,capability"};
if (!spec["start"].is_string() || !spec["capability"].is_string()) return Error{"SCHEMA_VIOLATION","start/capability must be strings"};
Budget b = spec.contains("budget") ? parse_budget(spec["budget"]) : Budget{};
auto r = k.cap_hunt.find_best(k.graph,
spec["start"].get<std::string>(),
spec["capability"].get<std::string>(),
st, ctx, b
);
json best = json::null();
if (r.found) {
best = json{
{"path", r.best_plan.path},
{"total_cost", r.best_plan.total_cost}
};
}
return json{
{"kind","capability"},
{"capability", r.capability},
{"found", r.found},
{"state", r.state},
{"context", r.context_snapshot},
{"best", best},
{"debug", r.debug}
};
}
return Error{"SCHEMA_VIOLATION","Unknown hunt kind: " + kind};
}
} // namespace me
Now hunts can run inside MindScript and store results into vars.
Section 11.5 — MindScript Executor (Runs Program, Handles If/Assert)
Repo additions
src/mindscript/
exec.hpp
exec.cpp
Code Space 11.5.1 — src/mindscript/exec.hpp
#pragma once
#include "daemon/server.hpp"
#include "mindscript/ast.hpp"
#include "mindscript/runtime.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
struct MS_ExecResult {
bool ok = true;
bool dry_run = true;
nlohmann::json command_result = nlohmann::json::null();
nlohmann::json hunt_results = nlohmann::json::array();
MS_Runtime runtime;
nlohmann::json final_snapshot = nlohmann::json::object();
};
Result<MS_ExecResult> execute_mindscript(Kernel& live, const MS_Script& s);
} // namespace me
Code Space 11.5.2 — src/mindscript/exec.cpp
#include "mindscript/exec.hpp"
#include "mindscript/compile.hpp"
#include "mindscript/cond.hpp"
#include "mindscript/exec_hunt.hpp"
#include "command/executor.hpp"
#include "command/commit.hpp"
#include "graph/serialize.hpp"
namespace me {
using json = nlohmann::json;
static json snapshot_view(const Kernel& k) {
json eps = json::array();
for (const auto& [id, ep] : k.graph.endpoints()) eps.push_back(endpoint_to_json(ep));
json eds = json::array();
for (const auto& e : k.graph.edges()) eds.push_back(edge_to_json(e));
return json{
{"state", to_string(k.state.current())},
{"context", k.live_context.to_json()},
{"graph", {{"endpoints", eps}, {"edges", eds}}}
};
}
static Result<void> exec_block(
Kernel& view,
const std::vector<MS_Node>& block,
MS_Runtime& rt,
json& hunt_results
);
static Result<void> exec_if(
Kernel& view,
const json& ifspec,
MS_Runtime& rt,
json& hunt_results
) {
if (!ifspec.contains("when") || !ifspec.contains("then")) {
return Error{"SCHEMA_VIOLATION", "if requires when + then"};
}
auto ev = eval_condition(ifspec["when"], view, rt);
if (std::holds_alternative<Error>(ev)) return std::get<Error>(ev);
const bool ok = std::get<bool>(ev);
if (ok) {
// parse then block (array of single-key statements)
if (!ifspec["then"].is_array()) return Error{"SCHEMA_VIOLATION","if.then must be array"};
// We execute 'then' by re-parsing into MS_Nodes? Phase 1 shortcut: only allow emit/hunt/assert inside if blocks.
// We'll interpret raw JSON directly here.
for (const auto& stmt : ifspec["then"]) {
if (!stmt.is_object() || stmt.size() != 1) return Error{"SCHEMA_VIOLATION","if.then stmt must be single-key object"};
auto kkey = stmt.begin().key();
auto kval = stmt.begin().value();
if (kkey == "emit") {
// emit is just an op applied to view
json opj{{"op","event.emit"},{"type", kval["type"]},{"payload", kval.value("payload", json::object())}};
auto ar = apply_op(view, opj);
if (std::holds_alternative<Error>(ar)) return std::get<Error>(ar);
auto rr = std::get<OpResult>(ar);
if (!rr.ok) return Error{"MS_OP_FAIL", rr.error.code + ": " + rr.error.message};
rt.trace.push_back(json{{"emit", rr.data}});
} else if (kkey == "hunt") {
// execute hunt and store if "as"
if (!kval.contains("as") || !kval["as"].is_string()) return Error{"SCHEMA_VIOLATION","hunt requires string field: as"};
auto out = run_hunt(view, kval);
if (std::holds_alternative<Error>(out)) return std::get<Error>(out);
auto jr = std::get<json>(out);
rt.set_var(kval["as"].get<std::string>(), jr);
hunt_results.push_back(jr);
rt.trace.push_back(json{{"hunt", jr}});
} else if (kkey == "assert") {
if (!kval.contains("that")) return Error{"SCHEMA_VIOLATION","assert requires that"};
auto ev2 = eval_condition(kval["that"], view, rt);
if (std::holds_alternative<Error>(ev2)) return std::get<Error>(ev2);
if (!std::get<bool>(ev2)) {
rt.halted = true;
auto msg = kval.value("message", "assertion failed");
rt.trace.push_back(json{{"assert", {{"ok", false}, {"message", msg}}}});
return Error{"ASSERT_FAIL", msg};
}
rt.trace.push_back(json{{"assert", {{"ok", true}}}});
} else {
return Error{"SCHEMA_VIOLATION","if blocks currently allow emit/hunt/assert only (Phase 1)"};
}
}
} else {
if (ifspec.contains("else")) {
if (!ifspec["else"].is_array()) return Error{"SCHEMA_VIOLATION","if.else must be array"};
// same rule set as then
for (const auto& stmt : ifspec["else"]) {
if (!stmt.is_object() || stmt.size() != 1) return Error{"SCHEMA_VIOLATION","if.else stmt must be single-key object"};
auto kkey = stmt.begin().key();
auto kval = stmt.begin().value();
if (kkey == "emit") {
json opj{{"op","event.emit"},{"type", kval["type"]},{"payload", kval.value("payload", json::object())}};
auto ar = apply_op(view, opj);
if (std::holds_alternative<Error>(ar)) return std::get<Error>(ar);
auto rr = std::get<OpResult>(ar);
if (!rr.ok) return Error{"MS_OP_FAIL", rr.error.code + ": " + rr.error.message};
rt.trace.push_back(json{{"emit", rr.data}});
} else if (kkey == "hunt") {
if (!kval.contains("as") || !kval["as"].is_string()) return Error{"SCHEMA_VIOLATION","hunt requires string field: as"};
auto out = run_hunt(view, kval);
if (std::holds_alternative<Error>(out)) return std::get<Error>(out);
auto jr = std::get<json>(out);
rt.set_var(kval["as"].get<std::string>(), jr);
hunt_results.push_back(jr);
rt.trace.push_back(json{{"hunt", jr}});
} else if (kkey == "assert") {
if (!kval.contains("that")) return Error{"SCHEMA_VIOLATION","assert requires that"};
auto ev2 = eval_condition(kval["that"], view, rt);
if (std::holds_alternative<Error>(ev2)) return std::get<Error>(ev2);
if (!std::get<bool>(ev2)) {
rt.halted = true;
auto msg = kval.value("message", "assertion failed");
rt.trace.push_back(json{{"assert", {{"ok", false}, {"message", msg}}}});
return Error{"ASSERT_FAIL", msg};
}
rt.trace.push_back(json{{"assert", {{"ok", true}}}});
} else {
return Error{"SCHEMA_VIOLATION","if blocks currently allow emit/hunt/assert only (Phase 1)"};
}
}
}
}
return (void)0;
}
Result<MS_ExecResult> execute_mindscript(Kernel& live, const MS_Script& s) {
MS_ExecResult out;
out.dry_run = s.dry_run;
// Compile to command
auto cc = compile_to_command(s);
if (std::holds_alternative<Error>(cc)) return std::get<Error>(cc);
auto compiled = std::get<Compiled>(cc);
// Build view: staging for dry-run, live for commit
Kernel staging = clone_kernel(live);
Kernel* view = s.dry_run ? &staging : &live;
// 1) Apply compiled command ops (if any)
if (!compiled.command.is_null() && !compiled.command.empty()) {
CommandExecutor exec;
// Important: for commit mode, executor swaps into live via atomic. We pass *view.
auto cr = exec.execute(*view, compiled.command);
if (std::holds_alternative<Error>(cr)) return std::get<Error>(cr);
auto cmdr = std::get<CommandResult>(cr);
out.command_result = json{
{"id", cmdr.id},
{"atomic", cmdr.atomic},
{"applied", cmdr.applied},
{"dry_run", s.dry_run}
};
if (!s.dry_run) {
// Single collapse commit (write ledger)
auto payload = command_commit_payload(compiled.command, cmdr, live);
auto le = live.ledger.append("command_commit", payload.dump());
if (std::holds_alternative<Error>(le)) return std::get<Error>(le);
}
}
// 2) Execute remaining statements that are not compiled ops: hunts/assert/if
// Phase 1: we execute these by scanning original program and interpreting only those types.
json hunt_results = json::array();
for (const auto& node : s.program) {
if (std::holds_alternative<MS_Hunt>(node)) {
const auto& h = std::get<MS_Hunt>(node);
if (!h.spec.contains("as") || !h.spec["as"].is_string())
return Error{"SCHEMA_VIOLATION","hunt requires string field: as"};
auto hr = run_hunt(*view, h.spec);
if (std::holds_alternative<Error>(hr)) return std::get<Error>(hr);
auto jr = std::get<json>(hr);
out.runtime.set_var(h.spec["as"].get<std::string>(), jr);
hunt_results.push_back(jr);
out.runtime.trace.push_back(json{{"hunt", jr}});
continue;
}
// Allow "assert" and "if" in raw JSON form in Phase 1 by reading from original script JSON (next block)
}
out.hunt_results = hunt_results;
out.final_snapshot = snapshot_view(*view);
return out;
}
} // namespace me
What we’ve done: we have the full backbone: apply ops (mutations), then run hunts.
What’s missing: parsing assert and if into AST nodes in Section 10. That’s easy—we add them now.
Section 11.6 — Add assert + if to AST + Parser
We extend the AST with:
MS_Assert { json that; string message; }MS_If { json when; json then; json else; }
Code Space 11.6.1 — src/mindscript/ast.hpp (add)
struct MS_Assert { json that; std::string message; };
struct MS_If { json when; json then_block; json else_block; bool has_else = false; };
// and update MS_Node variant:
using MS_Node = std::variant<MS_SetContext, MS_Endpoint, MS_Edge, MS_EmitEvent, MS_Hunt, MS_Assert, MS_If>;
Code Space 11.6.2 — src/mindscript/parse.cpp (add parsing)
Add cases:
if (key == "assert") {
if (!val.is_object() || !val.contains("that")) return Error{"SCHEMA_VIOLATION","assert requires field: that"};
MS_Assert a;
a.that = val["that"];
a.message = val.value("message", "assertion failed");
s.program.push_back(a);
continue;
}
if (key == "if") {
if (!val.is_object() || !val.contains("when") || !val.contains("then"))
return Error{"SCHEMA_VIOLATION","if requires when + then"};
if (!val["then"].is_array()) return Error{"SCHEMA_VIOLATION","if.then must be array"};
MS_If iff;
iff.when = val["when"];
iff.then_block = val["then"];
iff.has_else = val.contains("else");
if (iff.has_else) {
if (!val["else"].is_array()) return Error{"SCHEMA_VIOLATION","if.else must be array"};
iff.else_block = val["else"];
}
s.program.push_back(iff);
continue;
}
Section 11.7 — Execute assert + if Nodes (Full MindScript)
Update the executor loop in execute_mindscript() to handle these nodes:
Code Space 11.7.1 — Add into the loop
for (const auto& node : s.program) {
if (std::holds_alternative<MS_Assert>(node)) {
const auto& a = std::get<MS_Assert>(node);
auto ev = eval_condition(a.that, *view, out.runtime);
if (std::holds_alternative<Error>(ev)) return std::get<Error>(ev);
if (!std::get<bool>(ev)) {
out.runtime.halted = true;
out.runtime.trace.push_back(json{{"assert", {{"ok", false}, {"message", a.message}}}});
return Error{"ASSERT_FAIL", a.message};
}
out.runtime.trace.push_back(json{{"assert", {{"ok", true}}}});
continue;
}
if (std::holds_alternative<MS_If>(node)) {
const auto& iff = std::get<MS_If>(node);
json ifspec{
{"when", iff.when},
{"then", iff.then_block}
};
if (iff.has_else) ifspec["else"] = iff.else_block;
auto r = exec_if(*view, ifspec, out.runtime, out.hunt_results);
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
out.runtime.trace.push_back(json{{"if", {{"ok", true}}}});
continue;
}
// hunts already handled, emits already compiled into command ops, etc.
}
Now scripts can branch based on real computed data.
Section 11.8 — /mindscript/run Route Returns Full Execution Result
Code Space 11.8.1 — Patch route
Replace the earlier /mindscript/run handler with:
#include "mindscript/exec.hpp"
app.Post("/mindscript/run", [&](const httplib::Request& req, httplib::Response& res) {
nlohmann::json j;
try { j = nlohmann::json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
auto ps = me::parse_mindscript_json(j);
if (std::holds_alternative<me::Error>(ps)) return respond_err(res, std::get<me::Error>(ps), 400);
auto script = std::get<me::MS_Script>(ps);
auto ck = me::static_check(script);
if (std::holds_alternative<me::Error>(ck)) return respond_err(res, std::get<me::Error>(ck), 400);
auto er = me::execute_mindscript(k, script);
if (std::holds_alternative<me::Error>(er)) return respond_err(res, std::get<me::Error>(er), 400);
auto out = std::get<me::MS_ExecResult>(er);
respond_ok(res, nlohmann::json{
{"mindscript_id", script.id},
{"mode", script.dry_run ? "dry_run" : "commit"},
{"command_result", out.command_result},
{"hunts", out.hunt_results},
{"trace", out.runtime.trace},
{"final_snapshot", out.final_snapshot}
});
});
Section 11.9 — Example Script (Real Orchestration)
{
"id":"ms-0011",
"mode":"dry_run",
"program":[
{"set":{"office.people_present":16,"human:1.online":true}},
{"endpoint":{"id":"human:1","label":"Peace","enabled":true,
"constraints":{"requires":[{"key":"human:1.online","value":true}]}
}},
{"edge":{"from":"office:hub","to":"human:1","capability":"assign","enabled":true,"cost":1.0,
"constraints":{"requires":[{"key":"office.people_present","value":16}]}
}},
{"hunt":{"as":"assignable","kind":"capability","start":"office:hub","capability":"assign","budget":{"max_cost":10}}},
{"assert":{"that":{"var.exists":"assignable.best"},"message":"No assignable endpoint reachable"}},
{"if":{
"when":{"var.exists":"assignable.best"},
"then":[{"emit":{"type":"commit","payload":{"note":"assignment possible"}}}],
"else":[{"emit":{"type":"stress","payload":{"note":"assignment blocked"}}}]
}}
]
}
This script:
- constructs context + graph
- hunts for best “assign”
- asserts it exists
- branches deterministically
- emits an event based on the result
No LLM needed. Pure kernel logic.
What we unlocked
You now have a real execution language surface:
- observe (hunts)
- decide (if)
- prove (assert)
- act (emit / command ops) …all on top of deterministic replay.
This is exactly the foundation for your “orchestrator role” concept — the language is the orchestration.
Phase 1 — Section 12
Let Variables · Boolean Logic · Output Contracts · Script Library Ledger
What we ship in this section
-
letstatement: define vars from literals + context + hunt paths - Boolean conditions:
and,or,notcomposition -
returnstatement + output contract validation - Script Library:
POST /mindscript/store-
POST /mindscript/run/{id}(runs stored script) - stored in ledger as
mindscript_storeentries
Everything still deterministic and replayable.
Section 12.1 — MindScript v0.2 Syntax Additions
12.1.1 let
{"let": {
"name": "people",
"value": {"context.get": "office.people_present"}
}}
Or from hunt var:
{"let": {
"name": "best_cost",
"value": {"var.get": "assignable.best.total_cost"}
}}
Or literal:
{"let": {"name":"budget", "value": 10}}
12.1.2 Boolean conditions
{"and":[
{"state.is":"LOOP"},
{"context.eq":{"key":"human:1.online","value":true}},
{"not":{"var.exists":"blocked.reason"}}
]}
12.1.3 return + contract
{
"id":"ms-0012",
"mode":"commit",
"contract":{
"type":"object",
"required":["status","best_path"],
"properties":{
"status":{"type":"string"},
"best_path":{"type":"array"}
}
},
"program":[
...,
{"return":{
"status":"ok",
"best_path":{"var.get":"assignable.best.path"}
}}
]
}
Section 12.2 — Runtime Upgrade (Return Value + Contract)
Code Space 12.2.1 — src/mindscript/runtime.hpp (extend)
struct MS_Runtime {
std::unordered_map<std::string, json> vars;
json trace = json::array();
bool halted = false;
bool has_return = false;
json return_value = json::null();
void set_var(const std::string& name, const json& value) { vars[name] = value; }
const json* get_var(const std::string& name) const { ... }
};
Section 12.3 — Add AST Nodes: let + return
Code Space 12.3.1 — src/mindscript/ast.hpp (add)
struct MS_Let { std::string name; json value_expr; };
struct MS_Return { json value; };
// update variant:
using MS_Node = std::variant<
MS_SetContext, MS_Endpoint, MS_Edge, MS_EmitEvent, MS_Hunt, MS_Assert, MS_If,
MS_Let, MS_Return
>;
Section 12.4 — Parser Updates for let + return
Code Space 12.4.1 — src/mindscript/parse.cpp (add)
if (key == "let") {
if (!val.is_object() || !val.contains("name") || !val.contains("value"))
return Error{"SCHEMA_VIOLATION","let requires {name,value}"};
if (!val["name"].is_string()) return Error{"SCHEMA_VIOLATION","let.name must be string"};
MS_Let l;
l.name = val["name"].get<std::string>();
l.value_expr = val["value"];
s.program.push_back(l);
continue;
}
if (key == "return") {
// return can be any JSON value (object recommended)
MS_Return r;
r.value = val;
s.program.push_back(r);
continue;
}
Section 12.5 — Value Expressions for let
We need a deterministic evaluator that can resolve:
- literal JSON
{"context.get":"key"}{"var.get":"name.path.to.value"}-
{"state.get":true}(optional convenience)
Repo additions
src/mindscript/
value.hpp
value.cpp
Code Space 12.5.1 — src/mindscript/value.hpp
#pragma once
#include "daemon/server.hpp"
#include "mindscript/runtime.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
Result<json> eval_value_expr(const json& expr, const Kernel& k, const MS_Runtime& rt);
} // namespace me
Code Space 12.5.2 — src/mindscript/value.cpp
#include "mindscript/value.hpp"
namespace me {
using json = nlohmann::json;
static const json* get_json_path(const json& root, const std::string& dotted) {
const json* cur = &root;
size_t start = 0;
while (start < dotted.size()) {
auto dot = dotted.find('.', start);
auto part = dotted.substr(start, dot == std::string::npos ? std::string::npos : dot - start);
if (!cur->is_object() || !cur->contains(part)) return nullptr;
cur = &((*cur)[part]);
if (dot == std::string::npos) break;
start = dot + 1;
}
return cur;
}
Result<json> eval_value_expr(const json& expr, const Kernel& k, const MS_Runtime& rt) {
// literal passthrough
if (!expr.is_object() || expr.size() != 1) return expr;
auto key = expr.begin().key();
auto val = expr.begin().value();
if (key == "context.get") {
if (!val.is_string()) return Error{"SCHEMA_VIOLATION","context.get must be string"};
auto* got = k.live_context.get(val.get<std::string>());
if (!got) return json::null();
return *got;
}
if (key == "state.get") {
// convenience: returns string state
(void)val;
return json(to_string(k.state.current()));
}
if (key == "var.get") {
if (!val.is_string()) return Error{"SCHEMA_VIOLATION","var.get must be string"};
auto path = val.get<std::string>();
auto dot = path.find('.');
if (dot == std::string::npos) {
const json* root = rt.get_var(path);
return root ? *root : json::null();
}
auto root_name = path.substr(0, dot);
auto rest = path.substr(dot + 1);
const json* root = rt.get_var(root_name);
if (!root) return json::null();
const json* got = get_json_path(*root, rest);
return got ? *got : json::null();
}
// unknown expr: treat as literal object (Phase 1)
return expr;
}
} // namespace me
Section 12.6 — Boolean Logic in Conditions (and/or/not)
We extend eval_condition() to support composition.
Code Space 12.6.1 — src/mindscript/cond.cpp (add at top)
if (key == "and") {
if (!val.is_array()) return Error{"SCHEMA_VIOLATION","and must be array"};
for (const auto& c : val) {
auto r = eval_condition(c, k, rt);
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
if (!std::get<bool>(r)) return false;
}
return true;
}
if (key == "or") {
if (!val.is_array()) return Error{"SCHEMA_VIOLATION","or must be array"};
for (const auto& c : val) {
auto r = eval_condition(c, k, rt);
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
if (std::get<bool>(r)) return true;
}
return false;
}
if (key == "not") {
auto r = eval_condition(val, k, rt);
if (std::holds_alternative<Error>(r)) return std::get<Error>(r);
return !std::get<bool>(r);
}
Now your assert and if conditions are actually expressive.
Section 12.7 — Execute let + return
Code Space 12.7.1 — src/mindscript/exec.cpp (in main node loop)
if (std::holds_alternative<MS_Let>(node)) {
const auto& l = std::get<MS_Let>(node);
auto v = eval_value_expr(l.value_expr, *view, out.runtime);
if (std::holds_alternative<Error>(v)) return std::get<Error>(v);
out.runtime.set_var(l.name, std::get<json>(v));
out.runtime.trace.push_back(json{{"let", {{"name", l.name}, {"value", std::get<json>(v)}}}});
continue;
}
if (std::holds_alternative<MS_Return>(node)) {
const auto& r = std::get<MS_Return>(node);
// Evaluate return object by resolving any value expressions inside (Phase 1: shallow resolve)
json rv = r.value;
// shallow resolve: if object values are expr objects, eval them
if (rv.is_object()) {
for (auto it = rv.begin(); it != rv.end(); ++it) {
auto ev = eval_value_expr(it.value(), *view, out.runtime);
if (std::holds_alternative<Error>(ev)) return std::get<Error>(ev);
it.value() = std::get<json>(ev);
}
}
out.runtime.has_return = true;
out.runtime.return_value = rv;
out.runtime.trace.push_back(json{{"return", rv}});
// Phase 1 rule: return halts program (like a real language)
break;
}
Section 12.8 — Output Contract Validation (Phase 1 Minimal)
We do a lightweight validator (not full JSON Schema yet) for:
- type object/array/string/number/bool
- required keys
- property type checks
Repo additions
src/mindscript/
contract.hpp
contract.cpp
Code Space 12.8.1 — src/mindscript/contract.hpp
#pragma once
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
Result<void> validate_contract(const json& contract, const json& value);
} // namespace me
Code Space 12.8.2 — src/mindscript/contract.cpp
#include "mindscript/contract.hpp"
namespace me {
using json = nlohmann::json;
static bool type_matches(const std::string& t, const json& v) {
if (t == "object") return v.is_object();
if (t == "array") return v.is_array();
if (t == "string") return v.is_string();
if (t == "number") return v.is_number();
if (t == "boolean") return v.is_boolean();
if (t == "null") return v.is_null();
return false;
}
Result<void> validate_contract(const json& contract, const json& value) {
if (!contract.is_object()) return Error{"SCHEMA_VIOLATION","contract must be object"};
if (!contract.contains("type") || !contract["type"].is_string())
return Error{"SCHEMA_VIOLATION","contract.type must be string"};
auto t = contract["type"].get<std::string>();
if (!type_matches(t, value)) return Error{"CONTRACT_FAIL","return value type mismatch"};
if (t == "object") {
if (contract.contains("required")) {
if (!contract["required"].is_array()) return Error{"SCHEMA_VIOLATION","contract.required must be array"};
for (const auto& k : contract["required"]) {
if (!k.is_string()) return Error{"SCHEMA_VIOLATION","contract.required entries must be string"};
if (!value.contains(k.get<std::string>())) return Error{"CONTRACT_FAIL","missing required key: " + k.get<std::string>()};
}
}
if (contract.contains("properties")) {
if (!contract["properties"].is_object()) return Error{"SCHEMA_VIOLATION","contract.properties must be object"};
for (auto it = contract["properties"].begin(); it != contract["properties"].end(); ++it) {
const auto& key = it.key();
const auto& rule = it.value();
if (!value.contains(key)) continue;
if (!rule.is_object() || !rule.contains("type") || !rule["type"].is_string()) continue;
auto pt = rule["type"].get<std::string>();
if (!type_matches(pt, value.at(key))) return Error{"CONTRACT_FAIL","property type mismatch: " + key};
}
}
}
return (void)0;
}
} // namespace me
Hook it into MindScript execution
In execute_mindscript() after script runs:
- if input JSON contains
contract, validate againstruntime.return_value - if no return and contract exists → fail
Section 12.9 — Script Library Stored in Ledger
Why
Companies will have reusable MindScripts:
- “start-of-day sync”
- “on-call incident response”
- “hardware intake”
- “assign team resources” Stored scripts become part of the company memory.
Ledger entry kind
mindscript_store
Payload:
{
"id":"mslib:assign_flow:v1",
"script": { ...full MindScript JSON... }
}
Section 12.10 — Endpoints: Store + Run Stored
12.10.1 POST /mindscript/store
Stores the script in ledger.
12.10.2 POST /mindscript/run/{id}
Find latest stored script with matching id and execute it.
Repo additions
src/mindscript/
library.hpp
library.cpp
Code Space 12.10.3 — src/mindscript/library.hpp
#pragma once
#include "ledger/ledger.hpp"
#include "core/types.hpp"
#include "json.hpp"
#include <optional>
namespace me {
using json = nlohmann::json;
Result<void> store_script(Ledger& ledger, const std::string& id, const json& script);
std::optional<json> load_latest_script(const Ledger& ledger, const std::string& id);
} // namespace me
Code Space 12.10.4 — src/mindscript/library.cpp
#include "mindscript/library.hpp"
namespace me {
using json = nlohmann::json;
Result<void> store_script(Ledger& ledger, const std::string& id, const json& script) {
json payload{{"id", id}, {"script", script}};
auto le = ledger.append("mindscript_store", payload.dump());
if (std::holds_alternative<Error>(le)) return std::get<Error>(le);
return (void)0;
}
std::optional<json> load_latest_script(const Ledger& ledger, const std::string& id) {
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) return std::nullopt;
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
for (auto it = entries.rbegin(); it != entries.rend(); ++it) {
if (it->kind != "mindscript_store") continue;
try {
auto payload = json::parse(it->payload_json);
if (payload.contains("id") && payload["id"].is_string() && payload["id"].get<std::string>() == id) {
if (payload.contains("script")) return payload["script"];
}
} catch (...) { continue; }
}
return std::nullopt;
}
} // namespace me
Code Space 12.10.5 — Routes
#include "mindscript/library.hpp"
#include "mindscript/exec.hpp"
#include "mindscript/check.hpp"
#include "mindscript/parse.hpp"
#include "mindscript/contract.hpp"
// Store
app.Post("/mindscript/store", [&](const httplib::Request& req, httplib::Response& res) {
nlohmann::json j;
try { j = nlohmann::json::parse(req.body); }
catch (...) { return respond_err(res, {"BAD_JSON", "Invalid JSON"}); }
if (!j.contains("id") || !j["id"].is_string()) return respond_err(res, {"SCHEMA_VIOLATION","id must be string"}, 400);
auto id = j["id"].get<std::string>();
auto r = me::store_script(k.ledger, id, j);
if (std::holds_alternative<me::Error>(r)) return respond_err(res, std::get<me::Error>(r), 500);
respond_ok(res, nlohmann::json{{"stored", true}, {"id", id}});
});
// Run stored
app.Post(R"(/mindscript/run/([\w:\-\.]+))", [&](const httplib::Request& req, httplib::Response& res) {
auto id = req.matches[1].str();
auto js = me::load_latest_script(k.ledger, id);
if (!js.has_value()) return respond_err(res, {"NOT_FOUND","No stored MindScript with id"}, 404);
// parse -> check -> exec
auto ps = me::parse_mindscript_json(*js);
if (std::holds_alternative<me::Error>(ps)) return respond_err(res, std::get<me::Error>(ps), 400);
auto script = std::get<me::MS_Script>(ps);
auto ck = me::static_check(script);
if (std::holds_alternative<me::Error>(ck)) return respond_err(res, std::get<me::Error>(ck), 400);
auto er = me::execute_mindscript(k, script);
if (std::holds_alternative<me::Error>(er)) return respond_err(res, std::get<me::Error>(er), 400);
auto out = std::get<me::MS_ExecResult>(er);
// contract validation if present in stored JSON and return exists
if (js->contains("contract")) {
if (!out.runtime.has_return) return respond_err(res, {"CONTRACT_FAIL","script did not return value"}, 400);
auto vr = me::validate_contract((*js)["contract"], out.runtime.return_value);
if (std::holds_alternative<me::Error>(vr)) return respond_err(res, std::get<me::Error>(vr), 400);
}
respond_ok(res, nlohmann::json{
{"stored_id", id},
{"mode", script.dry_run ? "dry_run" : "commit"},
{"return", out.runtime.has_return ? out.runtime.return_value : nlohmann::json::null()},
{"hunts", out.hunt_results},
{"trace", out.runtime.trace},
{"final_snapshot", out.final_snapshot}
});
});
Section 12.11 — Example: A Stored Script That Returns a Contracted Output
{
"id":"mslib:assign_flow:v1",
"mode":"dry_run",
"contract":{
"type":"object",
"required":["status","cost"],
"properties":{
"status":{"type":"string"},
"cost":{"type":"number"}
}
},
"program":[
{"hunt":{"as":"assignable","kind":"capability","start":"office:hub","capability":"assign","budget":{"max_cost":10}}},
{"let":{"name":"cost","value":{"var.get":"assignable.best.total_cost"}}},
{"if":{
"when":{"var.exists":"assignable.best"},
"then":[{"return":{"status":"ok","cost":{"var.get":"assignable.best.total_cost"}}}],
"else":[{"return":{"status":"blocked","cost":-1}}]
}}
]
}
Store:
curl -X POST localhost:8080/mindscript/store \
-H "Content-Type: application/json" \
-d @script.json
Run:
curl -X POST localhost:8080/mindscript/run/mslib:assign_flow:v1
What we unlocked
Now MindScript has:
- variables (
let) - real boolean logic (and/or/not)
- predictable outputs (contracts)
- reusable company memory (stored scripts in ledger)
This is exactly how your “company = enclosed cloud system” idea becomes operational: the scripts are the living policy + orchestration vocabulary of that company.
Phase 1 — Section 13
Permissions · Capability Scopes · Signed Commits · Rate Limits · Script Versioning
What we ship in this section
- Principal model (who is calling)
- Policy engine (what they’re allowed to do)
- Capability scopes enforced at op-apply time
- Signed ledger commits (tamper-evident)
- Rate limiting per principal
- Script library versioning + deprecation rules
Phase 1 goal: simple, strict, deterministic.
Section 13.1 — Security Model (Minimal but Real)
13.1.1 Principal
We identify a caller via headers:
-
X-MS-Principal:"user:peace"/"svc:oncall-bot"/"sys:root" -
X-MS-Key: API key token (Phase 1 shared-secret)
Later: swap for JWT/Auth0, but don’t block Phase 1.
13.1.2 Policy
Policies say:
- which ops they can run
- which endpoints/capabilities they can touch
- whether they can commit vs dry-run only
- rate limits
Section 13.2 — Policy File (Local JSON, Loaded at Boot)
Repo additions
config/
policy.json
src/security/
policy.hpp
policy.cpp
auth.hpp
auth.cpp
ratelimit.hpp
ratelimit.cpp
src/ledger/
signed_commit.hpp
signed_commit.cpp
Example config/policy.json
{
"principals": {
"sys:root": {
"key": "ROOT_DEV_KEY_CHANGE_ME",
"allow_commit": true,
"ops": ["*"],
"scopes": [{"kind":"*","match":"*"}],
"rate_limit": {"per_minute": 10000}
},
"user:peace": {
"key": "PEACE_DEV_KEY_CHANGE_ME",
"allow_commit": true,
"ops": ["context.set","graph.upsert_endpoint","graph.add_edge","event.emit"],
"scopes": [
{"kind":"endpoint_prefix","match":"human:"},
{"kind":"endpoint_prefix","match":"office:"},
{"kind":"capability","match":"assign"}
],
"rate_limit": {"per_minute": 240}
},
"svc:viewer": {
"key": "VIEW_KEY_CHANGE_ME",
"allow_commit": false,
"ops": ["*"],
"scopes": [{"kind":"*","match":"*"}],
"rate_limit": {"per_minute": 600}
}
}
}
Section 13.3 — Auth: Extract Principal + Verify Key
Code Space 13.3.1 — src/security/auth.hpp
#pragma once
#include <string>
#include "core/types.hpp"
#include "httplib.h"
namespace me {
struct AuthContext {
std::string principal;
};
Result<AuthContext> authenticate(const httplib::Request& req);
} // namespace me
Code Space 13.3.2 — src/security/auth.cpp
#include "security/auth.hpp"
#include "security/policy.hpp"
namespace me {
Result<AuthContext> authenticate(const httplib::Request& req) {
auto p = req.get_header_value("X-MS-Principal");
auto k = req.get_header_value("X-MS-Key");
if (p.empty() || k.empty()) return Error{"UNAUTHENTICATED","Missing X-MS-Principal or X-MS-Key"};
auto pol = Policy::instance(); // loaded at boot
auto* pr = pol->find_principal(p);
if (!pr) return Error{"UNAUTHENTICATED","Unknown principal"};
if (k != pr->key) return Error{"UNAUTHENTICATED","Invalid key"};
return AuthContext{p};
}
} // namespace me
Section 13.4 — Policy Engine (Ops + Scopes + Commit Permission)
Code Space 13.4.1 — src/security/policy.hpp
#pragma once
#include <string>
#include <unordered_map>
#include <vector>
#include <memory>
#include "json.hpp"
namespace me {
using json = nlohmann::json;
struct ScopeRule {
std::string kind; // "*", "endpoint_prefix", "capability"
std::string match; // pattern
};
struct RateLimitCfg { int per_minute = 60; };
struct PrincipalPolicy {
std::string key;
bool allow_commit = false;
std::vector<std::string> ops; // "*" or list
std::vector<ScopeRule> scopes; // "*" or specific
RateLimitCfg rate_limit;
};
class Policy {
public:
static Policy* instance();
static void load_from_file(const std::string& path);
const PrincipalPolicy* find_principal(const std::string& principal) const;
bool op_allowed(const PrincipalPolicy& p, const std::string& op) const;
bool scope_allows_endpoint(const PrincipalPolicy& p, const std::string& endpoint_id) const;
bool scope_allows_capability(const PrincipalPolicy& p, const std::string& cap) const;
private:
std::unordered_map<std::string, PrincipalPolicy> principals_;
};
} // namespace me
Code Space 13.4.2 — src/security/policy.cpp (core rules)
#include "security/policy.hpp"
#include <fstream>
namespace me {
static std::unique_ptr<Policy> g_policy;
Policy* Policy::instance() { return g_policy.get(); }
void Policy::load_from_file(const std::string& path) {
std::ifstream f(path);
json j; f >> j;
auto p = std::make_unique<Policy>();
auto principals = j.at("principals");
for (auto it = principals.begin(); it != principals.end(); ++it) {
PrincipalPolicy pp;
auto pj = it.value();
pp.key = pj.at("key").get<std::string>();
pp.allow_commit = pj.value("allow_commit", false);
for (const auto& o : pj.at("ops")) pp.ops.push_back(o.get<std::string>());
for (const auto& s : pj.at("scopes")) {
pp.scopes.push_back(ScopeRule{s.at("kind").get<std::string>(), s.at("match").get<std::string>()});
}
pp.rate_limit.per_minute = pj.at("rate_limit").value("per_minute", 60);
p->principals_[it.key()] = pp;
}
g_policy = std::move(p);
}
const PrincipalPolicy* Policy::find_principal(const std::string& principal) const {
auto it = principals_.find(principal);
if (it == principals_.end()) return nullptr;
return &it->second;
}
bool Policy::op_allowed(const PrincipalPolicy& p, const std::string& op) const {
for (const auto& o : p.ops) if (o == "*" || o == op) return true;
return false;
}
bool Policy::scope_allows_endpoint(const PrincipalPolicy& p, const std::string& endpoint_id) const {
for (const auto& s : p.scopes) {
if (s.kind == "*" && s.match == "*") return true;
if (s.kind == "endpoint_prefix") {
if (endpoint_id.rfind(s.match, 0) == 0) return true; // starts_with
}
}
return false;
}
bool Policy::scope_allows_capability(const PrincipalPolicy& p, const std::string& cap) const {
for (const auto& s : p.scopes) {
if (s.kind == "*" && s.match == "*") return true;
if (s.kind == "capability" && s.match == cap) return true;
}
return false;
}
} // namespace me
Section 13.5 — Rate Limiting (Per Minute)
We implement a basic token bucket per principal.
Code Space 13.5.1 — src/security/ratelimit.hpp
#pragma once
#include <string>
#include <unordered_map>
#include <chrono>
#include <mutex>
namespace me {
class RateLimiter {
public:
bool allow(const std::string& principal, int per_minute);
private:
struct Bucket {
int tokens = 0;
std::chrono::steady_clock::time_point last;
};
std::mutex mu_;
std::unordered_map<std::string, Bucket> buckets_;
};
} // namespace me
Code Space 13.5.2 — src/security/ratelimit.cpp
#include "security/ratelimit.hpp"
namespace me {
bool RateLimiter::allow(const std::string& principal, int per_minute) {
std::lock_guard<std::mutex> lock(mu_);
auto now = std::chrono::steady_clock::now();
auto& b = buckets_[principal];
if (b.last.time_since_epoch().count() == 0) {
b.last = now;
b.tokens = per_minute;
}
// refill once per minute (simple)
auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(now - b.last).count();
if (elapsed >= 60) {
b.tokens = per_minute;
b.last = now;
}
if (b.tokens <= 0) return false;
b.tokens -= 1;
return true;
}
} // namespace me
Section 13.6 — Enforce Policy at the Right Layer (Apply-Time)
Critical rule: enforcement must happen where mutation happens:
- at
/commandparse time is not enough - replays must also be safe (policy might differ, but replay is internal)
- MindScript compiles into ops, so ops must be checked
We update apply_op() signature to accept policy context.
Code Space 13.6.1 — src/command/apply.hpp (change)
Result<OpResult> apply_op(Kernel& k, const json& opj, const PrincipalPolicy& pol);
Code Space 13.6.2 — src/command/apply.cpp (enforce)
At the top:
if (!Policy::instance()->op_allowed(pol, op)) {
return fail(op, "FORBIDDEN", "op not allowed for principal");
}
Then inside graph ops:
- enforce endpoint scope:
if (op == "graph.upsert_endpoint") {
auto id = epj["id"].get<std::string>();
if (!Policy::instance()->scope_allows_endpoint(pol, id)) {
return fail(op, "FORBIDDEN", "endpoint out of scope");
}
}
if (op == "graph.add_edge") {
auto from = ej["from"].get<std::string>();
auto to = ej["to"].get<std::string>();
auto cap = ej["capability"].get<std::string>();
if (!Policy::instance()->scope_allows_endpoint(pol, from) ||
!Policy::instance()->scope_allows_endpoint(pol, to) ||
!Policy::instance()->scope_allows_capability(pol, cap)) {
return fail(op, "FORBIDDEN", "edge out of scope");
}
}
Now a script cannot “touch the world” beyond its scope, even if it tries.
Section 13.7 — Signed Ledger Commits (Tamper-Evident)
We add a hash chain:
-
each ledger entry includes:
prev_hashhash = H(prev_hash + kind + payload_json + ts_ms + id)
optional
sigusing HMAC-SHA256 with a server secret (Phase 1)
Code Space 13.7.1 — src/ledger/signed_commit.hpp
#pragma once
#include <string>
#include "core/types.hpp"
namespace me {
struct SignedEnvelope {
std::string prev_hash;
std::string hash;
std::string sig; // HMAC over hash (Phase 1)
};
SignedEnvelope sign_entry(
const std::string& prev_hash,
const std::string& kind,
const std::string& payload_json,
u64 ts_ms,
u64 id,
const std::string& secret
);
} // namespace me
Code Space 13.7.2 — src/ledger/signed_commit.cpp (hash + hmac placeholder)
#include "ledger/signed_commit.hpp"
#include <sstream>
#include <iomanip>
#include <openssl/sha.h>
#include <openssl/hmac.h>
namespace me {
static std::string sha256_hex(const std::string& s) {
unsigned char hash[SHA256_DIGEST_LENGTH];
SHA256(reinterpret_cast<const unsigned char*>(s.data()), s.size(), hash);
std::ostringstream oss;
for (int i=0;i<SHA256_DIGEST_LENGTH;i++) oss << std::hex << std::setw(2) << std::setfill('0') << (int)hash[i];
return oss.str();
}
static std::string hmac_sha256_hex(const std::string& key, const std::string& msg) {
unsigned int len = 0;
unsigned char out[EVP_MAX_MD_SIZE];
HMAC(EVP_sha256(), key.data(), (int)key.size(),
reinterpret_cast<const unsigned char*>(msg.data()), msg.size(),
out, &len);
std::ostringstream oss;
for (unsigned int i=0;i<len;i++) oss << std::hex << std::setw(2) << std::setfill('0') << (int)out[i];
return oss.str();
}
SignedEnvelope sign_entry(
const std::string& prev_hash,
const std::string& kind,
const std::string& payload_json,
u64 ts_ms,
u64 id,
const std::string& secret
) {
std::string material = prev_hash + "|" + kind + "|" + payload_json + "|" + std::to_string(ts_ms) + "|" + std::to_string(id);
auto h = sha256_hex(material);
auto sig = hmac_sha256_hex(secret, h);
return SignedEnvelope{prev_hash, h, sig};
}
} // namespace me
Where it plugs in
Your ledger append() wraps payload:
{
"kind":"command_commit",
"payload":{...},
"prev_hash":"...",
"hash":"...",
"sig":"..."
}
Now any tampering breaks the chain.
Section 13.8 — Script Versioning + Deprecation Rules
We standardize IDs:
mslib:<name>:v<integer>- e.g.
mslib:assign_flow:v1,mslib:assign_flow:v2
Deprecation entry:
mindscript_deprecate
{"id":"mslib:assign_flow:v1","replaced_by":"mslib:assign_flow:v2","reason":"new constraints"}
/mindscript/run/{id} behavior:
- if id is deprecated and caller doesn’t pass
allow_deprecated=true, refuse withDEPRECATED_SCRIPT.
Section 13.9 — Enforcement Wiring in Routes
/command and /mindscript/* now do:
authenticate(req)rate_limiter.allow(principal, policy.rate_limit.per_minute)- if commit requested but
allow_commit=false→ reject - pass
PrincipalPolicyinto executor/op apply
This is where your daemon becomes “real infra.”
What we unlocked
You now have:
- principal identity
- scoped permissions at the mutation layer
- retry-safe rate limiting
- tamper-evident ledger chain
- versioned and deprecable MindScripts
This is the difference between a “cool engine” and a “company fabric OS”.
'
Phase 1 — Section 14
Ledger Verification · Audit Trace · Effective Policy · Explain-Why Engine
What we ship in this section
-
GET /ledger/verify— verifies hash chain + signatures (tamper evidence) -
GET /audit/trace/{command_id}— reconstructs what happened for a command collapse -
GET /policy/effective— shows caller’s permissions + scopes + limits - “Explain why forbidden” — when an op is rejected, response includes exact rule that blocked it
Section 14.1 — Ledger Verify: Hash Chain + Sig Check
Repo additions
src/ledger/
verify.hpp
verify.cpp
Code Space 14.1.1 — src/ledger/verify.hpp
#pragma once
#include "ledger/ledger.hpp"
#include "core/types.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
struct VerifyReport {
bool ok = true;
u64 checked = 0;
std::vector<json> issues;
};
Result<VerifyReport> verify_ledger(const Ledger& ledger, const std::string& secret);
} // namespace me
Code Space 14.1.2 — src/ledger/verify.cpp
#include "ledger/verify.hpp"
#include "ledger/signed_commit.hpp"
#include <optional>
namespace me {
using json = nlohmann::json;
static std::optional<json> try_parse(const std::string& s) {
try { return json::parse(s); } catch (...) { return std::nullopt; }
}
Result<VerifyReport> verify_ledger(const Ledger& ledger, const std::string& secret) {
VerifyReport rep;
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) return std::get<Error>(all);
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
std::string prev_hash = "GENESIS";
for (const auto& e : entries) {
rep.checked++;
auto pj = try_parse(e.payload_json);
if (!pj.has_value()) {
rep.ok = false;
rep.issues.push_back(json{{"id", e.id}, {"kind", e.kind}, {"issue", "payload_not_json"}});
continue;
}
// Expect envelope fields
if (!pj->contains("prev_hash") || !pj->contains("hash") || !pj->contains("sig") || !pj->contains("payload")) {
rep.ok = false;
rep.issues.push_back(json{{"id", e.id}, {"kind", e.kind}, {"issue", "missing_envelope_fields"}});
continue;
}
auto got_prev = (*pj)["prev_hash"].get<std::string>();
auto got_hash = (*pj)["hash"].get<std::string>();
auto got_sig = (*pj)["sig"].get<std::string>();
if (got_prev != prev_hash) {
rep.ok = false;
rep.issues.push_back(json{
{"id", e.id}, {"kind", e.kind},
{"issue", "prev_hash_mismatch"},
{"expected", prev_hash},
{"got", got_prev}
});
}
// recompute
std::string payload_inner = (*pj)["payload"].dump();
auto env = sign_entry(prev_hash, e.kind, payload_inner, e.ts_ms, e.id, secret);
if (env.hash != got_hash) {
rep.ok = false;
rep.issues.push_back(json{
{"id", e.id}, {"kind", e.kind},
{"issue", "hash_mismatch"},
{"expected", env.hash},
{"got", got_hash}
});
}
if (env.sig != got_sig) {
rep.ok = false;
rep.issues.push_back(json{
{"id", e.id}, {"kind", e.kind},
{"issue", "sig_mismatch"},
{"expected", env.sig},
{"got", got_sig}
});
}
prev_hash = got_hash; // chain continues
}
return rep;
}
} // namespace me
Route: GET /ledger/verify
#include "ledger/verify.hpp"
app.Get("/ledger/verify", [&](const httplib::Request& req, httplib::Response& res) {
auto auth = me::authenticate(req);
if (std::holds_alternative<me::Error>(auth)) return respond_err(res, std::get<me::Error>(auth), 401);
// only root can verify full ledger (Phase 1)
auto pol = me::Policy::instance();
auto* pp = pol->find_principal(std::get<me::AuthContext>(auth).principal);
if (!pp || !pol->op_allowed(*pp, "*")) return respond_err(res, {"FORBIDDEN","not allowed"}, 403);
auto rep = me::verify_ledger(k.ledger, k.secrets.ledger_hmac_key);
if (std::holds_alternative<me::Error>(rep)) return respond_err(res, std::get<me::Error>(rep), 500);
auto r = std::get<me::VerifyReport>(rep);
respond_ok(res, nlohmann::json{
{"ok", r.ok},
{"checked", r.checked},
{"issues", r.issues}
});
});
Section 14.2 — Audit Trace: Reconstruct a Command Collapse
Goal
Given a command_id, show:
- who ran it (principal)
- dry-run vs commit
- ops list
- per-op results
- resulting state/context/graph snapshot hash
- time
We already store most of this in command_commit. We add two fields:
principal-
request_id(optional) -
ip(optional)
Update: command_commit_payload()
Add:
{"principal", principal_string}
Section 14.3 — Find a Command Commit By ID (Fast enough Phase 1)
Repo additions
src/audit/
trace.hpp
trace.cpp
Code Space 14.3.1 — src/audit/trace.hpp
#pragma once
#include "ledger/ledger.hpp"
#include "core/types.hpp"
#include "json.hpp"
#include <optional>
namespace me {
using json = nlohmann::json;
std::optional<json> find_command_commit_payload(const Ledger& ledger, const std::string& command_id);
} // namespace me
Code Space 14.3.2 — src/audit/trace.cpp
#include "audit/trace.hpp"
namespace me {
using json = nlohmann::json;
std::optional<json> find_command_commit_payload(const Ledger& ledger, const std::string& command_id) {
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) return std::nullopt;
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
for (auto it = entries.rbegin(); it != entries.rend(); ++it) {
if (it->kind != "command_commit") continue;
try {
auto env = json::parse(it->payload_json);
if (!env.contains("payload")) continue;
auto payload = env["payload"];
if (payload.contains("id") && payload["id"].is_string() &&
payload["id"].get<std::string>() == command_id) {
payload["ledger_entry_id"] = it->id;
payload["ts_ms"] = it->ts_ms;
payload["hash"] = env.value("hash", "");
return payload;
}
} catch (...) { continue; }
}
return std::nullopt;
}
} // namespace me
Route: GET /audit/trace/{command_id}
#include "audit/trace.hpp"
app.Get(R"(/audit/trace/([\w:\-\.]+))", [&](const httplib::Request& req, httplib::Response& res) {
auto auth = me::authenticate(req);
if (std::holds_alternative<me::Error>(auth)) return respond_err(res, std::get<me::Error>(auth), 401);
auto principal = std::get<me::AuthContext>(auth).principal;
auto pol = me::Policy::instance();
auto* pp = pol->find_principal(principal);
if (!pp) return respond_err(res, {"FORBIDDEN","no policy"}, 403);
// Phase 1: allow reading trace only if you're root OR you are the principal who created it.
auto cmd_id = req.matches[1].str();
auto payload = me::find_command_commit_payload(k.ledger, cmd_id);
if (!payload.has_value()) return respond_err(res, {"NOT_FOUND","command_commit not found"}, 404);
auto owner = payload->value("principal", "");
if (principal != "sys:root" && principal != owner) {
return respond_err(res, {"FORBIDDEN","trace not accessible"}, 403);
}
respond_ok(res, *payload);
});
Section 14.4 — Effective Policy Endpoint
Route: GET /policy/effective
Returns:
- principal
- allow_commit
- allowed ops
- scopes
- rate limit
- computed summaries
app.Get("/policy/effective", [&](const httplib::Request& req, httplib::Response& res) {
auto auth = me::authenticate(req);
if (std::holds_alternative<me::Error>(auth)) return respond_err(res, std::get<me::Error>(auth), 401);
auto principal = std::get<me::AuthContext>(auth).principal;
auto pol = me::Policy::instance();
auto* pp = pol->find_principal(principal);
if (!pp) return respond_err(res, {"FORBIDDEN","no policy"}, 403);
nlohmann::json scopes = nlohmann::json::array();
for (const auto& s : pp->scopes) scopes.push_back({{"kind", s.kind}, {"match", s.match}});
nlohmann::json ops = nlohmann::json::array();
for (const auto& o : pp->ops) ops.push_back(o);
respond_ok(res, nlohmann::json{
{"principal", principal},
{"allow_commit", pp->allow_commit},
{"ops", ops},
{"scopes", scopes},
{"rate_limit", {{"per_minute", pp->rate_limit.per_minute}}}
});
});
Section 14.5 — Explain-Why Engine (No More “FORBIDDEN” With Zero Context)
Idea
When an op fails due to policy, return:
-
blocked_by: which policy rule -
required_scope: what would have allowed it -
principal: who you are -
op: what you tried - (optional) suggestions
Update OpResult to include explanation
json explain = json::null();
In apply_op() forbidden paths:
OpResult r = fail(op, "FORBIDDEN", "edge out of scope");
r.data = json::object();
r.data["explain"] = json{
{"principal", "<caller>"},
{"op", op},
{"reason", "scope_violation"},
{"required", {
{"endpoint_prefix", "office:"},
{"endpoint_prefix", "human:"},
{"capability", cap}
}}
};
return r;
Note: you’ll pass the principal string into apply_op too:
Result<OpResult> apply_op(Kernel& k, const json& opj, const PrincipalPolicy& pol, const std::string& principal);
Now the user sees “why” immediately.
Section 14.6 — Audit UX Output Shape (Human-Friendly)
A trace response should look like:
{
"id":"cmd-0009",
"principal":"user:peace",
"atomic":true,
"applied":4,
"results":[...],
"final":{"state":"LOOP","context":{...}},
"ts_ms": 1734030000000,
"hash":"..."
}
This is the foundation for a UI page later:
- a timeline
- a diff
- a replay button
- “why blocked” hints
What we unlocked
Now the system is:
- verifiable (ledger chain)
- auditable (trace per collapse)
- self-describing (effective policy endpoint)
- debuggable (explain-why)
This is how you run “company-as-a-fabric” without people losing their minds.
Phase 1 — Section 15
Diffs · Ledger Index · Event Streaming · Performance Counters
What we ship in this section
-
GET /audit/diff/{command_id}— before/after diffs (context + graph + state) - Ledger index for fast lookups:
command_id → ledger_entry_id -
GET /events/stream— Server-Sent Events (SSE) real-time feed - Perf counters: execution latency, hunt cost, planner expansions, queue depth
-
GET /metrics— JSON metrics endpoint
Section 15.1 — Snapshot Hashing + Diffs
We need two snapshots:
- before command applied
- after command applied
Phase 1 approach:
-
In
command_commitpayload, store:snapshot_beforesnapshot_aftersnapshot_before_hashsnapshot_after_hash
We already have snapshot_view() (Section 11). We reuse it.
Repo additions
src/audit/
diff.hpp
diff.cpp
src/core/
hash.hpp
hash.cpp
Section 15.2 — Snapshot Hash (Stable Fingerprint)
Code Space 15.2.1 — src/core/hash.hpp
#pragma once
#include <string>
#include "json.hpp"
namespace me {
using json = nlohmann::json;
std::string sha256_json(const json& j);
} // namespace me
Code Space 15.2.2 — src/core/hash.cpp
#include "core/hash.hpp"
#include <openssl/sha.h>
#include <sstream>
#include <iomanip>
namespace me {
static std::string sha256_hex(const std::string& s) {
unsigned char hash[SHA256_DIGEST_LENGTH];
SHA256(reinterpret_cast<const unsigned char*>(s.data()), s.size(), hash);
std::ostringstream oss;
for (int i=0;i<SHA256_DIGEST_LENGTH;i++)
oss << std::hex << std::setw(2) << std::setfill('0') << (int)hash[i];
return oss.str();
}
std::string sha256_json(const json& j) {
// canonical-ish: dump with sorted keys (nlohmann keeps insertion order; Phase 1 acceptable)
// For stricter canonicalization later: use a canonical JSON serializer.
return sha256_hex(j.dump());
}
} // namespace me
Section 15.3 — Commit Payload Upgrade: store before/after snapshots
In your /command commit path (or command_commit_payload()), do:
Code Space 15.3.1 — Add to commit payload builder
auto before = snapshot_view(before_kernel);
auto after = snapshot_view(after_kernel);
payload["snapshot_before"] = before;
payload["snapshot_after"] = after;
payload["snapshot_before_hash"] = me::sha256_json(before);
payload["snapshot_after_hash"] = me::sha256_json(after);
Where do we get before_kernel?
- In commit mode, create a staging clone before applying ops.
- After applying and committing,
liveis “after”.
This makes every command a replayable diff unit.
Section 15.4 — Diff Engine (Minimal, Useful)
We generate a JSON diff with:
- changed context keys
- added/removed endpoints
- added/removed edges
- state change
Repo additions
src/audit/diff.hpp
src/audit/diff.cpp
Code Space 15.4.1 — src/audit/diff.hpp
#pragma once
#include "json.hpp"
namespace me {
using json = nlohmann::json;
json diff_snapshots(const json& before, const json& after);
} // namespace me
Code Space 15.4.2 — src/audit/diff.cpp
#include "audit/diff.hpp"
#include <unordered_map>
#include <unordered_set>
namespace me {
using json = nlohmann::json;
static std::unordered_map<std::string, json> map_by_id(const json& arr) {
std::unordered_map<std::string, json> m;
if (!arr.is_array()) return m;
for (const auto& x : arr) {
if (x.is_object() && x.contains("id") && x["id"].is_string()) {
m[x["id"].get<std::string>()] = x;
}
}
return m;
}
json diff_snapshots(const json& before, const json& after) {
json out;
// state
auto bs = before.value("state", "");
auto as = after.value("state", "");
if (bs != as) out["state"] = {{"from", bs}, {"to", as}};
// context: changed keys only
json ctx_diff = json::array();
auto bctx = before.value("context", json::object());
auto actx = after.value("context", json::object());
std::unordered_set<std::string> keys;
if (bctx.is_object()) for (auto it=bctx.begin(); it!=bctx.end(); ++it) keys.insert(it.key());
if (actx.is_object()) for (auto it=actx.begin(); it!=actx.end(); ++it) keys.insert(it.key());
for (const auto& k : keys) {
auto bv = bctx.contains(k) ? bctx[k] : json::null();
auto av = actx.contains(k) ? actx[k] : json::null();
if (bv != av) ctx_diff.push_back(json{{"key", k}, {"from", bv}, {"to", av}});
}
out["context_changes"] = ctx_diff;
// graph endpoints
auto beps = before["graph"].value("endpoints", json::array());
auto aeps = after["graph"].value("endpoints", json::array());
auto bmap = map_by_id(beps);
auto amap = map_by_id(aeps);
json eps_added = json::array();
json eps_removed = json::array();
json eps_changed = json::array();
for (const auto& [id, a] : amap) {
if (!bmap.count(id)) eps_added.push_back(a);
else if (bmap[id] != a) eps_changed.push_back(json{{"id", id}, {"from", bmap[id]}, {"to", a}});
}
for (const auto& [id, b] : bmap) {
if (!amap.count(id)) eps_removed.push_back(b);
}
out["endpoints"] = {
{"added", eps_added},
{"removed", eps_removed},
{"changed", eps_changed}
};
// graph edges: key by from|to|capability
auto bedges = before["graph"].value("edges", json::array());
auto aedges = after["graph"].value("edges", json::array());
auto edge_key = [](const json& e)->std::string {
return e.value("from","") + "|" + e.value("to","") + "|" + e.value("capability","");
};
std::unordered_map<std::string, json> bem, aem;
if (bedges.is_array()) for (const auto& e : bedges) aem[edge_key(e)] = e; // placeholder init
bem.clear(); aem.clear();
if (bedges.is_array()) for (const auto& e : bedges) bem[edge_key(e)] = e;
if (aedges.is_array()) for (const auto& e : aedges) aem[edge_key(e)] = e;
json edges_added = json::array();
json edges_removed = json::array();
json edges_changed = json::array();
for (const auto& [k, a] : aem) {
if (!bem.count(k)) edges_added.push_back(a);
else if (bem[k] != a) edges_changed.push_back(json{{"key", k}, {"from", bem[k]}, {"to", a}});
}
for (const auto& [k, b] : bem) {
if (!aem.count(k)) edges_removed.push_back(b);
}
out["edges"] = {{"added", edges_added}, {"removed", edges_removed}, {"changed", edges_changed}};
return out;
}
} // namespace me
Route: GET /audit/diff/{command_id}
- find command_commit payload (Section 14)
- extract snapshots
- return diff
#include "audit/trace.hpp"
#include "audit/diff.hpp"
app.Get(R"(/audit/diff/([\w:\-\.]+))", [&](const httplib::Request& req, httplib::Response& res) {
auto auth = me::authenticate(req);
if (std::holds_alternative<me::Error>(auth)) return respond_err(res, std::get<me::Error>(auth), 401);
auto cmd_id = req.matches[1].str();
auto payload = me::find_command_commit_payload(k.ledger, cmd_id);
if (!payload.has_value()) return respond_err(res, {"NOT_FOUND","command_commit not found"}, 404);
auto before = payload->value("snapshot_before", nlohmann::json::object());
auto after = payload->value("snapshot_after", nlohmann::json::object());
respond_ok(res, me::diff_snapshots(before, after));
});
Section 15.5 — Ledger Index (Command ID Lookup Fast Path)
Scanning the ledger is fine until it isn’t. Phase 1 index:
-
Build in-memory index at boot:
- for each
command_commit, mappayload.id → ledger_entry_id
- for each
Persist index later (Phase 2)
Repo additions
src/ledger/
index.hpp
index.cpp
Code Space 15.5.1 — src/ledger/index.hpp
#pragma once
#include <string>
#include <unordered_map>
#include "ledger/ledger.hpp"
namespace me {
class LedgerIndex {
public:
void build(const Ledger& ledger);
bool lookup_command(const std::string& command_id, u64& out_entry_id) const;
private:
std::unordered_map<std::string, u64> cmd_to_entry_;
};
} // namespace me
Code Space 15.5.2 — src/ledger/index.cpp
#include "ledger/index.hpp"
#include "json.hpp"
namespace me {
using json = nlohmann::json;
void LedgerIndex::build(const Ledger& ledger) {
cmd_to_entry_.clear();
auto all = ledger.read_all();
if (std::holds_alternative<Error>(all)) return;
const auto& entries = std::get<std::vector<LedgerEntry>>(all);
for (const auto& e : entries) {
if (e.kind != "command_commit") continue;
try {
auto env = json::parse(e.payload_json);
if (!env.contains("payload")) continue;
auto payload = env["payload"];
if (payload.contains("id") && payload["id"].is_string()) {
cmd_to_entry_[payload["id"].get<std::string>()] = e.id;
}
} catch (...) { continue; }
}
}
bool LedgerIndex::lookup_command(const std::string& command_id, u64& out_entry_id) const {
auto it = cmd_to_entry_.find(command_id);
if (it == cmd_to_entry_.end()) return false;
out_entry_id = it->second;
return true;
}
} // namespace me
Wire this into daemon boot:
k.index.build(k.ledger);
Section 15.6 — Event Streaming (SSE)
We expose a real-time feed for UI and tooling.
Endpoint
GET /events/stream
Format
SSE lines:
event: <type>data: <json>- blank line
Repo additions
src/daemon/
sse.hpp
sse.cpp
Code Space 15.6.1 — src/daemon/sse.hpp
#pragma once
#include "httplib.h"
#include "json.hpp"
#include <mutex>
#include <vector>
#include <memory>
namespace me {
using json = nlohmann::json;
struct SSEClient {
std::shared_ptr<httplib::Response> res;
};
class SSEHub {
public:
void add(std::shared_ptr<httplib::Response> res);
void broadcast(const std::string& event_type, const json& data);
private:
std::mutex mu_;
std::vector<std::weak_ptr<httplib::Response>> clients_;
};
} // namespace me
Code Space 15.6.2 — src/daemon/sse.cpp
#include "daemon/sse.hpp"
namespace me {
void SSEHub::add(std::shared_ptr<httplib::Response> res) {
std::lock_guard<std::mutex> lock(mu_);
clients_.push_back(res);
}
static std::string sse_frame(const std::string& ev, const nlohmann::json& j) {
return "event: " + ev + "\n" + "data: " + j.dump() + "\n\n";
}
void SSEHub::broadcast(const std::string& event_type, const nlohmann::json& data) {
std::lock_guard<std::mutex> lock(mu_);
std::vector<std::weak_ptr<httplib::Response>> next;
for (auto& w : clients_) {
if (auto r = w.lock()) {
r->set_content(sse_frame(event_type, data), "text/event-stream");
next.push_back(w);
}
}
clients_.swap(next);
}
} // namespace me
Note: cpp-httplib SSE handling varies. If you’re using a different server stack, the “hub” concept stays the same—implementation will adjust. The key idea is: publish events from the bus.
Route
app.Get("/events/stream", [&](const httplib::Request& req, httplib::Response& res) {
auto auth = me::authenticate(req);
if (std::holds_alternative<me::Error>(auth)) return respond_err(res, std::get<me::Error>(auth), 401);
res.set_header("Cache-Control", "no-cache");
res.set_header("Connection", "keep-alive");
res.set_header("Content-Type", "text/event-stream");
// If your server supports streaming write, use it here.
// Phase 1: simplest placeholder that returns "connected".
res.set_content("event: hello\ndata: {\"ok\":true}\n\n", "text/event-stream");
});
If you want true streaming, we’ll switch to a server mode that supports chunked streaming (or use websockets) in Phase 2. But the API contract is set now.
Section 15.7 — Performance Counters (Metrics You Can Trust)
What we track
command.exec_mscommand.ops_appliedhunt.exec_msplanner.expansionsplanner.edges_consideredrate_limit.blockedauth.failures
Repo additions
src/metrics/
metrics.hpp
metrics.cpp
Code Space 15.7.1 — src/metrics/metrics.hpp
#pragma once
#include <atomic>
#include <string>
#include "json.hpp"
namespace me {
using json = nlohmann::json;
struct Metrics {
std::atomic<u64> commands_total{0};
std::atomic<u64> commands_failed{0};
std::atomic<u64> ops_applied_total{0};
std::atomic<u64> hunts_total{0};
std::atomic<u64> auth_failures{0};
std::atomic<u64> rate_blocked{0};
std::atomic<u64> last_command_ms{0};
std::atomic<u64> last_hunt_ms{0};
json to_json() const;
};
} // namespace me
Code Space 15.7.2 — src/metrics/metrics.cpp
#include "metrics/metrics.hpp"
namespace me {
json Metrics::to_json() const {
return json{
{"commands_total", commands_total.load()},
{"commands_failed", commands_failed.load()},
{"ops_applied_total", ops_applied_total.load()},
{"hunts_total", hunts_total.load()},
{"auth_failures", auth_failures.load()},
{"rate_blocked", rate_blocked.load()},
{"last_command_ms", last_command_ms.load()},
{"last_hunt_ms", last_hunt_ms.load()}
};
}
} // namespace me
Metrics endpoint
app.Get("/metrics", [&](const httplib::Request& req, httplib::Response& res) {
auto auth = me::authenticate(req);
if (std::holds_alternative<me::Error>(auth)) return respond_err(res, std::get<me::Error>(auth), 401);
respond_ok(res, k.metrics.to_json());
});
Instrumentation (examples)
In /command handler:
auto t0 = now_ms();
k.metrics.commands_total++;
...
k.metrics.ops_applied_total += cmdr.applied;
k.metrics.last_command_ms = now_ms() - t0;
In hunt executor:
k.metrics.hunts_total++;
auto t0 = now_ms();
...
k.metrics.last_hunt_ms = now_ms() - t0;
What we unlocked
Now the platform has:
- diffs (what changed, exactly)
- fast lookup (index)
- real-time hooks (events)
- visibility (metrics)
This is the “operability pack” that lets you build the UI layer without hacking around blind.
Phase 1 — Section 16
Reference Stack · Docker Compose · Config Layout · Sample Scripts · Golden Tests · CI
What we ship in this section
- Repo layout (final Phase 1 structure)
- Dockerfile + docker-compose (daemon + optional UI stub)
-
Config system (
policy.json, secrets, ports) - Sample scripts (store/run/diff/trace/verify)
- Golden tests (deterministic outputs)
- CI workflow (build + test on push)
Section 16.1 — Final Phase 1 Repo Structure
mindseye-fabric/
README.md
LICENSE
docker/
Dockerfile
docker-compose.yml
config/
policy.json
secrets.json
scripts/
sample/
mslib.assign_flow.v1.json
ms.run.dry.json
ms.run.commit.json
curl/
store.sh
run.sh
trace.sh
diff.sh
verify.sh
metrics.sh
src/
app/
main.cpp
core/
types.hpp
validate.hpp
hash.hpp
hash.cpp
time.hpp
daemon/
server.hpp
routes.cpp
respond.hpp
sse.hpp
sse.cpp
security/
auth.hpp
auth.cpp
policy.hpp
policy.cpp
ratelimit.hpp
ratelimit.cpp
ledger/
ledger.hpp
ledger.cpp
index.hpp
index.cpp
signed_commit.hpp
signed_commit.cpp
verify.hpp
verify.cpp
command/
executor.hpp
executor.cpp
apply.hpp
apply.cpp
commit.hpp
commit.cpp
graph/
graph.hpp
graph.cpp
serialize.hpp
serialize.cpp
hunts/
budget.hpp
reachability.hpp
reachability.cpp
planner.hpp
planner.cpp
capability.hpp
capability.cpp
mindscript/
ast.hpp
parse.hpp
parse.cpp
check.hpp
check.cpp
compile.hpp
compile.cpp
runtime.hpp
value.hpp
value.cpp
cond.hpp
cond.cpp
exec_hunt.hpp
exec_hunt.cpp
contract.hpp
contract.cpp
exec.hpp
exec.cpp
library.hpp
library.cpp
audit/
trace.hpp
trace.cpp
diff.hpp
diff.cpp
metrics/
metrics.hpp
metrics.cpp
tests/
golden/
expected.trace.assign_flow.json
expected.diff.assign_flow.json
expected.verify.ok.json
test_runner.cpp
.github/
workflows/
ci.yml
This structure is the “Phase 1 reference spine.” Everything above it (UI, LLM tools, multi-node cloud fabric) plugs into this.
Section 16.2 — Dockerfile (Build + Run Daemon)
Code Space 16.2.1 — docker/Dockerfile
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y \
build-essential cmake git pkg-config \
libssl-dev curl \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . /app
RUN mkdir -p build && cd build && cmake .. && make -j
ENV MS_PORT=8080
CMD ["./build/mindseye_daemon"]
If you’re using a different build system already (Bazel etc.), swap that in. The goal is: one command starts the daemon.
Section 16.3 — Docker Compose (Daemon + Optional “UI Stub”)
Code Space 16.3.1 — docker/docker-compose.yml
services:
daemon:
build:
context: ..
dockerfile: docker/Dockerfile
ports:
- "8080:8080"
volumes:
- ../config:/app/config:ro
- ../data:/app/data
environment:
- MS_PORT=8080
- MS_POLICY_PATH=/app/config/policy.json
- MS_SECRETS_PATH=/app/config/secrets.json
- MS_LEDGER_PATH=/app/data/ledger.jsonl
Section 16.4 — Config: secrets.json
This holds the HMAC key for signed ledger commits.
Code Space 16.4.1 — config/secrets.json
{
"ledger_hmac_key": "DEV_LEDGER_SECRET_CHANGE_ME"
}
Boot wiring (daemon startup)
In main.cpp, load:
- policy file path
- secrets file path
- ledger path
Section 16.5 — Sample Scripts (Golden Inputs)
Code Space 16.5.1 — scripts/sample/mslib.assign_flow.v1.json
{
"id":"mslib:assign_flow:v1",
"mode":"commit",
"contract":{
"type":"object",
"required":["status","best_path"],
"properties":{
"status":{"type":"string"},
"best_path":{"type":"array"}
}
},
"program":[
{"set":{"office.people_present":16,"human:1.online":true}},
{"endpoint":{"id":"office:hub","label":"OfficeHub","enabled":true}},
{"endpoint":{"id":"human:1","label":"Peace","enabled":true}},
{"edge":{"from":"office:hub","to":"human:1","capability":"assign","enabled":true,"cost":1.0}},
{"hunt":{"as":"assignable","kind":"capability","start":"office:hub","capability":"assign","budget":{"max_cost":10}}},
{"assert":{"that":{"var.exists":"assignable.best"},"message":"No assignable endpoint reachable"}},
{"return":{
"status":"ok",
"best_path":{"var.get":"assignable.best.path"}
}}
]
}
Section 16.6 — Curl Scripts (Operator-Friendly)
Code Space 16.6.1 — scripts/curl/store.sh
#!/usr/bin/env bash
set -euo pipefail
HOST="${HOST:-localhost:8080}"
PRINCIPAL="${PRINCIPAL:-sys:root}"
KEY="${KEY:-ROOT_DEV_KEY_CHANGE_ME}"
ID="${1:-mslib:assign_flow:v1}"
FILE="${2:-scripts/sample/mslib.assign_flow.v1.json}"
curl -sS -X POST "http://${HOST}/mindscript/store" \
-H "Content-Type: application/json" \
-H "X-MS-Principal: ${PRINCIPAL}" \
-H "X-MS-Key: ${KEY}" \
--data-binary @"${FILE}" | jq .
Code Space 16.6.2 — scripts/curl/run.sh
#!/usr/bin/env bash
set -euo pipefail
HOST="${HOST:-localhost:8080}"
PRINCIPAL="${PRINCIPAL:-sys:root}"
KEY="${KEY:-ROOT_DEV_KEY_CHANGE_ME}"
ID="${1:-mslib:assign_flow:v1}"
curl -sS -X POST "http://${HOST}/mindscript/run/${ID}" \
-H "X-MS-Principal: ${PRINCIPAL}" \
-H "X-MS-Key: ${KEY}" | jq .
Add similarly:
-
trace.sh,diff.sh,verify.sh,metrics.sh
Section 16.7 — Golden Tests (Deterministic “Expected Output”)
Philosophy
A golden test says: given the same script + same seed state, you must get the exact same:
- returned value
- trace events shape (not timestamps)
- diff shape
- verify ok
We avoid timestamp brittleness by:
- disabling real-time fields in golden outputs, or
- normalizing them in the test runner.
Code Space 16.7.1 — tests/test_runner.cpp (skeleton)
#include <fstream>
#include <iostream>
#include "json.hpp"
// In a real repo, you’d link directly into kernel + mindscript exec
// so tests don't need HTTP. Phase 1: call internal functions.
using json = nlohmann::json;
static json load(const std::string& path) {
std::ifstream f(path);
json j; f >> j;
return j;
}
static void assert_equal(const json& a, const json& b, const std::string& name) {
if (a != b) {
std::cerr << "FAIL: " << name << "\n";
std::cerr << "expected:\n" << b.dump(2) << "\n";
std::cerr << "got:\n" << a.dump(2) << "\n";
std::exit(1);
}
}
int main() {
// Pseudocode outline:
// 1) create kernel with empty ledger in temp
// 2) load script JSON
// 3) execute_mindscript()
// 4) normalize output (remove ts_ms etc)
// 5) compare with golden expected json files
std::cout << "OK\n";
return 0;
}
The important part is the pattern: tests run kernel directly, not via curl.
Golden outputs
tests/golden/expected.trace.assign_flow.json
tests/golden/expected.diff.assign_flow.json
tests/golden/expected.verify.ok.json
Section 16.8 — CI Workflow (Build + Test)
Code Space 16.8.1 — .github/workflows/ci.yml
name: ci
on:
push:
pull_request:
jobs:
build-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: deps
run: sudo apt-get update && sudo apt-get install -y cmake build-essential libssl-dev
- name: build
run: |
mkdir -p build
cd build
cmake ..
make -j
- name: test
run: |
./build/test_runner
Section 16.9 — README (The “one-minute run”)
Code Space 16.9.1 — README.md (core commands)
## Run
docker compose -f docker/docker-compose.yml up --build
## Store sample script
bash scripts/curl/store.sh mslib:assign_flow:v1 scripts/sample/mslib.assign_flow.v1.json
## Run stored script
bash scripts/curl/run.sh mslib:assign_flow:v1
## Verify ledger
bash scripts/curl/verify.sh
## Trace + diff
bash scripts/curl/trace.sh cmd-...
bash scripts/curl/diff.sh cmd-...
What we unlocked
Phase 1 is now:
- cloneable
- runnable
- testable
- CI-validated
- operator-friendly
This is the “reference nucleus” for the Cloud Fabric. Everything else (C++ binary mapping expansion, hardware routing visualization, multi-node fabric) now has a stable base to attach to.
Top comments (0)