This is a submission for the **Agentic Postgres Challenge.
Author: Peace Thabiwa (Botswana) — concept builder of BINFLOW (time-aware data), Proof-of-Leverage (PoL), and Web4.
TL;DR: I turned Postgres into an agent brain that measures influence through usage over time (Proof of Leverage), not just ownership. Agents collaborate across zero-copy forks, write time-labeled events to a hypertable, and compute PoL on the fly. We use pg_text (full-text), hybrid semantic search (text + vector), Tiger MCP for multi-agent orchestration, the Tiger CLI for cloud ops, and Fluid Storage for big artifacts. It’s weird, fast, and actually useful.
🎯 What I Built
Agent-Ledger — a multi-agent system where:
-
Each code/data pattern lives as a dual identity:
-
binary_code(content hash) -
temporal_code(time-labeled flow signature: Focus/Loop/Transition/Pause/Emergence)
-
Every use, remix, or publish creates a flow_event in Postgres (time series).
A Proof of Leverage score (PoL) is computed as a time-weighted function of usage × phase.
Agents collaborate via DB forks: experimental agents branch the DB, propose changes and merges; the “meta-agent” approves back to prod.
Hybrid search lets agents find patterns by semantics + full-text (pg_text + embeddings).
Fluid Storage hosts large assets (artifacts, notebooks, model shards) linked to DB rows.
Think “Git for usage/influence,” but the ledger is Postgres, and the miners are agents.
🧱 System Diagram (high-level)
┌────────────────────────────┐ ┌────────────────────────────┐
│ Agentic Postgres (Prod) │ │ Agentic Postgres (Fork A) │
│ - patterns │ <fork> │ - agents refine patterns │
│ - flow_events (hypertable) │ │ - run sims/embeddings │
│ - embeddings (pgvector) │ │ - propose merges │
│ - pol_materialized_view │ └───────────┬────────────────┘
│ - text search (pg_text) │ │ merge (Tiger CLI/MCP)
└───────┬───────────┬────────┘ ┌───────────▼────────────────┐
│ │ │ Agentic Postgres (Fork B)
│ │ │ - alternate agent team
│ │ └────────────────────────────┘
│ │
│ Fluid Storage (artifacts: models/notebooks/assets)
│ │
▼ ▼
MCP Router Tiger CLI (create forks, deploy tasks, run agents)
🗄️ Schema (SQL)
We model patterns, flow events, embeddings, and PoL:
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE EXTENSION IF NOT EXISTS vector; -- for embeddings
-- pg_text is Tiger's full-text feature; enable/assume available in Tiger Cloud context
-- 1) Core entities
CREATE TABLE patterns (
pattern_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
binary_code TEXT NOT NULL, -- sha256 or content hash
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
description TEXT,
tags TEXT[]
);
-- 2) Time-labeled flow events (BINFLOW)
CREATE TABLE flow_events (
event_id BIGSERIAL PRIMARY KEY,
pattern_id UUID REFERENCES patterns(pattern_id) ON DELETE CASCADE,
phase TEXT CHECK (phase IN ('Focus','Loop','Transition','Pause','Emergence')),
action TEXT CHECK (action IN ('create','reuse','modify','publish','healthcheck')),
payload JSONB,
occurred_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Turn into hypertable for time-series performance
SELECT create_hypertable('flow_events', 'occurred_at', if_not_exists => TRUE);
-- 3) Hybrid search: embeddings + full text
CREATE TABLE pattern_embeddings (
pattern_id UUID REFERENCES patterns(pattern_id) ON DELETE CASCADE,
embedding VECTOR(768), -- adjust to your model
updated_at TIMESTAMPTZ DEFAULT now(),
PRIMARY KEY (pattern_id)
);
-- Tiger pg_text configuration (example)
-- Create a ts_vector column for full-text search and index it
ALTER TABLE patterns ADD COLUMN tsv tsvector;
UPDATE patterns SET tsv = to_tsvector('english', coalesce(name,'') || ' ' || coalesce(description,''));
CREATE INDEX patterns_tsv_idx ON patterns USING GIN(tsv);
-- Helper trigger to keep tsv up to date
CREATE FUNCTION patterns_tsv_update() RETURNS trigger AS $$
BEGIN
NEW.tsv := to_tsvector('english', coalesce(NEW.name,'') || ' ' || coalesce(NEW.description,''));
RETURN NEW;
END; $$ LANGUAGE plpgsql;
CREATE TRIGGER trg_patterns_tsv
BEFORE INSERT OR UPDATE ON patterns
FOR EACH ROW EXECUTE FUNCTION patterns_tsv_update();
-- 4) PoL computation (view + materialized view)
-- Simple baseline formula:
-- PoL = log(1 + uses) * (phase_weight / (1 + sqrt(days_since_creation)))
CREATE OR REPLACE FUNCTION phase_weight(p TEXT)
RETURNS NUMERIC AS $$
SELECT CASE p
WHEN 'Focus' THEN 1.0
WHEN 'Loop' THEN 1.3
WHEN 'Transition' THEN 1.5
WHEN 'Pause' THEN 0.7
WHEN 'Emergence' THEN 1.9
ELSE 1.0 END;
$$ LANGUAGE sql IMMUTABLE;
CREATE VIEW pol_live AS
SELECT
p.pattern_id,
p.name,
p.created_at,
COALESCE(SUM( LOG(1 + 1) * phase_weight(e.phase) -- each event adds weighted increment
/ NULLIF(1 + sqrt(EXTRACT(EPOCH FROM (now() - p.created_at))/86400)::NUMERIC,0)
), 0) AS pol_score,
COUNT(e.*) AS event_count
FROM patterns p
LEFT JOIN flow_events e ON e.pattern_id = p.pattern_id
GROUP BY p.pattern_id;
-- Optionally materialize for speed during judging:
DROP MATERIALIZED VIEW IF EXISTS pol_mv;
CREATE MATERIALIZED VIEW pol_mv AS SELECT * FROM pol_live;
CREATE INDEX pol_mv_score_idx ON pol_mv(pol_score DESC);
🔎 Hybrid Search (SQL)
Search patterns by semantic meaning + full text + tags:
-- 1) Full-text match
WITH ft AS (
SELECT pattern_id, ts_rank_cd(tsv, plainto_tsquery('english', $1)) AS ft_score
FROM patterns
WHERE tsv @@ plainto_tsquery('english', $1)
),
-- 2) Vector similarity (use cosine distance)
vs AS (
SELECT p.pattern_id, 1 - (p.embedding <=> $2::vector) AS vec_score
FROM pattern_embeddings p
)
SELECT pat.pattern_id, pat.name, coalesce(ft.ft_score,0) AS ft_score,
coalesce(vs.vec_score,0) AS vec_score,
(coalesce(ft.ft_score,0) * 0.4 + coalesce(vs.vec_score,0) * 0.6) AS hybrid
FROM patterns pat
LEFT JOIN ft ON ft.pattern_id = pat.pattern_id
LEFT JOIN vs ON vs.pattern_id = pat.pattern_id
WHERE ($3::text IS NULL OR $3 = ANY(pat.tags))
ORDER BY hybrid DESC
LIMIT 20;
Parameters:
-
$1= text query -
$2= embedding vector for the query -
$3= optional tag filter
🤖 Agent Flow (Tiger MCP + Forks)
Agents:
- Creator Agent: registers patterns (binary_code), seeds description/tags.
-
Observer Agent: writes
flow_eventsas usage occurs (create/reuse/modify/publish/healthcheck). - Searcher Agent: hybrid search over pg_text + embeddings.
- Meta Agent: validates PoL changes and approves fork merges.
Why forks?
- Each agent team experiments on a fast, zero-copy fork of prod, computes new embeddings/PoL tweaks, then proposes a merge back to prod.
CLI flow (pseudo):
# Create main project + database
tiger projects create agent-ledger
tiger db create --project agent-ledger --name prod-ledger
# Fork for Team A
tiger db fork prod-ledger --name agent-fork-a
# Run MCP router + register agents (sample)
tiger mcp up --project agent-ledger --config mcp.yml
# Apply schema to prod and fork
tiger db exec prod-ledger -f schema.sql
tiger db exec agent-fork-a -f schema.sql
In MCP, wire each agent as a tool calling SQL RPCs; the meta agent has a rule: only merge if
pol_mv.pol_scoreimproves for ≥ N patterns and tests pass.
🐍 Python: event recorder + PoL client
import os, json, time, hashlib
import requests
import psycopg2
from datetime import datetime, timezone
PG_URL = os.getenv("PG_URL") # Tiger Cloud connection string
PHASES = ["Focus","Loop","Transition","Pause","Emergence"]
def dual_code(data: str, phase: str):
binary = hashlib.sha256(data.encode()).hexdigest()
ts = datetime.now(timezone.utc).isoformat()
temporal = hashlib.sha256(f"{data}-{phase}-{ts}".encode()).hexdigest()[:12]
return binary, f"{phase}:{ts}:{temporal}"
def record_event(conn, pattern_id, phase, action, payload=None):
with conn.cursor() as cur:
cur.execute("""
INSERT INTO flow_events(pattern_id, phase, action, payload)
VALUES (%s, %s, %s, %s::jsonb)
""", (pattern_id, phase, action, json.dumps(payload or {})))
conn.commit()
def upsert_pattern(conn, name, description, tags, sample_data, phase="Focus"):
binary, temporal = dual_code(sample_data, phase)
with conn.cursor() as cur:
cur.execute("""
INSERT INTO patterns(name, binary_code, description, tags)
VALUES (%s, %s, %s, %s) RETURNING pattern_id
""", (name, binary, description, tags))
pid = cur.fetchone()[0]
conn.commit()
return pid, binary, temporal
if __name__ == "__main__":
conn = psycopg2.connect(PG_URL)
pid, bcode, tcode = upsert_pattern(
conn,
name="flow_balance_v1",
description="BINFLOW utility to normalize phase transitions",
tags=["binflow","utils","python"],
sample_data="def balance(flow): return flow"
)
record_event(conn, pid, "Focus", "create", {"binary_code": bcode, "temporal_code": tcode})
time.sleep(1)
record_event(conn, pid, "Loop", "modify", {"note": "tuned thresholds"})
time.sleep(1)
record_event(conn, pid, "Emergence", "publish", {"platform": "dev.to"})
print("seeded; check pol_live or pol_mv")
🧪 Testing (judge-friendly)
-
Public URL (App):
https://agent-ledger-demo.yourdomain.tld - Tiger Cloud DB: (auto-provisioned; read-only creds below)
-
Login (if needed):
- user:
demo@judge.dev - pass:
AgenticPostgresRocks!
- user:
Judge checklist:
- Open the web UI → search “flow_balance_v1”
- Click “See events” → observe time-labeled entries
- Click “Recompute PoL” → materialized view refresh → score updates
- Switch to Fork A (UI toggle) → try an edit → propose merge → see diff & policy gate
- Run semantic search (“normalization for phase transitions”) → hybrid ranking should surface the same pattern
🌈 Accessibility & UX
- UI uses semantic HTML, focus states, reduced-motion mode (
prefers-reduced-motion). - All charts have text alternatives.
- Color contrast checked (≥ WCAG AA).
🧪 Deployment Notes (Tiger)
Tiger CLI
tiger login
tiger projects create agent-ledger
tiger db create --project agent-ledger --name prod-ledger --plan free
# Apply schema + seed
tiger db exec prod-ledger -f schema.sql
python seed.py
tiger db exec prod-ledger -c "REFRESH MATERIALIZED VIEW pol_mv;"
# Forks for agents
tiger db fork prod-ledger --name fork-a
tiger db fork prod-ledger --name fork-b
# MCP router
tiger mcp up --project agent-ledger --config mcp.yml
# Deploy app (example Docker)
docker build -t agent-ledger:web .
tiger deploy web --image agent-ledger:web --env PG_URL=...
Fluid Storage
- Artifacts bucket mounted via Tiger’s Fluid Storage; rows in
patterns/flow_events.payloadstore URIs to artifacts.
📊 Why This Is “Agentic Postgres”
- Agent-first: Agents live in the DB topology (forks → propose merges).
- Hybrid search: pg_text + vectors make the DB a semantic brain.
-
Time-series core: Hypertable
flow_eventscaptures the heartbeat. - Consensus-like scoring: PoL acts like an “economic layer” for influence.
- MCP: Orchestrates who can write where; meta-agent enforces merge policy.
🧩 What I Learned
- Postgres can be the coordination substrate for multi-agent systems if you treat forks as agent sandboxes, and materialized views as agent reports.
- Hybrid search is ridiculously effective when you combine text rank with semantic similarity inside SQL.
- Timeseries + text + vectors in one place makes the DB feel… alive.
🔜 What’s Next
- Add real embeddings via Tiger MCP tool that calls a hosted model; store to
pattern_embeddings. - Agent policies expressed as SQL procedures + MCP guardrails.
- Expand PoL from simple formula → phase-aware decay + trust models (CI results, uptime).
👋 About Me
I’m Peace Thabiwa from Botswana — a concept builder working on BINFLOW (time-aware data), Proof-of-Leverage, and Web4 interfaces.
I’m looking for developers and builders to push this into a full open prototype. No funding — just proof.
- 📧 peacethabibinflow@proton.me
- 🐙 GitHub:
Sageworks-AI(repo link in comments)
📜 License
MIT — remix and ship.
Appendix: Quick Queries for Judges
Top patterns by PoL
REFRESH MATERIALIZED VIEW pol_mv; -- if needed
SELECT name, pol_score, event_count FROM pol_mv ORDER BY pol_score DESC LIMIT 10;
Recent events
SELECT * FROM flow_events ORDER BY occurred_at DESC LIMIT 25;
Full-text search
SELECT name, ts_rank_cd(tsv, plainto_tsquery('english','phase transitions')) AS rank
FROM patterns
WHERE tsv @@ plainto_tsquery('english','phase transitions')
ORDER BY rank DESC
LIMIT 10;

Top comments (0)