DEV Community

YedanYagami

Edge-Native Knowledge Graphs with Cloudflare D1 + Vectorize — No Database Server Required

Knowledge graphs usually mean Neo4j, a dedicated server, and a hefty monthly bill. I built one that runs entirely on Cloudflare's edge — D1 for the graph, Vectorize for semantic search, KV for caching. Total monthly cost: about $1.

Why Edge-Native?

Traditional knowledge graph setups:

  • Neo4j: $65+/month for managed, or a server to maintain
  • Amazon Neptune: $0.10/hour minimum = $73/month
  • Local: Great for dev, useless for production agents

Edge-native with Cloudflare:

  • D1: 5GB free, reads are essentially instant from edge
  • Vectorize: 5M vectors on paid plan, HNSW indexing
  • KV: Perfect for caching frequent queries
  • Workers: Serve the graph as an API, globally, for free

The Schema

The core is simple — entities, relationships, and vectors:

CREATE TABLE kg_entities (
  id TEXT PRIMARY KEY,
  name TEXT NOT NULL,
  type TEXT NOT NULL,
  properties TEXT DEFAULT '{}',
  confidence REAL DEFAULT 0.8,
  source TEXT,
  created_at TEXT,
  updated_at TEXT
);

CREATE TABLE kg_relationships (
  id TEXT PRIMARY KEY,
  source_id TEXT NOT NULL,
  target_id TEXT NOT NULL,
  type TEXT NOT NULL,
  properties TEXT DEFAULT '{}',
  confidence REAL DEFAULT 0.8,
  FOREIGN KEY (source_id) REFERENCES kg_entities(id),
  FOREIGN KEY (target_id) REFERENCES kg_entities(id)
);

Plus supporting tables for communities, causal edges, experience replay, and full-text search (FTS5).
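To make ingest concrete, here is a minimal sketch of preparing an entity row for D1. The `buildEntityRow` helper and its field handling are my own illustration, not code from the project; only the column names and defaults come from the DDL above:

```javascript
// Hypothetical helper: serialize an entity into the kg_entities column layout.
// Column names and defaults mirror the schema above.
function buildEntityRow({ id, name, type, properties = {}, confidence = 0.8, source = null }) {
  const now = new Date().toISOString();
  return {
    id,
    name,
    type,
    properties: JSON.stringify(properties), // TEXT column holds a JSON blob
    confidence,
    source,
    created_at: now,
    updated_at: now,
  };
}

// Binding into a D1 prepared statement would then look like:
// await db.prepare('INSERT INTO kg_entities VALUES (?,?,?,?,?,?,?,?)')
//   .bind(...Object.values(row)).run();
```

Storing `properties` as a JSON string keeps the schema flexible without extra columns, at the cost of pushing property filtering into application code (or SQLite's `json_extract`).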

Agentic RAG: Let the Agent Choose Its Retrieval

The interesting part isn't the graph itself — it's how you query it. I implemented Agentic RAG (A-RAG), where the system classifies each query and picks the best retrieval strategy automatically.

// Cheap heuristic router: match intent keywords before falling back
// to a simple keyword lookup. More specific intents are checked first.
function classifyQuery(query) {
  const lower = query.toLowerCase();

  if (lower.includes('caused') || lower.includes('why did'))
    return 'causal';
  if (lower.includes('timeline') || lower.includes('when'))
    return 'temporal';
  if (lower.includes('how') || lower.includes('explain'))
    return 'exploratory';
  if (lower.includes('related') || lower.includes('connected'))
    return 'multi_hop';
  return 'simple';
}

Each query type triggers different retrieval interfaces:

| Query Type | Interfaces Used | Example |
|------------|-----------------|---------|
| Simple | FTS5 keyword search | "What is MCP?" |
| Exploratory | Semantic vectors + graph walk | "How do agents communicate?" |
| Temporal | Time-weighted FTS5 | "What happened last week?" |
| Causal | Causal DAG traversal | "Why did revenue drop?" |
| Multi-hop | All strategies + RRF fusion | "How is X related to Y?" |
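The multi-hop path fuses ranked lists from several interfaces via RRF (Reciprocal Rank Fusion). A minimal sketch of that fusion step; the k = 60 constant is the conventional choice, assumed here rather than taken from the project:

```javascript
// Reciprocal Rank Fusion: each ranked list contributes 1 / (k + rank + 1)
// to an item's score; items ranked well across several lists rise to the top.
function rrfFuse(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

RRF needs only rank positions, not comparable scores, which is exactly why it suits fusing FTS5 rank, vector similarity, and graph distance in one pass.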

Five Retrieval Interfaces

1. Keyword Search (FTS5)

D1 supports SQLite FTS5, which is fast for exact and prefix matches:

SELECT * FROM kg_entities_fts
WHERE kg_entities_fts MATCH ?
ORDER BY rank
LIMIT 20;
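One caveat: raw user input can break FTS5's MATCH syntax, since hyphens, quotes, and operators like NEAR are all significant. A small sanitizer is worth running before binding the parameter; this helper is my own sketch, not from the post:

```javascript
// Quote each whitespace-separated term so FTS5 treats it as a literal token.
// Embedded double quotes are escaped by doubling, per SQLite string rules.
function toFtsQuery(userQuery) {
  return userQuery
    .split(/\s+/)
    .filter(Boolean)
    .map(term => `"${term.replace(/"/g, '""')}"`)
    .join(' ');
}
```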

2. Semantic Search (Vectorize)

For queries that need meaning, not just keywords:

const results = await env.VECTORIZE.query(
  embedding, { topK: 20, returnMetadata: true }
);
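Vectorize returns scored matches rather than graph rows, so an adapter step is needed before the results can seed a graph walk. A sketch, where the 0.7 score cutoff is an assumed tuning value, not one from the post:

```javascript
// Keep only confident matches and extract deduplicated entity ids
// for the graph layer. Vectorize match objects carry { id, score, metadata }
// when returnMetadata is requested.
function vectorMatchesToSeeds(matches, minScore = 0.7) {
  const seen = new Set();
  const seeds = [];
  for (const m of matches) {
    if ((m.score ?? 0) < minScore || seen.has(m.id)) continue;
    seen.add(m.id);
    seeds.push(m.id);
  }
  return seeds;
}
```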

3. Chunk Read

Full-text passages stored in a kg_chunks table with their own FTS5 index. This is the "read the actual document" interface.
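Populating kg_chunks implies splitting source documents into passages. The post doesn't show its chunking logic, so here is a minimal fixed-window splitter with overlap as an assumed approach:

```javascript
// Split text into windows of maxLen characters, carrying `overlap`
// characters of context between consecutive chunks so sentences that
// straddle a boundary stay searchable.
function chunkText(text, maxLen = 800, overlap = 100) {
  const chunks = [];
  const step = maxLen - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + maxLen));
    if (start + maxLen >= text.length) break;
  }
  return chunks;
}
```

Character windows are crude; a production version would more likely split on sentence or paragraph boundaries, but the storage side is identical.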

4. Graph Walk

Starting from seed entities, traverse relationships to find connected context:

async function graphWalk(seedIds, depth = 2) {
  const visited = new Set(seedIds);
  let frontier = seedIds;

  for (let d = 0; d < depth && frontier.length > 0; d++) {
    const placeholders = frontier.map(() => '?').join(',');
    const { results } = await db.prepare(`
      SELECT target_id, type, confidence
      FROM kg_relationships
      WHERE source_id IN (${placeholders})
    `).bind(...frontier).all();

    // Filter visited nodes in JS: a NOT IN () clause with an empty
    // set is a SQL syntax error, and re-visiting would loop on cycles.
    frontier = results
      .map(r => r.target_id)
      .filter(id => !visited.has(id));
    frontier.forEach(id => visited.add(id));
  }
  return visited;
}

5. Causal DAG

For "why" questions, traverse cause-effect chains:

SELECT * FROM kg_causal_edges
WHERE cause_id = ?
ORDER BY strength DESC;
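Chaining that query produces a multi-step causal trace. For illustration, here is a pure-JS version that walks an already-fetched edge list, always following the strongest edge. The cause_id and strength columns follow the query above; effect_id and the helper itself are my assumptions:

```javascript
// Follow the strongest outgoing causal edge from rootId, up to maxDepth
// steps, returning the chain of edges traversed.
function followCausalChain(edges, rootId, maxDepth = 3) {
  const byCause = new Map();
  for (const e of edges) {
    const list = byCause.get(e.cause_id) ?? [];
    list.push(e);
    byCause.set(e.cause_id, list);
  }

  const chain = [];
  let current = rootId;
  for (let d = 0; d < maxDepth; d++) {
    const next = (byCause.get(current) ?? [])
      .sort((a, b) => b.strength - a.strength)[0];
    if (!next) break;
    chain.push(next);
    current = next.effect_id;
  }
  return chain;
}
```

A greedy strongest-edge walk is the simplest policy; a beam over the top-k edges per hop would trade latency for recall.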

Self-Improving with Experience Replay

The graph learns which retrieval strategies work best for which queries. After each query, the system logs:

  • Which strategies were used
  • How many results each returned
  • User feedback (if available)

A cron job periodically analyzes these logs and adjusts strategy weights. It's a simple RL-style feedback loop.
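The weight adjustment could be as simple as an exponential moving average toward observed reward. A sketch of that loop; the keying scheme, 0.5 prior, and learning rate are my assumptions, not the project's actual update rule:

```javascript
// Nudge the weight for a (queryType, strategy) pair toward the observed
// reward: 1 for a retrieval the user found helpful, 0 otherwise.
function updateWeight(weights, queryType, strategy, reward, lr = 0.1) {
  const key = `${queryType}:${strategy}`;
  const current = weights[key] ?? 0.5; // uninformed prior
  return { ...weights, [key]: current + lr * (reward - current) };
}
```

At query time, the router would then prefer the highest-weighted strategies for the classified query type, which is the whole feedback loop.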

Performance

On a graph of 182 entities and 214 relationships:

  • Query latency: 1.9-2.3 seconds (including vector search)
  • Ingest latency: ~200ms per entity
  • Cron maintenance: < 1 second (PageRank, belief decay, FTS5 rebuild)

This scales well to thousands of entities. D1 handles millions of rows comfortably.

Deploy Your Own

The entire system is a single Cloudflare Worker with D1 + Vectorize + KV bindings. The full source code is MIT licensed.

If you want the complete system pre-configured, check out the OODA System Blueprint which includes the knowledge graph, all 6 OODA workers, and the Telegram integration.


Have you built knowledge graphs on edge infrastructure? I'm curious about other approaches.
