We built incredible AI tools. Then we built walls between them, and forgot to lay the road infrastructure.
7 min read - by Vektor Memory · vektormemory.com
How Via solves the context amnesia problem across Claude, Cursor, Windsurf, ChatGPT and every other AI tool in your stack.
The Roman Empire didn't conquer the known world because Roman soldiers were stronger than everyone else's soldiers. They conquered it because they could move faster. Legions reached the frontier in days. Supplies followed. Intelligence flowed back. The roads weren't a luxury - they were the strategic layer that made everything else possible.
Consider what you're looking at right now.
You have Claude Code shipping features. Cursor refactoring the same codebase. Windsurf doing the sweep-and-fix. ChatGPT running the research pass. LangChain wiring the pipeline. Each one genuinely capable. Each one, in isolation, impressive.
And none of them know what the others did.
Claude forgets what you did in Cursor. Cursor forgets what you built in Windsurf. The moment you switch tools - or open a new session - context resets to zero. Every tool is a city-state with its own dialect, its own memory, its own walls. You built the legions. You forgot the roads.
Via is that road infrastructure: boring, but necessary.
The Problem Isn't the Tools
It's worth being precise about this: the individual AI tools aren't broken. Claude is extraordinary at reasoning. Cursor knows how to stay in the flow of a codebase. Windsurf is surgical. The capability is real.
The broken layer is the connective tissue between them.
When you work across multiple AI tools in a single day - which, if you're building anything serious, you do - you are manually performing a job that should be automated. You are the context bus. You paste the summary from Claude into Cursor's system prompt. You re-explain the architecture to Windsurf that Claude already knows. You tell ChatGPT about the decision you made two tools ago. You are the integration layer, burning cognitive load on plumbing instead of thinking.
The ancient Romans had a word for the network that connected their empire: via. Road. Route. Way through.
That's the gap. And that's why it has that name.
What Via Actually Does
Via is an open source CLI - npm install -g @vektor/via, zero runtime dependencies - that creates a shared memory, task, and context bus across every AI tool in your stack.
It doesn't replace any tool. It connects them all.
npm install -g @vektor/via
via --help
Here's what that looks like in practice.
via init - Wire Everything in One Command
New project. New machine. Standing up a complete AI working environment used to take the better part of an afternoon.
via init # detects Claude Desktop, Cursor, Windsurf - wires them all
via init --dry-run # preview what would change
Via detects what tools you have installed and writes the correct MCP server config for each one automatically. One command. Fully wired. Restart your tools and Via is live.
via memory - Relationship-Aware Knowledge Storage
The simplest interface possible for storing what matters.
via memory add "JWT tokens expire in 1h"
via memory add --file ./src/
via memory search "auth"
via memory graph
The file ingestion is where it gets interesting. Point Via at a codebase and it extracts symbols, function definitions, and import relationships from JS, TypeScript, Python, Go, Rust, and ten other languages - then builds an import graph in local SQLite. No embeddings. No API calls. No external dependencies.
When you search, Via traverses the graph:
via memory search "auth"
Direct matches (2 files)
● auth.js ./src/auth.js
● config.js ./src/config.js
Related via imports (3 files)
○ server.js ./src/server.js
○ middleware.js ./src/middleware.js
○ routes.js ./src/routes/auth.js
You asked about auth. Via returned auth - and everything that imports it. That's the answer a developer actually needs, not a list of files containing the string "auth."
The graph structure is the similarity signal. No vector database required.
via task - One Task Board, Every Tool Can Read It
A persistent task board that lives outside any single tool and is readable by all of them via MCP.
via task add "refactor auth module" --high
via task
via task start
via task done
The MCP server integration is what makes this useful at scale. When Via is running as an MCP server, Claude Desktop and Cursor can call via_task_list and via_task_add natively - without you copy-pasting task state between sessions. The board is the single source of truth every tool reads from.
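For context, an MCP server is registered with a client like Claude Desktop through its config file (`claude_desktop_config.json`). The `mcpServers` shape below is the standard MCP client format; the exact entry Via writes is an assumption:

```json
{
  "mcpServers": {
    "via": {
      "command": "via",
      "args": ["serve"]
    }
  }
}
```

After a restart, the client's model can invoke the server's tools over stdio without any manual copy-pasting.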
via diff - Compare AI Tools Side by Side
The feature no other tool has. Ask the same question to Claude and Cursor, then see exactly where they agree, where they diverge, and which unique concepts each one brought.
via diff "explain microservices"
via diff add claude "Microservices split apps into small independent services..."
via diff add cursor "Microservices are small focused services that communicate via APIs..."
via diff show
┌─ DIFF - explain microservices ────────────────
│ claude 12 words
│ cursor 14 words
│ similarity 21% word overlap
│
│ claude | cursor
│ ────────────────────────────── | ──────────────────────────────
│ Microservices split apps into | Microservices are small focused
│ small independent services... | services that communicate via...
│
│ claude unique terms independent, database
│ cursor unique terms focused, communicate, deployed
└───────────────────────────────────────────────
Similarity score, word count, unique concepts per tool. The output tells you which tool reasoned differently and on what - which is the question worth answering before you decide which answer to trust.
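The similarity score itself can be as simple as set overlap over the union of words in both answers. A minimal sketch of that idea (an illustration, not Via's exact algorithm):

```javascript
// Word-overlap similarity: shared words divided by the union of words,
// as a percentage. A sketch of the kind of score `via diff` reports.
function wordOverlap(a, b) {
  const words = (s) => new Set(s.toLowerCase().match(/[a-z]+/g) || []);
  const wa = words(a);
  const wb = words(b);
  const shared = [...wa].filter((w) => wb.has(w)).length;
  const union = new Set([...wa, ...wb]).size;
  return Math.round((100 * shared) / union); // percent overlap
}

const claude = "Microservices split apps into small independent services";
const cursor = "Microservices are small focused services that communicate via APIs";
console.log(wordOverlap(claude, cursor)); // prints 23
```

The unique-terms lists fall out of the same sets: words in one answer's set but not the other's.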
via log - Unified Activity Log
One place for everything that happened across your AI tools.
via log "decided to use postgres" --tool claude
via log --scan # one-shot capture of all Claude Code sessions
via log --watch # live capture as sessions complete
via log --today
via log search "postgres"
The --scan flag reads Claude Code's session files directly and auto-captures session titles, models used, and turn counts - without you logging anything manually. The --watch flag keeps running and captures new sessions as they appear.
via ask - Route a Question to the Right Tool
via ask "should I use postgres or sqlite?" # opens recommended tool
via ask "refactor auth module" --tool cursor # force a specific tool
via ask "explain this architecture" --no-open # recommend only
Via scores the question against capability profiles for each installed tool and opens the best match.
The routing isn't keyword matching pretending to be intelligence - it's a scored capability matrix against what you actually have installed.
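One hypothetical way to implement such a matrix: keep a per-tool capability profile, detect which capabilities a question touches, and sum the scores. The profiles and keywords below are invented for illustration, not Via's actual routing data:

```javascript
// Hypothetical capability-matrix routing. Profiles, capabilities, and
// keywords are invented for illustration.
const PROFILES = {
  claude: { research: 3, architecture: 3, refactor: 1 },
  cursor: { refactor: 3, codebase: 3, research: 1 },
};

const KEYWORDS = {
  research: ["should", "compare", "explain", "why"],
  refactor: ["refactor", "rename", "clean"],
  architecture: ["architecture", "design", "system"],
  codebase: ["module", "file", "function"],
};

function route(question, installed = Object.keys(PROFILES)) {
  const q = question.toLowerCase();
  // Sum a tool's score for every capability the question touches.
  const score = (tool) =>
    Object.entries(KEYWORDS).reduce(
      (sum, [cap, words]) =>
        sum + (words.some((w) => q.includes(w)) ? PROFILES[tool][cap] || 0 : 0),
      0
    );
  return installed.sort((a, b) => score(b) - score(a))[0];
}

console.log(route("refactor auth module")); // prints cursor
```

Because the matrix only scores tools that are actually installed, the recommendation degrades gracefully when a tool is missing.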
via handoff - Transfer Full Working State Between Tools
via handoff --export # saves .vstate.json
via handoff --import ./sprint3.vstate.json # restore on any machine
via handoff --list
The .vstate.json spec is Via's portable state format - a structured snapshot of everything the next tool needs to pick up without asking. Finish a deep architecture session in Claude. Export. Open Cursor. It already knows.
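The article doesn't reproduce the spec, so the snapshot below is only a guess at its spirit; every field name here is hypothetical, not the published format:

```json
{
  "version": 1,
  "exported_at": "2025-01-15T14:30:00Z",
  "memories": [
    { "text": "JWT tokens expire in 1h", "tags": ["auth"] }
  ],
  "tasks": [
    { "title": "refactor auth module", "priority": "high", "status": "open" }
  ],
  "log": [
    { "entry": "decided to use postgres", "tool": "claude" }
  ]
}
```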
via serve - Run as an MCP Server
via serve # stdio (Claude Desktop, Cursor, Windsurf)
via serve --sse # HTTP+SSE mode
The Architecture Underneath
Via uses SQLite locally for everything. Zero external dependencies for core commands. No embeddings. No API calls for indexing. Your state lives on your machine.
The memory graph is pure SQLite - nodes are files, edges are import relationships, search is a two-hop traversal. The same architecture that graph databases charge enterprise licensing fees for, running in a 35KB npm package.
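The traversal itself is simple enough to sketch without SQLite at all: find direct matches, then follow import edges one hop back to the files that import them. Plain arrays stand in for the nodes and edges tables below, and matching on file names stands in for Via's content search:

```javascript
// Sketch of graph-aware search: direct matches first, then one hop along
// reversed import edges. Via stores this in SQLite; plain arrays keep the
// sketch self-contained. Edge list mirrors the example output above.
const edges = [
  // [importer, imported]
  ["server.js", "auth.js"],
  ["middleware.js", "auth.js"],
  ["routes.js", "auth.js"],
  ["auth.js", "config.js"],
];

function search(term) {
  const files = new Set(edges.flat());
  // Direct matches: file names containing the term.
  const direct = [...files].filter((f) => f.includes(term));
  // Related: files that import a direct match (one hop back).
  const related = edges
    .filter(([, imported]) => direct.includes(imported))
    .map(([importer]) => importer)
    .filter((f) => !direct.includes(f));
  return { direct, related: [...new Set(related)] };
}

console.log(search("auth"));
```

In SQLite the same hop is a join from the edges table back onto the matched nodes; a second join gives the two-hop neighbourhood.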
The .vstate.json handoff format is an open spec. Any tool can read it, any tool can write it. The design principle is intentional: Via doesn't want to be the only thing that understands its own format.
For teams that need semantic search across shared memory, graph traversal of decision history, or multi-machine sync, the upgrade path is Vektor Slipstream - the intelligence layer Via is built to connect to. Local SQLite handles the single-developer case. Slipstream handles the cases that need more.
Via is part of a broader open source ecosystem from Vektor Memory:
Tool         What it does
Via          Route context and execution across all AI tools
Vex          Migrate agent memory between vector stores
Slipstream   Graph memory, vector search, multimodal
Why This Is the Right Moment
The AI tooling space has optimised hard for individual tool capability. Every major coding assistant is measurably better than it was twelve months ago. The benchmark numbers keep going up.
What hasn't kept pace is the infrastructure layer between the tools.
The implicit assumption was that developers would pick one tool and stay in it. That assumption was wrong from the start and is demonstrably wrong now. Production AI workflows are multi-tool by necessity - different tools are better at different things, different contexts call for different capabilities, and no single provider has everything.
The Roman legions didn't wait for one city to become perfect before they needed roads. They built the roads because the empire was already distributed. The AI stack is already distributed. The roads are overdue.
Getting Started
npm install -g @vektor/via
via init
via memory add "your first fact"
via task add "your first task"
via serve
Requirements: Node.js >= 18. Zero runtime dependencies for core commands.
GitHub: github.com/Vektor-Memory/Via
Intelligence upgrade: vektormemory.com
The empire was already large before Rome built the roads. The question was never whether the roads were needed. The question was only who would build them first.