Claude Code has quickly become one of my favorite tools for repo-aware AI workflows. It understands your codebase, navigates files, summarizes diffs, runs tools, and integrates with Git—all through a simple CLI.
But there’s a catch:
The Claude Code CLI expects to speak directly to Anthropic’s hosted backend.
That means if you want to:
- use Databricks-hosted Claude models,
- route requests through Azure’s Anthropic /anthropic/v1/messages endpoint,
- extend Claude Code with local tools and Model Context Protocol (MCP) servers,
- add prompt caching,
- or simply run your own backend for experimentation…
…you’re out of luck.
So I built Lynkr, a self-hosted Claude Code–compatible proxy that solves this.
👉 GitHub: https://github.com/vishalveerareddy123/Lynkr
🚀 What Lynkr Does
At a high level:
Lynkr is an HTTP proxy that emulates the Claude Code backend, forwards requests to Databricks or Azure Anthropic, and wires in workspace tools, Git helpers, prompt caching, and MCP servers.
You can continue using the regular Claude Code CLI, but point it at your own backend:
Claude Code CLI → Lynkr → Databricks / Azure Anthropic / MCP tools
This lets you keep the familiar development workflow while customizing everything under the hood.
🔧 Core Features
- Provider Adapters
Built-in support for two upstream providers:
- Databricks (default)
- Azure Anthropic
Requests are normalized so the CLI sees standard Claude-style responses.
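As a rough sketch of what that normalization amounts to: whatever the upstream provider returns gets reshaped into the Claude Messages response format the CLI expects. This is an illustration of the idea, not Lynkr’s actual adapter code; the function name and the way the upstream text arrives are assumptions.

```javascript
// Hypothetical adapter step: wrap upstream text in a Claude Messages-style
// response so the CLI sees a standard shape regardless of provider.
function toClaudeResponse(upstreamText, model) {
  return {
    id: "msg_" + Math.random().toString(36).slice(2), // synthetic message id
    type: "message",
    role: "assistant",
    model,
    content: [{ type: "text", text: upstreamText }],
    stop_reason: "end_turn",
  };
}
```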
- Repo Intelligence
Lynkr builds a lightweight SQLite index of your workspace:
- Symbol definitions & references
- Framework & dependency hints
- Language mix
- Lint/test config discovery
It also generates a CLAUDE.md summary that gives the model structured context about your project.
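To give a flavor of that summary, a generated CLAUDE.md might contain something along these lines. The exact layout is Lynkr’s own; this sketch just mirrors the categories listed above (symbols, frameworks, language mix, lint/test config), and every value in it is made up:

```markdown
# Project Summary

- Languages: TypeScript (78%), JavaScript (22%)
- Frameworks: Express
- Tests: jest (configured in package.json)
- Lint: eslint (.eslintrc.json)

## Key Symbols
- createServer (src/server.ts) — referenced from 12 files
```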
- Git Workflow Integration
Includes Git helpers similar to Claude Code’s:
- status, diff, stage, commit, push, pull
- diff review summaries
- release-note generation
Plus policy guards:
- POLICY_GIT_ALLOW_PUSH
- POLICY_GIT_REQUIRE_TESTS
- POLICY_GIT_TEST_COMMAND
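For example, to block pushes and require a passing test run before commits, you might set something like the following (the values here are illustrative; check the README for the exact accepted values):

```
POLICY_GIT_ALLOW_PUSH=false
POLICY_GIT_REQUIRE_TESTS=true
POLICY_GIT_TEST_COMMAND="npm test"
```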
- Prompt Caching
A local LRU + TTL cache keyed by prompt signature:
- speeds up repeated prompts
- reduces Databricks/Azure token usage
- avoids re-running identical analysis steps
Tool-invoking turns bypass the cache to avoid unsafe side effects.
- MCP Orchestration
Lynkr automatically:
- discovers MCP manifests
- launches servers
- wraps them with JSON-RPC
- exposes all tools back to the assistant
Optional Docker sandboxing isolates MCP tools when needed.
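On the wire, that JSON-RPC wrapping boils down to envelopes like these. `tools/list` and `tools/call` are standard MCP method names; the helper function here is a sketch, and the tool being called is just the one from the example later in this post.

```javascript
// Sketch of the JSON-RPC 2.0 envelopes an MCP bridge exchanges with a server.
let nextId = 1;

function rpcRequest(method, params) {
  return { jsonrpc: "2.0", id: nextId++, method, params };
}

// Ask the server what tools it offers…
const listTools = rpcRequest("tools/list", {});

// …then invoke one of them by name with its arguments.
const callTool = rpcRequest("tools/call", {
  name: "workspace_index_rebuild",
  arguments: {},
});

// Each request is serialized to JSON and written to the server
// (over stdio or HTTP, depending on the transport).
console.log(JSON.stringify(callTool));
```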
- Workspace Tools
Includes:
- repo indexing
- symbol search
- diff review
- test runner
- file I/O tools
- lightweight task tracker (TODOs stored in SQLite)
- Full Transparency
Everything is logged (Pino-based structured logs), including:
- request/response traces
- repo indexer events
- prompt cache hits/misses
- MCP registry diagnostics
No black boxes.
🧱 Architecture Overview
Claude Code CLI
↓ (HTTP)
Lynkr Proxy (Express)
├─ Orchestrator (agent loop)
├─ Prompt Cache (LRU + TTL)
├─ Session DB (SQLite)
├─ Repo Indexer (Tree-sitter + CLAUDE.md)
├─ Tool Registry (workspace + git + diff + test)
├─ MCP Registry (JSON-RPC bridge)
└─ Provider Adapters (Databricks / Azure Anthropic)
The codebase is intentionally small and hackable—everything lives in src/.
🛠️ Installing Lynkr
Prerequisites
Node.js 18+
npm
Databricks or Azure Anthropic credentials
(Optional) Docker for MCP sandboxing
(Optional) Claude Code CLI
Install from npm
npm install -g lynkr
lynkr start
or via Homebrew:
brew tap vishalveerareddy123/lynkr
brew install vishalveerareddy123/lynkr/lynkr
or from source:
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr
npm install
npm start
⚙️ Configuring the Proxy
Databricks
MODEL_PROVIDER=databricks
DATABRICKS_API_BASE=https://<your-workspace>.cloud.databricks.com
DATABRICKS_API_KEY=
WORKSPACE_ROOT=/path/to/repo
PORT=8080
Azure Anthropic
MODEL_PROVIDER=azure-anthropic
AZURE_ANTHROPIC_ENDPOINT=https://<your-resource>.services.ai.azure.com/anthropic/v1/messages
AZURE_ANTHROPIC_API_KEY=
AZURE_ANTHROPIC_VERSION=2023-06-01
WORKSPACE_ROOT=/path/to/repo
PORT=8080
🧩 Hooking Up Claude Code CLI
export ANTHROPIC_BASE_URL=http://localhost:8080
export ANTHROPIC_API_KEY=dummy # required by CLI but unused by Lynkr
Then run the CLI normally inside your repo.
Everything—tool calls, chat, diffs, navigation—flows through your proxy.
🔍 Example: calling a tool
curl http://localhost:8080/v1/messages \
-H "Content-Type: application/json" \
-d '{
"model": "claude-proxy",
"messages": [{ "role": "user", "content": "Rebuild the repo index." }],
"tools": [{
"name": "workspace_index_rebuild",
"type": "function",
"input_schema": { "type": "object" }
}],
"tool_choice": {
"type": "function",
"function": { "name": "workspace_index_rebuild" }
}
}'
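Since Node.js 18+ (already a prerequisite) ships a global fetch, the same call can be made from a script. The payload below mirrors the curl example exactly; the function name is just for illustration, and it assumes Lynkr is listening on localhost:8080.

```javascript
// Same tool-invocation request as the curl example, sent from Node.
const payload = {
  model: "claude-proxy",
  messages: [{ role: "user", content: "Rebuild the repo index." }],
  tools: [
    {
      name: "workspace_index_rebuild",
      type: "function",
      input_schema: { type: "object" },
    },
  ],
  tool_choice: {
    type: "function",
    function: { name: "workspace_index_rebuild" },
  },
};

async function callLynkr() {
  const res = await fetch("http://localhost:8080/v1/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```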
🐛 Troubleshooting Highlights
- Missing path → check your tool arguments
- Git commands blocked → check POLICY_GIT_ALLOW_PUSH
- MCP server not discovered → check manifest locations
- Prompt cache not working → ensure no tools are used in the request
- Web fetch returns HTML scaffolding → JS execution is not supported (use JSON APIs)
🗺️ Roadmap
Coming next:
- per-file threaded diff comments
- risk scoring on diffs
- LSP bridging for deeper language understanding
- declarative “skills” layer
- historical coverage and test dashboards
🎯 Why I Built This
I love the Claude Code UX, but I wanted the ability to:
- run everything locally
- plug in Databricks and Azure Anthropic
- add my own tools and MCP servers
- see and debug all internal behavior
- experiment quickly without platform constraints
If you’re exploring AI-assisted development on Databricks or Azure—and want more control over your backend—Lynkr might be useful.
👉 GitHub link: https://github.com/vishalveerareddy123/Lynkr
⭐ Contributions, ideas, and issues welcome.