I've been using 3+ AI tools daily — Claude, GPT-4, Ollama locally — and I kept running into the same problems:
- No way to verify outputs when AI tools disagree with each other
- No idea what I'm spending across multiple providers
- No unified view of what's running locally vs in the cloud
So I built TrustLayer — an open-source trust layer that sits between you and every AI tool you use.
What it does
Verification Engine — Every AI output gets a 0-100 trust score. Detects overconfident language, hallucination patterns, and unverified claims. Uses your local Ollama to cross-check suspicious claims.
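To give a feel for what heuristic scoring can look like, here's a minimal sketch (not TrustLayer's actual algorithm; the phrase lists and point values are made up for illustration) that starts at 100 and docks points for overconfident or unverifiable language:

```python
import re

# Hypothetical phrase lists and weights -- illustration only, not the
# real TrustLayer detectors.
OVERCONFIDENT = [r"\bdefinitely\b", r"\bguaranteed\b", r"\balways\b", r"\bnever\b"]
UNVERIFIED = [r"\bstudies show\b", r"\bit is well known\b"]

def trust_score(text: str) -> int:
    """Score from 0-100; lower means more suspicious language."""
    score = 100
    lowered = text.lower()
    for pattern in OVERCONFIDENT:
        score -= 10 * len(re.findall(pattern, lowered))
    for pattern in UNVERIFIED:
        score -= 15 * len(re.findall(pattern, lowered))
    return max(0, min(100, score))
```

A real engine layers on claim extraction and a cross-check against a local model, but the shape is the same: flag patterns, aggregate into one number.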
OpenAI-Compatible Proxy — Set one environment variable and all your tools route through TrustLayer:
export OPENAI_BASE_URL=http://localhost:8000/v1
Now Claude Code, Cursor, Aider, and your own scripts all flow through TrustLayer for logging and cost tracking — with zero code changes.
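Any OpenAI-compatible client works the same way. Here's the idea with only the standard library, pointing at the proxy's default address from the quick start (the port is an assumption; adjust if you changed it):

```python
import json
import os
import urllib.request

# The proxy speaks the OpenAI chat completions wire format, so a plain
# HTTP POST works. OPENAI_BASE_URL falls back to the local default here.
base_url = os.environ.get("OPENAI_BASE_URL", "http://localhost:8000/v1")

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    },
)
# urllib.request.urlopen(req) would send it; the proxy logs the request,
# scores the response, and forwards it upstream.
```

The OpenAI Python SDK reads `OPENAI_BASE_URL` automatically, which is why existing tools need zero code changes.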
Cost Tracking — See exactly what you're spending per provider, per model, per day. Set budgets. Get alerts.
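Per-request cost is just token counts times per-token prices. A sketch of the arithmetic (the prices below are illustrative placeholders, not TrustLayer's table; check your provider's current pricing):

```python
# (input $/1M tokens, output $/1M tokens) -- placeholder numbers.
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, given token usage from the API response."""
    cost_in, cost_out = PRICES[model]
    return (input_tokens * cost_in + output_tokens * cost_out) / 1_000_000
```

Sum these per provider, model, and day, and budgets/alerts are a comparison away.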
Smart Routing — Automatically route simple tasks to cheaper models. TrustLayer classifies your prompts and saves you money by not sending "what's 2+2" to GPT-4o when GPT-4o-mini handles it fine.
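The classifier can be surprisingly simple. A hedged sketch of the routing idea (the thresholds, marker words, and model names are assumptions for illustration, not the shipped logic):

```python
def pick_model(prompt: str) -> str:
    """Route short, simple prompts to the cheap model; everything else
    to the stronger one. Heuristic for illustration only."""
    hard_markers = ("```", "refactor", "prove", "analyze")
    if len(prompt) < 80 and not any(m in prompt.lower() for m in hard_markers):
        return "gpt-4o-mini"
    return "gpt-4o"
```

In practice you'd tune this on real traffic, but even a crude length-plus-keywords rule catches the "what's 2+2" class of prompts.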
CLI Tool Detection — Auto-detects Ollama, Claude Code, Gemini CLI, and Aider on your machine, and shows them alongside your API providers.
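Detection can be as simple as checking `PATH` for known binaries. A minimal sketch (the binary names below are my guesses, not a confirmed list from TrustLayer):

```python
import shutil

# Assumed binary-name -> display-name mapping; illustration only.
KNOWN_CLIS = {
    "ollama": "Ollama",
    "claude": "Claude Code",
    "gemini": "Gemini CLI",
    "aider": "Aider",
}

def detect_clis() -> set[str]:
    """Return display names of known AI CLIs found on PATH."""
    return {name for binary, name in KNOWN_CLIS.items() if shutil.which(binary)}
```

A running Ollama server can additionally be detected by probing its default local port.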
Quick start
pip install trustlayer-ai
trustlayer server
That's it. No account needed. No config files. Ollama gets auto-detected if it's running.
100% local
Your data never leaves your machine. Everything is stored in a local SQLite database. The web UI runs on localhost. Open source under MIT license.
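"Local SQLite" means request logs are ordinary rows you can query yourself. A sketch of what that storage pattern looks like (the table name and columns here are assumptions, not TrustLayer's actual schema; an in-memory database stands in for the on-disk file):

```python
import sqlite3

# ":memory:" for the demo; the real app would use a file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE requests (
    ts TEXT, provider TEXT, model TEXT,
    input_tokens INTEGER, output_tokens INTEGER, cost REAL)""")
conn.execute(
    "INSERT INTO requests VALUES (datetime('now'), ?, ?, ?, ?, ?)",
    ("openai", "gpt-4o-mini", 1200, 300, 0.00036),
)
rows = conn.execute("SELECT provider, model FROM requests").fetchall()
```

Since it's plain SQLite, any tool that reads the file (the `sqlite3` CLI, a notebook, a BI tool) can inspect your usage directly.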
Stack
- Backend: Python, FastAPI, SQLite
- Frontend: React, Tailwind, Recharts
- CLI: Typer + Rich
Links
- GitHub: github.com/acunningham-ship-it/trustlayer
- PyPI: pypi.org/project/trustlayer-ai
- Website: acunningham-ship-it.github.io/trustlayer
Would love feedback — especially on what features would make you use this daily. The proxy and smart routing are brand new in v0.2.