I built an open-source AI agent platform that prioritizes transparency and security over "magic". Think Open Claw, but with armor.
The Problem with "Invisible" AI Agents
We've all been there. You fire up an AI agent, give it a task, and... silence. Minutes pass. Is it working? Is it stuck? Did it just `rm -rf` something important?
Most AI agent tools today operate like black boxes. They promise autonomy but deliver anxiety. You're supposed to trust that the agent is doing the right thing, even when you have zero visibility into what it's actually doing.
I wanted something different.
Enter CachiBot: Visibility = Security
CachiBot is named after the Venezuelan cachicamo (armadillo) — a creature that's armored, deliberate, and doesn't rush into danger. That's exactly the philosophy behind this project.
Core Philosophy
"If you can't see it, you can't secure it."
Every action CachiBot takes is visible in real-time:
- Thinking streams — See the agent's reasoning as it happens
- Tool calls with arguments — Know exactly what's being executed
- Risk analysis — Dangerous code is flagged BEFORE execution
- Approval workflows — You decide what runs, not the AI
What Makes CachiBot Different?
1. Sandboxed Code Execution with Risk Analysis
CachiBot uses AST-based static analysis to detect dangerous operations before they run:
```python
# This triggers a HIGH risk warning:
import subprocess
subprocess.run(["rm", "-rf", "/"])

# Risk Analysis:
# - CRITICAL: subprocess import detected
# - HIGH: Potentially destructive command
# - Action: Requires user approval
```
The sandbox only allows safe imports (json, math, datetime, etc.) and blocks anything that could compromise your system.
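As a rough sketch, this kind of AST-based import screening can be built on Python's standard `ast` module. The rule table, risk labels, and allowlist below are illustrative assumptions, not CachiBot's actual rule set:

```python
import ast

# Illustrative risk rules -- not CachiBot's real tables.
RISKY_IMPORTS = {"subprocess": "CRITICAL", "os": "HIGH", "socket": "HIGH"}
SAFE_IMPORTS = {"json", "math", "datetime", "re", "itertools"}

def analyze_risk(source: str) -> list[str]:
    """Walk the AST and flag any import outside the safe allowlist."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([alias.name for alias in node.names]
                     if isinstance(node, ast.Import)
                     else [node.module or ""])
            for name in names:
                root = name.split(".")[0]  # "os.path" -> "os"
                if root in RISKY_IMPORTS:
                    findings.append(f"{RISKY_IMPORTS[root]}: import of '{root}'")
                elif root not in SAFE_IMPORTS:
                    findings.append(f"MEDIUM: unknown import '{root}'")
    return findings

print(analyze_risk("import subprocess\nsubprocess.run(['ls'])"))
# → ["CRITICAL: import of 'subprocess'"]
```

Because the check runs on the parsed syntax tree rather than on the code's output, nothing executes before the verdict is in.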
2. Real-Time WebSocket Streaming
No more guessing. Watch everything happen live:
```
[THINKING] Analyzing the file structure...
[TOOL_START] file_list(path="/workspace")
[TOOL_END] Found 12 files
[THINKING] I should read the config file first...
[APPROVAL_NEEDED] python_execute: Risk level HIGH
  └─ Code: os.environ['API_KEY']
  └─ Reason: Accessing environment variables
[USER] ✓ Approved
[TOOL_END] Result: sk-xxx...
```
3. Multi-Bot Management
Create specialized bots for different tasks:
| Bot | Purpose | Model |
|---|---|---|
| CodeReviewer | PR reviews | Claude Sonnet 4 |
| DataAnalyst | CSV/SQL work | GPT-4o |
| LocalHelper | Quick tasks | Ollama (llama3.1) |
| Researcher | Web searches | Kimi K2.5 |
Each bot has its own:
- System prompt and personality
- Enabled tools and capabilities
- Knowledge base (RAG with vector search)
- Platform connections (Telegram, Discord)
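Conceptually, each bot is just a bundle of those settings. Here is a minimal sketch of what such a per-bot configuration could look like; the class and field names are illustrative assumptions, not CachiBot's real schema:

```python
from dataclasses import dataclass, field

# Illustrative per-bot configuration; not CachiBot's actual data model.
@dataclass
class BotConfig:
    name: str
    model: str
    system_prompt: str
    tools: list[str] = field(default_factory=list)      # enabled capabilities
    platforms: list[str] = field(default_factory=list)  # e.g. telegram, discord

reviewer = BotConfig(
    name="CodeReviewer",
    model="claude-sonnet-4",
    system_prompt="You review pull requests for bugs and style issues.",
    tools=["file_read", "python_execute"],
    platforms=["telegram"],
)
print(reviewer.name, "->", reviewer.model)
```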
4. Built-in Work Management
CachiBot isn't just a chat interface — it's a work orchestration system:
```
Work Item: "Refactor authentication module"
├── Task 1: Analyze current auth flow    [COMPLETED]
├── Task 2: Design new token system      [IN_PROGRESS]
├── Task 3: Implement refresh tokens     [BLOCKED by Task 2]
├── Task 4: Write unit tests             [PENDING]
└── Task 5: Update documentation         [PENDING]
```
Schedule jobs with cron expressions, set up event triggers, and track everything.
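The dependency logic behind a status like "BLOCKED by Task 2" is simple to sketch: a task stays blocked until everything it depends on is completed. The data model below is illustrative, not CachiBot's internal representation:

```python
# Minimal sketch of dependency-aware task states; names are illustrative.
def resolve_status(task: str, deps: dict[str, list[str]], done: set[str]) -> str:
    """A task is BLOCKED until every dependency is in the done set."""
    if task in done:
        return "COMPLETED"
    if all(d in done for d in deps.get(task, [])):
        return "PENDING"
    return "BLOCKED"

deps = {"Implement refresh tokens": ["Design new token system"]}
done = {"Analyze current auth flow"}

print(resolve_status("Implement refresh tokens", deps, done))  # → BLOCKED
```

Once "Design new token system" lands in the done set, the same call flips to PENDING and the task becomes schedulable.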
5. Platform Connections
Connect your bots to the real world:
```
[Telegram] @MyAssistantBot
├── Status: Connected
├── Messages: 1,247
└── Last active: 2 minutes ago

[Discord] CachiBot#1234
├── Status: Connected
├── Servers: 3
└── Last active: Just now
```
Messages route automatically to the right bot based on your configuration.
The Tech Stack
Backend (Python 3.10+)
- FastAPI + async everywhere
- Prompture (my structured LLM library)
- SQLite with sqlite-vec for embeddings
- Sandboxed Python execution
Frontend (React 19 + TypeScript)
- Vite 6 for lightning-fast builds
- Zustand for state management
- Tailwind CSS for styling
- WebSocket for real-time updates
LLM Providers
- Anthropic (Claude)
- OpenAI (GPT-4o)
- Moonshot (Kimi K2.5)
- Ollama (local models)
- Groq (fast inference)
Quick Start
```bash
# Install
pip install cachibot

# Start the server
cachibot-server

# Or use the CLI directly
cachibot "Analyze this Python file for security issues"

# Interactive mode
cachibot -i
```
Then open http://localhost:6392 for the full dashboard experience.
The Approval System in Action
Here's what happens when CachiBot tries to do something risky:
```
┌────────────────────────────────────────────────────────────┐
│ APPROVAL REQUIRED                                          │
├────────────────────────────────────────────────────────────┤
│ Tool: python_execute                                       │
│ Risk Level: HIGH                                           │
│                                                            │
│ Code:                                                      │
│ ┌──────────────────────────────────────────────────────┐   │
│ │ import requests                                      │   │
│ │ response = requests.get(url)                         │   │
│ │ with open('data.json', 'w') as f:                    │   │
│ │     f.write(response.text)                           │   │
│ └──────────────────────────────────────────────────────┘   │
│                                                            │
│ Risks Detected:                                            │
│ • Network request (requests.get)                           │
│ • File write operation                                     │
│                                                            │
│ [APPROVE]  [DENY]  [APPROVE ALL FOR THIS SESSION]          │
└────────────────────────────────────────────────────────────┘
```
You stay in control. Always.
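The gate itself boils down to: if the risk level crosses a threshold, ask before running, and deny by default. Here is a minimal sketch of that pattern; the risk levels, threshold, and function names are illustrative, not CachiBot's actual API:

```python
# Sketch of an approval gate; names and thresholds are illustrative.
RISK_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def needs_approval(risk: str, threshold: str = "HIGH") -> bool:
    """Anything at or above the threshold must be approved first."""
    return RISK_ORDER.index(risk) >= RISK_ORDER.index(threshold)

def run_tool(code: str, risk: str, approve) -> str:
    """Execute only low-risk code, or risky code the user approved."""
    if needs_approval(risk) and not approve(code, risk):
        return "DENIED"
    return "EXECUTED"  # hand off to the sandbox here

# A deny-by-default approver keeps the human in the loop:
print(run_tool("open('data.json', 'w')", "HIGH", lambda code, risk: False))
# → DENIED
```

The important design choice is that `approve` is a callback to the user, not a policy the model can talk its way around.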
Why Not Just Use Open Claw / Open WebUI / etc.?
Great projects! But they solve different problems.
| Feature | Open Claw | CachiBot |
|---|---|---|
| Focus | Chat interface | Agent execution |
| Code execution | Basic | Sandboxed + Risk analysis |
| Approval system | No | Yes |
| Multi-bot | Limited | Full support |
| Work management | No | Built-in |
| Real-time streaming | Partial | Full WebSocket |
CachiBot is for when you need an AI that does things — safely.
What's Next?
- [ ] MCP (Model Context Protocol) support
- [ ] More platform adapters (Slack, WhatsApp)
- [ ] Plugin marketplace
- [ ] Team collaboration features
- [ ] Self-hosted cloud option
Try It Out
```bash
pip install cachibot
cachibot-server
```
GitHub: github.com/jhd3197/CachiBot
Website: cachibot.com
Star the repo if you believe AI agents should be transparent!