Most AI tools forget everything the moment you close the tab.
Your scripts do the same. A deploy script cannot remember what happened last week. A project helper does not know your stack. A cron job starts from zero every morning.
Zenii changes that by giving your machine one shared AI memory. Store context from Python, recall it from Bash, ask from Node, or continue from the desktop app. Same memory. Same local backend. No framework, SDK, or hosted service required.
Here's a Python script with a memory that survives restarts, a Bash deploy script that remembers every deployment it ever ran, and a Node.js project assistant that knows your conventions. They're all 10-15 lines of code, and they share the same brain.
Setup (2 minutes)
Install Zenii as a single Rust binary, or grab the desktop app for your platform from the releases page.
# Linux / macOS
curl -fsSL https://raw.githubusercontent.com/sprklai/zenii/main/install.sh | bash
# Or download for your platform: https://github.com/sprklai/zenii/releases/latest
Start the daemon and add an AI provider key:
zenii-daemon &
# → Listening on 127.0.0.1:18981
# Add an OpenAI key (or Anthropic, Google, Ollama for offline)
curl -X POST localhost:18981/credentials \
-H "Content-Type: application/json" \
-d '{"key":"api_key:openai", "value":"sk-your-key-here"}'
Verify it's running:
curl localhost:18981/health
# → {"status":"ok"}
If the daemon isn't running, you'll get a connection refused error:
curl localhost:18981/health
# → curl: (7) Failed to connect to localhost port 18981: Connection refused
In practice, that's the main failure mode. Start the daemon and try again.
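If a script depends on the daemon, it's worth failing fast with a clear message instead of a raw stack trace. Here's a minimal sketch against the /health endpoint above; the wait_for_zenii helper is my name for it, not part of Zenii:

import time
import requests

BASE = "http://localhost:18981"

def wait_for_zenii(timeout=10):
    """Poll /health until the daemon answers, or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(f"{BASE}/health", timeout=1).json().get("status") == "ok":
                return True
        except requests.RequestException:
            time.sleep(0.5)  # daemon not up yet; retry shortly
    return False

if not wait_for_zenii():
    raise SystemExit("zenii-daemon is not running; start it with `zenii-daemon &`")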
What if your machine just... knew things?
Before diving into language-specific examples, here's the core idea:
# Morning: store context from your deploy script
curl -X POST localhost:18981/memory \
-H "Content-Type: application/json" \
-d '{"key":"infra", "content":"Migrated staging to k8s, port 8443"}'
# Afternoon: ask from a completely different tool
curl -X POST localhost:18981/chat \
-H "Content-Type: application/json" \
-d '{"session_id":"ops", "prompt":"How do I connect to staging?"}'
# → "Staging is now on Kubernetes, port 8443..."
The memory persists across restarts, tools, and sessions. Store from Python, recall from Bash. Store from the desktop app, recall from Telegram. Everything shares the same brain.
Example 1: Python script with memory
import requests
BASE = "http://localhost:18981"
# Store something
requests.post(f"{BASE}/memory", json={
"key": "project-config",
"content": "The frontend uses React 19. The API is FastAPI on port 8000. Auth is JWT with RS256."
})
# Later (even days later), ask about it
resp = requests.post(f"{BASE}/chat", json={
"session_id": "dev-helper",
"prompt": "What framework does our frontend use and what auth scheme do we have?"
})
print(resp.json()["response"])
# → "Your frontend uses React 19, and authentication is handled via JWT with RS256 signing."
The memory is semantic — it uses FTS5 full-text search plus vector embeddings. So you don't need exact keyword matches. Ask "what auth do we use" and it'll find the answer even though you stored it as "Auth is JWT with RS256."
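An easy way to see that in action: store a fact with one phrasing, then ask with entirely different words. This sketch reuses the /memory and /chat endpoints from Example 1 (the response wording will vary by model):

import requests

BASE = "http://localhost:18981"

# Stored with one phrasing...
requests.post(f"{BASE}/memory", json={
    "key": "auth",
    "content": "Auth is JWT with RS256."
})

# ...recalled with completely different words. No keyword overlap needed.
resp = requests.post(f"{BASE}/chat", json={
    "session_id": "dev-helper",
    "prompt": "What do we use to sign in users?"
})
print(resp.json()["response"])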
Example 2: Bash deploy script that learns
#!/bin/bash
# deploy.sh — a deploy script that remembers past deployments
# Store this deployment
curl -s -X POST localhost:18981/memory \
-H "Content-Type: application/json" \
-d "{\"key\":\"deploy-$(date +%F)\", \"content\":\"Deployed v2.3.1 to prod at $(date). Commit: $(git rev-parse --short HEAD). Duration: 4m22s\"}"
# Ask about deployment history
curl -s -X POST localhost:18981/chat \
-H "Content-Type: application/json" \
-d '{"session_id":"ops", "prompt":"Summarize recent deployments"}' \
| jq -r '.response'
Every time you deploy, the script stores a memory. Over time, Zenii accumulates deployment history that it can summarize, compare, and reason about.
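The same logger ports to any CI step that can run Python, and once history accumulates you can ask comparative questions. A sketch using the same two endpoints; the version string and key format are illustrative:

import subprocess
from datetime import date
import requests

BASE = "http://localhost:18981"
commit = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"], text=True
).strip()

# Record this deployment as a dated memory
requests.post(f"{BASE}/memory", json={
    "key": f"deploy-{date.today()}",
    "content": f"Deployed v2.3.1 to prod. Commit: {commit}. Duration: 4m22s",
})

# Reason over the accumulated history
resp = requests.post(f"{BASE}/chat", json={
    "session_id": "ops",
    "prompt": "Compare the last two deployments. Anything unusual?",
})
print(resp.json()["response"])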
Example 3: Node.js project assistant
// assistant.mjs: requires Node 18+ (built-in fetch); top-level await needs an ES module
const BASE = 'http://localhost:18981';
async function storeContext(key, content) {
await fetch(`${BASE}/memory`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ key, content })
});
}
async function ask(question) {
const res = await fetch(`${BASE}/chat`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ session_id: 'project', prompt: question })
});
return (await res.json()).response;
}
// Store your project context once
await storeContext('stack', 'Next.js 15, Prisma, PostgreSQL, deployed on Railway');
await storeContext('conventions', 'We use barrel exports, zod for validation, and server actions for mutations');
// Then ask questions from anywhere
console.log(await ask('How should I structure a new API endpoint based on our conventions?'));
Example 4: Scheduled morning briefing
curl -X POST localhost:18981/scheduler/jobs \
-H "Content-Type: application/json" \
-d '{
"id": "morning-briefing",
"name": "morning-briefing",
"schedule": {"type": "cron", "expr": "0 9 * * 1-5"},
"payload": {
"type": "agent_turn",
"prompt": "Search the web for top tech news today. Cross-reference with what I'\''ve been working on recently. Give me a 5-bullet briefing."
}
}'
Runs at 9 AM on weekdays. The agent searches the web (built-in tool), checks your stored memories for context, and generates a briefing. If you have Telegram or Discord channels configured, it can send the briefing there too.
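Since jobs are registered over the same REST API, a script can create them too. Here is the identical job definition posted from Python instead of curl:

import requests

# Identical job definition to the curl example, posted from Python
requests.post("http://localhost:18981/scheduler/jobs", json={
    "id": "morning-briefing",
    "name": "morning-briefing",
    "schedule": {"type": "cron", "expr": "0 9 * * 1-5"},  # 9 AM, Mon-Fri
    "payload": {
        "type": "agent_turn",
        "prompt": "Search the web for top tech news today. "
                  "Cross-reference with what I've been working on recently. "
                  "Give me a 5-bullet briefing.",
    },
})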
The pattern
Notice what's happening: the language doesn't matter. Python, Bash, JavaScript, Go, Ruby — anything that can make HTTP requests can store memories and ask questions.
Zenii isn't a library you import. It's infrastructure you call. Like a database, but for AI.
The flow is always:
- Store context via POST /memory
- Ask questions via POST /chat (the agent uses stored memories automatically)
- Schedule recurring tasks via POST /scheduler/jobs
- Connect channels (Telegram, Slack, Discord) for multi-platform access
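That whole flow fits in a tiny client. A minimal Python sketch wrapping the three endpoints above; the ZeniiClient class is my own convenience wrapper, since Zenii deliberately ships no SDK:

import requests

class ZeniiClient:
    """Thin convenience wrapper; any HTTP client in any language works the same way."""

    def __init__(self, base="http://localhost:18981"):
        self.base = base

    def remember(self, key, content):
        requests.post(f"{self.base}/memory", json={"key": key, "content": content})

    def ask(self, session_id, prompt):
        resp = requests.post(f"{self.base}/chat",
                             json={"session_id": session_id, "prompt": prompt})
        return resp.json()["response"]

    def schedule(self, job):
        requests.post(f"{self.base}/scheduler/jobs", json=job)

zenii = ZeniiClient()
zenii.remember("stack", "Next.js 15, Prisma, PostgreSQL, deployed on Railway")
print(zenii.ask("project", "What database are we on?"))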
Advanced: giving your AI a personality
Want the agent to respond in a specific style? Zenii has a configurable identity system:
curl -X PUT localhost:18981/identity/SOUL \
-H "Content-Type: application/json" \
-d '{"content": "You are a senior DevOps engineer who gives concise, practical answers. You prefer command-line solutions over GUI workflows."}'
Now every response — from scripts, the CLI, Telegram, the desktop app — follows this persona. The personality is shared infrastructure, not per-tool configuration.
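Setting the persona from a script is one request against the same endpoint:

import requests

# One PUT, and every interface (scripts, CLI, channels, desktop) adopts the persona
requests.put("http://localhost:18981/identity/SOUL", json={
    "content": "You are a senior DevOps engineer who gives concise, practical "
               "answers. You prefer command-line solutions over GUI workflows."
})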
What you get vs. building it yourself
If you were to build a script with persistent AI memory from scratch, you'd need:
- An AI SDK (openai, anthropic, etc.)
- A database for memory (PostgreSQL, Redis, etc.)
- Memory retrieval logic (embeddings, search, scoring)
- Session management
- Error handling and retry logic
- A hosting solution if you want it always-on
With Zenii, you installed one binary and called curl. Everything else — memory, AI, tools, scheduling, error handling — is built into the daemon.
And if you later want to add Telegram, Slack, or Discord channels, it's the same pattern: configure credentials, register the channel, done. Same brain, new interface.
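Step one of that pattern reuses the /credentials endpoint from setup. A sketch only: the credential key name below is my guess, and the channel-registration call itself is covered in the API docs:

import requests

# Hypothetical credential key; check the docs for the exact id your channel expects
requests.post("http://localhost:18981/credentials", json={
    "key": "api_key:telegram",
    "value": "123456:your-telegram-bot-token"
})
# Channel registration follows the same REST pattern; see docs.zenii.sprklai.com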
Full API reference
Everything here uses Zenii's REST API. Full docs: https://docs.zenii.sprklai.com
GitHub: https://github.com/sprklai/zenii | MIT licensed, open source.
If you build something with it, I'd genuinely love to see it. Drop a link in the comments or open a discussion on GitHub.
For the full architecture, see the Zenii architecture docs.