I'm Alexander, I'm 13, and I've been coding for a few years across Python, TypeScript, Swift, and Bash. Last month I shipped something I'm actually proud of: Conductor — a TypeScript AI engine that connects Claude, GPT-4o, Gemini, and Ollama to your real-world tools like Gmail, Spotify, GitHub Actions, Notion, and HomeKit.
As of today it has 27 plugins and 150+ tools. Here's the honest story of how it got there.
## What Conductor actually is
At its core, Conductor sits between you and your AI model. You say something in plain English, Conductor figures out which tools to call, chains them together, and gives you a result.
Example:

> "Find my 3 latest unread emails, add any urgent ones to my calendar, then DM me on Slack."

Conductor handles:

- `gmail_list()` → fetches the emails
- AI classifies which ones are urgent
- `gcal_create_event()` → creates the events
- Sends a Slack message summarizing what it did
And this works whether you're talking to it through Claude Desktop (via MCP), a Telegram bot, or a Slack bot. Same engine, three interfaces.
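The chained steps above can be sketched as a sequence of tool calls. Only `gmail_list` and `gcal_create_event` are names from the post; the classifier, the Slack helper, and the `triageInbox` wrapper are illustrative stand-ins for what the engine wires together.

```typescript
// Illustrative sketch of the email-triage chain; function names besides
// gmail_list and gcal_create_event are hypothetical.
type Email = { id: string; subject: string };

async function triageInbox(
  gmail_list: (n: number) => Promise<Email[]>,
  classifyUrgent: (e: Email) => Promise<boolean>, // the AI classification step
  gcal_create_event: (title: string) => Promise<void>,
  slackDm: (msg: string) => Promise<void>,
): Promise<void> {
  const emails = await gmail_list(3);
  const urgent: Email[] = [];
  for (const e of emails) {
    if (await classifyUrgent(e)) urgent.push(e);
  }
  for (const e of urgent) {
    await gcal_create_event(`Follow up: ${e.subject}`);
  }
  await slackDm(`Triaged ${emails.length} emails; ${urgent.length} urgent → calendar.`);
}
```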
## The architecture that actually worked
I went through a few redesigns before landing on something I was happy with. The final structure has four main parts:
**Core (`src/core/`)** — The orchestrator. Initializes config, database, plugins, and AI on startup. Owns the proactive loop.
**AI Layer (`src/ai/`)** — Each provider (Claude, OpenAI, Gemini, Ollama, OpenRouter) implements the same interface: `complete()`, `test()`, and `parseIntent()`. Switching models is one command: `conductor ai switch gemini`.
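The shared provider interface might look roughly like this. The three method names come from the post; the signatures and the toy `EchoProvider` are my assumptions, not Conductor's actual code.

```typescript
// Hypothetical sketch of the shared AI-provider interface; signatures assumed.
interface AIProvider {
  complete(prompt: string): Promise<string>;
  test(): Promise<boolean>; // cheap connectivity / API-key check
  parseIntent(input: string): Promise<{ tool: string; args: Record<string, string> }>;
}

// Toy provider so the interface has a concrete, runnable instance.
class EchoProvider implements AIProvider {
  async complete(prompt: string) {
    return `echo: ${prompt}`;
  }
  async test() {
    return true; // a real provider would ping its API here
  }
  async parseIntent(input: string) {
    return { tool: "noop", args: { raw: input } };
  }
}
```

Because every provider satisfies the same contract, swapping models is just picking a different implementation at startup.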
**Plugin System (`src/plugins/`)** — Every plugin exports a `Plugin` object with a name, description, and array of tools. Each tool declares an input schema (JSON Schema) and an async `execute()` function. To add a new plugin you just implement the interface and register it in one index file — it automatically shows up across MCP, Slack, and Telegram.
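A minimal sketch of that plugin shape, assuming field names based on the description (the `weather` plugin itself is made up for illustration):

```typescript
// Hypothetical plugin shape: name, description, and an array of tools,
// each with a JSON Schema input and an async execute().
interface Tool {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the tool's arguments
  execute: (args: Record<string, unknown>) => Promise<string>;
}

interface Plugin {
  name: string;
  description: string;
  tools: Tool[];
}

// Illustrative plugin; not one of Conductor's real 27.
const weatherPlugin: Plugin = {
  name: "weather",
  description: "Fetch current conditions",
  tools: [
    {
      name: "weather_current",
      description: "Current weather for a city",
      inputSchema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
      execute: async ({ city }) => `Sunny in ${city}`, // a real tool would call an API
    },
  ],
};
```

Registering an object like this in the plugin index is what makes it visible to all three interfaces at once.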
**Interfaces** — MCP server (stdio mode for Claude Desktop), Slack bot, Telegram bot. All read from the same plugin registry.
The agent loop runs up to 15 tool-calling iterations per conversation turn before halting. It keeps the last 30 messages of history per user in SQLite.
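The bounded loop can be sketched as follows. The 15-iteration cap is from the post; everything else (the `Step` shape, the callback signatures) is an illustrative assumption.

```typescript
// Sketch of a tool-calling loop with a hard iteration cap, so a confused
// model can't run indefinitely. Only the cap of 15 comes from the post.
const MAX_ITERATIONS = 15;

type Step = {
  done: boolean;
  toolCall?: { name: string; args: unknown };
  reply?: string;
};

async function agentTurn(
  step: (history: string[]) => Promise<Step>, // asks the model for its next action
  runTool: (name: string, args: unknown) => Promise<string>,
  history: string[],
): Promise<string> {
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const next = await step(history);
    if (next.done) return next.reply ?? "";
    const result = await runTool(next.toolCall!.name, next.toolCall!.args);
    history.push(`tool:${next.toolCall!.name} → ${result}`);
  }
  return "Stopped after 15 tool calls."; // hard stop, surfaced to the user
}
```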
## The thing I'm most proud of: Proactive Mode
Proactive Mode is an autonomous reasoning loop that runs on a schedule (every 30 minutes by default) without any user prompts.
Each cycle:
- Gathers context — CPU/RAM/disk stats, unread Gmail count, upcoming calendar events, recent activity
- Sends it to the AI with instructions to identify problems and act on them
- Holds sensitive actions for human approval before executing
- Sends you a summary via Slack or Telegram
So Conductor might notice your disk is 85% full, archive some old logs, and send you a Slack message saying what it did — all without you asking.
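A stripped-down sketch of one proactive cycle, under heavy assumptions: the 30-minute default is from the post, but the context gathering is stubbed and a hard-coded rule stands in for the real AI call.

```typescript
// Sketch of a single proactive cycle. Disk stats are stubbed and the
// "decide" step is a fixed rule standing in for the model.
const THIRTY_MINUTES = 30 * 60 * 1000; // default interval from the post

async function runCycle(): Promise<string[]> {
  const actions: string[] = [];
  // 1. Gather context (system stats, unread mail, calendar) — stubbed here.
  const diskUsagePct = 85;
  // 2. Ask the AI what to do about it — replaced by a hard-coded rule.
  if (diskUsagePct > 80) actions.push("archive old logs");
  // 3. Sensitive actions would be held for approval, then summarized via Slack/Telegram.
  return actions;
}

// In the real engine something like this would run on a schedule:
// setInterval(() => void runCycle(), THIRTY_MINUTES);
```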
The approval gate system was the hardest part to get right. When the AI tries to call a tool marked requiresApproval: true, the agent loop pauses and notifies you. You reply /approve <id> or /deny <id> and it continues. This means proactive mode can be genuinely autonomous without being scary.
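One way to implement that pause-and-resume gate is a map of pending promises keyed by approval ID. `requiresApproval`, `/approve`, and `/deny` are from the post; every other name here is an illustrative guess at the mechanism, not Conductor's actual code.

```typescript
// Hypothetical approval gate: a flagged tool call suspends on a Promise
// until a human replies /approve <id> or /deny <id>.
import { randomUUID } from "node:crypto";

type Pending = { resolve: (approved: boolean) => void };
const pending = new Map<string, Pending>();

function requestApproval(
  toolName: string,
  notify: (msg: string) => void, // e.g. a Slack or Telegram DM
): Promise<boolean> {
  const id = randomUUID().slice(0, 8);
  notify(`Tool "${toolName}" needs approval: /approve ${id} or /deny ${id}`);
  return new Promise((resolve) => pending.set(id, { resolve }));
}

function handleCommand(cmd: string): void {
  const [verb, id] = cmd.split(" ");
  const entry = pending.get(id);
  if (!entry) return; // unknown or already-resolved ID
  pending.delete(id);
  entry.resolve(verb === "/approve");
}
```

The agent loop simply `await`s the returned promise, so the pause costs nothing and survives however long the human takes to answer.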
## What I got wrong the first time
Security. My first version stored API keys in plain JSON in ~/.conductor/config.json. That's obviously bad. I rewrote the credential system to use AES-256-GCM encryption with a key derived from the machine's hardware ID via scrypt. Keys are stored in ~/.conductor/keychain/ with 0700 permissions and never appear in config.json.
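The scheme described maps cleanly onto Node's `crypto` module. This is a minimal sketch of AES-256-GCM with an scrypt-derived key; the machine-ID input and the salt are stubbed placeholders, not what Conductor actually uses.

```typescript
// Sketch of AES-256-GCM credential encryption with an scrypt-derived key.
// machineId and the salt are placeholders for the real hardware-ID lookup.
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const machineId = "stub-machine-id";
const key = scryptSync(machineId, "example-salt", 32); // 32 bytes = AES-256

function encrypt(plain: string): string {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  // Store nonce + auth tag + ciphertext together; all are needed to decrypt.
  return [iv, cipher.getAuthTag(), body].map((b) => b.toString("hex")).join(".");
}

function decrypt(blob: string): string {
  const [iv, tag, body] = blob.split(".").map((h) => Buffer.from(h, "hex"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(body), decipher.final()]).toString("utf8");
}
```

The GCM auth tag means a tampered keychain file fails loudly at decrypt time instead of yielding garbage keys.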
The agent loop limit. Early on I didn't cap the tool-calling iterations. The AI would sometimes get into a loop and run indefinitely. 15 iterations turned out to be the right balance — enough to handle complex multi-step tasks, but a clear hard stop.
Persona routing. Originally every request went to the same system prompt. Adding four personas (Coder, Social, Researcher, General) with automatic classification improved response quality noticeably. The AI classifies the request first with a fast call, then routes to the right tool set with the right system prompt.
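The routing step might look like this. The four persona names are from the post; the prompts are invented, and the keyword `classify()` is a stub standing in for the fast model call described.

```typescript
// Sketch of persona routing: classify first, then pick the system prompt.
// A keyword stub replaces the real "fast AI call" classifier.
type Persona = "Coder" | "Social" | "Researcher" | "General";

const systemPrompts: Record<Persona, string> = {
  Coder: "You are a precise programming assistant.",
  Social: "You help draft and send messages.",
  Researcher: "You find and summarize information.",
  General: "You are a helpful general assistant.",
};

function classify(request: string): Persona {
  if (/\b(code|bug|typescript|function)\b/i.test(request)) return "Coder";
  if (/\b(email|slack|dm|tweet)\b/i.test(request)) return "Social";
  if (/\b(research|find|summarize|paper)\b/i.test(request)) return "Researcher";
  return "General";
}

function promptFor(request: string): string {
  return systemPrompts[classify(request)];
}
```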
## What's still broken / what I want to fix
Honest list:
- Error messages when API keys are missing are still too cryptic (I actually just opened a good first issue for this)
- `conductor plugins list` doesn't show tool counts or descriptions in verbose mode (another good first issue)
- No `CONTRIBUTING.md` yet, which makes it hard for new contributors to get started
- Test coverage is thin outside of unit tests for individual tools
## The stack
- TypeScript throughout
- Commander.js for the CLI
- sql.js for SQLite (conversation history, memory, activity logs)
- @slack/bolt for the Slack bot
- telegraf for the Telegram bot
- Model Context Protocol (Anthropic's open standard) for Claude Desktop integration
The installer is a 14-step interactive bash/PowerShell script that sets up everything — AI providers, Google OAuth, Slack/Telegram tokens, Claude Desktop MCP config. Every step is optional and skippable. It's fully idempotent so re-running it is safe.
## Install

```bash
# macOS / Linux
curl -fsSL https://conductor.thealxlabs.ca/install.sh | bash
```

```powershell
# Windows (PowerShell)
irm https://conductor.thealxlabs.ca/install.ps1 | iex
```
Or check it out on GitHub: thealxlabs/conductor
## Looking for contributors
I'm actively looking for people to help build this out. The issues I just opened are genuinely beginner-friendly:
- #14 — Add a `--verbose` flag to `conductor plugins list`
- #15 — Write `CONTRIBUTING.md`
- #16 — Better error handling for missing API keys on startup
If you're a student or early-career dev who wants to contribute to a real TypeScript project with an actual user base, hit me up. I'm @thealxlabs on X.
I'm 13 and I built this mostly during evenings and weekends in Toronto. If you have questions or feedback, I'm genuinely open to it — drop a comment or open an issue.