<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexander Wondwossen</title>
    <description>The latest articles on DEV Community by Alexander Wondwossen (@thealxlabs).</description>
    <link>https://dev.to/thealxlabs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3698553%2Fff1a718a-0197-4796-9ba2-c78156f23863.jpeg</url>
      <title>DEV Community: Alexander Wondwossen</title>
      <link>https://dev.to/thealxlabs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thealxlabs"/>
    <language>en</language>
    <item>
      <title>I'm in Grade 7 and I Built an AI Integration Hub with Google Gemini (Here's What I Learned)</title>
      <dc:creator>Alexander Wondwossen</dc:creator>
      <pubDate>Tue, 03 Mar 2026 02:42:32 +0000</pubDate>
      <link>https://dev.to/thealxlabs/im-in-grade-7-and-i-built-an-ai-integration-hub-with-google-gemini-heres-what-i-learned-2pa9</link>
      <guid>https://dev.to/thealxlabs/im-in-grade-7-and-i-built-an-ai-integration-hub-with-google-gemini-heres-what-i-learned-2pa9</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh/built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Let me set the scene: I'm a 12-year-old in Toronto, Canada who codes random things for fun between school and photography sessions. I'm not a professional developer. I don't have a CS degree. I learn by experimenting, breaking things, and Googling error messages at weird hours.&lt;/p&gt;

&lt;p&gt;So when I decided to build something real with Google Gemini — not just a chatbot demo, but an actual tool I'd use myself — I had no idea what I was getting into.&lt;/p&gt;

&lt;p&gt;The result is &lt;strong&gt;Conductor&lt;/strong&gt;: an AI integration hub and CLI that connects Gemini (and other AI providers) to real-world tools through a single interface. You install it in one line, configure your provider, and suddenly Gemini can check the weather, look up crypto prices, monitor your system stats, translate text, do DNS lookups, and way more — all from your terminal or even Telegram.&lt;/p&gt;

&lt;p&gt;Here's what that journey actually looked like.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built with Google Gemini
&lt;/h2&gt;

&lt;p&gt;The core problem I kept running into: every AI integration felt like starting from scratch. Want Gemini in your terminal? Build a CLI. Want it on Telegram? Build a bot. Want it to actually &lt;em&gt;do&lt;/em&gt; things, not just chat? Add tools one by one. It was repetitive and fragmented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conductor&lt;/strong&gt; is my answer to that. Think of it like the conductor of an orchestra — it doesn't play the instruments; it coordinates them so everything works together.&lt;/p&gt;

&lt;p&gt;Here's what it can do:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-provider AI in one CLI&lt;/strong&gt; — Switch between Gemini, Claude, GPT-4o, and even local Ollama models with a config change. Conductor abstracts away the differences so you talk to them all the same way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;13 built-in plugins, zero API keys needed&lt;/strong&gt; — This was important to me. I didn't want something that required you to sign up for 10 different services just to get started. All plugins work out of the box:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plugin&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;weather&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Real-time weather + 7-day forecast via Open-Meteo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;crypto&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Prices, trending coins, search via CoinGecko&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;system&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CPU, memory, disk, process monitoring&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;translate&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;50+ languages via MyMemory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;network&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;DNS lookup, IP geolocation, reverse DNS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;github&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;User profiles, repos, trending&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;calculator&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Math, unit conversion, date math&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;colors&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Hex/RGB/HSL conversion, WCAG contrast checking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;text-tools&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;JSON formatting, regex testing, case transforms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;hash&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;SHA/MD5, Base64, UUID, password generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;url-tools&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Link expansion, status checks, header inspection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;fun&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Jokes, trivia, cat facts (important)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;timezone&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;World clock, timezone conversion&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;MCP server&lt;/strong&gt; — This is where Gemini really shines. Conductor can run as an MCP (Model Context Protocol) server, which means Gemini can call any of these tools as native functions. Instead of Gemini just &lt;em&gt;knowing&lt;/em&gt; about the weather, it can &lt;em&gt;check&lt;/em&gt; the weather. Instead of estimating a math answer, it &lt;em&gt;calculates&lt;/em&gt; it.&lt;/p&gt;
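&lt;p&gt;To make that concrete, here's a minimal sketch of the pattern: a tool is described to the model as a JSON-Schema-style declaration, the model returns a tool call, and the server dispatches it locally. Names like &lt;code&gt;weatherTool&lt;/code&gt; and &lt;code&gt;dispatch&lt;/code&gt; are illustrative, not Conductor's actual code.&lt;/p&gt;

```typescript
// Illustrative sketch of tool exposure: the declaration shape mirrors
// JSON Schema function declarations; names here are hypothetical.
interface ToolDeclaration {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: { city: { type: string; description: string } };
    required: string[];
  };
}

const weatherTool: ToolDeclaration = {
  name: "get_weather",
  description: "Fetch current conditions for a city",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. Tokyo" },
    },
    required: ["city"],
  },
};

// The model decides to call the tool; the server executes it locally.
function dispatch(call: { name: string; args: { city: string } }): string {
  if (call.name === weatherTool.name) {
    return `Weather for ${call.args.city}: 12°C, partly cloudy`; // stubbed result
  }
  throw new Error(`Unknown tool: ${call.name}`);
}

const reply = dispatch({ name: "get_weather", args: { city: "Tokyo" } });
```

&lt;p&gt;The key idea: the model never fetches anything itself — it emits a structured call, and the hub runs the real code and hands the result back.&lt;/p&gt;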

&lt;p&gt;&lt;strong&gt;Telegram bot&lt;/strong&gt; — Run &lt;code&gt;conductor telegram start&lt;/code&gt; and you've got a Gemini-powered bot in your pocket. Chat with it from your phone, ask it to check the weather or convert currencies, get real answers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Encrypted keychain&lt;/strong&gt; — I took security seriously. API keys are encrypted at rest with AES-256-GCM, and the key is derived from your machine's hardware ID using scrypt. Your credentials only decrypt on the machine that created them.&lt;/p&gt;
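&lt;p&gt;A minimal sketch of that scheme using Node's built-in &lt;code&gt;crypto&lt;/code&gt; module — the hardware-ID lookup is omitted and stood in by a string, so this shows the shape of the approach rather than Conductor's real keychain:&lt;/p&gt;

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Stand-in for the real hardware ID; the actual lookup is omitted here.
const machineId = "example-hardware-id";
// Derive a 256-bit key from the machine ID with scrypt.
const key = scryptSync(machineId, "conductor-keychain", 32);

function encrypt(secret: string) {
  const iv = randomBytes(12); // GCM standard 96-bit nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(secret, "utf8"), cipher.final()]);
  // The auth tag lets decryption detect tampering.
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(box: any) {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}

const roundTrip = decrypt(encrypt("my-gemini-api-key"));
```

&lt;p&gt;Because the key is derived from the machine ID, copying the encrypted files to another computer produces a different derived key and decryption fails — that's the machine binding.&lt;/p&gt;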

&lt;p&gt;Install it yourself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# macOS / Linux&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/thealxlabs/conductor/main/install.sh | bash

&lt;span class="c"&gt;# Windows (PowerShell)&lt;/span&gt;
irm https://raw.githubusercontent.com/thealxlabs/conductor/main/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conductor ai setup     &lt;span class="c"&gt;# pick Gemini&lt;/span&gt;
conductor ai &lt;span class="nb"&gt;test&lt;/span&gt;      &lt;span class="c"&gt;# make sure it works&lt;/span&gt;
conductor mcp start    &lt;span class="c"&gt;# expose tools to Gemini&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔗 &lt;a href="https://github.com/thealxlabs/conductor" rel="noopener noreferrer"&gt;github.com/thealxlabs/conductor&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Here's what using Conductor with Gemini actually looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;conductor ai setup
✓ Provider &lt;span class="nb"&gt;set &lt;/span&gt;to: Gemini
✓ Model: gemini-2.0-flash

&lt;span class="nv"&gt;$ &lt;/span&gt;conductor ai &lt;span class="nb"&gt;test&lt;/span&gt;
→ Asking Gemini: &lt;span class="s2"&gt;"What's 2+2 and what's the weather like in Tokyo?"&lt;/span&gt;
✓ Gemini used &lt;span class="o"&gt;[&lt;/span&gt;calculator] → 4
✓ Gemini used &lt;span class="o"&gt;[&lt;/span&gt;weather] → Tokyo: 12°C, partly cloudy
Response &lt;span class="nb"&gt;time&lt;/span&gt;: 1.3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The moment Gemini autonomously chose to call two different tools to answer one question — without me telling it to — was genuinely exciting. That's the power of MCP working properly.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Building for real is completely different from tutorials
&lt;/h3&gt;

&lt;p&gt;Every tutorial I watched made things look clean. Variables always work. APIs always respond. Types always line up. Building Conductor taught me that real projects are messier — and way more satisfying to get right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TypeScript is humbling.&lt;/strong&gt; I chose TypeScript for Conductor because I wanted type safety across the whole codebase. What I didn't expect was how opinionated TypeScript gets when you're working with dynamic things like AI API responses, plugin schemas, and streaming data. I spent more time fixing type errors than writing features in the first week. By the end, I understood why it's worth it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-provider abstraction is genuinely hard.&lt;/strong&gt; Gemini, Claude, and OpenAI all have function/tool calling — but their APIs handle it differently. Streaming works differently. Error shapes are different. Creating a single provider interface that felt consistent across all of them meant I had to really understand each one deeply before I could abstract over them. This was the hardest technical challenge in the whole project.&lt;/p&gt;
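&lt;p&gt;The shape of the abstraction that makes this workable is a single interface every provider implements, so the rest of the code never knows which model it's talking to. This is a simplified sketch — the type and stubbed providers are illustrative, not Conductor's actual implementation:&lt;/p&gt;

```typescript
// One interface over every provider; real code would call each API here.
type AIProvider = { name: string; complete: (prompt: string) => any };

const gemini: AIProvider = {
  name: "gemini",
  complete: async (prompt) => "[gemini] " + prompt, // stub for the Gemini API call
};

const ollama: AIProvider = {
  name: "ollama",
  complete: async (prompt) => "[ollama] " + prompt, // stub for a local Ollama call
};

const providers: { [name: string]: AIProvider } = { gemini, ollama };

// Switching models becomes a config lookup, not a code change.
function ask(provider: string, prompt: string) {
  return providers[provider].complete(prompt);
}
```

&lt;p&gt;The hard part isn't this interface — it's normalizing each provider's streaming, tool-call, and error formats behind it so callers really can't tell the difference.&lt;/p&gt;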

&lt;h3&gt;
  
  
  Security isn't optional, even in hobby projects
&lt;/h3&gt;

&lt;p&gt;When I first sketched out Conductor, I was going to just store API keys in a plain JSON file. Then I thought about it more. These keys give access to AI accounts with billing attached. I learned about AES-256-GCM encryption, how to derive keys with scrypt from hardware identifiers, and why machine-binding matters. The keychain implementation took longer than expected but I'm proud of it — and it taught me a lot about how real applications handle secrets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-platform development will test your patience
&lt;/h3&gt;

&lt;p&gt;Getting Conductor working identically on macOS, Linux, and Windows was the most underestimated part of the project. Shell scripts behave differently. Path separators behave differently. Process spawning behaves differently. File permissions mean different things. I now have a lot more respect for anyone who ships truly cross-platform software. The Windows PowerShell install script alone taught me more than I expected.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP is a glimpse at the future of AI
&lt;/h3&gt;

&lt;p&gt;Before this project, I thought of AI as something you &lt;em&gt;talk to&lt;/em&gt;. After building the MCP server integration, I think of it differently — as something that can &lt;em&gt;act&lt;/em&gt;. When Gemini decides on its own to call the &lt;code&gt;weather&lt;/code&gt; plugin and then the &lt;code&gt;timezone&lt;/code&gt; plugin to answer a single question, it's not just chatting. It's reasoning about what tools it needs and using them. That shift in perspective changed how I think about AI's role in software.&lt;/p&gt;

&lt;h3&gt;
  
  
  As a kid who codes, imposter syndrome is real — and you have to build through it
&lt;/h3&gt;

&lt;p&gt;I almost didn't ship this because I kept thinking "who am I to build something like this?" I'm in Grade 7. I don't have internships or a degree. I'm just a kid from Toronto who learned to code by experimenting.&lt;/p&gt;

&lt;p&gt;What helped: remembering that every developer started somewhere, and the only way to get better is to build real things and share them. This project is the most technically complex thing I've built. It has flaws. But it works, it's useful, and I made it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Google Gemini Feedback
&lt;/h2&gt;

&lt;p&gt;I want to be honest here, because I think candid feedback is more useful than just praise.&lt;/p&gt;

&lt;h3&gt;
  
  
  What worked really well
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Tool/function calling is excellent.&lt;/strong&gt; The Gemini function calling API is clean and well-designed. Defining tools as JSON schemas and having the model decide when to invoke them felt intuitive. Once I understood the pattern, integrating new Conductor plugins was straightforward. Gemini's tool selection accuracy was impressive — it almost always picked the right tool for the right job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed on flash models.&lt;/strong&gt; &lt;code&gt;gemini-2.0-flash&lt;/code&gt; is genuinely fast. For a CLI tool where you're waiting for a response, speed matters a lot. Using the flash model for Conductor's day-to-day interactions felt snappy, which made Conductor feel like a responsive tool rather than something you sit and wait on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quality of reasoning with tool results.&lt;/strong&gt; When Gemini receives tool results back (e.g., raw weather JSON), it synthesizes them into natural, helpful responses rather than just dumping the data. This made the Telegram bot experience feel genuinely conversational rather than robotic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemini handles context well.&lt;/strong&gt; In multi-turn Telegram conversations, Gemini kept track of context naturally. If you asked about the weather and then said "what about tomorrow?", it understood the reference without needing everything repeated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where I ran into friction
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Streaming + tool use together is tricky.&lt;/strong&gt; This was my biggest technical pain point. When streaming is enabled and Gemini calls a tool mid-stream, the response format gets complex — you get partial tool call deltas that need to be assembled before you can make the tool call. The documentation for this specific case was sparse. I ended up disabling streaming during tool use and re-enabling it for the final response, which works but isn't ideal.&lt;/p&gt;
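&lt;p&gt;The assembly problem looks roughly like this — partial argument fragments arrive across chunks and have to be buffered into complete JSON before the tool can run. The chunk shape here is illustrative, not the exact Gemini wire format:&lt;/p&gt;

```typescript
// Sketch: assembling partial tool-call deltas from a stream before
// executing the call. The delta shape is hypothetical.
interface ToolCallDelta {
  name?: string;        // present on the first chunk only
  argsFragment: string; // partial JSON, split across chunks
}

function assemble(deltas: ToolCallDelta[]): { name: string; args: any } {
  let name = "";
  let argsJson = "";
  for (const d of deltas) {
    if (d.name) name = d.name;
    argsJson += d.argsFragment; // buffer until the JSON is complete
  }
  // Only now is the argument string parseable as a whole.
  return { name, args: JSON.parse(argsJson) };
}

const call = assemble([
  { name: "get_weather", argsFragment: '{"ci' },
  { argsFragment: 'ty": "To' },
  { argsFragment: 'kyo"}' },
]);
```

&lt;p&gt;Parsing too early throws on the incomplete JSON, which is exactly why mixing streaming with tool use gets fiddly.&lt;/p&gt;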

&lt;p&gt;&lt;strong&gt;Rate limits hit fast during stress testing.&lt;/strong&gt; When I was testing the Telegram bot with rapid-fire messages, I hit rate limits quickly on the free tier. The error messages weren't always clear about whether I was hitting requests-per-minute or tokens-per-minute limits. Better rate limit error messaging would help developers build more resilient apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No native session/conversation persistence.&lt;/strong&gt; Every Gemini API call is stateless — you have to send the full conversation history each time. For Conductor's Telegram bot, this means the conversation history grows with every message and eventually you're sending a lot of tokens just to maintain context. I'd love a managed session API that handles history server-side. I built my own conversation manager to work around this, but it added complexity.&lt;/p&gt;
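&lt;p&gt;The workaround amounts to a client-side history cap: keep the last N messages and send only those with each stateless request. A minimal sketch (the cap and message shape are illustrative):&lt;/p&gt;

```typescript
// Client-side conversation manager: the API keeps no state, so every
// request resends history — capped so it doesn't grow without bound.
interface Message { role: "user" | "model"; text: string }

class ConversationManager {
  private history: Message[] = [];
  constructor(private maxMessages = 30) {}

  add(msg: Message): void {
    this.history.push(msg);
    if (this.history.length > this.maxMessages) {
      // Drop the oldest turns; a smarter version could summarize them instead.
      this.history = this.history.slice(-this.maxMessages);
    }
  }

  // The payload sent with each stateless API call.
  payload(): Message[] {
    return this.history.slice();
  }
}

const convo = new ConversationManager(3);
["a", "b", "c", "d"].forEach(t => convo.add({ role: "user", text: t }));
```

&lt;p&gt;Simple truncation loses old context, which is the trade-off a managed server-side session API would remove.&lt;/p&gt;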

&lt;p&gt;&lt;strong&gt;Model string versioning is confusing.&lt;/strong&gt; Knowing which model string to use (&lt;code&gt;gemini-pro&lt;/code&gt;, &lt;code&gt;gemini-1.5-pro&lt;/code&gt;, &lt;code&gt;gemini-2.0-flash&lt;/code&gt;, etc.) and what the differences are in terms of capability and cost required a lot of reading across different docs pages. A clearer, centralized comparison would save time.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Conductor started as a personal experiment but I want to keep building it. Things I'm planning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Voice mode&lt;/strong&gt; — pipe Conductor through a TTS/STT layer for a real Jarvis-style interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugin marketplace&lt;/strong&gt; — let people write and share their own plugins&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web dashboard&lt;/strong&gt; — a simple UI to manage providers, plugins, and conversation history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More providers&lt;/strong&gt; — Groq, Mistral, Cohere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of all, I want other people to use it. If you install Conductor and find a bug, I want to know. If you have an idea for a plugin, open an issue. This is open source and I'm still learning — feedback from real users is how I get better.&lt;/p&gt;




&lt;p&gt;Building Conductor taught me more than any tutorial ever could. And using Gemini as the AI backbone of a real tool — not just a playground project — gave me a much clearer picture of where the technology is genuinely impressive and where it still has room to grow.&lt;/p&gt;

&lt;p&gt;If a Grade 7 kid from Toronto can build something like this independently, imagine what's possible when more people start treating AI as a tool to extend their own capabilities rather than just something to chat with.&lt;/p&gt;

&lt;p&gt;That's the most exciting thing I learned from this whole project.&lt;/p&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/thealxlabs/conductor" rel="noopener noreferrer"&gt;github.com/thealxlabs/conductor&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;— Alexander Wondwossen (&lt;a class="mentioned-user" href="https://dev.to/thealxlabs"&gt;@thealxlabs&lt;/a&gt;), Toronto 🇨🇦&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Built an AI Integration Hub at 13 — Here's What I Learned</title>
      <dc:creator>Alexander Wondwossen</dc:creator>
      <pubDate>Tue, 03 Mar 2026 02:29:46 +0000</pubDate>
      <link>https://dev.to/thealxlabs/i-built-an-ai-integration-hub-at-13-heres-what-i-learned-5gj8</link>
      <guid>https://dev.to/thealxlabs/i-built-an-ai-integration-hub-at-13-heres-what-i-learned-5gj8</guid>
      <description>&lt;p&gt;I'm Alexander, I'm 13, and I've been coding for a few years across Python, TypeScript, Swift, and Bash. Last month I shipped something I'm actually proud of: &lt;strong&gt;Conductor&lt;/strong&gt; — a TypeScript AI engine that connects Claude, GPT-4o, Gemini, and Ollama to your real-world tools like Gmail, Spotify, GitHub Actions, Notion, and HomeKit.&lt;/p&gt;

&lt;p&gt;As of today it has 27 plugins and 150+ tools. Here's the honest story of how it got there.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Conductor actually is
&lt;/h2&gt;

&lt;p&gt;At its core, Conductor sits between you and your AI model. You say something in plain English; Conductor figures out which tools to call, chains them together, and gives you a result.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Find my 3 latest unread emails, add any urgent ones to my calendar, then DM me on Slack."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Conductor handles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;gmail_list()&lt;/code&gt; → fetches emails&lt;/li&gt;
&lt;li&gt;AI classifies which ones are urgent&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gcal_create_event()&lt;/code&gt; → creates the event&lt;/li&gt;
&lt;li&gt;Sends a Slack message summarizing what it did&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And this works whether you're talking to it through Claude Desktop (via MCP), a Telegram bot, or a Slack bot. Same engine, three interfaces.&lt;/p&gt;




&lt;h2&gt;
  
  
  The architecture that actually worked
&lt;/h2&gt;

&lt;p&gt;I went through a few redesigns before landing on something I was happy with. The final structure has four main parts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core&lt;/strong&gt; (&lt;code&gt;src/core/&lt;/code&gt;) — The orchestrator. Initializes config, database, plugins, and AI on startup. Owns the proactive loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Layer&lt;/strong&gt; (&lt;code&gt;src/ai/&lt;/code&gt;) — Each provider (Claude, OpenAI, Gemini, Ollama, OpenRouter) implements the same interface: &lt;code&gt;complete()&lt;/code&gt;, &lt;code&gt;test()&lt;/code&gt;, and &lt;code&gt;parseIntent()&lt;/code&gt;. Switching models is one command: &lt;code&gt;conductor ai switch gemini&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plugin System&lt;/strong&gt; (&lt;code&gt;src/plugins/&lt;/code&gt;) — Every plugin exports a &lt;code&gt;Plugin&lt;/code&gt; object with a name, description, and array of tools. Each tool declares an input schema (JSON Schema) and an async &lt;code&gt;execute()&lt;/code&gt; function. To add a new plugin you just implement the interface and register it in one index file — it automatically shows up across MCP, Slack, and Telegram.&lt;/p&gt;
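&lt;p&gt;The plugin shape described above can be sketched like this — field names follow the description, but the details are illustrative rather than Conductor's exact types:&lt;/p&gt;

```typescript
// A tool declares its input schema and an async execute(); a plugin
// bundles a set of tools. Shapes are illustrative.
interface Tool {
  name: string;
  description: string;
  inputSchema: object;              // JSON Schema for the tool's input
  execute: (input: any) => any;     // async handler returning the result
}

interface Plugin {
  name: string;
  description: string;
  tools: Tool[];
}

const weatherPlugin: Plugin = {
  name: "weather",
  description: "Current conditions and forecasts",
  tools: [{
    name: "weather_current",
    description: "Current conditions for a city",
    inputSchema: { type: "object", properties: { city: { type: "string" } } },
    execute: async (input) => `Weather for ${input.city}: sunny`, // stub
  }],
};

// One registry feeds every interface (MCP, Slack, Telegram).
const registry: Plugin[] = [weatherPlugin];
const toolCount = registry.flatMap(p => p.tools).length;
```

&lt;p&gt;Because every interface reads the same registry, registering a plugin once is enough to surface it everywhere.&lt;/p&gt;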

&lt;p&gt;&lt;strong&gt;Interfaces&lt;/strong&gt; — MCP server (stdio mode for Claude Desktop), Slack bot, Telegram bot. All read from the same plugin registry.&lt;/p&gt;

&lt;p&gt;The agent loop runs up to 15 tool-calling iterations per conversation turn before halting. It keeps the last 30 messages of history per user in SQLite.&lt;/p&gt;
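&lt;p&gt;The capped loop itself is a simple pattern: keep letting the model call tools until it produces a final answer or hits the iteration limit. A sketch with a stubbed model (real code would call the provider API and actually execute the tool):&lt;/p&gt;

```typescript
// Capped agent loop: each iteration the model either requests a tool
// call or returns a final answer. The model here is a stub.
type Step = { toolCall: string } | { answer: string };

function runAgent(
  model: (iteration: number) => Step,
  maxIterations = 15,
): { answer: string; iterations: number } {
  let iterations = 0;
  while (iterations !== maxIterations) {
    iterations++;
    const step = model(iterations);
    if ("answer" in step) return { answer: step.answer, iterations };
    // Real code would execute step.toolCall and feed the result back.
  }
  // Hard stop: prevents the model from looping indefinitely.
  return { answer: "Stopped: iteration limit reached", iterations };
}

// Stub model that calls tools twice, then answers on the third turn.
const result = runAgent(i => (i === 3 ? { answer: "All good" } : { toolCall: "system_stats" }));
```

&lt;p&gt;The hard stop is the important part — without it, a confused model can keep requesting tools forever.&lt;/p&gt;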




&lt;h2&gt;
  
  
  The thing I'm most proud of: Proactive Mode
&lt;/h2&gt;

&lt;p&gt;Proactive Mode is an autonomous reasoning loop that runs on a schedule (every 30 minutes by default) without any user prompts.&lt;/p&gt;

&lt;p&gt;Each cycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gathers context — CPU/RAM/disk stats, unread Gmail count, upcoming calendar events, recent activity&lt;/li&gt;
&lt;li&gt;Sends it to the AI with instructions to identify problems and act on them&lt;/li&gt;
&lt;li&gt;Holds sensitive actions for human approval before executing&lt;/li&gt;
&lt;li&gt;Sends you a summary via Slack or Telegram&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So Conductor might notice your disk is 85% full, archive some old logs, and send you a Slack message saying what it did — all without you asking.&lt;/p&gt;

&lt;p&gt;The approval gate system was the hardest part to get right. When the AI tries to call a tool marked &lt;code&gt;requiresApproval: true&lt;/code&gt;, the agent loop pauses and notifies you. You reply &lt;code&gt;/approve &amp;lt;id&amp;gt;&lt;/code&gt; or &lt;code&gt;/deny &amp;lt;id&amp;gt;&lt;/code&gt; and it continues. This means proactive mode can be genuinely autonomous without being scary.&lt;/p&gt;
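&lt;p&gt;A stripped-down sketch of that gate — IDs and shapes are illustrative, and the real version would message you over Slack or Telegram instead of just returning an ID:&lt;/p&gt;

```typescript
// Approval gate: sensitive tool calls are parked until a human
// replies approve or deny. Names and shapes are hypothetical.
interface PendingAction { id: string; tool: string; approved?: boolean }

class ApprovalGate {
  private counter = 0;
  private pending: { [id: string]: PendingAction } = {};

  // Called when the agent hits a tool marked requiresApproval: true.
  request(tool: string): string {
    this.counter++;
    const id = "act-" + this.counter;
    this.pending[id] = { id, tool };
    return id; // the bot would now ask you to approve or deny this id
  }

  approve(id: string): void { this.setApproved(id, true); }
  deny(id: string): void { this.setApproved(id, false); }

  private setApproved(id: string, approved: boolean): void {
    if (this.pending[id]) this.pending[id].approved = approved;
  }

  // The agent loop only resumes the tool call once this is true.
  isApproved(id: string): boolean {
    return this.pending[id] ? this.pending[id].approved === true : false;
  }
}

const gate = new ApprovalGate();
const actionId = gate.request("archive_logs");
gate.approve(actionId);
```

&lt;p&gt;Pausing the loop on a pending action is what keeps autonomy from turning into surprise deletions.&lt;/p&gt;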




&lt;h2&gt;
  
  
  What I got wrong the first time
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Security.&lt;/strong&gt; My first version stored API keys in plain JSON in &lt;code&gt;~/.conductor/config.json&lt;/code&gt;. That's obviously bad. I rewrote the credential system to use AES-256-GCM encryption with a key derived from the machine's hardware ID via scrypt. Keys are stored in &lt;code&gt;~/.conductor/keychain/&lt;/code&gt; with 0700 permissions and never appear in config.json.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The agent loop limit.&lt;/strong&gt; Early on I didn't cap the tool-calling iterations. The AI would sometimes get into a loop and run indefinitely. 15 iterations turned out to be the right balance: enough to handle complex multi-step tasks, while still enforcing a clear hard stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persona routing.&lt;/strong&gt; Originally every request went to the same system prompt. Adding four personas (Coder, Social, Researcher, General) with automatic classification improved response quality noticeably. The AI classifies the request first with a fast call, then routes to the right tool set with the right system prompt.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's still broken / what I want to fix
&lt;/h2&gt;

&lt;p&gt;Honest list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Error messages when API keys are missing are still too cryptic (I actually just opened a &lt;code&gt;good first issue&lt;/code&gt; for this)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;conductor plugins list&lt;/code&gt; doesn't show tool counts or descriptions in verbose mode (another good first issue)&lt;/li&gt;
&lt;li&gt;No &lt;code&gt;CONTRIBUTING.md&lt;/code&gt; yet, which makes it hard for new contributors to get started&lt;/li&gt;
&lt;li&gt;Test coverage is thin outside of unit tests for individual tools&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TypeScript&lt;/strong&gt; throughout&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commander.js&lt;/strong&gt; for the CLI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sql.js&lt;/strong&gt; for SQLite (conversation history, memory, activity logs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;@slack/bolt&lt;/strong&gt; for the Slack bot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;telegraf&lt;/strong&gt; for the Telegram bot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Context Protocol&lt;/strong&gt; (Anthropic's open standard) for Claude Desktop integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The installer is a 14-step interactive bash/PowerShell script that sets up everything — AI providers, Google OAuth, Slack/Telegram tokens, Claude Desktop MCP config. Every step is optional and skippable. It's fully idempotent so re-running it is safe.&lt;/p&gt;




&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# macOS / Linux&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://conductor.thealxlabs.ca/install.sh | bash

&lt;span class="c"&gt;# Windows (PowerShell)&lt;/span&gt;
irm https://conductor.thealxlabs.ca/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or check it out on GitHub: &lt;strong&gt;&lt;a href="https://github.com/thealxlabs/conductor" rel="noopener noreferrer"&gt;thealxlabs/conductor&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Looking for contributors
&lt;/h2&gt;

&lt;p&gt;I'm actively looking for people to help build this out. The issues I just opened are genuinely beginner-friendly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;#14&lt;/strong&gt; — Add &lt;code&gt;--verbose&lt;/code&gt; flag to &lt;code&gt;conductor plugins list&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;#15&lt;/strong&gt; — Write &lt;code&gt;CONTRIBUTING.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;#16&lt;/strong&gt; — Better error handling for missing API keys on startup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're a student or early-career dev who wants to contribute to a real TypeScript project with an actual user base, hit me up. I'm &lt;a href="https://x.com/thealxlabs" rel="noopener noreferrer"&gt;@thealxlabs&lt;/a&gt; on X.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm 13 and I built this mostly during evenings and weekends in Toronto. If you have questions or feedback, I'm genuinely open to it — drop a comment or open an issue.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>ai</category>
      <category>opensource</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
