Situation
AI coding assistants are powerful — but they're trapped. Locked inside your terminal or IDE. Step away from your desk and you lose access to your AI agent, your workspace context, your tools.
I work across multiple environments — laptop, phone, sometimes just a quick check between meetings. My AI assistant (Kiro CLI) knows my codebase, has access to MCP tools (web search, AWS docs), and can read/write files. But only when I'm sitting at my terminal.
The question: Can I take my AI agent mobile without rebuilding it from scratch?
Task
Build a bridge that connects Kiro CLI to Telegram — so I can message my AI assistant from my phone and get the same capabilities as the terminal: file access, tool execution, MCP servers, streamed responses.
Constraints:
- No modifications to Kiro CLI itself — use its public protocol
- Channel-agnostic architecture — Telegram today, Slack/Discord tomorrow
- Real tool access — not just a chatbot wrapper, but actual file reads, terminal commands, web search
- Production-ready — auth, error handling, message splitting, typing indicators
Action
Discovering ACP
Kiro CLI implements the Agent Client Protocol (ACP) — an open standard for AI agent communication, similar to how LSP standardized language servers. It uses JSON-RPC 2.0 over stdio.
This means any process that can spawn kiro-cli acp and pipe JSON through stdin/stdout becomes an ACP client. Not just IDEs — anything.
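As a sketch of what "pipe JSON through stdin/stdout" looks like in practice: each JSON-RPC message travels on its own line (newline-delimited JSON framing is my assumption here; check the ACP spec for the exact framing), so the transport reduces to an encoder and a line-buffered parser. The helper names `encodeMessage` and `makeLineParser` are illustrative, not from any SDK.

```javascript
// Minimal JSON-RPC-over-stdio plumbing, assuming newline-delimited framing.
function encodeMessage(msg) {
  // One JSON-RPC message per line on the agent's stdin
  return JSON.stringify(msg) + "\n";
}

function makeLineParser(onMessage) {
  // stdout arrives in arbitrary chunks; buffer until a full line lands
  let buffer = "";
  return (chunk) => {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line.trim()) onMessage(JSON.parse(line));
    }
  };
}

// Wiring it to a spawned process would look roughly like:
//   proc.stdout.setEncoding("utf-8");
//   proc.stdout.on("data", makeLineParser(handleMessage));
//   proc.stdin.write(encodeMessage({ jsonrpc: "2.0", id: 1, method: "initialize", params: {} }));
```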
Architecture
Three layers, cleanly separated using the Adapter Pattern:
- Telegram Adapter — Auth, typing indicators, message splitting. Knows nothing about ACP.
- ACP Client — JSON-RPC transport. Manages protocol lifecycle. Knows nothing about Telegram.
- Kiro CLI — AI agent runtime. Model inference, tool execution, MCP servers.
Swap Telegram for Slack by writing a new adapter. The ACP client stays identical.
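One way to make that contract concrete (the shape below is my own sketch, not code from the repo): the core bridge exposes a single entry point, and every channel adapter, whether Telegram or Slack, only ever calls that.

```javascript
// Hypothetical channel-adapter contract: the core depends only on this shape.
// Adapters translate their channel's events into handleIncoming() calls.
const ALLOWED = new Set(["12345"]); // whitelisted user IDs (auth lives in the core)

function createBridge(acpClient) {
  return {
    // An adapter calls this with the sender's ID, the message text,
    // and a channel-specific reply callback.
    async handleIncoming(userId, text, reply) {
      if (!ALLOWED.has(String(userId))) return; // auth is channel-agnostic too
      const answer = await acpClient.prompt(text);
      await reply(answer);
    },
  };
}
```

With this shape, a Slack adapter is just another caller of `handleIncoming`; the ACP client never learns which channel it is serving.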
Implementing the ACP Handshake
ACP has a clear lifecycle — initialize, create session, prompt, stream:
```js
// Step 1: Spawn Kiro CLI as an ACP agent. (request() below is a small
// JSON-RPC helper that writes to proc.stdin and matches responses by id.)
const { spawn } = require("node:child_process");

const proc = spawn("kiro-cli", ["acp", "--trust-all-tools"], {
  stdio: ["pipe", "pipe", "pipe"],
  cwd: "/path/to/workspace",
});

// Step 2: Initialize — exchange capabilities
const init = await request("initialize", {
  protocolVersion: 1,
  clientCapabilities: {
    fs: { readTextFile: true, writeTextFile: true },
    terminal: true,
  },
  clientInfo: { name: "my-bot", version: "1.0.0" },
});

// Step 3: Create a session rooted in the workspace
const session = await request("session/new", {
  cwd: "/path/to/workspace",
  mcpServers: [],
});
```
The clientCapabilities declaration is the key insight — it tells Kiro what your client can handle. Declare fs.readTextFile: true and Kiro will send file read requests to your client:
```js
// Kiro sends requests TO us when it needs tools — the bridge is both
// JSON-RPC client and server.
const fs = require("node:fs");
const { execSync } = require("node:child_process");

async function handleServerRequest(msg) {
  switch (msg.method) {
    case "fs/readTextFile":
      return { content: fs.readFileSync(msg.params.path, "utf-8") };
    case "terminal/execute":
      return { output: execSync(msg.params.command, { encoding: "utf-8" }) };
  }
}
```
Streaming Responses
ACP streams responses via session/update notifications — each containing a text chunk:
```js
function prompt(text) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const onUpdate = (method, params) => {
      if (params.update?.sessionUpdate === "agent_message_chunk") {
        chunks.push(params.update.content.text);
      }
    };
    acp.on("notification", onUpdate);

    request("session/prompt", {
      sessionId,
      prompt: [{ type: "text", text }],
    })
      .then(() => resolve(chunks.join("")))
      .catch(reject)
      // Detach the listener so chunks from later prompts don't leak in
      .finally(() => acp.off("notification", onUpdate));
  });
}
```
Adding MCP Tools
MCP servers extend the agent with external capabilities. One JSON config gives the bot web search:
```json
{
  "name": "my-agent",
  "tools": ["*"],
  "mcpServers": {
    "web-search": {
      "command": "npx",
      "args": ["-y", "duckduckgo-mcp-server"]
    }
  }
}
```
Ask the bot "What's the weather in Tokyo?" — Kiro automatically invokes the search tool, processes results, and responds. All within a single session/prompt call.
Wiring Telegram
The adapter is the simplest layer — receive, forward, respond:
```js
bot.on("message", async (msg) => {
  // Only respond to whitelisted Telegram user IDs
  if (!allowedUsers.has(String(msg.from.id))) return;

  await bot.sendChatAction(msg.chat.id, "typing");
  const response = await acp.prompt(msg.text);

  // Telegram caps messages at 4096 characters — split long replies
  for (const part of splitMessage(response, 4096)) {
    await bot.sendMessage(msg.chat.id, part);
  }
});
```
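The `splitMessage` helper isn't shown in the snippet above; a naive version (my own sketch, cutting at hard character boundaries rather than word or newline breaks) is enough to stay under Telegram's limit:

```javascript
// Split a long reply into chunks that fit Telegram's 4096-char message cap.
// A fancier version could prefer splitting on newlines or word boundaries.
function splitMessage(text, maxLen) {
  const parts = [];
  for (let i = 0; i < text.length; i += maxLen) {
    parts.push(text.slice(i, i + maxLen));
  }
  return parts.length ? parts : [""]; // always send at least one message
}
```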
Result
~200 lines of code. A fully functional mobile AI assistant that:
- ✅ Connects to Kiro CLI via ACP (JSON-RPC 2.0 over stdio)
- ✅ Has full workspace access — reads files, writes files, runs terminal commands
- ✅ Integrates MCP tools — web search, AWS docs, custom tools
- ✅ Streams responses — no waiting for the full reply to finish generating
- ✅ Handles auth, typing indicators, and automatic splitting for Telegram's 4096-character limit
- ✅ Stays channel-agnostic — adding Slack or Discord means writing one new adapter file
Patterns That Made It Work
| Pattern | Why It Matters |
|---|---|
| Adapter Pattern | Telegram adapter is decoupled from ACP. New channels = new adapter, same core. |
| JSON-RPC 2.0 / stdio | Same transport as LSP. Battle-tested, no HTTP overhead, works with any language. |
| Capability Negotiation | Client declares what it supports during initialize. Forward-compatible. |
| Bidirectional Requests | Client sends prompts, agent sends tool requests back. Both sides are client and server. |
| Streaming via Notifications | Agent streams chunks as generated. No polling, no buffering. |
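The bidirectional row deserves a concrete shape. Every incoming JSON-RPC message is one of three things: a request from the agent (has both `id` and `method`), a notification (`method` only), or a response to a request we sent (`id` only). A routing sketch, with names of my own invention:

```javascript
// Route one incoming JSON-RPC message to the right handler.
// pending: Map of id -> resolve function for requests we sent earlier.
function routeMessage(msg, pending, onAgentRequest, onNotification) {
  if (msg.method !== undefined && msg.id !== undefined) {
    return onAgentRequest(msg); // agent -> client request, e.g. fs/readTextFile
  }
  if (msg.method !== undefined) {
    return onNotification(msg); // e.g. session/update streaming chunks
  }
  const resolve = pending.get(msg.id); // response to one of our requests
  if (resolve) {
    pending.delete(msg.id);
    resolve(msg.result);
  }
}
```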
Key Learnings
ACP is underutilized. Most people use Kiro in the terminal or IDE. But ACP is a general-purpose protocol — any client that speaks JSON-RPC can connect.
MCP servers are the real power. The bot isn't just a chatbot — it's an agent with tools. Web search, AWS docs, file operations. MCP turns text generation into action.
stdio is elegant. No HTTP servers, no WebSocket complexity, no ports. Spawn a process, pipe JSON, done.
The Adapter Pattern pays off immediately. Started with Telegram. Slack would take an afternoon because the ACP client is completely separate.
What's Next
- [ ] Multi-channel — Slack and WhatsApp adapters using the same ACP client
- [ ] Voice messages — Speech-to-text → prompt → response → voice note
- [ ] Serverless — Lambda + API Gateway with Telegram webhooks
- [ ] Session persistence — Resume conversations across restarts
Try It
The full source is open: kiro-acp-telegram-bot
MIT-0 licensed. Fork it, extend it, build your own channels.
```sh
git clone https://github.com/ajitnk-lab/kiro-acp-telegram-bot.git
cd kiro-acp-telegram-bot
npm install
cp .env.example .env   # add your Telegram token
npm start
```
I'd love your feedback:
- What channels would you connect next?
- Would a serverless (Lambda) deployment be useful?
- What MCP servers would you want on your phone?
Drop a comment or open an issue on the repo. If you build something with this — a Slack bot, a Discord integration, a voice assistant — let me know. 🚀
Built with Kiro CLI · Agent Client Protocol · MCP
References
- Kiro CLI Documentation — Installation, authentication, usage
- Kiro ACP Integration Guide — Supported methods, session updates, Kiro extensions
- Agent Client Protocol Specification — The open standard for AI agent-editor communication
- Kiro MCP Configuration — Setting up MCP servers (stdio + HTTP)
- Kiro Custom Agents — Agent JSON config, hooks, MCP server selection
- Model Context Protocol — Anthropic's open standard for AI tool integration
- JSON-RPC 2.0 Specification — The transport protocol ACP is built on
- Telegram Bot API — Bot creation, message handling, polling vs webhooks
- ACP Changelog — Kiro 1.25 — ACP launch, MCP tools, session management
- Exploring the latest features of Amazon Q Developer CLI — Background on CLI + MCP architecture


Top comments (1)
Quick ELI5 for anyone wondering what this actually does:
I have an AI coding assistant (Kiro CLI) running on a cloud server. It can read my files, run commands, search the web — but only from a terminal.
I wanted to talk to it from my phone. So I built a small Node.js bridge that connects it to Telegram.
Now I message my Telegram bot, it forwards to the AI on my server, the AI does its thing, and I get a reply on my phone. No laptop needed — everything runs in the cloud.
The article explains the protocol (ACP) and patterns behind it, but the end result is simple: AI assistant in your pocket.
Happy to answer questions! 🙏