chinecherem rose

CodeMate - An AI Coding Agent for developers

Built CodeMate, an AI coding assistant, using Mastra and integrated it into Telex.im

How I built an AI-powered coding assistant that helps developers write, debug, and refactor code, and integrated it with Telex using the A2A (Agent-to-Agent) protocol.


🎯 The Challenge

As a backend developer, I often find myself jumping between projects, testing APIs, debugging async logic, and rewriting code snippets. I wanted a lightweight assistant that could:

  • Understand code context

  • Explain and refactor snippets

  • Suggest improvements

  • Integrate seamlessly into communication platforms (like Telex.im)

  • Run automatically and respond in real time

That's how CodeMate was born: an AI coding agent powered by Mastra and integrated via Telex’s A2A protocol.


🛠️ Tech Stack

Here’s what powers CodeMate under the hood:

  • Mastra: AI agent framework for building & orchestrating intelligent agents

  • Telex.im: Communication platform that supports A2A protocol

  • Railway: Simple, fast hosting for the agent

  • OpenAI GPT-4o-mini: The model behind CodeMate’s intelligence

  • Node.js + Express: Backend server handling A2A communication


🧩 Architecture Overview

Here’s how the system works end-to-end:

System Flow

User message (via Telex)
↓
JSON-RPC 2.0 request
↓
CodeMate server on Railway (Node.js)
↓
Mastra Agent (with OpenAI integration)
↓
Generate, refactor, or explain code
↓
A2A-formatted response
↓
Telex.im Display
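
For reference, an incoming request looks roughly like this (an illustrative example; the field names mirror the handler in Step 2, and Telex’s actual envelope may carry extra fields such as a method name and IDs):

{
  "jsonrpc": "2.0",
  "id": "task-123",
  "params": {
    "message": {
      "parts": [
        { "kind": "text", "text": "Refactor this async function for better readability." }
      ]
    }
  }
}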

💻 Building the Agent

Step 1: Setting Up Mastra

Mastra made it easy to define an intelligent agent with clear instructions:

import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';

export const codeMateAgent = new Agent({
  name: 'CodeMate',
  instructions: `
    You are a helpful coding assistant.
    You help developers write, debug, and refactor code efficiently.
    Always explain your reasoning clearly and show examples when needed.
  `,
  model: openai('gpt-4o-mini'),
});
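
With the agent defined, getting a reply is a single call. A minimal usage sketch, assuming Mastra's generate API (check the @mastra/core docs for the exact signature in your version):

// Sketch: ask the agent a question and read back the plain-text reply
const result = await codeMateAgent.generate('Explain async/await in JavaScript');
console.log(result.text);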

Step 2: Handling the A2A Protocol

Telex sends messages using JSON-RPC 2.0. I had to ensure the server could parse and respond in the exact format expected.

// Extract the user's text from the incoming A2A message parts
if (data.jsonrpc === '2.0' && data.params) {
  const textPart = data.params.message?.parts?.find(p => p.kind === 'text');
  userMessage = textPart?.text;
}

// Wrap the agent's reply in the JSON-RPC 2.0 envelope Telex expects
const response = {
  jsonrpc: '2.0',
  id: data.id,
  result: {
    message: {
      role: 'agent',
      parts: [
        {
          kind: 'text',
          text: aiResponse,
        },
      ],
    },
  },
};

// Send the response back to Telex
res.end(JSON.stringify(response));

This ensures Telex recognizes and displays the response from CodeMate properly.
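
Putting the two steps together, the Express route receives the JSON-RPC request, hands the extracted text to the Mastra agent, and sends back the wrapped reply. A rough sketch (the route path matches the test script below; the import path and the generate call are assumptions):

import express from 'express';
import { codeMateAgent } from './mastra/agents/codeMateAgent'; // hypothetical path

const app = express();
app.use(express.json()); // parse incoming JSON-RPC bodies

app.post('/a2a/agent/codeMateAgent', async (req, res) => {
  const data = req.body;

  // 1. Extract the user's text from the A2A message parts (as shown above)
  const textPart = data.params?.message?.parts?.find(p => p.kind === 'text');
  const userMessage = textPart?.text ?? '';

  // 2. Ask the Mastra agent for a reply
  const { text: aiResponse } = await codeMateAgent.generate(userMessage);

  // 3. Wrap the reply in the JSON-RPC envelope Telex expects (as shown above)
  res.json({
    jsonrpc: '2.0',
    id: data.id,
    result: {
      message: { role: 'agent', parts: [{ kind: 'text', text: aiResponse }] },
    },
  });
});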

Step 3: Testing Locally

Before deploying, I tested with Postman and also created a testAgent.js script to simulate A2A calls:

const axios = require('axios');

(async () => {
  const payload = {
    jsonrpc: '2.0',
    id: 'bot-test',
    params: {
      message: {
        parts: [{ kind: 'text', text: 'Explain async/await in JavaScript' }],
      },
    },
  };

  const res = await axios.post('http://localhost:3000/a2a/agent/codeMateAgent', payload);
  console.log('Agent Response:\n', res.data);
})();

Result:
CodeMate returned a clean, well-formatted explanation with code examples.

Step 4: Deploying to Railway

Railway simplified the deployment process:

  • Pushed the code to GitHub

  • Linked it to Railway

  • Set environment variables (see the note on PORT after this list):

OPENAI_API_KEY=sk-...
PORT=3000
  • Deployed automatically via Railway’s CI/CD pipeline
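
One detail from the environment variables step deserves a note: Railway assigns the port at runtime, so the server should read PORT from the environment instead of hard-coding 3000 (this comes up again under Challenges below). A minimal sketch:

// Railway injects PORT at runtime; fall back to 3000 for local development
const port = process.env.PORT || 3000;

app.listen(port, () => {
  console.log(`CodeMate A2A server listening on port ${port}`);
});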

Step 5: Connecting to Telex

Finally, I registered my workflow in Telex.im. Once active, Telex connected instantly, and messages started flowing 🎉


⚙️ Example Interaction

User:

“Refactor this async function for better readability.”

CodeMate Agent:

Here’s a cleaner version using async/await and error handling:

async function fetchUserData() {
  try {
    const res = await fetch('/api/user');
    if (!res.ok) throw new Error('Network error');
    const data = await res.json();
    return data;
  } catch (err) {
    console.error('Error:', err);
  }
}

Simple, elegant, and readable — just what a developer needs.


🚧 Challenges & Fixes

1. Empty LLM Responses

Cause: Missing tool invocation in Mastra

Fix: Verified model configuration and enforced structured responses

2. A2A Format Mismatch

Cause: Telex expects a specific JSON-RPC structure

Fix: Wrapped all responses in the jsonrpc + result.message structure

3. Railway Connection Errors

Fix: Explicitly set the PORT environment variable

4. Missing Metadata

Fix: Added contextId and taskId fields for debugging within Telex (see the sketch below)
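
For item 4, the fix was to attach the IDs to the reply. A rough sketch (where the values come from and exactly where they sit in the envelope are assumptions; Telex's A2A docs define the exact shape):

// Echo correlation metadata back so Telex can trace the exchange
// (assumption: the IDs are taken from the incoming request)
const contextId = data.params?.message?.contextId;
const taskId = data.params?.message?.taskId;

const response = {
  jsonrpc: '2.0',
  id: data.id,
  result: {
    message: {
      role: 'agent',
      contextId,
      taskId,
      parts: [{ kind: 'text', text: aiResponse }],
    },
  },
};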

🎓 Key Learnings

1. Protocol compliance is everything

Understanding Telex’s A2A JSON-RPC flow was crucial.

2. Mastra simplifies agent creation

It abstracts away most of the complexity.

3. Railway is perfect for quick deployments

Fast, automatic, and great for projects like this.

4. Good logs save hours

Adding console logs for each request and response made debugging a breeze.
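
In practice this was little more than logging the parsed request and the outgoing envelope. A sketch of the kind of middleware that helps (hedged; the actual logging in CodeMate may differ):

// Log every incoming A2A call (assumes express.json() has already parsed the body)
app.use((req, res, next) => {
  console.log(`[A2A] ${req.method} ${req.path}`, JSON.stringify(req.body));
  next();
});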


💡 Tips for Building Your Own

  1. Test your A2A endpoint locally before deploying

  2. Use environment variables for API keys (see the sketch after this list)

  3. Keep your agent instructions simple and focused

  4. Use Mastra’s Agent abstraction. It saves tons of time

  5. Log everything in development mode
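
For tip 2, dotenv is the usual way to keep keys out of source in a Node.js project (a minimal sketch; assumes the dotenv package is installed):

// Load OPENAI_API_KEY and friends from a local .env file
require('dotenv').config();

console.log(Boolean(process.env.OPENAI_API_KEY)); // true if the key loaded, without printing it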


🤝 Open Source & Community

This project is open source. Contributions are welcome!
Feel free to fork, improve, or integrate your own AI workflows.

GitHub Repo: https://github.com/NecheRose/CodeMate-AI-Agent.git


🎯 Conclusion

Building CodeMate with Mastra and Telex.im was an exciting journey. It taught me a lot about AI agent communication, JSON-RPC protocols, and real-world integration between intelligent systems.

If you are an AI developer exploring how to build and connect your own agents, I highly recommend giving Mastra a try.


Built with ❤️ using Mastra, Telex.im, and Railway
