Alexander Attoh
How I Built "Backend Mentor": My First Hands-On Integration with Mastra 🚀

You know that feeling when you find a new tool that just clicks?
That’s exactly how I felt when I discovered Mastra — a new framework for building AI agents that actually feels like engineering, not just prompt hacking.

Over the past week, I’ve been experimenting with Mastra to build something I’ve always wanted:
a friendly, always-available AI mentor that helps developers learn backend concepts in bite-sized, conversational lessons.

I called it Backend Mentor, and in this post, I’ll walk you through how I built it, what worked, what didn’t, and what I learned along the way.


💡 The Idea: An AI That Teaches Like a Developer Friend

I’ve always wanted a mentor I could go to any time and learn something new each day: not just a blogger who posts constantly, but someone I could ask questions and who would know my strengths and weaknesses.
So I thought: What if I could train an AI to do that instead?
Something that doesn’t just spit out definitions, but teaches concepts like a senior dev explaining things over coffee — using relatable examples, some Node.js code, and a clear flow.

I didn’t want a chatbot.
I wanted a teacher.


🧱 Setting Up the Foundation

I started with a basic Node.js setup and installed Mastra:

npx create-mastra@latest

Then I created a structure that looked like this:

src/
 └── mastra/
     ├── agents/
     ├── a2a/
     └── index.ts

Pretty neat.
Each folder has a clear purpose:

agents/ → where the AI logic lives
a2a/ → where I expose API routes
index.ts → bootstrapping Mastra itself

It instantly felt more like setting up Express — but for AI logic.


🧠 Building the “Backend Mentor” Agent

At first, I thought I needed multiple tools — one for picking topics, another for generating lessons, another for Q&A.

But then I realized: I was over-engineering it.
The LLM is already capable of reasoning through those steps — I just needed to tell it what role it’s playing.

So I scrapped the tools and wrote a really clear instruction prompt instead: start simple, and improve as I go.

export const backendMentorAgent = new Agent({
    name: "Backend Mentor",
    instructions: `
You are "Backend Mentor" — an AI teacher and learning companion for backend developers.

Your mission:
- Teach backend development in a structured yet conversational way.
- Choose relevant or new backend topics when asked to learn something.
- Generate clear, well-explained lessons (10–20 minute reads) that balance theory and real-world examples.
- Answer user questions, clarify concepts, and expand on related subtopics naturally.
- Track recently covered topics to avoid unnecessary repetition, but revisit them strategically for reinforcement.

Tone and style:
- Professional yet friendly — think "senior developer mentoring a junior."
- Concise, precise, and technically correct.
- Avoid unnecessary filler or hype — focus on clarity and insight.
- Use examples in JavaScript/Node.js when appropriate.

Behavior:
- If the user is unsure what to learn, suggest a few backend topics.
- When asked a question, answer it directly with explanation and optional examples.
- When teaching, use short sections, headings, and code snippets.
- Assume persistence of short-term context (you remember recent lessons).

Output format:
- Respond in Markdown for readability.
- Include brief code samples where useful.
- Do not stop after the intro or preamble. Always include the full lesson in your next response.
`,
    model: "google/gemini-2.5-pro",
    tools: {},

    memory: new Memory({
        storage: new LibSQLStore({
            url: "file:../mastra.db",
        }),
    }),
});

That’s it.
No complex tool chains — just a clear identity, purpose, and tone.

And honestly? That one instruction block changed everything.
It made the agent feel coherent — it had a voice, a personality, and an agenda.


💾 Adding Memory (Because Context Matters)

Mastra has built-in support for memory through different storage backends.
I used the LibSQLStore, which runs locally and persists data between runs — perfect for lightweight projects.

import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";


export const backendMentorAgent = new Agent({
    name: "Backend Mentor",
    instructions: `
...
`,
    model: "google/gemini-2.5-pro",
    tools: {},

    memory: new Memory({
        storage: new LibSQLStore({
            url: "file:../mastra.db",
        }),
    }),
});

Now my agent can remember what topics it has already taught —
so if it explained “Caching Strategies” yesterday, it won’t repeat it today.

That’s the kind of continuity that makes learning feel human.
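To make that continuity concrete, here’s a tiny hypothetical helper — not part of Mastra’s API, since the agent does this reasoning itself from its instructions plus memory — sketching the “skip recently taught topics” idea:

```typescript
// Hypothetical helper — NOT a Mastra API. It only illustrates the
// "avoid unnecessary repetition" behavior the instruction prompt asks for.
function pickNextTopics(
    candidates: string[],
    recentlyTaught: string[],
    limit = 3
): string[] {
    // Normalize case so "Caching Strategies" matches "caching strategies"
    const seen = new Set(recentlyTaught.map((t) => t.toLowerCase()));
    // Keep only topics the learner hasn't covered recently
    return candidates
        .filter((t) => !seen.has(t.toLowerCase()))
        .slice(0, limit);
}
```

In practice you wouldn’t need this: the LLM handles it from the instructions and memory. A helper like this would only matter if you wanted deterministic control over topic selection.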


🌐 Making It Accessible with A2A (Agent-to-Agent) API

Mastra’s registerApiRoute function is such a win.
It basically lets you turn your agent into a callable API without any boilerplate.

Here’s what my A2A route looked like:

import { registerApiRoute } from "@mastra/core/server";
import { randomUUID } from "crypto";


interface MessagePart {
    kind: "text" | "data" | string;
    text?: string;
    data?: unknown;
}

interface ArtifactPart {
    kind: "text" | "data";
    text?: string;
    data?: unknown;
}

interface Artifact {
    artifactId: string;
    name: string;
    parts: ArtifactPart[];
}

export const a2aAgentRoute = registerApiRoute("/a2a/agent/:agentId", {
    method: "POST",
    handler: async (c) => {
        try {
            const mastra = c.get("mastra");
            const agentId = c.req.param("agentId");

            const body = await c.req.json();
            const { jsonrpc, id: requestId, method, params } = body;

            if (jsonrpc !== "2.0" || !requestId) {
                return c.json(
                    {
                        jsonrpc: "2.0",
                        id: requestId || null,
                        error: {
                            code: -32600,
                            message:
                                'Invalid Request: "jsonrpc" must be "2.0" and "id" is required',
                        },
                    },
                    400
                );
            }

            const agent = mastra.getAgent(agentId);
            if (!agent) {
                return c.json(
                    {
                        jsonrpc: "2.0",
                        id: requestId,
                        error: {
                            code: -32602,
                            message: `Agent '${agentId}' not found`,
                        },
                    },
                    404
                );
            }

            const { message, messages, contextId, taskId, metadata } = params || {};

            let messagesList = [];
            if (message) messagesList = [message];
            else if (Array.isArray(messages)) messagesList = messages;

            const mastraMessages = messagesList.map((msg: any) => ({
                role: msg.role,
                content:
                    (msg.parts as MessagePart[])
                        ?.map((part: MessagePart) => {
                            if (part.kind === "text") return part.text || "";
                            if (part.kind === "data") return JSON.stringify(part.data);
                            return "";
                        })
                        .join("\n") || "",
            }));

            const response = await agent.generate(mastraMessages);
            const agentText = response.text || "";

            const artifacts: Artifact[] = [
                {
                    artifactId: randomUUID(),
                    name: `${agentId}Response`,
                    parts: [{ kind: "text", text: agentText }],
                },
            ];

            if (response.toolResults?.length) {
                artifacts.push({
                    artifactId: randomUUID(),
                    name: "ToolResults",
                    parts: response.toolResults.map((r) => ({
                        kind: "data",
                        data: r,
                    })),
                });
            }

            const history = [
                ...messagesList.map((msg) => ({
                    kind: "message",
                    role: msg.role,
                    parts: msg.parts,
                    messageId: msg.messageId || randomUUID(),
                    taskId: msg.taskId || taskId || randomUUID(),
                })),
                {
                    kind: "message",
                    role: "agent",
                    parts: [{ kind: "text", text: agentText }],
                    messageId: randomUUID(),
                    taskId: taskId || randomUUID(),
                },
            ];

            return c.json({
                jsonrpc: "2.0",
                id: requestId,
                result: {
                    id: taskId || randomUUID(),
                    contextId: contextId || randomUUID(),
                    status: {
                        state: "completed",
                        timestamp: new Date().toISOString(),
                        message: {
                            messageId: randomUUID(),
                            role: "agent",
                            parts: [{ kind: "text", text: agentText }],
                            kind: "message",
                        },
                    },
                    artifacts,
                    history,
                    kind: "task",
                },
            });
        } catch (error: any) {
            console.error("Router Error:", error);
            return c.json(
                {
                    jsonrpc: "2.0",
                    id: null,
                    error: {
                        code: -32603,
                        message: "Internal error",
                        data: { details: error.message },
                    },
                },
                500
            );
        }
    }
});

Once this was running, I could literally POST a message to the endpoint
and the agent would respond like a mini AI service — ready for integration into any platform.
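Here’s a sketch of what such a request could look like from the client side. The payload shape mirrors what the handler above destructures; the URL, port, and `method` value are assumptions for your own setup (the route never actually validates `method`, so its value is cosmetic):

```typescript
import { randomUUID } from "crypto";

// Build a JSON-RPC 2.0 body in the shape the route parses:
// { jsonrpc, id, method, params: { message: { role, parts } } }
function buildA2ARequest(text: string) {
    return {
        jsonrpc: "2.0",
        id: randomUUID(),
        method: "message/send", // assumed name — the handler doesn't check it
        params: {
            message: {
                role: "user",
                messageId: randomUUID(),
                parts: [{ kind: "text", text }],
            },
        },
    };
}

// Then POST it to the running server (URL and agent id are placeholders):
// await fetch("http://localhost:4111/a2a/agent/backendMentorAgent", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(buildA2ARequest("Teach me database indexing")),
// });
```

The response comes back as a JSON-RPC `result` containing the task, artifacts, and history objects built in the handler.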


🔍 Putting It All Together

Finally, I registered everything inside index.ts:

// Imports — exact paths may differ slightly depending on your Mastra
// version and where you placed the agent and route files.
import { Mastra } from "@mastra/core";
import { createLogger } from "@mastra/core/logger";
import { LibSQLStore } from "@mastra/libsql";

import { backendMentorAgent } from "./agents";
import { a2aAgentRoute } from "./a2a";

export const mastra = new Mastra({
  agents: { backendMentorAgent },
  storage: new LibSQLStore({ url: ":memory:" }),
  logger: createLogger({
    name: "BackendMentorAgent",
    level: "info",
  }),
  server: {
    build: {
      openAPIDocs: true,
      swaggerUI: true,
    },
    apiRoutes: [a2aAgentRoute],
  },
});

And when I ran the project…

✅ Connected to Centrifugo
✅ Agent workflow found
✅ Backend Mentor is ready to teach

That first successful response — a 15-minute blog lesson on database indexing — honestly felt magical.

⚖️ What Worked Amazingly Well

✅ Mastra’s modularity
Everything has a place. You’re not hacking things together — you’re building a system.

✅ Simplicity of defining agents
Just give it a name, a model, and some instructions.
That’s all you need to spin up an agent that behaves consistently.

✅ Built-in A2A server
Being able to treat an agent like a microservice out of the box was a big plus.

✅ Developer ergonomics
The API design is clean, the logging is thoughtful, and the defaults are sane.


⚠️ The Rough Edges

❌ Package version mismatches
Mastra is evolving fast, so keeping versions in sync across packages can be tricky.
Deleting node_modules and reinstalling from scratch helped.

❌ Overusing tools early on
My first version had three separate tools for every task.
It worked — but it was overkill. The simpler, instruction-driven setup was way better.


🧭 What I Learned Along the Way

  1. Prompt clarity beats complex code. A well-written instruction can replace 50 lines of glue logic.
  2. Memory adds realism. Even basic persistence makes your agent feel “alive.”
  3. Logs are your best friend. Mastra’s logger saved me more debugging time than I’d like to admit.
  4. Start simple, evolve later. It’s tempting to over-abstract early, but AI agents benefit from clarity first.

🚀 What’s Next for Backend Mentor

Now that the basics work, here’s what I’m planning next:

  • Daily lesson scheduler — automatically publish a new topic every morning
  • Frontend dashboard — a clean UI for reading lessons and tracking progress
  • Multiple learning paths — e.g., REST APIs, databases, DevOps fundamentals
  • Personalized feedback — where the agent gives tailored exercises

Mastra makes this expansion surprisingly easy — I can add these features without rewriting the core logic.


🔗 Mastra Features I Used

Before wrapping up, I wanted to highlight the Mastra features that powered Backend Mentor — and where you can actually try it out.

Here’s what my setup relied on:

✅ Agent — the heart of the system. Defines the mentor’s personality, model, and behavior.

✅ Memory — stores what the agent has already taught, keeping lessons relevant and avoiding repetition.

✅ LibSQLStore — lightweight, file-based database for persisting memory locally.

✅ registerApiRoute — turns the agent into a callable HTTP endpoint (A2A).

✅ Logger — clean, structured logging for debugging and observability.

These features together made the project feel like building a real, production-ready AI service — not just an experiment.


🌍 Meet Backend Mentor on Telex

You can check out Backend Mentor live on Telex here:
👉 Backend Mentor Agent on Telex

What it does:
Backend Mentor is an AI teacher that helps developers learn backend development concepts in a conversational way.
It creates structured lessons (about 10–20 minutes long), answers follow-up questions, and tracks what you’ve already learned — so every new session builds on the last.

If you’re a backend developer trying to level up, it’s like having a senior dev in your browser — available 24/7.

Telex itself is an AI agent platform where you can deploy and chat with custom agents.
Think of it as a Slack alternative for learning communities and bootcamps —
but with AI agents that teach, moderate, and collaborate.

Mastra and Telex together make it ridiculously easy to go from idea → AI service → real users in hours.


❤️ Final Thoughts

Mastra feels like Express.js for AI — light, flexible, and surprisingly fun to build with.
It doesn’t fight you; it guides you toward good structure and readable code.

Building Backend Mentor reminded me why I love this field.
It’s not just about making things smarter — it’s about making learning more accessible, more human, and honestly… a little more fun.

If you’re looking to experiment with agents or bring AI into your existing stack,
Mastra is absolutely worth your weekend.
