Building AI Co-workers That Actually Think With You

I've been working on something that started simple and turned into something more interesting than I expected.

It began with a text summarizer. You know the kind — paste in a long article, get back the main points. Useful, but not exactly revolutionary. The goal was to build it as an agent using Mastra (a TypeScript agent framework), deploy it, and connect it to Telex.im (an AI agent platform like n8n, but designed as a Slack alternative for communities and bootcamps) as an AI coworker. Clean, straightforward, done.

But then I kept thinking about what else an agent could do if it wasn't just processing text in one shot. What if it could actually think through problems with you? Not just answer questions, but ask them back. Help you see angles you hadn't considered. Act less like a tool and more like someone you'd grab coffee with to talk through a decision.

That's how the Strategic Advisor agent happened.

How I Built This: The Complete Process

Let me walk you through exactly how I went from idea to deployed AI coworker.

Step 1: Setting Up Mastra

Mastra's CLI setup - from zero to agent framework in under a minute

I started by creating a new folder for the project and opening it in Visual Studio Code. Then I installed Mastra:

npm create mastra@latest

The CLI walked me through setup with a few simple prompts:

  • Project name
  • Default AI provider (I chose Google)
  • API key for Google Generative AI

In under a minute, Mastra had created the entire project structure, installed dependencies, and initialized everything I needed.

Clean project structure generated by Mastra - everything organized and ready to go
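For reference, the scaffold looks roughly like this (folder names from Mastra's default template; your project may differ slightly depending on the options you pick):

my-agent/
├── src/
│   └── mastra/
│       ├── agents/        (agent definitions go here)
│       ├── tools/         (optional custom tools)
│       ├── workflows/     (optional multi-step workflows)
│       └── index.ts       (wires agents into the Mastra instance)
├── .env                   (API keys, e.g. the Google Generative AI key)
└── package.json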

What I love about Mastra is how it handles the boring infrastructure so you can focus on agent logic. It gives you:

  • Agent orchestration - the core framework for building conversational agents with personality
  • Memory management using LibSQLStore - persistent conversation history that survives restarts
  • A2A API routes - JSON-RPC 2.0 compliant endpoints automatically generated
  • Development server with hot reload for instant testing

Step 2: Building the Summarizer Agent

The complete summarizer agent - just 31 lines of code for production-ready text summarization

I created my first agent in the agents folder:

// Import paths follow current Mastra docs; they may vary slightly by version
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

export const summarizerAgent = new Agent({
  name: "Text Summarizer",
  instructions: `
    You are an expert summarization assistant.

    Main Idea: (A single sentence capturing the core thesis)
    Key Points: (2-3 bullet points with supporting details)
  `,
  model: "google/gemini-2.5-flash",
  tools: {},
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:../mastra.db"
    })
  })
});

The key Mastra features I'm using here:

  • Agent class - Mastra's core abstraction with built-in prompt management
  • Memory with LibSQLStore - persistent storage with zero configuration
  • Model abstraction - swap providers without changing code
  • Instruction templates - clear prompts that define agent behavior
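One thing the snippet above doesn't show: the agent also needs to be registered with the Mastra instance (src/mastra/index.ts in the generated project) so the dev server and the A2A routes can see it. A minimal sketch, assuming the default scaffold and current package paths (the agent file name here is just illustrative):

// src/mastra/index.ts
import { Mastra } from "@mastra/core/mastra";
import { summarizerAgent } from "./agents/summarizer-agent";

export const mastra = new Mastra({
  // the key used here ("summarizerAgent") is what later shows up in the A2A URL
  agents: { summarizerAgent },
});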

I tested it locally by running:

npm run dev

Testing the summarizer in Mastra Studio - instant feedback on agent responses

The agent worked perfectly, taking long text and returning clean, structured summaries.
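Mastra Studio is the fastest way to poke at an agent, but you can also call it straight from code, which is handy for scripted checks. A rough sketch, assuming the generate() call and a response object with a text field (both per current Mastra docs; the import path is just wherever your agent file lives):

import { summarizerAgent } from "./src/mastra/agents/summarizer-agent";

const result = await summarizerAgent.generate(
  "Summarize this: Mastra is a TypeScript framework for building AI agents..."
);
// Should come back as Main Idea + Key Points, per the instructions above
console.log(result.text);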

Step 3: Deploying to Mastra Cloud

To make this accessible from Telex, I needed to deploy it. Mastra Cloud made this dead simple:

  1. Signed up at Mastra Cloud using GitHub
  2. Connected my GitHub repository
  3. Added environment variables (my Google API key)
  4. Hit "Deploy"

Within minutes, my agent was live at:

https://limited-gray-house-e6958.mastra.cloud/a2a/agent/summarizerAgent

The A2A endpoints were automatically generated by Mastra's routing system. I didn't write any API boilerplate - Mastra's registerApiRoute function handles all the JSON-RPC 2.0 protocol details.
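To sanity-check a deployed endpoint, you can hit it with a raw JSON-RPC request. This is a hedged sketch: the method name and payload shape follow the A2A spec as I understand it (message/send with a parts array), so double-check against Mastra's current A2A docs before relying on it:

const response = await fetch(
  "https://limited-gray-house-e6958.mastra.cloud/a2a/agent/summarizerAgent",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "1",
      method: "message/send", // earlier A2A revisions used "tasks/send"
      params: {
        message: {
          role: "user",
          parts: [{ kind: "text", text: "Summarize this article: ..." }],
          messageId: "msg-1",
        },
      },
    }),
  }
);
console.log(await response.json());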

Step 4: Creating the Strategic Advisor

After the summarizer was working, I built the Strategic Advisor using the same Mastra patterns:

export const strategicAdvisorAgent = new Agent({
  name: "Strategic Advisor",
  instructions: `
    You are a strategic business advisor...

    # Your Three Core Capabilities
    1. Competitor Snapshot Analysis
    2. Decision Support Engine
    3. Idea Feasibility Evaluator

    [Detailed instructions for each capability...]
  `,
  model: "google/gemini-2.5-flash",
  tools: {},
  memory: new Memory({
    storage: new LibSQLStore({
      url: "file:../mastra.db"
    })
  })
});

Both agents point their LibSQLStore at the same database file, so they read and write the same conversation history. This is powerful for future multi-agent workflows.
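If you want to make that sharing explicit rather than relying on two stores pointing at one file, one option is to create the store once and reuse it. A small sketch, not how the repo is currently laid out:

// shared-memory.ts (illustrative)
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// one store, one database file, shared by every agent that uses it
const sharedStorage = new LibSQLStore({ url: "file:../mastra.db" });

// each agent gets its own Memory wrapper around the same store
export const sharedMemory = () => new Memory({ storage: sharedStorage });

Each agent definition would then pass memory: sharedMemory() instead of constructing its own LibSQLStore.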

Step 5: Integrating With Telex

Telex's AI Coworkers section - where agents become team members

Now comes the fun part - turning these deployed agents into AI coworkers.

I logged into Telex.im and navigated to the AI Coworkers section:

  1. Click "Add New AI Coworker"

Creating an AI coworker - Telex even generates custom avatars

  2. Generate an avatar - Telex has a built-in image generator. I described what I wanted and it created custom avatars for both agents.

  3. Fill in the details:

    • Name: "Text Summarizer" and "Strategic Advisor"
    • Title: "Expert Summarization Specialist" and "Business Strategy & Decision Support Specialist"
    • Description: Clear explanation of what each agent does
  4. Add the workflow JSON - This is where Mastra and Telex connect:

Workflow configuration - pointing Telex to the deployed Mastra agent

{
  "active": true,
  "category": "custom-agents",
  "name": "text_summarizer_agent_delegate",
  "nodes": [
    {
      "type": "a2a/mastra-a2a-node",
      "url": "https://limited-gray-house-e6958.mastra.cloud/a2a/agent/summarizerAgent"
    }
  ]
}

This tells Telex: "When someone messages this AI coworker, forward the request to this Mastra endpoint using the A2A protocol."

That's it. The agents were live, and now I can chat with either one in its own dedicated agent chat.

What Worked (and What Didn't)

What worked:

Mastra made the integration process feel effortless. The API routes worked right out of the box, and testing agents through them was straightforward. Swapping agents or endpoints took seconds — it felt like having a local playground for experimentation before deployment.

What didn't:

The Telex integration gave me more trouble than expected. The workflow.json setup wasn't always consistent, and occasionally the Telex UI wouldn't update with responses from the A2A agent even though the backend call succeeded. These bugs were random and hard to reproduce, but they slowed things down a bit. Once configured, it worked — but getting there took some trial and error.

What the Strategic Advisor Actually Does

The Strategic Advisor has three core capabilities:

1. Competitor Analysis

You tell it "analyze competitors for fintech salary-advance startups in Nigeria," and instead of generic advice, it asks:

  • "Give me 3-5 competitor names and what each does"
  • "What's your differentiation?"
  • "What customer segment are you targeting?"

Then it builds you a structured breakdown with strengths/weaknesses tables, market positioning maps, and strategic gaps.

Structured competitor analysis - clear insights instead of generic advice

2. Decision Support

You say "should we raise seed funding now or wait 6 months?" and it asks what matters to you - speed, control, runway, risk tolerance. Then it builds a weighted decision matrix based on your priorities.

Decision framework tailored to your priorities, not generic startup advice

3. Idea Feasibility

You say "I'm thinking of building a WhatsApp bot for team feedback," and it asks about target customers, pricing, and go-to-market. Then it evaluates market fit, complexity, monetization, and risk/opportunity ratio.

Idea validation before you invest time and resources

Mastra Features That Made This Possible

Let me highlight the specific Mastra capabilities that made this project smooth:

1. Agent Orchestration

The Agent class handles prompt management, model selection, message history, and response generation. You define behavior through instructions - Mastra handles execution.

2. Memory Management

LibSQLStore provides persistent conversation history with zero configuration. Both agents share the same database, enabling cross-session context and future multi-agent workflows.

3. A2A API Routes

registerApiRoute automatically generates A2A-compliant endpoints. No API boilerplate needed - Mastra handles JSON-RPC 2.0 parsing, error handling, and response formatting.

4. Model Abstraction

Swap google/gemini-2.5-flash for anthropic/claude-3 or openai/gpt-4 without changing other code. Mastra normalizes the interface across providers.
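In practice that's a one-line change plus the matching API key in your environment. A small sketch (the model ID and environment variable are illustrative, so check the provider's current names):

import { Agent } from "@mastra/core/agent";

// Same agent, different provider: only the model string changes.
export const summarizerAgentOnGpt = new Agent({
  name: "Text Summarizer",
  instructions: "You are an expert summarization assistant.",
  model: "openai/gpt-4o", // was "google/gemini-2.5-flash"; needs OPENAI_API_KEY set
});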

5. Development Experience

Hot-reload server and built-in Studio UI make testing instant. Iterate on prompts and see results in seconds.

What I Learned

  • Conversation quality beats speed. The advisor takes longer to respond, but when you're making real decisions you need good answers, not fast ones.
  • Asking questions is more valuable than giving answers. The advisor's superpower isn't what it knows - it's what it asks. By forcing you to clarify what matters, it helps you see your own thinking more clearly.
  • Specialization scales better than generalization. Two focused agents beat one bloated agent. This is true in software, and it's true in AI.
  • Standards reduce friction. A2A compliance meant Telex integration took 10 minutes instead of 10 hours. Boring standards are good.
  • Memory turns tools into partners. When agents remember context across sessions, people stop treating them like search engines and start treating them like collaborators.
  • Frameworks like Mastra let you focus on what matters. I spent 90% of my time on agent logic and 10% on infrastructure. That ratio is backwards in most AI projects.

What's Next

Right now both agents work well for what they do. But there are obvious next steps:

  • Streaming responses for the advisor, so you see it thinking in real time (a rough sketch of how that could look follows this list)
  • Artifact generation - export decision matrices and reports
  • Tool calling - let the advisor pull in real data when needed
  • Multi-agent workflows - one agent delegating to another
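For the streaming piece, Mastra agents already expose a streaming call, so most of the work is on the client side. A hedged sketch, assuming the current stream() API and its textStream iterator (the import path is just wherever the advisor file lives):

import { strategicAdvisorAgent } from "./src/mastra/agents/strategic-advisor-agent";

const stream = await strategicAdvisorAgent.stream(
  "Should we raise seed funding now or wait 6 months?"
);

// print tokens as they arrive instead of waiting for the full response
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}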

Try It Yourself

The code is open source on GitHub. If you want to build your own version, the README has everything you need.


Built with Mastra, deployed on Mastra Cloud, integrated with Telex.im.
