So I wanted to build an AI agent that could help me generate content, and I ended up creating something pretty cool - a conversational AI coworker that lives in Telex.im and generates blog posts.
But here's the thing - it started as a newsletter generator. Yep, completely different use case. Let me tell you how it evolved.
The Newsletter Pivot 🔄
Originally, I was building this to generate newsletters. The idea was simple: give the AI a title and content, and it spits out a nicely formatted newsletter. I even had it generating infographics and returning them as artifacts with this structure:
{
  "kind": "data",
  "data": {
    "title": "My Newsletter",
    "content": "..."
  }
}
But when I tried integrating it with Telex.im, the artifacts weren't rendering correctly. The platform just didn't know what to do with that data format, or I was probably doing it wrong (that's more likely the case). So I thought, "Why not just make it simpler?"
I pivoted to blog posts and changed the artifact format to plain text:
{
  "kind": "text",
  "text": "Here's your blog post content..."
}
Way cleaner, and Telex.im loved it. Sometimes the simplest solution is the best one!
What I Built
An AI agent that:
- Has natural conversations (not just one-shot queries)
- Asks for blog title and topic step-by-step
- Generates professional blog posts using Groq's fast groq/compound model
- Remembers the entire conversation
- Works seamlessly in Telex.im workflows
The Stack
Go: Because I wanted something fast and concurrent
LangChain Go: For managing conversations and LLM interactions
Groq: Lightning-fast Llama-powered inference via the groq/compound model (seriously, 750+ tokens/second!)
A2A (JSON-RPC 2.0): Standard protocol for agent-to-agent communication
Telex.im: Where the magic happens
How the Conversation Flows
The agent doesn't just take a prompt and generate. It actually guides you through:
Agent: "Want to create a blog post?"
You: "yes"
Agent: "What's the title?"
You: "Building AI Agents"
Agent: "What's the topic?"
You: "Go and LangChain"
Agent: "Should I proceed?"
You: "yes"
Agent: ✅ [generates full blog post]
This conversational approach makes it feel less like using a tool and more like working with a teammate.
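Under the hood, that flow is just a tiny per-session state machine. Here's a minimal sketch (the step names are mine, not pulled from the actual code):

// Step tracks where the user is in the blog post flow.
type Step int

const (
    StepAskTitle Step = iota
    StepAskTopic
    StepConfirm
    StepGenerate
)

// nextPrompt returns what the agent should ask at each step.
func nextPrompt(step Step) string {
    switch step {
    case StepAskTitle:
        return "What's the title?"
    case StepAskTopic:
        return "What's the topic?"
    case StepConfirm:
        return "Should I proceed?"
    default:
        return "Generating your blog post..."
    }
}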
The Code Structure
I kept it clean with separation of concerns:
blog-generator/
├── main.go # Entry point
├── config/ # API keys & settings
├── requests/ # Request/Response types
├── services/
│ ├── agent_service.go # Conversation logic
│ └── blog_service.go # Content generation
└── controllers/ # HTTP handlers
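For context, the requests package holds the types every snippet below leans on. Here's a trimmed sketch of what mine look like (field sets are illustrative, not the full thing):

package requests

import "github.com/tmc/langchaingo/memory"

type ResponsePart struct {
    Kind string `json:"kind"` // e.g. "text"
    Text string `json:"text"`
}

type Artifact struct {
    ArtifactID string         `json:"artifactId"`
    Name       string         `json:"name"`
    Parts      []ResponsePart `json:"parts"`
}

type TaskResult struct {
    TaskID    string     `json:"taskId"`
    State     string     `json:"state"` // "input-required", "completed", ...
    Response  string     `json:"response"`
    Artifacts []Artifact `json:"artifacts,omitempty"`
}

type HistoryMessage struct {
    Role string
    Text string
}

type SessionData struct {
    ContextID string
    Title     string
    BlogPost  string
    History   []HistoryMessage
    Memory    *memory.ConversationBuffer
}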
Key Components
1. Configuration
First, load your Groq API key (grab one free at console.groq.com):
package config

import "os"

type Config struct {
    GroqAPIKey string
    Port       string
}

func Load() *Config {
    groqAPIKey := os.Getenv("GROQ_API_KEY")
    if groqAPIKey == "" {
        panic("GROQ_API_KEY not set")
    }

    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    return &Config{
        GroqAPIKey: groqAPIKey,
        Port:       port,
    }
}
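main.go just wires the pieces together. Roughly like this (the constructor and handler names here are a sketch of my setup, not gospel):

package main

import (
    "log"
    "net/http"

    "blog-generator/config"
    "blog-generator/controllers"
    "blog-generator/services"
)

func main() {
    cfg := config.Load()

    blogpostSvc := services.NewBlogPostService(cfg.GroqAPIKey)
    agentSvc := services.NewAgentService(blogpostSvc)

    // Single A2A endpoint that Telex.im points at.
    http.HandleFunc("/agent", controllers.NewAgentController(agentSvc).Handle)

    log.Printf("listening on :%s", cfg.Port)
    log.Fatal(http.ListenAndServe(":"+cfg.Port, nil))
}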
2. Blog Service - The Content Generator
import (
    "context"
    "fmt"

    "github.com/tmc/langchaingo/llms"
    "github.com/tmc/langchaingo/llms/openai"
)

type BlogPostService struct {
    llm llms.Model
}

func NewBlogPostService(groqAPIKey string) *BlogPostService {
    // Groq exposes an OpenAI-compatible API, so the openai client works
    // as long as we point it at Groq's base URL.
    llm, err := openai.New(
        openai.WithToken(groqAPIKey),
        openai.WithBaseURL("https://api.groq.com/openai/v1"),
        openai.WithModel("groq/compound"),
    )
    if err != nil {
        panic(fmt.Sprintf("failed to create blogpost service: %v", err))
    }
    return &BlogPostService{llm: llm}
}
// Note: this prefix is a holdover from the newsletter version of the agent.
func (s *BlogPostService) getSystemPrompt() string {
    return `GENERATE_NEWSLETTER|Title: `
}
func (s *BlogPostService) Generate(ctx context.Context, session *requests.SessionData, title string) (string, error) {
    messages := []llms.MessageContent{
        llms.TextParts(llms.ChatMessageTypeSystem, s.getSystemPrompt()),
        llms.TextParts(llms.ChatMessageTypeSystem, "You are a blog writer. Create engaging, well-structured blog posts."),
        llms.TextParts(llms.ChatMessageTypeHuman, fmt.Sprintf(`Create a cool blog post based on the following:

Title and Content: %s

Generate a well-structured blogpost with:
- A catchy headline
- An engaging introduction
- If you notice any code snippets, include them in the blog explanation
- 3-4 key points or sections with clear headings (use format "## Heading")
- A conclusion or call-to-action

Keep it concise and professional.`, title)),
    }

    // Pull prior conversation turns out of LangChain memory.
    memoryVars, err := session.Memory.LoadMemoryVariables(ctx, map[string]any{})
    if err == nil {
        if history, ok := memoryVars["history"].(string); ok && history != "" {
            messages = append(messages, llms.TextParts(llms.ChatMessageTypeHuman, history))
        }
    }

    messages = append(messages, llms.TextParts(llms.ChatMessageTypeHuman, title))

    response, err := s.llm.GenerateContent(ctx, messages, llms.WithTemperature(0.7))
    if err != nil {
        return "", err
    }
    return response.Choices[0].Content, nil
}
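One gotcha: loading memory only covers half of "remembers the entire conversation". You also have to save each turn back into the buffer. In my version that happens right before Generate returns, roughly like this:

// Persist this exchange so the next turn sees it in memory.
// (Each map has a single entry, so ConversationBuffer picks it up
// regardless of the key name.)
if saveErr := session.Memory.SaveContext(ctx,
    map[string]any{"input": title},
    map[string]any{"output": response.Choices[0].Content},
); saveErr != nil {
    log.Printf("failed to save conversation memory: %v", saveErr)
}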
3. Agent Service - The Brain
This is where the conversation magic happens:
type AgentService struct {
    llm         llms.Model
    blogpostSvc *BlogPostService
    sessions    map[string]*requests.SessionData
    mu          sync.RWMutex // Thread safety!
}
Session management:
func (s *AgentService) getOrCreateSession(taskID string) *requests.SessionData {
    s.mu.Lock()
    defer s.mu.Unlock()

    if session, exists := s.sessions[taskID]; exists {
        return session
    }

    // Create a new session with LangChain memory
    session := &requests.SessionData{
        ContextID: uuid.New().String(),
        History:   []requests.HistoryMessage{},
        Memory:    memory.NewConversationBuffer(),
    }
    s.sessions[taskID] = session
    return session
}
func (s *AgentService) processAIResponse(ctx context.Context, session *requests.SessionData, messageID, userText, taskID string) *requests.TaskResult {
    var artifacts []requests.Artifact
    var finalResponse string

    // Assume we still need input unless the flow actually finished.
    state := "input-required"
    finalResponse, artifacts = s.handleBlogPostGeneration(ctx, session, userText)
    if s.isConversationComplete(finalResponse) {
        state = "completed"
    }

    return s.buildTaskResult(session, state, finalResponse, taskID, artifacts)
}
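isConversationComplete and buildTaskResult aren't shown above, so here's roughly what mine look like (a sketch; the completion check is deliberately dumb):

func (s *AgentService) isConversationComplete(response string) bool {
    // Treat the success message as the end of the flow.
    return strings.Contains(response, "generated successfully")
}

func (s *AgentService) buildTaskResult(session *requests.SessionData, state, response, taskID string, artifacts []requests.Artifact) *requests.TaskResult {
    return &requests.TaskResult{
        TaskID:    taskID,
        State:     state,
        Response:  response,
        Artifacts: artifacts,
    }
}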
func (s *AgentService) handleBlogPostGeneration(ctx context.Context, session *requests.SessionData, userText string) (string, []requests.Artifact) {
    title := userText
    session.Title = title

    blogpost, err := s.blogpostSvc.Generate(ctx, session, title)
    if err != nil {
        return fmt.Sprintf("I apologize, but I encountered an error generating the blogpost: %v\n\nWould you like to try again?", err), nil
    }
    session.BlogPost = blogpost

    blog := fmt.Sprintf("\nTitle: %s \n\n Content: %s", title, blogpost)
    artifact := requests.Artifact{
        ArtifactID: uuid.New().String(),
        Name:       "blogpost",
        Parts: []requests.ResponsePart{
            {
                Kind: "text",
                Text: blog,
            },
        },
    }

    response := "BlogPost generated successfully!"
    return response, []requests.Artifact{artifact}
}
This separates the AI's decision-making from the implementation. Clean and modular!
Integrating with Telex.im 🚀
This is where things got really fun. I wanted to turn my agent into a proper AI coworker in Telex.im.
I followed the docs at docs.telex.im and checked out Pheonix's blog for some integration patterns. Both were super helpful!
My Telex.im Workflow
Here's the workflow config I'm using:
{
  "active": true,
  "category": "utilities",
  "description": "A workflow that creates blogposts",
  "id": "iceu-blogpost-001",
  "name": "blogpost_agent",
  "long_description": "You are a helpful assistant that provides sample BlogPosts.\n\nYour primary function is to help users generate BlogPosts from title and description they send. When responding:\n- Always ask for the blog title if none is provided\n- Ask for the content/topic description\n- Confirm the details and ask if they want to proceed\n- Keep responses concise but informative\n- Use the blogpostgeneration tool with title and content",
  "short_description": "creates blogpost from title and content given",
  "nodes": [
    {
      "id": "blogpost_agent",
      "name": "blogpost agent",
      "parameters": {},
      "position": [816, -112],
      "type": "a2a/mastra-a2a-node",
      "typeVersion": 1,
      "url": "https://newsletter-ai-coworker-i-ceu4383-amxl8p89.leapcell.dev/agent"
    }
  ],
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  }
}
The workflow is pretty straightforward - it's a single node that points to my agent endpoint. The long_description gives Telex.im context about what my agent does and how to interact with it.
How It Works in Telex
- User messages the workflow in Telex.im
- Telex sends an A2A JSON-RPC request to my endpoint
- My agent processes it and responds conversationally
- Telex displays the response to the user
- The conversation continues naturally
The cool part is that Telex maintains the taskId across messages, so my agent can track the full conversation and remember context.
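For reference, the request hitting my endpoint looks roughly like this (shape per the A2A spec; IDs shortened):

{
  "jsonrpc": "2.0",
  "id": "1",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{ "kind": "text", "text": "Building with Go" }],
      "messageId": "msg-123",
      "taskId": "task-456"
    }
  }
}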
Running It
# Set your API key
export GROQ_API_KEY="gsk_your_key_here"
# Run
go run main.go
Deploy it anywhere (I used Leapcell), and you've got yourself an AI coworker!
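Before deploying, you can poke the endpoint locally with curl using that same request shape:

curl -X POST http://localhost:8080/agent \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":"1","method":"message/send","params":{"message":{"role":"user","parts":[{"kind":"text","text":"hey, need a blog post"}],"messageId":"msg-1","taskId":"task-1"}}}'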
What I Learned
1. Start simple - I spent time building infographic generation, but for my use case, plain text was all I needed.
2. Architecture matters - Separating the blog service from the agent service made it super easy to test and iterate.
3. Conversation > Commands - Making it conversational instead of a one-shot API feels way more natural.
4. LangChain is awesome - The built-in memory management saved me from writing a bunch of boilerplate.
5. Groq is fast - Seriously, the speed difference compared to other providers is wild.
The Result
Now I have an AI coworker living in Telex.im that I can message anytime like:
Me: "hey, need a blog post"
Agent: "Sure! What's the title?"
Me: "Building with Go"
Agent: "Great! What should it be about?"
Me: "Microservices patterns"
Agent: "Got it! Should I proceed?"
Me: "yes"
Agent: ✅ [Full blog post appears]
It's like having a content writer on standby. Pretty neat for when I need to quickly draft technical content!
I'm hoping to integrate it with platforms like this one or Hashnode next, so it can publish the posts automatically.
Might be a fun next step.
Resources
Want to build your own?
Check out Theo, my AI coworker. It generates blog posts ready for publication.
Also check out Telex, an AI agent platform (think Make) that doubles as a Slack alternative for communities and bootcamps.
Have you built any AI agents? What are you using them for? Drop a comment below - I'd love to hear about your projects! 🚀
Tags: #go #ai #langchain #groq #telex