DEV Community

iamprecieee

Building an A2A-Compatible Agent in Rust: My Telex Integration Journey

So for HNG Stage 3, my task was building an AI agent. Pretty straightforward, huh? After all, I've built APIs. How different could an Agent-to-Agent (A2A) protocol implementation be?

Turns out, quite different. And that's exactly what made it a great learning experience.

The Task

My goal was an A2A-compatible agent that integrates with Telex for discovering trending GitHub repositories. Users should be able to ask natural language questions like "What's trending in Rust?" or "Show me AI projects from this week", and get relevant repository recommendations.

Simple enough on paper. But the real challenge wasn't building the agent. It was getting it to work with Telex's A2A implementation.

Choosing Rust

This was my first time building an A2A integration, and I decided to do it in Rust just to challenge myself and learn something new. Rust's type system, error handling, and performance characteristics seemed like a good fit for a service that needed to be reliable and fast.

Plus, I figured if I was going to build something from scratch, I might as well use a language that would force me to think carefully about every decision.

Understanding the A2A Protocol

The A2A protocol is built on JSON-RPC 2.0, which provides a standardized way for agents to communicate. At its core, it's about:

  • Standardized message formats
  • Task and context management
  • Conversation history
  • Structured artifacts

The request structure looks like this:

{
  "jsonrpc": "2.0",
  "id": "request-id",
  "method": "message/send",
  "params": {
    "message": {
      "kind": "message",
      "role": "user",
      "parts": [{"kind": "text", "text": "query"}],
      "messageId": "message-id",
      "taskId": "task-id"
    },
    "configuration": {
      "blocking": true
    }
  }
}

And the response needs to follow a specific structure with artifacts, history, and status information. This was new territory for me, different from the REST APIs I was used to building.
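For reference, a minimal success response might look something like this. The field values are illustrative and the shape is based on the A2A spec's task object; the exact fields Telex expects may differ:

```json
{
  "jsonrpc": "2.0",
  "id": "request-id",
  "result": {
    "kind": "task",
    "id": "task-id",
    "contextId": "context-id",
    "status": { "state": "completed" },
    "artifacts": [
      {
        "artifactId": "artifact-id",
        "parts": [{ "kind": "text", "text": "1. repo-name - description" }]
      }
    ],
    "history": []
  }
}
```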

The First Hurdle: Extracting User Queries

The biggest challenge came from Telex's request structure constantly changing during development. The user's query could be in different places:

  1. Sometimes in message.parts[0].text (for text parts)
  2. Sometimes in message.parts[0].data (for data parts)
  3. Sometimes wrapped in HTML (<p> tags)
  4. Sometimes nested in different structures

I had to build a robust parser that could handle all these variations:

pub fn extract_user_query(request: &A2ARequest) -> Option<String> {
    // First, try to find data parts
    let data_part = request
        .params
        .message
        .parts
        .iter()
        .find(|part| matches!(part, MessagePart::Data { .. }));

    if let Some(MessagePart::Data { data, .. }) = data_part {
        for entry in data.iter().rev() {
            if let Some(text) = entry.get("text").and_then(|v| v.as_str()) {
                let trimmed = text.trim();
                // Treat any non-empty entry as a candidate query;
                // HTML-wrapped entries ("<p>...</p>") are cleaned below.
                if !trimmed.is_empty() {
                    let cleaned = trimmed
                        .replace("<p>", "")
                        .replace("</p>", "")
                        .replace("<br />", "")
                        .trim()
                        .to_string();

                    if !cleaned.is_empty() {
                        return Some(cleaned);
                    }
                }
            }
        }
    }

    // Fallback to text parts
    for part in &request.params.message.parts {
        if let MessagePart::Text { text, .. } = part {
            let cleaned = text.trim();
            if !cleaned.is_empty() {
                return Some(cleaned.to_string());
            }
        }
    }

    None
}

This parser evolved through multiple iterations as Telex's structure changed. What worked one week wouldn't work the next. I found myself constantly updating it, testing, and updating again. It was frustrating at times, but it taught me the value of defensive programming and handling edge cases.
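The HTML-cleaning step can be distilled into a small standalone function. This is a minimal sketch assuming Telex wraps queries in `<p>` tags as described above; `clean_query` is an illustrative name, not a function from the project:

```rust
// Strip the HTML wrapping Telex sometimes adds around user queries.
// Mirrors the cleaning chain inside the parser, in isolation.
fn clean_query(raw: &str) -> String {
    raw.trim()
        .replace("<p>", "")
        .replace("</p>", "")
        .replace("<br />", "")
        .trim()
        .to_string()
}

fn main() {
    // Wrapped and plain inputs both reduce to the bare query text.
    assert_eq!(
        clean_query("  <p>What's trending in Rust?</p>  "),
        "What's trending in Rust?"
    );
    assert_eq!(clean_query("plain text"), "plain text");
    println!("ok");
}
```

Keeping this logic in one small function made it easy to re-test every time Telex's wrapping changed.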

The Workflow JSON Puzzle

Configuring the workflow JSON for Telex was another challenge. The documentation was minimal, and I had to make a lot of calculated guesses based on sample configurations.

The core structure needed:

{
  "nodes": [
    {
      "id": "gitpulse_agent",
      "name": "GitPulse Agent",
      "type": "a2a/generic-a2a-node",
      "typeVersion": 1,
      "url": "http://localhost:8000/trending",
      "parameters": {},
      "position": [816, -112]
    }
  ],
  "settings": {
    "executionOrder": "v1"
  }
}

But getting the right combination of fields took trial and error. Some fields were required, others optional. Some affected functionality, others were just metadata. Without clear documentation, I had to experiment and observe what worked.

The url field is crucial: it tells Telex where to send A2A requests. The parameters field could be empty, but its structure still mattered. Getting all these details right required patience and careful testing.

What Worked Well

Despite the challenges, several things worked smoothly:

1. Rust's Type System

Rust's strong typing helped catch errors at compile time. Defining the A2A request/response structures as types meant I couldn't accidentally miss a required field:

use serde::{Deserialize, Serialize};
use utoipa::ToSchema;

#[derive(Debug, Deserialize, ToSchema)]
pub struct A2ARequest {
    pub jsonrpc: String,
    pub id: String,
    pub method: String,
    pub params: RequestParams,
}

#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub struct A2AResponse {
    pub jsonrpc: String,
    pub id: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub result: Option<TaskResult>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub error: Option<ErrorDetail>,
}

2. Error Handling

Rust's Result type forced me to handle errors explicitly. This led to better error messages and more graceful degradation:

let repos = match state.github_client.search_with_params(&params).await {
    Ok(repos) => repos,
    Err(e) => {
        tracing::error!("GitHub API error: {}", e);
        return A2AResponse::error(
            request.id,
            -32603, // JSON-RPC "Internal error"; -32600 is reserved for malformed requests
            "Failed to fetch trending repositories. Try again later".to_string(),
        )
        .into_response();
    }
};

3. Caching Strategy

I implemented a two-tier caching system:

  • LLM query parsing cache (to avoid repeated API calls)
  • GitHub results cache (to respect rate limits)

This was straightforward with Rust's DashMap for thread-safe caching, and it significantly reduced API costs.
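The two tiers can be sketched with a small TTL cache. This is a minimal stand-in using std's `RwLock<HashMap>` so it runs without external crates (the actual project uses DashMap), and the `String` key/value types are simplifications:

```rust
use std::collections::HashMap;
use std::sync::RwLock;
use std::time::{Duration, Instant};

// Hypothetical TTL cache; entries older than `ttl` are treated as misses.
struct TtlCache {
    entries: RwLock<HashMap<String, (String, Instant)>>,
    ttl: Duration,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { entries: RwLock::new(HashMap::new()), ttl }
    }

    fn get(&self, key: &str) -> Option<String> {
        let entries = self.entries.read().unwrap();
        entries.get(key).and_then(|(value, stored_at)| {
            // Expired entries fall through to None, i.e. a cache miss.
            (stored_at.elapsed() < self.ttl).then(|| value.clone())
        })
    }

    fn insert(&self, key: String, value: String) {
        self.entries.write().unwrap().insert(key, (value, Instant::now()));
    }
}

fn main() {
    // Two tiers: parsed LLM parameters can live longer than GitHub results.
    let llm_cache = TtlCache::new(Duration::from_secs(3600));
    let github_cache = TtlCache::new(Duration::from_secs(300));

    llm_cache.insert("trending rust".into(), r#"{"language":"rust"}"#.into());
    assert_eq!(
        llm_cache.get("trending rust").as_deref(),
        Some(r#"{"language":"rust"}"#)
    );
    assert_eq!(github_cache.get("rust:daily"), None); // never inserted: miss
    println!("cache ok");
}
```

Giving each tier its own TTL matters because parsed queries stay valid much longer than trending results do.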

4. The LLM Integration

Using Gemini to parse natural language queries into structured parameters worked well. The system prompt approach meant I could guide the LLM to return exactly the JSON structure I needed:

let query_parser = QueryParser::new(
    &config.llm_provider,
    &config.llm_api_key,
    &config.llm_model,
    system_prompt.as_str(),
)
.await?;

What Didn't Work (At First)

1. Request Structure Assumptions

I initially assumed the request structure would be stable. It wasn't. Telex was actively being developed, and breaking changes were common. I had to build flexibility into my parser.

2. Documentation Gaps

The A2A protocol documentation existed, but Telex-specific implementation details were sparse. I spent a lot of time reading source code and experimenting.

3. Testing Without Telex

It was difficult to test the full integration locally. I had to manually construct A2A requests and verify responses matched what Telex expected. Having a local Telex instance would have helped, but that wasn't available.

4. Error Messages

Early error messages weren't helpful for debugging. I improved them by including more context:

return A2AResponse::error(
    request.id,
    -32700,
    "Unable to process your query. Please try rephrasing.".to_string(),
);

The Architecture

Here's how GitPulse works:

  1. User Query arrives via A2A request
  2. Query Parser extracts user text (handling various formats)
  3. LLM Parser converts natural language to structured parameters
  4. Cache Check for both parsed parameters and GitHub results
  5. GitHub API search (if cache miss)
  6. Response Formatter creates A2A-compliant response with artifacts
  7. Return to Telex

The entire flow is async, uses proper error handling, and implements caching to respect rate limits.
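The steps above can be condensed into a dependency-free sketch. Everything here is illustrative: `handle_query` is a hypothetical name, the cleaning is the simplified `<p>`-stripping from earlier, and the GitHub search is stubbed out where the real async call would go:

```rust
use std::collections::HashMap;

// Simplified synchronous stand-in for the real async handler.
fn handle_query(raw: &str, cache: &mut HashMap<String, String>) -> String {
    // Step 2: extract user text (strip the simple HTML wrapping).
    let query = raw
        .replace("<p>", "")
        .replace("</p>", "")
        .trim()
        .to_string();

    // Step 4: cache check before touching any external API.
    if let Some(hit) = cache.get(&query) {
        return hit.clone();
    }

    // Step 5 (stub): the GitHub search would run here on a cache miss.
    let result = format!("results for '{}'", query);
    cache.insert(query, result.clone());
    result
}

fn main() {
    let mut cache = HashMap::new();
    let first = handle_query("<p>trending rust</p>", &mut cache);
    let second = handle_query("trending rust", &mut cache); // served from cache
    assert_eq!(first, second);
    println!("{}", first);
}
```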

Key Learnings

1. Protocols Matter

Working with a standardized protocol (A2A) meant I had clear rules to follow. But implementations can vary, and you need to be flexible.

2. Defensive Programming

Building robust parsers that handle multiple input formats saved me from constant updates. Anticipating changes helped.

3. Rust's Learning Curve

Rust has a steep learning curve, but the compiler errors are actually helpful. The borrow checker forced me to think about ownership and lifetimes, which led to better code.

4. Documentation Through Code

When external documentation is lacking, well-structured code serves as its own documentation. Clear types, helpful function names, and meaningful error messages matter.

5. Iteration is Part of Development

Working with an actively developed platform (Telex) meant constant updates. Instead of fighting it, I built flexibility into my code. This is probably good practice for any integration work.

The Result

After multiple iterations and a lot of debugging, I ended up with a working A2A-compatible agent that:

  • Parses natural language queries about GitHub trends
  • Uses LLM to extract structured parameters
  • Searches GitHub's API intelligently
  • Caches results to respect rate limits
  • Returns properly formatted A2A responses
  • Integrates seamlessly with Telex workflows

The code is production-ready with proper error handling, logging, and test coverage. It's performant, maintainable, and handles edge cases gracefully.

The code is open source, so feel free to check it out and use it as a reference for your own A2A implementations.
