Ugoeze Eluchie

Building a Fact-Checking AI Agent with A2A Protocol Integration

Introduction

For HNG Internship Stage 3, I built an AI-powered fact-checking agent that integrates with Telex.im using the A2A (Agent-to-Agent) protocol. This post walks through my journey, challenges, solutions, and lessons learned.

What I Built: An intelligent agent that verifies claims and statements, providing verdicts, confidence scores, explanations, and credible sources—all accessible through Telex.im's messaging platform.

The Problem: Misinformation spreads faster than my ability to fact-check it.
The Solution: An AI agent that:

Takes any claim
Verifies it
Provides sources
Tells you if it's BS or not

Basically, I wanted to build a digital BS detector. Challenge accepted.

Live Demo: https://hng-a2a-ugoeze-hub7559-2ozzo1oj.leapcell.dev/a2a/factchecker


Technology Stack

  • Backend: FastAPI (Python)
  • AI Model: Google Gemini 2.5 Flash
  • Protocol: A2A (JSON-RPC 2.0)
  • Language: Python 3.12

Why These Choices?

  • FastAPI: Fast, modern, async support, automatic API docs
  • Gemini 2.5 Flash: Free tier, fast responses, good reasoning capabilities
  • A2A Protocol: Required for Telex integration, JSON-RPC 2.0 standard

Architecture Overview

User on Telex.im
    ↓
Telex Server (JSON-RPC 2.0)
    ↓
My FastAPI Agent (Leapcell)
    ↓
Gemini AI (Google)
    ↓
Formatted Response
    ↓
Back to User via Telex

Simple and straightforward—no overengineering.


Implementation Journey

Phase 1: Understanding A2A Protocol

The first hurdle was understanding the A2A (Agent-to-Agent) protocol. Unlike typical REST APIs, A2A uses JSON-RPC 2.0 format:

{
  "jsonrpc": "2.0",
  "id": "request-id",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [
        {
          "kind": "text",
          "text": "Is Python the most popular language?"
        }
      ]
    }
  }
}

Key Learnings:

  • The id field must be echoed back in responses
  • Messages have parts which can be text, images, or data
  • Telex sends conversation history in data parts (which can be ignored)

Phase 2: Setting Up FastAPI

I started with a basic FastAPI application:

from fastapi import FastAPI
from pydantic import BaseModel
from typing import List, Optional

app = FastAPI(title="Fact Checker Agent")

@app.get("/health") #although I used multiple routes for healthcheck because of leapcell
async def health_check():
    return {"status": "healthy"}

@app.post("/a2a/factchecker")
async def fact_checker(request: JsonRpcRequest):  # JsonRpcRequest is defined in the next section
    # Agent logic here
    pass
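
The repo is started locally with python main.py (see the Try It Yourself section at the end), so presumably main.py launches uvicorn itself; here's a minimal entrypoint sketch under that assumption:

import uvicorn

# Assumption: when run as `python main.py`, the app serves itself with uvicorn.
# Adjust host and port to your environment.
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)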

Challenge 1: Schema Definition

The A2A protocol has specific schema requirements. I used Pydantic models:

from pydantic import BaseModel, Field
from typing import List, Optional, Dict, Any, Literal
from uuid import uuid4
from datetime import datetime, timezone

class MessagePart(BaseModel):
    kind: Literal["text", "image", "file"]
    text: Optional[str] = None
    data: Optional[Any] = None
    url: Optional[str] = None
    file_url: Optional[str] = None

class A2AMessage(BaseModel):
    kind: Literal["message"] = "message"
    role: str
    parts: List[MessagePart]
    messageId: Optional[str] = None
    taskId: Optional[str] = None

class MessageConfiguration(BaseModel):
    blocking: bool = True

class MessageParams(BaseModel):
    message: A2AMessage
    configuration: Optional[MessageConfiguration] = Field(default_factory=MessageConfiguration)

class JsonRpcRequest(BaseModel):
    jsonrpc: Literal["2.0"] = "2.0"
    id: str
    method: str
    params: MessageParams

class ResponseStatus(BaseModel):
    state: Literal["completed", "failed", "working"]
    timestamp: str = Field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class Artifact(BaseModel):
    artifactId: str = Field(default_factory=lambda: str(uuid4()))
    name: str
    parts: List[MessagePart]

class ResponseMessage(BaseModel):
    kind: Literal["task"] = "task"
    id: str
    contextId: Optional[str] = None
    status: ResponseStatus
    message: A2AMessage
    artifacts: List[Artifact] = []
    history: List[A2AMessage] = []

class JsonRpcResponse(BaseModel):
    jsonrpc: Literal["2.0"] = "2.0"
    id: str
    result: Optional[ResponseMessage] = None

class JsonRpcError(BaseModel):
    jsonrpc: Literal["2.0"] = "2.0"
    id: str
    error: Optional[Dict[str, Any]] = None

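As a quick sanity check, the Phase 1 sample payload validates cleanly against these models (the sketch below assumes Pydantic v2's model_validate; on v1 it would be parse_obj):

# Validate the Phase 1 sample request against the schema (Pydantic v2 assumed).
sample = {
    "jsonrpc": "2.0",
    "id": "request-id",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Is Python the most popular language?"}],
        }
    },
}

req = JsonRpcRequest.model_validate(sample)
print(req.params.message.parts[0].text)  # "Is Python the most popular language?"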

What Went Wrong:
Initially, I only accepted "text" kind, but Telex also sends "data" kind with conversation history. This caused validation errors.

Fix:

# Added "data" to accepted kinds
kind: Literal["text", "image", "file", "data"]

Phase 3: Integrating Gemini AI

import google.generativeai as genai
import os
from dotenv import load_dotenv

load_dotenv()

class FactCheckerAgent:
    def __init__(self):
        api_key = os.getenv("GEMINI_API_KEY")
        if not api_key:
            raise ValueError("GEMINI_API_KEY not sey")

        genai.configure(api_key=api_key)

        self.model = genai.GenerativeModel('gemini-2.5-flash')

    async def check_fact(self, claim: str):
        prompt = f"""You are an expert fact-checker. Analyze this claim using web search:

            Claim: "{claim}"

            Please provide:

            1. **Verdict**: 
            - TRUE
            - FALSE  
            - PARTIALLY TRUE
            - UNVERIFIABLE

            2. **Confidence Level**: High / Medium / Low

            3. **Explanation** (2-4 sentences):
            - What does the evidence show?
            - Why is this the verdict?

            4. **Key Context** (if relevant):
            - Important nuances
            - Common misconceptions
            - Date/time relevance

            5. **Sources**: List credible sources you found

            Be objective, factual, and cite your sources.

            Make sure your information is current: be aware of the date and time and give up-to-date, factual responses

            Reciprocate the same energy as the user, be it happy, rude or nonchalant"""

        try:
            response = self.model.generate_content(prompt)
            analysis = response.text

            final_response = f"## 🔍 Fact Check Results\n\n"
            final_response += f"**Claim:** _{claim}_\n\n"
            final_response += "---\n\n"
            final_response += analysis

            return final_response

        except Exception as e:
            return f"**Error during fact-checking:** {str(e)}\n\nPlease try again or rephrase your claim."

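A quick way to exercise the class on its own, before any protocol plumbing (a sketch; it assumes GEMINI_API_KEY is set and the class above is in scope):

import asyncio

async def demo():
    agent = FactCheckerAgent()
    result = await agent.check_fact("The Earth is flat")
    print(result)

asyncio.run(demo())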

Challenge 2: Async vs Sync

The Gemini SDK is synchronous, but FastAPI works best with async. Initially I got this error:

TypeError: can only concatenate str (not 'coroutine') to str

What I Learned:

  • When calling async functions, you MUST use await
  • FastAPI endpoints can be async even if some internal functions are sync

Fix:

# Wrong
result = fact_checker.check_fact(claim)  # Returns coroutine

# Right
result = await fact_checker.check_fact(claim)  # Returns actual string
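
One caveat worth noting: because generate_content is blocking, it still ties up the event loop for the duration of the call even inside an async endpoint. A sketch of one way around that (not something my agent needed, but useful under load) is to offload the call to a worker thread with asyncio.to_thread:

import asyncio

# Sketch: run the synchronous Gemini call in a worker thread so the event loop
# stays free to serve other requests (asyncio.to_thread requires Python 3.9+).
async def check_fact_nonblocking(agent: "FactCheckerAgent", claim: str) -> str:
    prompt = f'You are an expert fact-checker. Analyze this claim: "{claim}"'  # trimmed prompt for brevity
    response = await asyncio.to_thread(agent.model.generate_content, prompt)
    return response.text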

Phase 4: Handling Message Extraction

Telex sends messages with conversation history. I needed to extract only the current claim:

claim = ""
for part in user_messages.parts:
    if part.kind != "text":
        continue  # Skip data/history parts

    if not part.text:
        continue

    claim += part.text + " "

claim = claim.strip()
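
If nothing survives the filtering (for example, a message made up entirely of data parts), the agent should fail politely instead of sending Gemini an empty prompt. A minimal guard, sketched here with hypothetical wording:

# Guard: bail out early if no text claim was extracted from the parts.
if not claim:
    result = "I couldn't find a claim to check. Please send the statement you'd like verified as plain text."
else:
    result = await fact_checker.check_fact(claim)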

Why This Matters:
Without filtering, I was trying to concatenate lists (conversation history) to strings, causing crashes.

Phase 5: Response Formatting

The agent returns structured responses:

response_message = A2AMessage(
    role="agent",
    parts=[MessagePart(kind="text", text=result)],
    messageId=str(uuid4()),
    taskId=request.id
)

artifact = Artifact(
    artifactId=str(uuid4()),
    name="factCheckerAgentResponse",
    parts=[MessagePart(kind="text", text=result)]
)

return JsonRpcResponse(
    id=request.id,  # Must echo back the request ID!
    result=ResponseMessage(
        id=request.id,
        contextId=str(uuid4()),
        status=ResponseStatus(state="completed"),
        message=response_message,
        artifacts=[artifact],
        history=[request.params.message, response_message]  # although history could be kept empty to avoid clutter
    )
)

Key Learning:
Always echo back the request id in JSON-RPC responses. This is how clients match requests to responses.


Challenge 3: Method Name Mismatch

Telex sends "method": "message/send", but I initially only accepted "method": "chat".

Fix:

# Before
method: Literal["chat"]  # Too strict

# After
method: str  # Accept any method string
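
Relaxing the type works, but the JsonRpcError model defined earlier could also earn its keep here: anything other than message/send can be rejected with the standard JSON-RPC "method not found" code, -32601. A sketch:

# Sketch: answer unsupported methods with a proper JSON-RPC error instead of a validation failure.
if request.method != "message/send":
    return JsonRpcError(
        id=request.id,
        error={"code": -32601, "message": f"Method not found: {request.method}"},
    )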

Connect to Telex

Go to Telex, create an account, and create an AI coworker. In your newly created AI coworker, click the "view full profile" button, then click "configure workflow". Paste the following into it and click save.

{
  "active": false,
  "category": "utilities",
  "description": "AI-powered fact-checking agent that verifies claims and provides detailed analysis with sources",
  "id": "qTH0n2f6rUzAnn8K",
  "long_description": "You are a Fact Checker AI Agent. Your core function is to analyze claims and statements sent by users and provide accurate fact-checking results.\n\nWhen operating:\n- You receive user claims via the A2A protocol\n- You analyze the claim using AI and available knowledge\n- You provide a verdict (TRUE, FALSE, PARTIALLY TRUE, or UNVERIFIABLE)\n- You include confidence levels (High, Medium, Low)\n- You explain your reasoning with 2-4 sentences\n- You provide context about misconceptions or nuances\n- You cite credible sources when possible\n\nYour responses are objective, factual, and help users understand the truth behind claims. You prioritize accuracy and clarity to combat misinformation\n\n      Use the factCheckerTool to verify claims.\n",
  "name": "fact_checker_agent",
  "nodes": [
    {
      "id": "fact_checker_node",
      "name": "Fact Checker AI",
      "parameters": {},
      "position": [816, -112],
      "type": "a2a/generic-a2a-node",
      "typeVersion": 1,
      "url": "https://hng-a2a-ugoeze-hub7559-2ozzo1oj.leapcell.dev/a2a/factchecker"
    }
  ],
  "pinData": {},
  "settings": {
    "executionOrder": "v1"
  },
  "short_description": "Verifies claims with AI-powered analysis and source citations"
}

Testing on Telex

Once deployed, I tested on Telex.im:

Results:

  • Agent responds to greetings
  • Fact-checks single claims
  • Handles multiple claims in one message
  • Provides detailed verdicts with sources
  • Even understood Nigerian political context!

Example Exchange:

User: "The Earth is flat"

Agent:
## 🔍 Fact Check Results

**Claim:** The Earth is flat

**Verdict**: FALSE
**Confidence Level**: High

**Explanation**: Overwhelming scientific evidence, including 
photographs from space, satellite data, and observable curvature, 
proves Earth is an oblate spheroid...

**Sources**:
- NASA: https://www.nasa.gov/
- National Geographic: ...
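
The endpoint can also be smoke-tested directly, without Telex in the loop; a sketch using httpx (the payload mirrors the Phase 1 example, and the result path follows the response models above):

import httpx

# Hit the deployed A2A endpoint directly with a JSON-RPC request.
payload = {
    "jsonrpc": "2.0",
    "id": "test-1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "The Earth is flat"}],
        }
    },
}

resp = httpx.post(
    "https://hng-a2a-ugoeze-hub7559-2ozzo1oj.leapcell.dev/a2a/factchecker",
    json=payload,
    timeout=60,
)
print(resp.json()["result"]["message"]["parts"][0]["text"])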

What Worked Well

  1. FastAPI: Rapid development, automatic validation, great async support
  2. Gemini 2.5 Flash: Fast, accurate, good at reasoning
  3. Leapcell: One-click deployment, auto-redeploy on push
  4. Pydantic: Type safety caught many bugs early
  5. Simple Architecture: No unnecessary complexity

What Didn't Work

  1. Initial Schema Design: Too strict, didn't handle Telex's data format
  2. Missing await: Async confusion led to early bugs

Key Lessons Learned

1. Read the Protocol Spec Carefully

The A2A protocol has specific requirements; don't assume you already know them. I wasted time debugging schema issues that were clearly documented.

2. Test Early, Test Often

I should have tested the deployed version earlier instead of assuming localhost behavior would match production.

3. Handle Edge Cases

  • Empty messages
  • Conversation history
  • Network failures
  • API rate limits (see the retry sketch after this list)
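
For the last two, even a small retry-with-backoff wrapper goes a long way; a sketch (the exception type is left broad here; in practice you would narrow it to the SDK's rate-limit error):

import asyncio

# Sketch: retry a flaky or rate-limited call with exponential backoff.
async def with_retries(func, *args, attempts: int = 3, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return await asyncio.to_thread(func, *args)
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * (2 ** attempt))

# Hypothetical usage: response = await with_retries(self.model.generate_content, prompt)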

4. Keep It Simple

My first instinct was to build a fun game for colleagues on Telex to play (still working on it), and then to add web scraping, multiple LLMs, caching, and more to the fact-checker I eventually settled on. But for a 3-day deadline, simplicity wins. A working simple solution beats a broken complex one.

5. Documentation Matters

Good README and clear code comments saved me hours when debugging.


Future Improvements

If I had more time, I'd add:

  1. Real Web Search: Instead of just LLM knowledge, scrape actual sources
  2. Caching: Store recent fact-checks to save API calls (see the sketch after this list)
  3. Source Quality Scoring: Rate source credibility
  4. Conversation Memory: Remember previous fact-checks in same conversation
  5. Image Fact Checking: Verify claims in images/screenshots
  6. Multi-language Support: Fact-check in multiple languages
  7. A crazier idea
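
For the caching idea, even a naive per-process dict keyed by the normalized claim would cut repeat API calls; a sketch (hypothetical helper, not in the current code):

# Sketch: naive in-memory cache for repeated claims (per-process, no expiry).
_fact_check_cache: dict[str, str] = {}

async def check_fact_cached(agent: "FactCheckerAgent", claim: str) -> str:
    key = claim.strip().lower()
    if key not in _fact_check_cache:
        _fact_check_cache[key] = await agent.check_fact(claim)
    return _fact_check_cache[key]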

Conclusion

Building this fact-checking agent taught me:

  • How to work with new protocols (A2A/JSON-RPC)
  • Integrating AI into production systems
  • Handling async operations in Python
  • Deploying and debugging remote services
  • The importance of good error handling

The hardest part wasn't the AI; it was understanding the protocol and handling edge cases. But once those were sorted, everything clicked into place.

Would I do anything differently?

Yes, I'd start with proper testing of the A2A protocol format before writing any agent logic. Understanding the contract first would have saved hours of debugging.

Was it worth it?

Absolutely. I now understand how agent-to-agent communication works, and I have a working project to show for it.


Resources

  • Live demo: https://hng-a2a-ugoeze-hub7559-2ozzo1oj.leapcell.dev/a2a/factchecker
  • Source code: https://github.com/ugoeze-hub/hng-a2a

Try It Yourself

Maybe join HNG's next cohort
Or deploy your own:

git clone https://github.com/ugoeze-hub/hng-a2a
cd hng-a2a
pip install -r requirements.txt
# Add GEMINI_API_KEY to .env
python main.py

Thanks for reading! If you have questions or suggestions, drop a comment below or reach out on Twitter @ugo_the_xplorer.

Tags: #HNG #HNGInternship #AI #FastAPI #Python #A2A #TelexIM #FactChecking #AgentDevelopment


*This project was built as part of HNG Internship Stage 3. Learn more at hng.tech*
