Planning to relocate internationally? The visa research process can be overwhelming, with countless pathways, varying costs, complex requirements, and constantly changing policies. I built an AI agent to solve this: a real-time migration advisor that analyzes your profile and returns the single best visa pathway, complete with costs, processing times, and actionable next steps.
In this post, I'll walk through how I built this agent using Google's Gemini LLM, the A2A (Agent-to-Agent) protocol, and deployed it on Heroku for seamless integration with platforms like Telex.im.
First, some context: Telex is a collaboration workspace, similar to Slack, that lets developers create AI agents called AI Coworkers, which work like real human colleagues. I may go in-depth on Telex in another article; for now, let's jump into the step-by-step process I took to build this Migration Pathways AI agent.
What Does It Do?
The Migration Pathways Agent takes a simple natural language query like:
"I'm a software engineer from Nigeria wanting to move to Canada with a $10,000 budget"
And returns a structured response:
# Best Migration Option: Express Entry (Federal Skilled Worker)
This pathway is often the most straightforward option for skilled professionals seeking permanent residency in Canada, especially if they possess strong English or French skills, relevant work experience, and educational qualifications.
**Key Details:**
- Processing time: 6–12 months
- Cost: $2,300 - $3,000 (including application fees, language testing, ECA, and settlement funds)
- Success rate: Medium to High (dependent on CRS score relative to current cut-off scores)
- Main requirements: Minimum of one year of skilled work experience, Language proficiency (CLB 7 or higher in English or French), Educational Credential Assessment (ECA) for foreign education
Next step: Take an IELTS or CELPIP test and get your educational credentials assessed by a recognized organization (e.g., World Education Services - WES).
Architecture Overview
The system consists of three main components:
┌─────────────────────┐
│ CLIENT/USER │
└─────────┬───────────┘
 │ HTTP
┌─────────▼───────────┐
│ A2A PROTOCOL │
│ ┌─────────────────┐ │
│ │ Agent Card │ │
│ │ JSON-RPC 2.0 │ │
│ └─────────────────┘ │
└─────────┬───────────┘
 │
┌─────────▼───────────┐
│ MIGRATION AGENT │
│ ┌─────────────────┐ │
│ │ Query Parser │ │
│ │ Task Manager │ │
│ │ Gemini Client │ │
│ └─────────────────┘ │
└─────────────────────┘
### 1. A2A Protocol Layer
The A2A (Agent-to-Agent) protocol standardizes how AI agents communicate. I implemented two key endpoints:
Agent Card (GET /.well-known/agent.json):
Returns metadata about the agent's capabilities, supported methods, and configuration.
Planner Endpoint (POST /a2a/planner):
Accepts JSON-RPC 2.0 requests with these methods:
- `message/send` - Send a migration query
- `tasks/get` - Retrieve task results
- `tasks/send` - Legacy task submission

### 2. The Go Server

I chose Go for its simplicity, performance, and excellent Heroku support. The server structure:
type MigrationAgent struct {
    gemini *GeminiClient
    tasks  map[string]*Task
    mu     sync.RWMutex
}
Key design decisions:
- In-memory task storage: A simple map for task management
- Embedded agent card: Go's `embed` package serves agent.json from the binary
- Flexible message parsing: Supports both Telex's format and the standard A2A message shape

### 3. Gemini Integration

The magic happens in the Gemini client. Here's how I structured the prompt:
 
func (gc *GeminiClient) buildPrompt(userQuery, _, _ string, budget int) string {
    prompt := `You are a migration planning expert.
CRITICAL BEHAVIOR RULES:
- Never ask the user for additional information
- Extract profession, origin country, and destination country from the query
- If information is unclear, make reasonable assumptions
- Output exactly ONE best migration option
USER QUERY:
"` + userQuery + `"
`
    // … rest of prompt
    return prompt
}
Why this approach?
Initially, I tried parsing professions, countries, and origins with hardcoded maps. Bad idea. Users said "USA" but the agent suggested Canada because my parsing was too rigid.
The fix: Let Gemini extract ALL information from natural language. This supports any country, any profession, any language - no maintenance required.
Implementation Deep Dive
Step 1: Setting Up the Gemini Client
type GeminiClient struct {
    APIKey  string
    BaseURL string
    Model   string
}

func NewGeminiClient() *GeminiClient {
    return &GeminiClient{
        APIKey:  os.Getenv("GEMINI_API_KEY"),
        BaseURL: "https://generativelanguage.googleapis.com/v1beta",
        Model:   "gemini-2.0-flash-exp",
    }
}
I used gemini-2.0-flash-exp for its speed and strong reasoning capabilities - critical for extracting structured information from unstructured queries.
Step 2: Task Processing Flow
func (a *MigrationAgent) ProcessTask(taskID string, message Message) (*Task, error) {
    // 1. Create the task in the "working" state
    task := &Task{
        ID:   taskID,
        Kind: "task",
        Status: TaskStatus{
            State:     "working",
            Timestamp: time.Now().UTC().Format(time.RFC3339),
        },
    }

    // 2. Extract the user query from the message parts
    var userQuery string
    for _, part := range message.Parts {
        if part.Type == "text" {
            userQuery += part.Text + " "
        }
    }

    // 3. Query Gemini (profile.Budget is parsed from the query in an
    // earlier step, omitted here for brevity)
    responseText, err := a.gemini.GetMigrationPathways(
        userQuery, "", "", profile.Budget,
    )
    if err != nil {
        return nil, err
    }

    // 4. Update the task with the result (messageID is generated for
    // this response; generation omitted here)
    task.Status = TaskStatus{
        State: "completed",
        Message: &StatusMessage{
            Kind:      "message",
            Role:      "agent",
            Parts:     []Part{{Kind: "text", Text: responseText}},
            MessageID: messageID,
            TaskID:    taskID,
        },
    }

    return task, nil
}
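For context, here is roughly how the planner endpoint can route the JSON-RPC methods listed earlier into this task flow. The switch below is a simplified sketch that returns placeholder strings instead of calling the task manager:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcRequest mirrors the JSON-RPC 2.0 envelope the planner endpoint accepts.
type rpcRequest struct {
	JSONRPC string          `json:"jsonrpc"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params"`
	ID      any             `json:"id"`
}

// dispatch routes the three supported methods; a real handler would
// decode req.Params and call into ProcessTask or the task store.
func dispatch(body []byte) (string, error) {
	var req rpcRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return "", fmt.Errorf("invalid JSON-RPC request: %w", err)
	}
	switch req.Method {
	case "message/send":
		return "handling new migration query", nil
	case "tasks/get":
		return "returning stored task result", nil
	case "tasks/send":
		return "handling legacy task submission", nil
	default:
		return "", fmt.Errorf("method not found: %s", req.Method)
	}
}

func main() {
	out, _ := dispatch([]byte(`{"jsonrpc":"2.0","method":"message/send","id":1}`))
	fmt.Println(out)
}
```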
Step 3: A2A Response Format
The A2A spec requires specific response structure:
{
 "jsonrpc": "2.0",
 "id": "task-id",
 "result": {
 "id": "task-id",
 "kind": "task",
 "status": {
 "state": "completed",
 "timestamp": "2025-11-03T…",
 "message": {
 "kind": "message",
 "role": "agent",
 "parts": [{"kind": "text", "text": "…"}],
 "messageId": "msg-id",
 "taskId": "task-id"
 }
 },
 "artifacts": […]
 }
}
This took iteration - initially I missed messageId and taskId in the status message, causing Telex integration to fail.
Deployment on Heroku
1. Procfile:
web: bin/server
2. Dynamic Port Binding:
port := os.Getenv("PORT")
if port == "" {
    port = "8080"
}
addr := ":" + port
3. Environment Variables:
heroku config:set GEMINI_API_KEY=your_key_here
4. Deploy:
git push heroku main
The Heroku Go buildpack automatically runs go install, compiles the binary to bin/server, and the Procfile starts it.
Key Learnings
1. Don't Over-Engineer Parsing
My first version had elaborate country/profession dictionaries. Gemini handles this better than any regex or dictionary lookup.
2. A2A Protocol Details Matter
Small spec violations (missing messageId, wrong JSON structure) break integrations. Test with actual A2A clients early.
3. Prompt Engineering Is Critical
The difference between "suggest migration options" and "provide THE SINGLE BEST option without asking questions" is massive. The second version eliminates back-and-forth.
4. LLM Temperature Matters
For structured output, low temperature (default) works better than high creativity settings.
Testing
I created .http files for VS Code REST Client:
POST {{host}}/a2a/planner
Content-Type: application/json
{
 "jsonrpc": "2.0",
 "method": "message/send",
 "params": {
 "message": {
 "role": "user",
 "parts": [{
 "type": "text",
 "text": "I am a nurse from Philippines wanting to move to UK"
 }]
 }
 },
 "id": 1
}
This made iteration fast - no need to rebuild integrations for every test.
Try It Yourself
Live API: https://migration-pathways-agent-ca4e1c945e86.herokuapp.com/a2a/planner
Agent Card: https://migration-pathways-agent-ca4e1c945e86.herokuapp.com/.well-known/agent.json
Example cURL:
curl -X POST https://migration-pathways-agent-ca4e1c945e86.herokuapp.com/a2a/planner \
 -H "Content-Type: application/json" \
 -d '{
 "jsonrpc": "2.0",
 "method": "message/send",
 "params": {
 "message": {
 "role": "user",
 "parts": [{
 "type": "text",
 "text": "I am a data scientist from India wanting to move to USA with $10000"
 }]
 }
 },
 "id": 1
 }'
Screenshot of the AI Agent on Telex
Conclusion
Building an AI agent isn't just about calling an LLM API - it's about:
- Choosing the right protocol (A2A for interoperability)
- Crafting precise prompts that eliminate ambiguity
- Structuring responses for both humans and machines
- Deploying reliably with proper error handling

The Migration Pathways Agent demonstrates that with the right architecture, you can build practical, production-ready AI tools that solve real problems.
 
Repository: github.com/brainox/immigration-pathways-agent
 
Questions? Feedback? Hit me up on X @ObinnaAguwa or try the agent and let me know what visa pathway it suggests for you!
              
    