
Alberto Castillo

Posted on • Originally published at meetcyber.net

From Chatbots to Agentic Systems: Why We Migrated ANTICIPA from Dialogflow to Mastra AI

Building AI agents that handle simple FAQs is a solved problem. Building AI agents that generate remote sensing maps, query humanitarian databases, and maintain deep context over long periods? That's an entirely different beast.

At ANTICIPA, an AI-driven platform focused on risk management and resilience, our goal is to empower vulnerable communities with real-time data. To achieve this, we need an architecture that doesn't just "talk," but executes complex, long-running tasks. More information about ANTICIPA is available here: Global Projects

We started our journey using Google's Conversational Agents (formerly Dialogflow CX). It allowed us to move fast, but as our use cases grew more complex, we hit an architectural wall. Here is why we decided to migrate our core agentic infrastructure to Mastra AI, and what software architects should consider when building robust AI workflows.

The Honeymoon Phase: Why Conversational Agents are Great to Start

Don't get me wrong: Conversational Agents (Dialogflow) is a fantastic tool. If you need to spin up an agentic system in minutes, enjoy usage credits, and quickly define Playbooks, it's highly effective.


Connecting tools via APIs, setting up a Knowledge Base for simple RAG, or pulling data from BigQuery is remarkably straightforward. For deterministic, short-lived interactions, it works perfectly. But we were building something much heavier.

The Bottleneck: The 30-Second Wall

In ANTICIPA, our agents act as analytical engines. One of our core features involves integrating with Google Earth Engine (GEE) to provide communities with customized remote sensing maps. We also run heavy Graph Retrieval-Augmented Generation (GraphRAG) pipelines across extensive humanitarian data sources.

These processes take time. And this is where Dialogflow broke our flow.

Dialogflow imposes a strict 30-second timeout for webhook responses. If your process takes 31 seconds, the agent fails. To bypass this, we implemented asynchronous workarounds using Google Cloud Pub/Sub. While it worked, it was incredibly challenging to orchestrate, maintain, and scale. We were building complex scaffolding just to fight the framework's limitations.
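The shape of that scaffolding looked roughly like this: the webhook acknowledges within the timeout, enqueues the heavy job, and a separate worker finishes it and pushes the result back through another channel. This is a minimal, self-contained sketch of the pattern; the function names and queue (a plain array standing in for Pub/Sub) are illustrative, not our actual code.

```typescript
// Sketch of the async workaround for the 30-second webhook wall.
// A plain array stands in for Google Cloud Pub/Sub; names are illustrative.

type Job = { sessionId: string; task: string };

const queue: Job[] = [];
const delivered: string[] = [];

// Stand-in for publishing a message to Pub/Sub.
function publishJob(job: Job): void {
  queue.push(job);
}

// Webhook handler: must respond within 30 seconds, so it only enqueues
// and returns a holding message to the user.
function handleWebhook(sessionId: string, task: string): { fulfillment: string } {
  publishJob({ sessionId, task });
  return { fulfillment: "Your map is being generated; I will notify you shortly." };
}

// Worker: consumes jobs with no timeout pressure, then delivers the result
// to the user through a separate messaging channel.
function runWorker(): void {
  const job = queue.shift();
  if (!job) return;
  const result = `Result for ${job.task} (session ${job.sessionId})`; // heavy GEE work here
  delivered.push(result);
}

const ack = handleWebhook("s-42", "NDVI map");
runWorker();
```

Every box in this diagram is extra infrastructure to operate and monitor, which is exactly the maintenance burden that pushed us to look elsewhere.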

The "30-Minute Amnesia" and UX Friction

The second major roadblock was state management. Dialogflow's session context expires after 30 minutes. In disaster risk management, a user might request an analysis, go handle an emergency, and return an hour later expecting the agent to remember the conversation. With Dialogflow, the agent suffered from "30-minute amnesia."

Duration Limits

Furthermore, delivering rich UX components was painful. Sending WhatsApp Flows or interactive buttons required highly specific, complex webhook calls. Even worse, once a map or a WhatsApp Flow was sent, it was extremely difficult to pass that context back to the agent so it knew what the user was looking at. The agents lacked situational awareness.


The Pivot: Taking Control with Mastra AI

We needed a framework designed for true multi-step reasoning, long-running tasks, and persistent memory. We migrated to Mastra AI, and the architectural shift resolved our biggest pain points:

1. Zod, TypeScript, and the power of "Vibe Coding"
We leaned into TypeScript over Python for our agent engineering. In the era of "Vibe Coding" (using LLMs to generate code rapidly), Python's strict indentation can often lead to brittle logic when generated on the fly. Mastra AI's deep integration with Zod gives us incredibly strong typing and structural guarantees. When we define a tool or a data schema, we have absolute control over the input and output structures.

2. Persistent Memory via PostgreSQL
Mastra AI allowed us to ditch the 30-minute session limit. We now use PostgreSQL for memory and history management. We can explicitly specify the context window by the number of messages, giving our agents a robust, long-term memory of everything that is happening in the system, including which WhatsApp Flows or maps the user has interacted with.
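The key difference is that the context window is defined by message count, not wall-clock time. This is a minimal sketch of that idea; a Map stands in for the PostgreSQL table a storage adapter would use in production, and the thread name is invented.

```typescript
// Message-windowed memory sketch. A Map stands in for PostgreSQL;
// nothing here expires after 30 minutes.

type Message = { role: "user" | "assistant"; content: string };

const store = new Map<string, Message[]>();

function remember(threadId: string, msg: Message): void {
  const history = store.get(threadId) ?? [];
  history.push(msg);
  store.set(threadId, history); // persisted: survives any user absence
}

// The context window is the last N messages, regardless of elapsed time.
function recall(threadId: string, lastN: number): Message[] {
  return (store.get(threadId) ?? []).slice(-lastN);
}

remember("flood-thread", { role: "user", content: "Generate a flood map for Piura" });
remember("flood-thread", { role: "assistant", content: "Map sent via WhatsApp Flow" });
remember("flood-thread", { role: "user", content: "What did that map show?" });

const window = recall("flood-thread", 2); // last 2 messages only
```

A user can return an hour (or a week) later and the agent still knows which map it sent, because the window is cut by message count from durable storage rather than by a session timer.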

3. Flexible Workflows and Native Evaluations
Transitioning from rigid, deterministic flows to truly generative, agentic workflows was a challenge in our previous stack. Mastra AI's Workflows provided the flexibility we needed to orchestrate complex reasoning steps without timing out. Additionally, having native evaluations built into the system allows us to test the accuracy of our humanitarian RAG outputs seamlessly.
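The orchestration idea can be sketched in a few lines: each step has typed input and output, the output of one feeds the next, and no step is racing a webhook timeout. This is a generic illustration under assumed names (`fetchImagery`, `summarize`), not Mastra's actual Workflow API.

```typescript
// Generic multi-step pipeline sketch: typed steps chained together,
// with no per-step timeout. Step names and data are illustrative.

type Step<I, O> = { name: string; run: (input: I) => O };

function runPipeline<A, B, C>(s1: Step<A, B>, s2: Step<B, C>, input: A): C {
  return s2.run(s1.run(input));
}

// Stand-in for a long-running Google Earth Engine call.
const fetchImagery: Step<string, number[]> = {
  name: "fetch-imagery",
  run: (_region) => [0.2, 0.5, 0.8],
};

// Downstream step consuming the previous step's typed output.
const summarize: Step<number[], string> = {
  name: "summarize",
  run: (values) =>
    `Mean index: ${(values.reduce((a, b) => a + b, 0) / values.length).toFixed(2)}`,
};

const report = runPipeline(fetchImagery, summarize, "Piura");
```

Because each step's output type must match the next step's input type, the compiler catches a mis-wired workflow before it ever runs, and evaluations can then assert on the final output of the whole chain.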

Native Evaluations

The Verdict for Architects

If you are building an AI assistant for quick customer service or basic transactional routing, Google's Conversational Agents will get you to production in record time.

However, if your system requires executing high-latency API calls (like satellite imagery generation), demands persistent long-term memory, and relies on strict data typing for agent-to-agent communication, you need to decouple from the 30-second webhook constraint. For us at ANTICIPA, migrating to Mastra AI was the strategic move that finally let our agents think and act without artificial limitations.
