Guillermo Mejia

Built with Google Gemini: Humanizing the Helpdesk with Warm AI

This is a submission for the Built with Google Gemini: Writing Challenge.

What I Built with Google Gemini

I built a modular agent orchestration platform designed to humanize automated support. While many AI bots feel transactional and cold, my project focuses on "warmth"—using Gemini 2.5 Flash to create empathetic, context-aware interactions that feel like talking to a helpful peer.

The platform uses a specialized adapter system to deploy versioned conversational flows. Whether it is a car dealership agent qualifying a lead or an internal assistant helping a developer navigate complex node-based logic, Gemini handles the reasoning and tone. The "warmth" is evident in the transitions; instead of a generic "I still need a few more details: car mileage, general condition," the bot asks, "What is the mileage and condition of your 2020 Elantra?"—directly referencing the user's specific context with a natural touch.
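As a sketch of the idea behind that transition, here is how structured lead data could be folded into a single context-aware question instead of a generic field list. The function and types are illustrative, not the project's actual API; in the real flow, Gemini generates the phrasing rather than a template:

```typescript
// Sketch: turn structured lead data into a warm, context-specific question,
// instead of a generic "I still need a few more details" list.
// All names here are illustrative, not the project's actual code.

interface CarLead {
  year?: number;
  model?: string;
  mileage?: number;
  condition?: string;
}

// Collects the missing fields and phrases them as one natural question
// that references what the user already told us.
function warmFollowUp(lead: CarLead): string {
  const missing: string[] = [];
  if (lead.mileage === undefined) missing.push("mileage");
  if (lead.condition === undefined) missing.push("condition");
  if (missing.length === 0) return "Thanks, I have everything I need!";

  const car = [lead.year, lead.model].filter(Boolean).join(" ") || "car";
  return `What is the ${missing.join(" and ")} of your ${car}?`;
}

console.log(warmFollowUp({ year: 2020, model: "Elantra" }));
// → "What is the mileage and condition of your 2020 Elantra?"
```

The deterministic version above only shows the shape of the output; handing the same structured data to Gemini with a "warm peer" system prompt is what produces the natural variation.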

Demo

The demo showcases two distinct Telegram-integrated use cases:

  1. Automotive Sales Lead: Intelligently qualifies leads by asking for missing details (mileage, condition) in a conversational manner.
  2. Platform Navigator: A RAG-powered assistant that explains internal flows, nodes, and edges (using ng-diagrams) to help users understand their agent structures.
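For the Platform Navigator, the agent structure has to be expressed as text before it can be embedded and retrieved. A minimal sketch of that flattening step (the types and function are hypothetical stand-ins, not ng-diagrams' actual API):

```typescript
// Sketch: flatten an agent flow (nodes + edges) into plain-text chunks
// that a RAG assistant can embed and retrieve. Types are illustrative,
// not ng-diagrams' real data model.

interface FlowNode { id: string; label: string; type: string; }
interface FlowEdge { from: string; to: string; }

function describeFlow(nodes: FlowNode[], edges: FlowEdge[]): string[] {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const nodeChunks = nodes.map((n) => `Node "${n.label}" (${n.type})`);
  const edgeChunks = edges.map((e) => {
    const from = byId.get(e.from)?.label ?? e.from;
    const to = byId.get(e.to)?.label ?? e.to;
    return `Edge: "${from}" leads to "${to}"`;
  });
  return [...nodeChunks, ...edgeChunks];
}

const chunks = describeFlow(
  [
    { id: "n1", label: "Greeting", type: "message" },
    { id: "n2", label: "Qualify Lead", type: "llm" },
  ],
  [{ from: "n1", to: "n2" }],
);
console.log(chunks);
```

Each chunk can then be embedded and stored alongside the flow's ID, so the navigator can answer "what happens after the greeting node?" with grounded context.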

[Image: AI Agent Builder Bot prototype, dashboard and bots]

[Image: AI Agent Builder Assistant close-up]

What I Learned

  • MFE Architecture: Implementing Angular 21 with Native Federation v2, PrimeNG, and Tailwind CSS allowed for a highly modular frontend where different agent types are managed independently.
  • Vectorized RAG: Integrating Supabase (pgvector) provided long-term memory and restricted content delivery within agent flows, managed through Vercel's AI SDK.
  • The "Warmth" Prompt: I learned that achieving a human-centric tone is about using Gemini’s reasoning to bridge the gap between structured data (like car specs) and natural prose.
  • Edge Performance: Deploying on Vercel Edge Functions provided the low latency required for real-time Telegram webhooks.
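The edge entry point for those webhooks can be sketched as a small parsing step that validates Telegram's update payload before anything reaches the agent. The update shape follows Telegram's Bot API; the handler wiring around it is an assumption, not the project's actual route:

```typescript
// Sketch: parse a Telegram webhook update before handing the text to the
// agent. The update shape follows Telegram's Bot API; everything else
// here is illustrative.

interface TelegramUpdate {
  update_id: number;
  message?: { chat: { id: number }; text?: string };
}

function parseUpdate(body: unknown): { chatId: number; text: string } | null {
  const update = body as TelegramUpdate;
  const chatId = update?.message?.chat?.id;
  const text = update?.message?.text;
  if (typeof chatId !== "number" || typeof text !== "string") return null;
  return { chatId, text };
}

// On Vercel, this would sit inside an edge route, roughly:
//   export const config = { runtime: "edge" };
//   export default async (req: Request) => {
//     const parsed = parseUpdate(await req.json());
//     if (!parsed) return new Response("ignored", { status: 200 });
//     /* ...call Gemini, reply via the Bot API... */
//   };
```

Returning 200 even for ignored updates matters: Telegram retries webhooks that don't acknowledge, which would otherwise duplicate agent turns.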

Google Gemini Feedback

  • FinOps & Efficiency: A major highlight was the cost-to-performance ratio. Using text-embedding-001 at 3072 dimensions provided high semantic accuracy, yet more than 170 requests (Gemini Flash + Embeddings) during an 8-hour development sprint cost only about $0.10.
  • Developer Experience: The Google AI Studio integration was incredibly straightforward. The reasoning capabilities of Gemini 2.5 Flash allowed the bot to handle complex lead qualification without losing its "warm" persona.
  • Friction Points: Balancing empathy with brevity on platforms like Telegram requires significant prompt-tuning. More native support for "persona" weighting within the SDK would be a welcome addition.
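For context on what those 3072-dimensional embeddings buy: retrieval quality comes down to comparing vectors, typically by cosine similarity, which pgvector computes server-side (its cosine-distance operator is 1 minus this value). A minimal version of that scoring, with toy vectors standing in for real embeddings:

```typescript
// Minimal cosine-similarity scoring, shown only to illustrate how
// embedding vectors are compared; in the project this happens inside
// pgvector, not in application code.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0, 0], [1, 0, 0])); // → 1 (identical direction)
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // → 0 (unrelated)
```

Higher dimensionality gives the model more room to separate near-synonyms, which is why the 3072-dim setting helped accuracy without noticeably affecting the sprint's cost.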
