Day 11: My Grandfather's Rasam Moment - And The Hard Problem of Language in Health AI

My earliest memory of medicine isn't sterile. It's my grandfather, a quiet man with kind eyes, gently crushing his diabetes pill into a spoonful of warm rasam before taking it. He believed it made the medicine easier to swallow, more digestible. Nobody - not his doctor, not a pharmacist, not even a family member - ever told him that mixing it could affect absorption, or potentially create an interaction with specific food enzymes in the rasam itself. This wasn't ignorance; it was simply how he navigated health in his world.

This memory, from decades ago, still anchors what we're building at GoDavaii. Today is Day 11 of our sprint, and while we're still building in public, the problem we're solving is deeply rooted in these everyday, unspoken healthcare realities.

The English-First Blind Spot of Global Health AI

Look, the global health AI landscape is dominated by English-first solutions. Companies like Epocrates, drugs.com, or even the newer wave of AI chatbots from the US and Europe are incredible technical feats. They process vast amounts of medical literature, flag drug interactions, and help clinicians with complex cases. But their datasets, their language models, and their underlying assumptions are almost exclusively English.

This creates a massive blind spot, especially for countries like India. My grandfather's rasam story is just one example. What about the "Desi Ilaaj" - the home remedies passed down through generations, often involving specific herbs or preparations? How does an English-only AI interpret a patient describing symptoms in Marathi as "pot dukhta aahe ani thakava" (stomach ache and tiredness) versus a clinical "abdominal pain and fatigue"? The nuances are lost. The context is invisible.

Most health AI systems struggle with this. ChatGPT's Hindi output for medical queries, for instance, often misses the mark. It's syntactically correct, perhaps, but semantically and culturally adrift. It won't understand the implied meaning of "garam paani aur haldi" (hot water and turmeric) in the context of a sore throat unless specifically trained on that indigenous knowledge and its potential interactions.

Building for the "Next Billion" in Their Mother Tongue

At GoDavaii, we started with a core conviction: health AI needs to speak the language people actually think in. Not just translate, but understand the cultural and linguistic context. This is why our AI Health Chat supports 22+ Indian languages. It's why we're building out AI-verified Desi Ilaaj - cross-referencing traditional remedies with modern pharmacological data to flag potential contraindications or confirm efficacy, without inventing cures.

From a development perspective, this is a much harder problem than simply wrapping an English LLM. It involves:

  • Low-resource NLP: Many Indian languages don't have the vast, clean medical text corpora that English does. We're building custom fine-tuning layers on frontier models like Gemini 2.5 Flash, specifically training them on vernacular medical content, symptom descriptions, and common health queries.
  • Contextual Understanding: Our system needs to parse descriptions like "shareeram sariyaagilla" in Tamil not as a vague complaint, but as a symptom indicating general malaise, a precursor to fever or body ache. It's about training the model to recognize the intent and implied medical state behind colloquial expressions.
  • Integration of Diverse Knowledge Bases: Our Drug Interaction Checker isn't just about allopathic medicines. We're building a graph architecture that can reason across traditional remedies and modern drugs, flagging potential conflicts that no English-only database would even consider. It's a complex multi-modal knowledge representation problem.
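To make the contextual-understanding point concrete, here is a deliberately minimal sketch of the mapping problem: turning colloquial, multilingual symptom phrases into clinical concepts. The phrase table, function name, and language labels are illustrative placeholders (drawn from the examples above), not GoDavaii's actual data or API; a real system would use a fine-tuned model rather than a lookup.

```python
# Illustrative phrase -> clinical-concept table. Entries echo the examples
# in this post; they are NOT a real medical vocabulary.
COLLOQUIAL_TO_CLINICAL = {
    "pot dukhta aahe": "abdominal pain",       # Marathi
    "thakava": "fatigue",                      # Marathi
    "shareeram sariyaagilla": "general malaise",
}

def normalize_symptoms(utterance: str) -> list[str]:
    """Return the clinical concepts implied by a vernacular utterance.

    A substring lookup only illustrates the mapping problem; production
    systems need a model that handles spelling variants, code-switching,
    and implied meaning.
    """
    utterance = utterance.lower()
    return [
        concept
        for phrase, concept in COLLOQUIAL_TO_CLINICAL.items()
        if phrase in utterance
    ]

print(normalize_symptoms("pot dukhta aahe ani thakava"))
# ['abdominal pain', 'fatigue']
```

Even this toy version shows why translation alone fails: the system has to recognize intent behind colloquial expressions, not just render words into English.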
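The remedy-drug reasoning in the last bullet can be sketched as a tiny graph walk: remedies link to ingredients, and ingredient-drug pairs carry interaction notes. Everything here (the edges, the remedy compositions, and the flagged interaction) is hypothetical example data for illustration, not verified pharmacology and not our production schema.

```python
# Hypothetical remedy -> ingredient edges (illustrative only).
REMEDY_CONTAINS = {
    "haldi doodh": ["turmeric", "milk"],          # turmeric milk
    "rasam": ["tamarind", "pepper", "tomato"],
}

# Hypothetical ingredient-drug interaction edges; keys are stored sorted
# so lookups are order-independent. The note is a placeholder, not advice.
INTERACTS_WITH = {
    ("turmeric", "warfarin"): "possible additive effect (illustrative)",
}

def check_remedy_against_drugs(remedy: str, drugs: list[str]) -> list[str]:
    """Walk remedy -> ingredient edges and flag any ingredient-drug pair
    that appears in the interaction table."""
    warnings = []
    for ingredient in REMEDY_CONTAINS.get(remedy, []):
        for drug in drugs:
            key = tuple(sorted((ingredient, drug)))
            if key in INTERACTS_WITH:
                warnings.append(
                    f"{remedy} ({ingredient}) + {drug}: {INTERACTS_WITH[key]}"
                )
    return warnings

print(check_remedy_against_drugs("haldi doodh", ["warfarin"]))
print(check_remedy_against_drugs("rasam", ["metformin"]))  # no edge -> no flag
```

An English-only interaction database never gets to the first step, because "haldi doodh" isn't a node it knows; the hard work is building and verifying those remedy-ingredient edges at scale.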

This isn't about simply adding a translation layer. It's about fundamentally rethinking how health AI processes information for a truly diverse user base. We believe this is a genuine global differentiator, a real moat that sets us apart from systems like Epocrates or Medscape that remain English-only. This focus on deep cultural and linguistic integration is partly why we were a Top 14 Global Finalist at Startup Flight Vietnam 2025 - the judges saw the scale of the problem and the uniqueness of our approach.

A Thinking Assistant, Not a Substitute

It's crucial to state this clearly: GoDavaii isn't designed to substitute for your doctor. We are a thinking assistant for families, built to help you prepare for consultations. Our goal is to help you ask better-targeted questions at your next appointment, and to surface information that a hurried check-up might have overlooked. We're building a pre-doctor checklist, an intelligent companion that empowers you with information, rather than diagnosing or prescribing.

We're trying to close the gap between medical knowledge and everyday life, especially for the "next billion" people who are coming online in their mother tongues, often with health questions that English-first AI simply cannot answer. We want to empower them to navigate their health, just as my grandfather deserved to know if his rasam was impacting his medication.

Try GoDavaii at godavaii.com. I'm curious: what cultural health practices have you seen that a purely English AI would completely miss?