Every morning, as the sun rises over Mumbai, I wake up with a quiet, persistent dread. It's not the fear of missing a sprint target (though we're publicly tracking our journey from 379 users toward 100,000, so that's always in the back of my mind). My deepest, most visceral fear is that GoDavaii, our AI health platform, will miss a critical drug interaction, and an Indian family, perhaps like mine, will suffer because of it. This isn't abstract; it's the raw, vulnerable truth that became the steel frame for GoDavaii's most crucial feature: our AI cross-verification layer.
The Unseen Risks in Indian Family Medicine
I founded GoDavaii in 2025 because I saw this challenge firsthand. My own grandmother takes five prescription medicines every day. For years, I watched her manage them, trusting the system. But the system is often fragmented. A cardiologist prescribes one thing, a diabetes specialist another, and nobody (not the doctors, not the local pharmacist rushing through a busy day) has a holistic view of the combinations. Now, imagine this amplified across millions of Indian families, many living in multi-generational homes where medical history is passed down verbally, where language barriers complicate understanding, and where traditional 'Desi Ilaaj' (home remedies) are often used alongside modern medicine without a second thought.
This complexity is our reality. Our families don't just deal with a single doctor; they juggle multiple specialists, traditional practices, and the everyday realities of fasting during festivals like Karva Chauth, which dramatically alters medicine timing and effectiveness. This is why our interaction checker isn't just about listing known interactions; it's about context. We need to catch what a rushed consultation might miss, to help families ask sharper questions.
Building AI Trust: The Cross-Verification Imperative
When we started building GoDavaii, the initial temptation was to lean heavily on large language models (LLMs) like Gemini 2.5 Flash for quick interaction checks. They are powerful, yes, and brilliant at generating human-like text across our 22+ Indian languages. But for critical medical information, raw LLM output is a gamble: models can hallucinate, present outdated information, or simply miss nuances. For something as high-stakes as medicine interactions, 'good enough' is not good enough.
This fear, the fear that our AI might mistakenly clear a dangerous combination, drove us to invest heavily in our cross-verification layer. Here's how it works at a high level:
- Initial LLM Pass: When a user inputs medicines (in English, Hindi, Marathi, etc.), the LLM first processes the input, identifying drugs and potential interactions. This gives us a baseline, and crucially, allows us to immediately offer context in the user's preferred language.
- Proprietary Knowledge Graph Check: The LLM's output is then routed through our own proprietary knowledge graph. This graph is meticulously curated with known drug-drug, drug-food, and even drug-disease interactions, specifically tailored to the Indian pharmaceutical landscape and common traditional practices. This is where we catch the factual inaccuracies or omissions that an LLM might produce.
- Desi Ilaaj Integration: This is a particularly tricky, and uniquely Indian, aspect. When a user mentions a Desi Ilaaj alongside allopathic medicines, our system cross-references traditional remedy components against known allopathic drug pathways. This required building a distinct dataset and set of algorithms, as no global competitor even attempts this intersection.
- Anomaly Flagging & Human-in-the-Loop: If the LLM output significantly deviates from our knowledge graph, or if it involves complex multi-drug scenarios, it's flagged. These edge cases are then routed for review. While we aim for AI autonomy, for safety-critical functions, a human-in-the-loop review by our network of pharmacists and medical professionals is our ultimate safety net before an answer is finalized.
This multi-stage process, though computationally more intensive, is non-negotiable. It's how we ensure that our AI doesn't just 'speak' 22+ languages, but genuinely 'understands' and verifies medical information in a way that respects the nuances of Indian health realities.
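To make the flow above concrete, here is a minimal sketch of how an LLM pass can be reconciled against a curated interaction graph, with disagreements escalated to human review. Everything here is a hypothetical illustration, not our actual implementation: the function names (`llm_extract_pairs`, `cross_verify`), the toy `KNOWN_INTERACTIONS` table, and the sample interaction entries are all placeholders.

```python
# Sketch of the cross-verification flow: LLM pass -> knowledge-graph
# check -> escalation. All names and data below are hypothetical.
from dataclasses import dataclass, field

# Toy stand-in for the curated knowledge graph: pairs of agents
# (allopathic or traditional) mapped to a known interaction note.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"metformin", "karela juice"}): "additive glucose lowering",
}

@dataclass
class Verdict:
    interactions: list = field(default_factory=list)
    needs_human_review: bool = False

def llm_extract_pairs(user_input: list[str]) -> set:
    """Stand-in for the LLM pass: normalize names and enumerate
    candidate pairs. A real system would call the model here."""
    meds = [m.lower() for m in user_input]
    return {frozenset({a, b}) for i, a in enumerate(meds) for b in meds[i + 1:]}

def cross_verify(user_input: list[str], llm_flagged: set) -> Verdict:
    """Check every candidate pair against the knowledge graph and
    flag cases where the graph and the LLM disagree."""
    verdict = Verdict()
    for pair in llm_extract_pairs(user_input):
        note = KNOWN_INTERACTIONS.get(pair)
        if note:
            verdict.interactions.append((sorted(pair), note))
            # The graph found an interaction the LLM missed: escalate.
            if pair not in llm_flagged:
                verdict.needs_human_review = True
    # Complex multi-drug scenarios always go to human review.
    if len(user_input) >= 5:
        verdict.needs_human_review = True
    return verdict
```

The key design choice this sketch illustrates: the knowledge graph is the source of truth, and the LLM's output is treated only as a candidate list, so a miss by the model triggers escalation rather than a silent all-clear.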
Why This Fear Is My North Star
Building in public, as we are doing with GoDavaii, means exposing not just our wins but our anxieties. This fear of a missed interaction, the drive to build an AI that can be truly trusted with family health, is the reason we prioritized verification over speed, accuracy over mere suggestion. It's why we're building a thinking assistant for families, not a replacement for their doctor.
We currently have 379 users who trust us to be a second pair of eyes before their next appointment, to help them ask sharper questions. That number grows daily, and with every new user, the responsibility weighs heavier. This isn't just about lines of code or clever algorithms; it's about the trust of families like yours and mine.
What's your biggest concern when you think about AI helping with your family's health? How do you ensure the information you get online is truly reliable?
Explore GoDavaii and our interaction checker at godavaii.com.
Follow our journey: https://www.godavaii.com/speed-run