What I Built
Sanjeevani is a multilingual AI-powered virtual doctor that accepts voice, text, or image input and responds with a realistic, human-like diagnosis and remedy suggestions, powered by Groq's LLM and Murf AI.
It addresses the problem of language barriers and accessibility in digital healthcare. Whether a patient speaks Hindi, French, Spanish, or Chinese, Sanjeevani listens, understands, and speaks back like a real doctor.
Demo
Watch Sanjeevani in action:
Code repository:
https://github.com/this-is-rachit/Sanjeevani
Try it live:
https://sanjeevani-6vck.onrender.com/
How I Used the Murf API
Murf AI powers the voice and translation layer in Sanjeevani:
- Text-to-Speech: Converts Groq-generated medical advice into lifelike speech using Murf's voice models.
- Multilingual Translation: Automatically translates the diagnosis into the selected language before speech synthesis.
- Voice Mapping: Uses Murf's voice IDs to customize the voice per language (e.g., Hindi, Japanese, German).
This brings human warmth to AI conversations, which is vital for a healthcare app.
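The voice-mapping step above can be sketched as a small lookup plus payload builder. This is a minimal illustration, not the project's actual code: the voice IDs and payload field names below are placeholders, so consult Murf's API documentation for real values.

```python
# Hypothetical mapping from detected language code to a Murf voice ID.
# These IDs are placeholders, not real Murf voice identifiers.
VOICE_MAP = {
    "hi": "hi-IN-placeholder-voice",
    "ja": "ja-JP-placeholder-voice",
    "de": "de-DE-placeholder-voice",
    "en": "en-US-placeholder-voice",
}

def build_tts_request(text: str, lang: str) -> dict:
    """Pick a language-appropriate voice and assemble a TTS payload."""
    # Fall back to the English voice for languages without a mapping.
    voice_id = VOICE_MAP.get(lang, VOICE_MAP["en"])
    return {
        "voiceId": voice_id,
        "text": text,
        "format": "MP3",
    }

payload = build_tts_request("Drink plenty of fluids and rest.", "hi")
```

Keeping the language-to-voice mapping in one dictionary makes it easy to add new languages without touching the synthesis logic.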
Use Case & Impact
Real-World Applications
- Rural/Remote Healthcare: For patients who can't read or write, Sanjeevani offers voice-based, language-native assistance.
- Global Accessibility: With support for 16+ languages, it's usable from India to Italy.
- Image Support: Users can upload an image of a rash or wound for visual diagnosis via the LLM.
Impact
Sanjeevani enhances digital healthcare accessibility, especially for non-English-speaking and underserved populations. It's a step toward inclusive AI in medicine.
Tech Stack
- Murf AI: text-to-speech and multilingual translation
- Groq LLaMA 4: medical advice generation
- Groq Whisper: voice-to-text transcription
- Gradio: web-based interface
- Python, langdetect, PyDub, SpeechRecognition
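To show how these pieces fit together, here is a minimal sketch of the voice-to-voice pipeline. The Groq Whisper, Groq LLaMA, translation, and Murf calls are replaced with stubs, and all function names are illustrative assumptions rather than the project's actual code.

```python
def transcribe(audio_path: str) -> str:
    """Stub standing in for Groq Whisper speech-to-text."""
    return "I have a headache and a mild fever."

def generate_advice(symptoms: str) -> str:
    """Stub standing in for the Groq LLaMA medical-advice call."""
    return f"Advice for: {symptoms}"

def translate(text: str, target_lang: str) -> str:
    """Stub standing in for translation into the user's language."""
    return text if target_lang == "en" else f"[{target_lang}] {text}"

def synthesize(text: str, lang: str) -> bytes:
    """Stub standing in for Murf TTS; returns fake audio bytes."""
    return f"AUDIO({lang}):{text}".encode()

def consult(audio_path: str, target_lang: str) -> bytes:
    """Voice in -> transcript -> advice -> translation -> voice out."""
    symptoms = transcribe(audio_path)
    advice = generate_advice(symptoms)
    localized = translate(advice, target_lang)
    return synthesize(localized, target_lang)

audio = consult("patient.wav", "hi")
```

Each stage is a separate function, so any one service (transcription, generation, translation, synthesis) can be swapped without changing the rest of the pipeline.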
Built solo by @rachit_bansal