**This is a submission for the Murf AI Coding Challenge 2**
## What I Built
I built Read2Recap – a document summarizer with AI-powered voice narration. The app allows users to upload any document (notes, articles, reports), get a clean, concise summary, and then listen to it in different voices using AI-generated audio.
It solves the problem of time-consuming reading and enhances accessibility for students, professionals, and visually impaired users by turning long documents into listenable audio summaries.
Technologies used: LangChain, the Gemini API, and the Murf API.
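To make the summarization step concrete, here is a minimal sketch of the Gemini call using only the Python standard library. The real app routes this through LangChain; the model name, prompt wording, and response parsing below are illustrative assumptions, not the app's exact code.

```python
import json
import urllib.request

# Gemini's REST endpoint for content generation; the model name is an
# assumption for illustration.
GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)

def build_summary_payload(document: str) -> dict:
    """Pure helper: the JSON body asking Gemini for a concise summary."""
    return {
        "contents": [{
            "parts": [{
                "text": "Summarize the following document into a clear, "
                        "concise recap:\n\n" + document
            }]
        }]
    }

def summarize(document: str, api_key: str) -> str:
    """Send the document to Gemini and return the summary text."""
    req = urllib.request.Request(
        f"{GEMINI_URL}?key={api_key}",
        data=json.dumps(build_summary_payload(document)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response shape per Gemini's REST docs: first candidate, first part.
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

In the app itself, LangChain handles the prompt templating and model invocation, which keeps the summarization logic swappable across providers.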
## Demo

🎥 [Watch the demo video](https://drive.google.com/file/d/1LjXJDawT1HISSgMrKzP943zOuOZapblj/view?usp=sharing)
## Code Repository

🔗 [Read2Recap on GitHub](https://github.com/Lovish-Singlaa/Read2Recap)
## How I Used Murf API
Once the document is summarized using the Gemini API via LangChain, I pass the result to the Murf Text-to-Speech API to generate natural, human-like voiceovers.
With Murf, users can:
- Choose between multiple voice options (e.g., male/female, different accents)
- Stream or download the audio summary
- Learn or revise content without staring at screens
Murf’s high-quality voices add engagement, accessibility, and a professional touch to otherwise plain summaries.
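The text-to-speech hand-off can be sketched as a single REST call. The `speech/generate` endpoint, `api-key` header, and `voiceId`/`audioFile` field names follow Murf's public API docs, but treat them as assumptions; the voice ID shown is just an example of the voice-selection options described above.

```python
import json
import urllib.request

# Murf's speech generation endpoint (assumed from the public API docs).
MURF_TTS_URL = "https://api.murf.ai/v1/speech/generate"

def build_tts_payload(summary: str, voice_id: str = "en-US-natalie") -> dict:
    """Pure helper: the JSON body for Murf's generate endpoint.

    The voice ID is an example; swapping it is how the app offers
    different voices and accents.
    """
    return {"text": summary, "voiceId": voice_id, "format": "MP3"}

def synthesize(summary: str, api_key: str,
               voice_id: str = "en-US-natalie") -> str:
    """Ask Murf to voice the summary; return the generated audio URL.

    The 'audioFile' response field is an assumption from Murf's docs.
    """
    req = urllib.request.Request(
        MURF_TTS_URL,
        data=json.dumps(build_tts_payload(summary, voice_id)).encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["audioFile"]
```

The returned audio URL is what the app streams in the browser or offers as a download.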
## Use Case & Impact
🎓 Students & Exam Takers: Turn textbook content or class notes into voice summaries for revision on the go.
📊 Working Professionals: Summarize long reports, articles, or papers and listen while commuting or during breaks.
🧑‍🦯 Visually Impaired Learners: Access educational content in audio format.
Read2Recap combines AI summarization with Murf's realistic voices to make reading faster, learning easier, and education more inclusive.