This is a submission for the World's Largest Hackathon Writing Challenge: Building with Bolt.
What I Built
Hey everyone! I'm excited to share AI VoiceCoach - an English learning app I built during the World's Largest Hackathon. The idea was simple: help people practice English conversation with an AI tutor that actually listens and responds to their voice.
🔗 Live Demo: https://aivoicecoach.netlify.app/
📂 Code: https://github.com/shivas1432/AI_VoiceCoach
Key Features
- Voice chat with AI - Speak English, get instant feedback
- Real-time speech recognition - No typing needed!
- Smart corrections - Grammar and pronunciation tips
- Beautiful dark theme - Modern neumorphic design
- Works on any device - Responsive and fast
Why Bolt.new Was a Game Changer
Honestly, Bolt.new saved me so much time! Instead of spending hours setting up React + TypeScript configuration, I jumped straight into building the actual features.
What I loved about Bolt:
⚡ Super Fast Setup
I just told Bolt what I wanted, and boom: a proper React app with TypeScript, ready to go.
🎨 Design Made Easy
When I wanted a neumorphic dark design with glowing blue effects, I simply described it:
"Create a black background with shining blue neumorphic design"
And Bolt generated this beautiful CSS that would've taken me hours:
```js
boxShadow: `
  inset 12px 12px 24px #1a1a1a,
  inset -12px -12px 24px #2a2a2a,
  0 0 40px rgba(59, 130, 246, 0.3)
`
```
🔧 Complex Features, Simple Prompts
The voice recognition part was tricky, but Bolt helped me structure everything properly - from microphone handling to API integrations.
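To give a feel for that structure, here's a minimal sketch of the mic flow (`RecognitionLike` and `listenOnce` are illustrative names, not the app's exact code): wrap the event-based SpeechRecognition object in a promise so the rest of the app can simply `await` a transcript.

```typescript
// Shape of the parts of SpeechRecognition this sketch touches.
type RecognitionLike = {
  start: () => void;
  onresult: ((event: { results: { transcript: string }[][] }) => void) | null;
  onerror: ((event: { error: string }) => void) | null;
};

// Resolve with the first transcript, reject on a recognition error.
function listenOnce(recognition: RecognitionLike): Promise<string> {
  return new Promise((resolve, reject) => {
    // Handlers must be attached before start(), or a fast result is missed.
    recognition.onresult = (event) => resolve(event.results[0][0].transcript);
    recognition.onerror = (event) =>
      reject(new Error(`Speech error: ${event.error}`));
    recognition.start();
  });
}
```

In the browser the `recognition` object would come from `new webkitSpeechRecognition()` (Chrome) before being handed to a wrapper like this.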
Technical Challenges I Faced
Speech Recognition Drama
The browser speech APIs can be quite finicky! Different browsers behave differently, and I had to handle various error cases.
```js
// Had to create proper error handling (this runs inside a promise wrapper,
// which is where resolve/reject come from)
this.recognition.onerror = (event) => {
  if (event.error === 'interrupted') {
    // This is normal, not an actual error
    resolve();
  } else {
    reject(new Error(`Speech error: ${event.error}`));
  }
};
```
Gemini API Overload Issues
Google's Gemini API sometimes gets overloaded (especially during hackathons 😅). So I built retry logic:
```js
// Retry when servers are busy
let retries = 0;
let response;
do {
  response = await fetch(geminiAPI);
  if (response.status === 503) {
    retries++;
    // Wait before the next attempt
    await new Promise(resolve => setTimeout(resolve, 2000));
  }
} while (response.status === 503 && retries <= maxRetries);
```
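For the curious, the same idea as a small self-contained helper (hypothetical names, not the app's exact code): the fetch call is injected so the retry logic is easy to test, and the wait grows with each attempt so we don't hammer an already overloaded server.

```typescript
// Retry a request while the server answers 503, with a growing delay.
// `doFetch` stands in for the real fetch call; injecting it keeps the
// retry logic testable without network access.
async function fetchWithRetry(
  doFetch: () => Promise<{ status: number }>,
  maxRetries = 3,
  baseDelayMs = 2000,
): Promise<{ status: number }> {
  for (let attempt = 0; ; attempt++) {
    const response = await doFetch();
    if (response.status !== 503 || attempt >= maxRetries) {
      // Success, a non-retryable status, or we're out of retries.
      return response;
    }
    // Wait longer each time before retrying.
    await new Promise((resolve) =>
      setTimeout(resolve, baseDelayMs * (attempt + 1)),
    );
  }
}
```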
TypeScript Headaches
Browser speech APIs don't have proper TypeScript definitions, so I had to create my own interfaces to make everything compile smoothly.
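A trimmed-down sketch of what those interfaces can look like (the names are illustrative and the real API has more members than an app typically needs):

```typescript
// Minimal hand-written types for the parts of the Web Speech API we touch.
interface SpeechAlternativeLike {
  transcript: string;
  confidence: number;
}

interface SpeechErrorEventLike {
  error: string; // e.g. 'no-speech', 'not-allowed', 'network'
}

interface SpeechRecognitionLike {
  lang: string;
  continuous: boolean;
  interimResults: boolean;
  start(): void;
  stop(): void;
  onresult: ((event: { results: SpeechAlternativeLike[][] }) => void) | null;
  onerror: ((event: SpeechErrorEventLike) => void) | null;
}

// In the browser you'd grab the real constructor with a cast, e.g.:
//   const Ctor = (window as any).webkitSpeechRecognition;
//   const recognition: SpeechRecognitionLike = new Ctor();
```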
My Development Process
- Started with Bolt.new - Got the basic React app structure
- Added voice features - Integrated Web Speech API for listening
- Connected Gemini AI - For intelligent responses
- Polished the UI - Made it look professional with neumorphic design
- Deployed on Netlify - One-click deployment!
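Step 3 roughly looks like this. The endpoint shape follows Google's `generateContent` REST API, but `askTutor`, `FetchLike`, and the prompt wording are illustrative, not the app's exact code; the fetch function is injected so the logic can be exercised without hitting the network.

```typescript
// Minimal surface of fetch that this sketch needs.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ json: () => Promise<any> }>;

// Send the learner's speech to Gemini and return the tutor's reply text.
async function askTutor(
  userSpeech: string,
  apiKey: string,
  fetchFn: FetchLike,
): Promise<string> {
  const url =
    'https://generativelanguage.googleapis.com/v1beta/models/' +
    `gemini-1.5-flash:generateContent?key=${apiKey}`;
  const response = await fetchFn(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            {
              text:
                'You are a friendly English tutor. Gently correct grammar ' +
                `and pronunciation. Student said: "${userSpeech}"`,
            },
          ],
        },
      ],
    }),
  });
  const data = await response.json();
  // Gemini replies with a list of candidates; take the first one's text.
  return data.candidates[0].content.parts[0].text;
}
```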
What Makes This App Special
Real Conversations: Unlike other language apps, this actually listens to your voice and responds naturally. It feels like talking to a real tutor!
Instant Feedback: Grammar mistakes? Pronunciation issues? The AI catches them and gives helpful tips without being harsh.
No Downloads: Everything works in the browser. No app store hassles!
How Bolt.new Helped Me Win
Without Bolt.new, this project would've taken weeks. But because Bolt handled all the boring setup stuff, I could focus on the cool AI features that make the app actually useful.
Time saved on:
- Project configuration
- Component architecture
- Responsive design patterns
- TypeScript setup
- Build optimization
Time spent on:
- AI integration logic
- Voice recognition features
- User experience design
- Error handling
- Performance tuning
Real Impact
This isn't just a demo app - it actually helps people! I've already had friends try it and they love how natural the conversations feel. One friend said it's like having a patient English teacher available 24/7.
Technical Stack
- Frontend: React + TypeScript (thanks to Bolt's template!)
- AI: Google Gemini API
- Speech: Web Speech API
- Styling: Tailwind CSS with custom neumorphic design
- Deployment: Netlify (super smooth!)
Lessons Learned
Bolt.new is incredible for rapid prototyping. When you have a solid idea but limited time, it's a lifesaver. The AI understands what you want and generates production-quality code.
Voice interfaces are the future. People love talking more than typing. Making technology respond to natural speech feels magical.
Error handling is crucial. Real users will break your app in ways you never imagined. Good error handling makes the difference between a demo and a product.
What's Next?
I'm planning to add:
- Multiple language support (Hindi, Tamil, etc.)
- Progress tracking and analytics
- Different conversation topics (job interviews, casual chat, etc.)
- Social features for learning with friends
Final Thoughts
Building AI VoiceCoach was an amazing experience. Bolt.new made the technical stuff easy so I could focus on creating something that actually helps people learn English better.
The combination of AI and voice technology is incredibly powerful. When done right, it feels like the future is already here!
Thanks to the hackathon organizers and Bolt.new for making this possible. Can't wait to see what everyone else built! 🎉
Try AI VoiceCoach yourself: https://aivoicecoach.netlify.app/