The story of building KarmSakha's most powerful feature—and what we learned about AI, interviews, and the future of job preparation.
The Problem That Wouldn't Let Me Sleep
It was 2 AM, and I was staring at my laptop screen, watching our user metrics from the beta launch. The data told a troubling story.
73% of our users were getting rejected at the interview stage.
Not because they lacked skills. Not because they weren't qualified. They were getting rejected because they simply weren't prepared for the interview itself.
One user's message hit particularly hard:
"I'm a great developer. I've built three production apps. But I freeze up in interviews. I applied to 47 companies. Got 8 interviews. Zero offers. I just need practice, but how do I practice interviews without... interviews?"
That's the paradox of interview preparation. You need practice to succeed, but you can't practice until you get interviews. And if you fail those interviews because you're unprepared, you're back to square one.
Traditional solutions weren't helping:
Mock interview services: ₹2,000-5,000 per session, require scheduling, limited availability
Friends/family practice: Awkward, not realistic, no structured feedback
YouTube videos: Passive learning, no real interaction
Interview books: Theory without practice
We needed something different. Something scalable. Something available 24/7. Something that could give realistic practice with instant feedback.
We needed AI.
The 30-Day Challenge
On October 1st, I gathered our small team of three engineers and made a bold commitment:
"We're going to build an AI mock interview system. And we're launching it in 30 days."
My co-founder thought I was crazy. "AI interviews? In a month? That's impossible."
But we had no choice. Our users needed this. Our competitors (the giants like Naukri) weren't innovating. This was our opportunity to build something truly different.
Here's how we did it.
Week 1: Research & Architecture (Days 1-7)
Understanding What Makes Interviews Hard
Before writing a single line of code, we interviewed 50 job seekers who'd recently gone through the interview process. We asked one simple question:
"What makes interviews difficult?"
The answers revealed three core challenges:
- Unpredictability (68% of respondents)
"I never know what they'll ask"
"Questions come from nowhere"
"Each company asks different things"
- Pressure & Nervousness (82% of respondents)
"I know the answer but freeze up"
"My mind goes blank under pressure"
"I sound stupid when nervous"
- No Feedback Loop (91% of respondents)
"I never know what went wrong"
"Did I talk too much or too little?"
"Was my answer even good?"
Our AI system needed to solve all three.
The Technical Architecture
We spent days 3-5 designing the system architecture. Here's what we landed on:
User Input (Voice/Text)
↓
Speech Recognition (if voice)
↓
Natural Language Processing
↓
Intent Classification
↓
Answer Evaluation Engine
↓
Feedback Generation
↓
Next Question Selection (Adaptive)
↓
Comprehensive Scoring
Tech Stack Decisions
After evaluating multiple options, we chose:
For Speech Recognition:
Primary: Web Speech API (free, browser-native)
Fallback: Google Cloud Speech-to-Text (for complex accents)
Why: Real interviews involve speaking, not typing
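For a sense of how the browser-native path works, here's a minimal sketch using the standard Web Speech API (the onTranscript callback and the en-IN locale are our illustration, not production code):
javascript
// Minimal sketch: capture a spoken answer with the browser's Web Speech API.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

function startListening(onTranscript) {
  const recognition = new SpeechRecognition();
  recognition.lang = 'en-IN';        // Indian English locale
  recognition.continuous = true;     // keep listening through a full answer
  recognition.interimResults = true; // stream partial text as the user speaks

  recognition.onresult = (event) => {
    const result = event.results[event.results.length - 1];
    onTranscript(result[0].transcript, result.isFinal);
  };
  recognition.onerror = () => {
    // Fall back here, e.g. to Google Cloud Speech-to-Text or typed input
  };

  recognition.start();
  return recognition; // caller calls .stop() when the answer ends
}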
For Natural Language Processing:
Primary: OpenAI GPT-4 (via API)
Why: Best in class for understanding context and nuance
Cost consideration: ~₹2 per mock interview (acceptable)
For Question Database:
Custom-built: PostgreSQL with vector embeddings
Initial dataset: 5,000 curated interview questions
Categories: Behavioral, technical, situational, company-specific
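To make the vector-embeddings part concrete, here's a sketch of semantic question lookup on that stack, assuming the pgvector extension and the OpenAI embeddings API; db.query stands in for a node-postgres-style client, and the tables match the schema in the technical deep dive below:
javascript
// Sketch: fetch questions semantically close to a topic via pgvector.
const OpenAI = require('openai');
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function findSimilarQuestions(db, topic, role, limit = 5) {
  // Embed the topic (1536 dimensions, matching vector(1536) in the schema)
  const res = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: topic
  });
  const embedding = JSON.stringify(res.data[0].embedding);

  // pgvector's <=> operator orders by cosine distance
  return db.query(
    `SELECT q.* FROM interview_questions q
     JOIN question_embeddings e ON e.question_id = q.id
     WHERE q.role = $1
     ORDER BY e.embedding <=> $2::vector
     LIMIT $3`,
    [role, embedding, limit]
  );
}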
For Evaluation:
Multi-model approach:
GPT-4 for content quality
Custom ML model for communication skills
Rule-based system for structure
Sentiment analysis for confidence
For Frontend:
React for web interface
Real-time audio streaming for smooth experience
WebRTC for low-latency communication
Week 2: MVP Development (Days 8-14)
Day 8-10: The Question Engine
The first challenge: How does the AI know what to ask?
We couldn't just randomly select questions. The interview needed to feel natural, progressive, and adaptive.
Our approach:
Step 1: Job Role Categorization
javascript
// User selects their target role
const roles = {
  softwareEngineer: {
    categories: ['technical', 'behavioral', 'problem-solving'],
    difficulty: ['easy', 'medium', 'hard'],
    topics: ['algorithms', 'system-design', 'coding', 'teamwork']
  },
  dataScientist: {
    categories: ['technical', 'statistical', 'business'],
    difficulty: ['easy', 'medium', 'hard'],
    topics: ['ml-models', 'statistics', 'python', 'communication']
  }
  // ... more roles
};
Step 2: Adaptive Question Selection
The AI starts with easier questions and adapts based on performance:
javascript
function selectNextQuestion(currentScore, answeredQuestions, userProfile) {
  if (currentScore > 80) {
    // User is doing well, increase difficulty
    return getQuestion({
      difficulty: 'hard',
      notAsked: answeredQuestions,
      role: userProfile.targetRole
    });
  } else if (currentScore < 50) {
    // User is struggling, provide medium questions
    return getQuestion({
      difficulty: 'medium',
      notAsked: answeredQuestions,
      role: userProfile.targetRole
    });
  } else {
    // Balanced progression
    return getQuestion({
      difficulty: 'medium-hard',
      notAsked: answeredQuestions,
      role: userProfile.targetRole
    });
  }
}
Step 3: Natural Flow
We programmed conversational elements to make it feel human:
AI: "Great answer! I can see you have strong technical skills.
Let me ask you something about teamwork now.
Tell me about a time when you disagreed with a team member.
How did you handle it?"
Day 11-12: The Evaluation Engine
This was the hardest part. How do you objectively evaluate a subjective interview answer?
Our Multi-Dimensional Scoring System:
Dimension 1: Content Quality (40%)
Does the answer address the question?
Is the information accurate?
Are examples specific and relevant?
Is there a clear structure (STAR method for behavioral)?
python
def evaluate_content(question, answer):
    prompt = f"""
    Question: {question}
    Answer: {answer}
    Evaluate this interview answer on:
    1. Relevance (0-10)
    2. Specificity (0-10)
    3. Structure (0-10)
    4. Completeness (0-10)
    Provide scores and brief reasoning.
    """
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": prompt}]
    )
    return parse_scores(response)
Dimension 2: Communication Skills (30%)
Clarity of expression
Confidence level (from speech analysis)
Pace of speaking
Use of filler words ("um", "like", "you know")
javascript
function analyzeCommunication(audioTranscript, audioFeatures, duration) {
  const fillerWords = countFillerWords(audioTranscript); // "um", "like", "you know"
  const clarity = assessClarity(audioTranscript);
  const confidence = analyzeTone(audioFeatures);
  const pace = calculateSpeakingRate(audioTranscript, duration);
  return {
    fillerWordScore: Math.max(0, 10 - fillerWords * 0.5),
    clarityScore: clarity,
    confidenceScore: confidence,
    paceScore: pace,
    overallCommunication: (clarity + confidence + pace) / 3
  };
}
Dimension 3: Answer Length & Timing (15%)
Too short = incomplete
Too long = rambling
Sweet spot: 45-120 seconds for most answers
Dimension 4: Technical Accuracy (15%, for technical roles)
Factual correctness
Use of proper terminology
Depth of technical knowledge
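Combining the four dimensions then reduces to a weighted sum. A minimal sketch using the weights above (the timing penalty slopes are illustrative assumptions):
javascript
// Sketch: fold the four dimension scores (0-10 each) into one overall score
// using the weights above: 40% content, 30% communication, 15% timing,
// 15% technical accuracy (folded into content for non-technical roles).
function scoreTiming(durationSeconds) {
  // Sweet spot: 45-120 seconds; taper off outside it (slopes are illustrative)
  if (durationSeconds >= 45 && durationSeconds <= 120) return 10;
  if (durationSeconds < 45) return Math.max(0, 10 - (45 - durationSeconds) * 0.2);
  return Math.max(0, 10 - (durationSeconds - 120) * 0.1);
}

function overallScore({ content, communication, durationSeconds, technical }, isTechnicalRole) {
  const timing = scoreTiming(durationSeconds);
  if (isTechnicalRole) {
    return content * 0.4 + communication * 0.3 + timing * 0.15 + technical * 0.15;
  }
  return content * 0.55 + communication * 0.3 + timing * 0.15;
}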
Example Evaluation:
json
{
  "question": "Tell me about yourself",
  "userAnswer": "I'm a software engineer with 3 years of experience...",
  "evaluation": {
    "contentQuality": {
      "score": 8.5,
      "feedback": "Strong structure covering background, skills, and goals. Could add more specific achievements."
    },
    "communication": {
      "score": 7.0,
      "feedback": "Good clarity, but detected 6 filler words. Try to reduce 'um' and 'like'."
    },
    "timing": {
      "score": 9.0,
      "duration": "82 seconds",
      "feedback": "Perfect length. Concise yet comprehensive."
    },
    "overallScore": 8.2,
    "confidence": "Medium-High"
  }
}
Day 13-14: The Feedback System
Evaluation scores mean nothing without actionable feedback.
We created three types of feedback:
- Immediate Micro-Feedback. After each answer, users get:
Overall score (1-10)
What they did well
One specific improvement
Example of a better answer
- Mid-Interview Adjustment. At the halfway point:
Progress summary
Trend analysis
Encouragement or course correction
- Comprehensive Report. At the end:
Overall performance score
Strength areas
Improvement areas
Specific action items
Recommended resources
Example Feedback:
Question: "What's your biggest weakness?"
Your Answer Score: 6.5/10
✅ What You Did Well:
- Honest and authentic response
- Mentioned steps you're taking to improve
- Good self-awareness
⚠️ What to Improve:
- Your weakness was too generic ("I'm a perfectionist")
- Missed opportunity to show growth mindset
- Answer was too short (32 seconds, aim for 60-90)
💡 Better Approach:
"In my previous role, I struggled with delegating tasks
because I wanted everything done my way. I realized this
was slowing down the team. I've been actively working on
this by clearly defining expectations and trusting my
teammates' abilities. Last quarter, I successfully
delegated a major feature and it was delivered ahead
of schedule."
🎯 Action Items:
- Use the STAR method (Situation, Task, Action, Result)
- Choose a real, specific weakness
- Always include what you're doing to improve
- Aim for 60-90 second answers
Practice this again? [Start New Mock Interview]
Week 3: Testing & Refinement (Days 15-21)
The Beta Testing Nightmare
Day 15: We launched an internal beta with our team and 20 close users.
It was a disaster.
Problem 1: The AI Was Too Robotic
Users reported:
"Feels like I'm talking to a machine"
"No warmth or encouragement"
"Makes me more nervous, not less"
Our fix: We added personality to the AI interviewer:
Before: "Provide your answer to: Tell me about a time you faced a challenge."
After: "Great! I can see you're doing well. Let's dive into a scenario-based question now.
Tell me about a time you faced a significant challenge at work. Take your time, and walk me through what happened."
We programmed encouraging interjections:
"Interesting approach..."
"I like where you're going with this..."
"That's a great example..."
"Tell me more about that..."
Problem 2: Speech Recognition Failed for Indian Accents
Our Indian users (80% of our market) struggled:
South Indian accents misunderstood
Hindi-English code-switching caused errors
Regional pronunciations not recognized
Our fix:
Implemented accent detection
Fine-tuned Google Speech API for Indian English
Added option to type instead of speak
Preprocessed audio to normalize regional variations
javascript
// Accent normalization
function normalizeIndianEnglish(transcript) {
  const commonReplacements = {
    'vill': 'will',
    'vork': 'work',
    'aksed': 'asked',
    'fillup': 'fill up'
  };
  return transcript.replace(/\b(\w+)\b/g, (word) => {
    return commonReplacements[word.toLowerCase()] || word;
  });
}
Problem 3: Users Wanted Company-Specific Prep
Generic questions weren't enough. Users asked:
"Can it ask Google-style questions?"
"What about Amazon's leadership principles?"
"Does it know Microsoft's interview process?"
Our fix: We added company-specific modes:
Interview Modes:
├── General (Any company)
├── FAANG (Google, Amazon, Meta, Apple, Netflix)
├── Startups (Culture fit, adaptability)
├── Indian Unicorns (Flipkart, Swiggy, Zomato, etc.)
├── Consulting (Case interviews, frameworks)
└── Government (UPSC, SSC style)
Each mode had custom question banks and evaluation criteria.
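One plausible way to wire that up is a per-mode config object; the bank names and emphasis lists below are illustrative, not our production data:
javascript
// Illustrative sketch: each mode carries its own question bank and
// evaluation emphasis.
const interviewModes = {
  general: {
    questionBank: 'general_questions',
    emphasis: ['behavioral', 'technical', 'situational']
  },
  faang: {
    questionBank: 'faang_questions',
    emphasis: ['system-design', 'algorithms', 'leadership-principles']
  },
  consulting: {
    questionBank: 'case_questions',
    emphasis: ['structured-thinking', 'frameworks', 'mental-math']
  }
  // ... startups, indian-unicorns, government
};

function getModeConfig(mode) {
  return interviewModes[mode] || interviewModes.general;
}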
The Breakthrough Moment
Day 18: A user named Priya sent us a message that changed everything.
"I just did 5 mock interviews with your AI. I got rejected by TCS last month. Today I cleared their final round. Your AI pointed out that I was rambling. I practiced being concise. It worked. Thank you."
This was it. The system was working.
Week 4: Polish & Launch (Days 22-30)
The Final Features
Feature 1: Performance Analytics Dashboard
We built a comprehensive dashboard showing:
Interview history
Score trends over time
Strengths vs weaknesses
Category-wise performance
Improvement rate
Your Interview Performance
Total Interviews: 12
Average Score: 7.2 → 8.4 (improving!)
Time Practiced: 6.3 hours
Top Strengths:
✅ Technical Knowledge (9.1/10)
✅ Problem Solving (8.7/10)
✅ Clarity (8.5/10)
Need Improvement:
⚠️ Behavioral Questions (6.8/10)
⚠️ Confidence (7.1/10)
⚠️ Conciseness (7.3/10)
Recommendation: Practice 3 more behavioral interviews
Next: "Tell me about a conflict at work" →
Feature 2: Video Interview Mode
For roles requiring video interviews:
Records video while practicing
Analyzes body language
Checks eye contact
Monitors facial expressions
Provides video-specific feedback
Feature 3: Real-time Hints
If a user is stuck for more than 15 seconds:
AI: "Take your time. If you're stuck, think about:
- What was the situation?
- What action did you take?
- What was the result?"
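Mechanically this is a reset-on-speech timer. A minimal sketch, assuming a Web Speech API recognizer like the one sketched earlier and a showHint UI callback of our own:
javascript
// Sketch: surface a hint after 15 seconds of silence.
const HINT_DELAY_MS = 15000;

function attachStuckDetector(recognition, showHint) {
  let hintTimer = null;
  const arm = () => {
    clearTimeout(hintTimer);
    hintTimer = setTimeout(
      () => showHint('Take your time. What was the situation? What did you do? What was the result?'),
      HINT_DELAY_MS
    );
  };
  recognition.addEventListener('start', arm);  // clock starts with the question
  recognition.addEventListener('result', arm); // any speech resets the clock
  recognition.addEventListener('end', () => clearTimeout(hintTimer));
}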
Feature 4: Interview Warm-up
Quick 5-minute session before real interviews:
3 easy questions
Confidence booster
Pace practice
Nerves management
Launch Day: October 30th
We launched with:
✅ 5,000 interview questions across 20 job categories
✅ Support for 15 programming languages (for technical roles)
✅ 50 company-specific interview styles
✅ Voice + text input
✅ Comprehensive feedback system
✅ Performance tracking
✅ Mobile responsive design
Pricing:
Free: 2 mock interviews per month
Premium: ₹299/month - unlimited interviews
Pro: ₹499/month - unlimited + video analysis + resume review
The Results (First 30 Days Post-Launch)
The data exceeded our expectations:
User Metrics:
1,247 signups in first month
3,891 mock interviews conducted
Average session: 28 minutes (users are engaged!)
73% completion rate (most finish the full interview)
4.6/5 star rating from users
Success Stories:
312 users reported getting job offers after using our tool
31% improvement in average interview performance (from 6.4 to 8.4)
Users practice an average of 4.7 interviews before their real one
82% said they felt more confident after using our AI
Revenue:
₹89,000 MRR from premium subscriptions
28% conversion rate from free to paid
<5% churn rate (people stick with us)
Most Popular Features:
FAANG company-specific prep (43% of sessions)
Behavioral question practice (31%)
Technical interview prep (26%)
What We Learned
Technical Lessons
- AI isn't magic—it's about the right prompts
We went through 47 iterations of our evaluation prompt before finding one that worked consistently.
Bad prompt:
"Evaluate this interview answer and give a score."
Good prompt:
"You are an experienced hiring manager conducting an interview for a {role} position at a {company_type} company. The candidate answered: {answer}
Evaluate their response on:
- Relevance to the question
- Use of specific examples
- Structure (STAR method if applicable)
- Communication clarity
- Technical accuracy (if technical question)
Provide:
- Overall score (1-10)
- 2 specific strengths
- 1 key improvement area
- A concrete example of how to improve
Be encouraging but honest."
- Context is everything
The same answer can be great or terrible depending on:
The role (startup vs corporate)
Experience level (fresher vs senior)
Interview stage (screening vs final round)
Company culture (conservative vs innovative)
We had to build context into every evaluation.
- Users need encouragement, not just scores
Our initial feedback was too harsh:
"Your answer was vague and lacked structure. Score: 4/10"
We softened it:
"Good start! You shared relevant experience. To make this stronger, try using the STAR method to add more structure. With practice, this could easily be an 8 or 9 out of 10. Score: 6/10"
Result: User retention increased by 40%.
Product Lessons
- Free tier is essential
We initially wanted premium-only. Our advisor pushed for a free tier.
He was right:
Free tier drives signups
Users try before they buy
Word of mouth spreads faster
28% eventually convert to paid
- Mobile is crucial
43% of our users practice on mobile devices:
On commute to interview
In coffee shops
Late at night in bed
We had to optimize for mobile from day one.
- Integration > Standalone
Users wanted:
"Can this connect to my calendar?"
"Can I share results with mentors?"
"Can this sync with LinkedIn?"
Integrations are now our Q1 priority.
Business Lessons
- Solve a painful problem, not a nice-to-have
Job seekers NEED interview practice. It's not optional. This created urgency to pay.
- Compete on experience, not price
We could've undercut competitors by being cheaper. Instead, we focused on being better.
Result: Higher prices, lower churn, better brand.
- Build in public
Sharing our journey on Twitter, LinkedIn, Medium attracted:
Early users
Advisor interest
Media coverage
Investor curiosity
Transparency builds trust.
The Future: What's Next
We're not stopping here. The AI mock interview is just the beginning.
Coming in Q1 2025:
- Multi-person Panel Interviews
Practice with 2-3 AI interviewers simultaneously
More realistic, more pressure
Better preparation for senior roles
- Live Interview Co-pilot
Real-time suggestions during actual interviews
Subtle prompts in your ear
Post-interview analysis
- Interview Recordings Analysis
Upload recordings of real interviews (with permission)
AI analyzes what went wrong
Personalized improvement plan
- Industry-Specific Scenarios
Healthcare interview simulations
Teaching demonstration lessons
Sales role-playing
Customer service scenarios
- Peer Practice Matching
Connect with other job seekers
Practice together
AI moderates and evaluates both sides
Try It Yourself
If you're preparing for interviews, I invite you to try our AI mock interview system.
What you'll get:
Realistic interview practice
Instant, detailed feedback
Improvement tracking
Confidence building
Better job outcomes
Start your first free mock interview: karmsakha.com/mock-interview
Technical Deep Dive (For Developers)
Want to build something similar? Here's the architecture:
System Components
- Frontend (React + TypeScript)
typescript
import { useRef, useState } from 'react';

// Core interview component
interface InterviewSession {
  id: string;
  userId: string;
  role: JobRole;
  difficulty: 'easy' | 'medium' | 'hard';
  questions: Question[];
  currentQuestion: number;
  answers: Answer[];
  overallScore: number;
}

// Real-time audio handling
const AudioRecorder = () => {
  const [isRecording, setIsRecording] = useState(false);
  const mediaRecorderRef = useRef<MediaRecorder | null>(null);

  const startRecording = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.ondataavailable = (event) => {
      // Send audio chunks to backend for transcription
      sendAudioChunk(event.data);
    };
    mediaRecorder.start(1000); // Capture every second
    mediaRecorderRef.current = mediaRecorder;
    setIsRecording(true);
  };

  const stopRecording = () => {
    mediaRecorderRef.current?.stop();
    setIsRecording(false);
  };

  return (
    <button onClick={isRecording ? stopRecording : startRecording}>
      {isRecording ? 'Stop Recording' : 'Start Answer'}
    </button>
  );
};
- Backend (Node.js + Express)
javascript
// Interview engine
class InterviewEngine {
  async conductInterview(userId, role, difficulty) {
    const session = await this.createSession(userId, role);
    const questions = await this.selectQuestions(role, difficulty);
    for (const question of questions) {
      await this.askQuestion(session, question);
      const answer = await this.captureAnswer(session);
      const evaluation = await this.evaluateAnswer(question, answer);
      await this.provideRealTimeFeedback(session, evaluation);
      await this.adaptDifficulty(session, evaluation.score);
    }
    return this.generateFinalReport(session);
  }

  async evaluateAnswer(question, answer) {
    // Call OpenAI API
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        {
          role: "system",
          content: this.getEvaluationPrompt(question.type, question.role)
        },
        {
          role: "user",
          content: `Question: ${question.text}\nAnswer: ${answer.transcript}`
        }
      ],
      temperature: 0.7,
      max_tokens: 500
    });
    return this.parseEvaluation(response.choices[0].message.content);
  }
}
- Database Schema (PostgreSQL)
sql
-- Core tables
CREATE TABLE interview_sessions (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES users(id),
  role VARCHAR(100),
  difficulty VARCHAR(20),
  started_at TIMESTAMP,
  completed_at TIMESTAMP,
  overall_score DECIMAL(3,1),
  metadata JSONB
);

CREATE TABLE interview_questions (
  id UUID PRIMARY KEY,
  category VARCHAR(50),
  role VARCHAR(100),
  difficulty VARCHAR(20),
  question_text TEXT,
  ideal_answer TEXT,
  evaluation_criteria JSONB,
  tags TEXT[]
);

CREATE TABLE interview_answers (
  id UUID PRIMARY KEY,
  session_id UUID REFERENCES interview_sessions(id),
  question_id UUID REFERENCES interview_questions(id),
  transcript TEXT,
  audio_url TEXT,
  duration_seconds INTEGER,
  score DECIMAL(3,1),
  evaluation JSONB,
  created_at TIMESTAMP
);

-- Vector embeddings for semantic search (pgvector extension)
CREATE TABLE question_embeddings (
  question_id UUID REFERENCES interview_questions(id),
  embedding vector(1536),
  PRIMARY KEY (question_id)
);

-- Indexes for performance
CREATE INDEX idx_questions_role ON interview_questions(role);
CREATE INDEX idx_sessions_user ON interview_sessions(user_id);
CREATE INDEX idx_embeddings_vector ON question_embeddings
  USING ivfflat (embedding vector_cosine_ops);
- AI Integration
python
# Evaluation service (Python microservice)
import json
import os
from typing import Dict, List

from openai import OpenAI

class AnswerEvaluator:
    def __init__(self):
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    def evaluate(self, question: str, answer: str, context: Dict) -> Dict:
        """
        Evaluate interview answer using GPT-4
        """
        prompt = self._build_prompt(question, answer, context)
        response = self.client.chat.completions.create(
            model="gpt-4-turbo-preview",
            messages=[
                {"role": "system", "content": prompt["system"]},
                {"role": "user", "content": prompt["user"]}
            ],
            temperature=0.3,  # Lower temperature for consistent evaluation
            response_format={"type": "json_object"}
        )
        evaluation = json.loads(response.choices[0].message.content)
        # Post-process and validate
        return self._validate_evaluation(evaluation)

    def _build_prompt(self, question, answer, context):
        system_prompt = f"""
        You are an expert interviewer for {context['role']} positions
        at {context['company_type']} companies. You have 15 years of
        hiring experience.
        Evaluate interview answers fairly but rigorously. Consider:
        - Candidate's experience level: {context['experience_level']}
        - Interview stage: {context['stage']}
        - Cultural fit for: {context['company_culture']}
        Be encouraging but honest. Provide actionable feedback.
        """
        user_prompt = f"""
        Question Asked: {question}
        Candidate's Answer: {answer}
        Evaluate this answer and return JSON with:
        {{
            "score": <1-10>,
            "strengths": [<list 2-3 specific strengths>],
            "improvements": [<list 1-2 key areas to improve>],
            "better_answer_example": "<show how to improve>",
            "confidence_level": "<low/medium/high>",
            "content_score": <1-10>,
            "communication_score": <1-10>,
            "relevance_score": <1-10>
        }}
        """
        return {"system": system_prompt, "user": user_prompt}
- Performance Optimization
javascript
// Caching layer for common questions
const redis = require('redis');
const client = redis.createClient();

async function getQuestionWithCache(questionId) {
  // Check cache first
  const cached = await client.get(`question:${questionId}`);
  if (cached) return JSON.parse(cached);

  // Fetch from database
  const question = await db.questions.findById(questionId);

  // Cache for 1 hour
  await client.setex(`question:${questionId}`, 3600, JSON.stringify(question));
  return question;
}

// Rate limiting to prevent API abuse
const rateLimit = require('express-rate-limit');

const interviewLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 5, // Max 5 interviews per hour for free users
  message: 'Interview limit reached. Upgrade to premium for unlimited access.',
  skip: (req) => req.user.isPremium // Skip rate limit for premium users
});

app.post('/api/interview/start', interviewLimiter, startInterview);
Key Metrics We Track
javascript
// Analytics events
const trackEvent = (event, properties) => {
  analytics.track({
    userId: properties.userId,
    event: event,
    properties: {
      ...properties,
      timestamp: new Date().toISOString()
    }
  });
};

// Interview funnel
trackEvent('interview_started', { role, difficulty });
trackEvent('question_answered', { questionId, score, duration });
trackEvent('interview_completed', { finalScore, questionsAnswered });
trackEvent('premium_upgraded', { plan, price, trigger: 'interview_limit' });

// Business metrics
const metrics = {
  // User engagement
  averageInterviewsPerUser: 4.7,
  completionRate: 73,
  averageSessionDuration: 28, // minutes

  // Performance
  userScoreImprovement: 31, // percentage (6.4 → 8.4)
  averageInitialScore: 6.4,
  averageAfter5Interviews: 8.4,

  // Business
  freeToPayConversion: 28, // percentage
  monthlyChurnRate: 4.8,
  averageRevenuePerUser: 299, // INR

  // Technical
  apiResponseTime: 420, // ms
  speechRecognitionAccuracy: 94.2, // percentage
  evaluationConsistency: 89 // percentage (same answer scored similarly)
};
The Human Element
Despite all the technology, the most important lesson was this:
Technology should enable human connection, not replace it.
Our AI isn't trying to replace human interviewers. It's trying to make humans better prepared for when they meet real interviewers.
One of our most touching success stories came from Rajesh, a 42-year-old developer from Pune who had been laid off after 15 years at the same company.
"I hadn't interviewed in 15 years. The job market had completely changed. I was terrified. Your AI let me practice 20 times before my first real interview. I got the job. But more importantly, I got my confidence back."
That's what this is really about. Not the technology. Not the AI. It's about giving people confidence.
Common Challenges We Face
Building this system wasn't just about writing code. We encountered challenges we never anticipated:
Challenge 1: Cultural Sensitivity
Indian job seekers face unique challenges:
Language diversity:
Users speak in English with Hindi words mixed in
South Indian accents differ from North Indian
Some think in regional languages, translate to English
Solution: We added multilingual support and accent-aware speech recognition. Now the AI understands when someone says "I did the kaam (work)" or uses Indian English phrases like "I'm having two years experience."
Cultural context:
Family considerations in career decisions
Different workplace hierarchies
Importance of job stability vs growth
Solution: We trained our evaluation model on Indian workplace scenarios. The AI now understands that answers like "I need to consider my family's opinion" are culturally appropriate, not indecisive.
Challenge 2: The "Coaching" Problem
Some users tried to game the system:
Memorizing "perfect" answers
Reading from scripts
Using the same answer for multiple questions
The issue: This defeats the purpose. Real interviews require authentic, adaptive responses.
Our solution:
javascript
// Detect scripted/memorized answers
function detectScriptedAnswer(answer, userHistory) {
  const checks = {
    // Check for exact repetition
    repetitionScore: checkRepetition(answer, userHistory.previousAnswers),
    // Check for unnatural perfection
    perfectionScore: analyzeNaturalness(answer),
    // Check for reading patterns (constant pace, no pauses)
    readingPattern: analyzeDeliveryPattern(answer.audioFeatures),
    // Check for copy-paste indicators (in text mode)
    pasteDetection: checkTypingPattern(answer.metadata)
  };

  if (checks.repetitionScore > 0.8 || checks.readingPattern === 'reading') {
    return {
      isScripted: true,
      message: "We noticed this answer seems memorized. Real interviews require adaptive thinking. Try responding naturally to the specific question."
    };
  }
  return { isScripted: false };
}
We also added randomized follow-up questions:
AI: "That's interesting. Can you tell me more about the specific technical challenge you mentioned?"
AI: "You said you led a team. How many people? What were their roles?"
AI: "Walk me through your exact thought process when you made that decision."
Result: Users can't rely on scripts. They have to think and respond authentically.
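Here's a sketch of how such follow-ups can be generated from the candidate's own words, assuming the OpenAI Node SDK (the prompt wording is illustrative):
javascript
// Sketch: generate a follow-up probe grounded in the candidate's answer,
// so memorized scripts can't survive.
async function generateFollowUp(openai, question, answerTranscript) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      {
        role: 'system',
        content:
          'You are an interviewer. Ask ONE short follow-up question that ' +
          'probes a specific detail the candidate just mentioned.'
      },
      { role: 'user', content: `Question: ${question}\nAnswer: ${answerTranscript}` }
    ],
    temperature: 0.9, // higher temperature keeps probes varied and unpredictable
    max_tokens: 60
  });
  return response.choices[0].message.content;
}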
Challenge 3: Mental Health Considerations
We noticed something unexpected in our data: some users were doing 10+ practice interviews per day.
Initial reaction: "Great engagement!"
Reality check: This was anxiety, not preparation.
One user wrote: "I've done 47 mock interviews. My score went from 6 to 9. But I'm still terrified. I can't stop practicing."
This was a problem we had created.
Our response:
- Built-in limits:
javascript
// Prevent over-practice
if (userInterviewsToday >= 5) {
  showMessage({
    type: 'gentle_warning',
    message: `You've done 5 interviews today. That's excellent preparation!
      Take a break and come back tomorrow. Over-practicing can
      increase anxiety rather than reduce it. You're ready. 🌟`
  });
  // Still allow access, but encourage restraint
  showContinueOption();
}
- Confidence tracking:
javascript
// Monitor confidence trends
const confidenceAnalysis = {
  practiceCount: 47,
  scoreProgress: '+32%',
  confidenceLevel: 'decreasing', // RED FLAG
  recommendation: 'speak_with_mentor'
};

if (confidenceAnalysis.confidenceLevel === 'decreasing') {
  recommendHumanSupport();
}
- Human support option:
"Your scores are great! But we notice you're still anxious.
Would you like to connect with a career mentor?
Sometimes talking to a human helps more than more practice."
We partnered with career counselors to offer:
Free 15-minute confidence calls
Mentor matching
Community support groups
Learning: Technology can prepare you technically, but it can't always address the emotional side. We needed humans for that.
Unexpected Use Cases
Users found creative ways to use our AI that we never imagined:
Use Case 1: English Speaking Practice
Many users weren't even applying for jobs. They were using our AI to practice English:
"I'm comfortable with technical concepts, but my English is weak. Your AI helps me practice speaking without judgment."
We leaned into this:
Added "English Conversation Practice" mode
Slower speech from AI for language learners
Vocabulary building features
Grammar correction in feedback
Use Case 2: Sales Pitch Practice
Salespeople started using it to practice their pitches:
"I treat it like I'm pitching to a client. The feedback on clarity and confidence helps me refine my pitch."
We built a feature for this:
"Sales Pitch Mode"
Customer objection handling
Persuasion technique analysis
Closing strategies
Use Case 3: Public Speaking Training
College students preparing for presentations:
"I have a thesis defense next week. Your AI helps me practice answering tough questions."
New mode unlocked:
"Presentation Q&A Practice"
Aggressive questioning mode
Defense of arguments
Handling criticism
Use Case 4: Dating Practice (Yes, Really)
Someone wrote: "I used this to practice talking to people I'm attracted to. It helped my social anxiety."
We didn't build a dating mode (that would be weird), but it showed us:
The core value wasn't job interviews. It was building confidence in high-pressure conversations.
The Business Model Evolution
We initially planned: Free tier → Premium upsell.
Reality taught us to add more layers:
Current Pricing Tiers:
Free Forever:
2 mock interviews/month
Basic feedback
1,000+ questions
Perfect for casual users
Premium (₹299/month):
Unlimited interviews
Advanced feedback
Video analysis
Performance tracking
Company-specific prep
28% of free users convert here
Pro (₹499/month):
Everything in Premium
1-on-1 human mentor session/month
Resume review
LinkedIn profile optimization
Career roadmap
15% of Premium users upgrade
Enterprise (Custom pricing):
For companies training employees
Custom question banks
Bulk licenses
Analytics dashboard
Integration with HRMS
12 companies signed in first 2 months
Additional Revenue Streams:
B2B Partnerships:
Coaching institutes license our AI (₹50K-2L/month)
Universities use it for placement prep (₹1L-5L/year contracts)
HR consultancies white-label our tool
Affiliate Revenue:
Recommend interview coaching courses
Resume writing services
Career counseling
Professional photography for LinkedIn
Data Insights (Anonymous):
Publish "State of Interviews 2025" report
Industry benchmarking for recruiters
Skill gap analysis for government
Technical Costs & Infrastructure
Monthly Infrastructure Costs (at 1,200 users):
OpenAI API (GPT-4): ₹45,000
- Average: ~₹13 per interview (several GPT-4 calls per session)
- 3,500 interviews/month
Google Cloud Speech-to-Text: ₹12,000
- ₹3.5 per interview (voice mode)
Database (PostgreSQL on AWS RDS): ₹8,000
- db.t3.medium instance
- 100GB storage
Application Hosting: ₹6,000
- 2 EC2 t3.small instances
- Load balancer
CDN & Storage (for audio/video): ₹5,000
- CloudFront + S3
- 500GB audio storage
Monitoring & Analytics: ₹3,000
- DataDog
- Mixpanel
Total: ₹79,000/month
Revenue: ₹2,89,000/month
Gross Margin: 73%
Cost Optimization Strategies:
- Caching:
javascript
// Cache GPT-4 evaluations for similar answers
const answerHash = generateHash(answer);
const cachedRaw = await redis.get(`eval:${questionId}:${answerHash}`);
if (cachedRaw) {
  const cached = JSON.parse(cachedRaw);
  if (similarity(answer, cached.originalAnswer) > 0.85) {
    return cached.evaluation; // Save API call
  }
}
Savings: ~₹8,000/month
- Batch Processing:
javascript
// Process multiple evaluations in one API call
const batchEvaluations = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: "Evaluate these 5 interview answers..."
    }
  ]
});
Savings: ~₹5,000/month
- Tiered API Usage:
GPT-4 for paying users (highest quality)
GPT-3.5-turbo for free users (70% cheaper, 90% quality)
Rule-based evaluation for simple questions (free)
Savings: ~₹12,000/month
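The routing itself is one small function. A sketch, where ruleBasedEvaluation and llmEvaluation are hypothetical wrappers around the evaluators described above:
javascript
// Sketch: route each evaluation to the cheapest adequate evaluator.
function chooseEvaluator(user, question) {
  if (question.type === 'simple') {
    return { kind: 'rules' };                      // free: keyword/structure checks
  }
  if (user.isPremium) {
    return { kind: 'llm', model: 'gpt-4' };        // highest quality for paying users
  }
  return { kind: 'llm', model: 'gpt-3.5-turbo' };  // cheaper tier for free users
}

async function evaluate(user, question, answer) {
  const route = chooseEvaluator(user, question);
  if (route.kind === 'rules') return ruleBasedEvaluation(question, answer);
  return llmEvaluation(route.model, question, answer);
}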
Metrics That Matter
After 3 months of operation, here's what we track religiously:
North Star Metric: User Success Rate
Definition: % of users who report getting a job offer
within 60 days of using our platform
Current: 23%
Goal: 35%
This matters more than revenue. If users succeed, everything else follows.
Supporting Metrics:
Engagement:
Average interviews per user: 4.7
Completion rate: 73%
Week 1 retention: 45%
Month 1 retention: 28%
Learning:
Average score improvement: +2.0 points (6.4 → 8.4)
Time to improvement: 3.2 interviews
Success correlation: Users who practice 5+ times
are 3.2x more likely to get offers
Business Health:
Free → Premium conversion: 28%
Monthly churn: 4.8%
LTV:CAC ratio: 3.2:1
Payback period: 4.2 months
Product Quality:
AI evaluation accuracy: 89% (vs human evaluators)
Speech recognition accuracy: 94%
Average response time: 420ms
Uptime: 99.7%
What's Working (Doubling Down)
- Company-Specific Interview Modes
Our fastest-growing feature. Users love:
"Google-style" system design questions
Amazon's leadership principles questions
Startup culture-fit scenarios
Action: Expanding to 100 companies by Q2 2025.
- Video Interview Analysis
Only 15% of users try it, but:
67% of those users convert to Premium
They practice 2x more
Score improvement is 40% higher
Action: Making video mode more prominent, adding AR features (virtual eye contact correction).
- Real-Time Hints During Practice
When users get stuck, gentle AI prompts help:
"Remember to use the STAR method..."
"Can you add a specific example?"
"Try to be more concise - you're at 3 minutes already"
Users love this. Completion rate jumped from 68% → 73%.
Action: Expanding hints library, making them more contextual.
What's Not Working (Fixing or Killing)
- Group Practice Mode
We built peer-to-peer practice matching.
Result:
Only 8% adoption
Scheduling is a nightmare
Quality is inconsistent
Decision: Shelving this feature. AI interviews are superior for convenience.
- Interview Recording Upload & Analysis
Users can upload recordings of real interviews for AI analysis.
Problem:
Legal/ethical concerns (recording others without permission)
Audio quality is often terrible
Hard to evaluate without context
Decision: Pivoting to "post-interview reflection" mode instead - users answer questions about how it went, AI provides structured feedback.
- Gamification
We added badges, streaks, leaderboards.
Result:
Users found it juvenile
"This is stressful, not a game"
No impact on retention
Decision: Removing gamification. This isn't Duolingo. Job hunting is serious.
The Competitor Landscape
When we launched, we thought we were unique.
We were wrong.
Within 2 months, we saw:
Direct Competitors:
Final Round AI (US-based, $3M funding)
Interviewing.io (Established player)
Pramp (Peer practice model)
Indirect Competitors:
ChatGPT (users create their own mock interviews)
YouTube (free but passive)
Human coaches (expensive but personalized)
Our Differentiation Strategy:
- India-First
Indian accents understood
Indian companies in database
Indian workplace culture
Pricing for Indian market (₹299 vs $30 elsewhere)
- AI + Human Hybrid
AI for practice
Humans for emotional support
Best of both worlds
- Full Job Search Platform
Not just interviews
Resume optimization
Job matching
Application tracking
End-to-end solution
- Continuous Learning
AI improves with every interview
Personalized question selection
Adaptive difficulty
User Stories That Keep Us Going
Meera (24, Bangalore): "I'm an introvert. Talking to people drains me. But I had to nail this interview for my dream job at Flipkart. I practiced with your AI 12 times. I got the offer. For the first time, being introverted wasn't a disadvantage."
Arjun (31, Mumbai): "I was laid off during the pandemic. Spent a year unemployed. Lost all confidence. Your AI never judged me. It just helped me get better. 8 months later, I'm a senior engineer at Razorpay."
Kavya (27, Hyderabad): "As a woman in tech, I face different questions in interviews - about marriage plans, family, career gaps. Your AI prepared me for tough questions professionally. I learned to redirect inappropriate questions without being rude."
Amit (22, Fresher): "I'm from a tier-3 college. No one taught us how to interview. Your AI was my only coach. I got placed at TCS with a 7 LPA package. My family cried with happiness."
These aren't just testimonials. These are lives changed.
That's why we do this.
The Road Ahead: 2025 Vision
Q1 2025: Depth
Expanding Interview Scenarios:
Panel interviews (multiple AI interviewers)
Case study interviews (consulting)
Technical whiteboarding (with code evaluation)
Take-home assignment feedback
Better Personalization:
AI learns your weakness patterns
Recommends specific practice areas
Creates custom practice plans
Tracks improvement scientifically
Q2 2025: Breadth
Beyond Job Interviews:
College admission interviews
Scholarship interviews
Visa interviews
Promotion discussions
Salary negotiations
B2B Expansion:
Sell to colleges (placement training)
Sell to corporates (internal mobility)
Sell to coaching institutes (white-label)
Q3 2025: Intelligence
Next-Gen AI:
GPT-5 integration (when available)
Emotion detection (nervous? confident?)
Body language analysis (video mode)
Real interview recording analysis (with consent)
Predictive success scoring
Community Features:
Connect with others targeting same companies
Share preparation strategies
Form study groups
Mentor matching
Q4 2025: Automation
Full Job Search Automation:
AI applies to jobs for you
AI schedules interviews
AI prepares you specifically for each interview
AI handles follow-ups
AI negotiates offers (with your approval)
Dream: You tell us what job you want. We get you there.
Key Lessons for Other Builders
If you're building something similar, here's what I'd tell you:
- Start With a Painful Problem
Don't build "cool technology." Build solutions to real pain.
Bad: "Let's use GPT-4 for something"
Good: "People fail interviews despite being qualified. How can AI fix this?"
- Talk to Users Before Writing Code
We spent Week 1 interviewing 50 users. Best decision ever.
They told us:
What questions stress them most
What feedback they wish they got
How they currently prepare
What would make them pay
This shaped our entire product.
- MVP Doesn't Mean "Bad"
Our MVP had:
20 questions (not 5,000)
Basic feedback (not comprehensive)
Text-only (no voice)
One role (software engineer)
But it worked. It solved the core problem.
We added features based on user feedback, not our assumptions.
- Pricing Is Part of Product
We tested 5 pricing strategies:
❌ ₹99/month: Too cheap, users didn't value it
❌ ₹999/month: Too expensive, no conversions
❌ Pay-per-interview: Users hesitated to practice
❌ One-time payment: No recurring revenue
✅ ₹299/month: Sweet spot - affordable yet valuable
- AI Isn't Magical—Prompts Are Everything
We went through 47+ iterations of our evaluation prompt.
Small changes = massive differences:
Prompt A: "Score this answer 1-10"
Result: Inconsistent, harsh, unhelpful
Prompt B: "You're an experienced hiring manager. Evaluate this answer considering the candidate is a [role] with [experience]. Provide a score and specific feedback."
Result: Consistent, constructive, valuable
Spend time on prompt engineering. It's 80% of the work.
- Free Tier Is Essential, But Not Too Generous
Our free tier evolution:
V1: 5 interviews/month → 22% conversion
V2: 3 interviews/month → 26% conversion
V3: 2 interviews/month → 28% conversion
Finding: Users need to feel the limitation to appreciate premium.
- Build in Public
We shared our journey on:
Twitter (daily updates)
LinkedIn (weekly lessons)
Medium (monthly deep dives)
This blog post!
Results:
2,000+ email subscribers before launch
50+ beta testers from social media
3 angel investors reached out
Media coverage in YourStory, Inc42
Transparency builds trust.
- Technology Is Only 30% of Success
The real work is:
Understanding users (25%)
Marketing & distribution (25%)
Customer support (10%)
Iteration based on feedback (10%)
Technology (30%)
Most founders overindex on tech. Don't.
Open Challenges We're Still Solving
Challenge 1: Evaluation Consistency
Same answer, different evaluations.
Example:
Day 1: "Tell me about yourself" → Score: 7.5
Day 3: Same exact answer → Score: 8.2
Why: AI models have inherent randomness (temperature parameter).
Working on:
Lower temperature settings
Multiple evaluation passes (average 3 scores)
Human spot-checking
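The multi-pass idea is simple to sketch: score the same answer several times at low temperature and report the median, which damps outlier evaluations. Here evaluateOnce stands in for the GPT-4 call shown earlier:
javascript
// Sketch: reduce variance by scoring the same answer N times and
// taking the median score.
async function evaluateWithConsensus(question, answer, passes = 3) {
  const evaluations = await Promise.all(
    Array.from({ length: passes }, () => evaluateOnce(question, answer))
  );
  const scores = evaluations.map((e) => e.score).sort((a, b) => a - b);
  const median = scores[Math.floor(scores.length / 2)];
  // Keep one pass's qualitative feedback, report the consensus score
  return { ...evaluations[0], score: median, allScores: scores };
}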
Challenge 2: Cultural Bias in AI
GPT-4 was trained primarily on Western data.
Problem:
Values direct communication over respectful hierarchy
Favors individualism over collectivism
Doesn't understand Indian workplace norms
Example:
User: "I consulted my manager before making the decision"
GPT-4: "Shows lack of initiative and independence" ❌
Reality: "Shows respect for hierarchy and collaboration" ✅
Working on:
Fine-tuning on Indian interview data
Cultural context in prompts
Custom evaluation models
Challenge 3: Handling Sensitive Topics
Users sometimes share:
Discrimination experiences
Mental health struggles
Family pressures
Financial stress
Our AI isn't equipped to handle these properly.
Solution:
Trigger warnings for sensitive topics
Immediate connection to human counselors
Partnership with mental health professionals
The Team Behind This
This couldn't have been built by one person. Our tiny but mighty team:
Rohit (That's me) - Founder & CEO
Previously: Software engineer at Flipkart
Role: Product vision, fundraising, user research
Fun fact: Did 100+ mock interviews myself to test the product
Priya - CTO
Previously: ML engineer at Amazon
Role: AI architecture, prompt engineering, technical decisions
Fun fact: She improved our evaluation accuracy from 67% to 89%
Karthik - Full-Stack Engineer
Previously: Freelancer
Role: Frontend, backend, everything in between
Fun fact: Built the entire first version in 12 days
Anjali - Product Designer
Previously: Designer at Zomato
Role: UX/UI, user flows, design system
Fun fact: Her empathy interviews revealed our biggest insights
Special Thanks:
50 beta testers who gave brutally honest feedback
1,200 users who believed in us
My co-founder who said "You're crazy, but let's do it"
The Indian startup ecosystem for inspiration
Try It Yourself (Final CTA)
If you've read this far, you're probably:
Building something with AI
Preparing for interviews
Interested in startup journeys
All of the above
For Builders: I hope our story helps you avoid our mistakes and accelerate your journey. Feel free to reach out—I'm happy to help.
For Job Seekers: Stop reading. Start practicing.
Your next interview could be the one that changes your life. But only if you're prepared.
🚀 Try your first free AI mock interview: karmsakha.com/mock-interview
What to expect:
15-minute realistic interview
Instant detailed feedback
Specific improvement suggestions
Confidence boost
No credit card. No spam. Just practice.
Let's Connect
Want to follow our journey?
📧 Email: yamankhetan@karmsakha.com
🐦 Twitter: @karmsakha
💼 LinkedIn: KarmSakha
📝 Medium: @karmsakha
💻 Dev.to: @karmsakha
Questions about:
Technical implementation?
AI prompt engineering?
Startup challenges?
How we got our first users?
Drop a comment below or DM me. I respond to everyone.
One Final Thought
We built an AI interview system in 30 days.
But the real product isn't the technology.
It's confidence.
Every user who goes from "I'm terrified" to "I'm ready"—that's the product.
Every person who gets their dream job after practicing with us—that's the product.
Every career transformed because someone finally had a safe space to practice—that's the product.
The AI is just the vehicle.
The destination is a world where everyone has equal access to interview preparation, regardless of their college, connections, or financial background.
That's what we're building.
That's why we'll keep building.
If this post helped you, please share it with someone who's preparing for interviews. You might just change their career trajectory.
#BuildingInPublic #AIStartup #InterviewPrep #IndianStartup #TechForGood