The "Aha!" Moment
Picture this: You're running a SaaS company. A customer emails you in Spanish, clearly frustrated. Your support team scrambles to:
- Copy-paste into Google Translate
- Read the translation
- Guess if they're angry or just asking a question
- Craft a response
- Translate it back to Spanish
- Hope it sounds natural
This takes 10-15 minutes per ticket.
What if an API could do all of this in 2 seconds?
That's what I built. And this is the story of my trials and tribulations.
What I Actually Built
Y TU API does three things that no other translation API does together:
Translation (The Easy Part)
- 195+ languages
- Powered by Google Translate
- ~500ms response time
- 95%+ confidence scores
Tone Analysis (The Hard Part)
Using OpenAI GPT-4, it detects:
- Emotion: positive, negative, neutral, mixed
- Urgency: low, medium, high
- Intent: question, complaint, request, feedback
- Cultural context: Western direct vs Asian indirect communication styles
- Formality score: 1-10 scale
Response Generation (The Magic Part)
Generates contextually-appropriate responses in 7 different tones:
- Professional, casual, friendly, formal
- Empathetic, concise, creative
All wrapped in a RESTful API, live on RapidAPI.
The Tech Stack (And Why I Chose Each Piece)
Backend: Node.js + Express + TypeScript
Database: MongoDB Atlas
AI: OpenAI GPT-4o-mini
Translation: Google Translate API
Deployment: Vercel (serverless)
Auth: Custom API key system
Marketplace: RapidAPI
Why Serverless?
I wanted zero devops. No servers to manage, no scaling headaches, no 3am wake-up calls.
The dream: Write code, git push, done.
The reality: Vercel's free tier has a 10-second execution limit.
Spoiler: This nearly killed the project.
Day 1: "This Will Be Easy"
Narrator: It was not easy.
Hour 1-3: Project Setup
Started with the typical setup dance:
```bash
npm init -y
npm install express typescript mongoose
# ... 47 more packages
```
Created the basic structure:
```
src/
├── routes/       # API endpoints
├── controllers/  # Business logic
├── middleware/   # Auth, rate limiting
├── models/       # MongoDB schemas
├── services/     # External API calls
└── utils/        # Helpers
```
Lesson 1: TypeScript setup takes longer than you think. Always does.
Hour 4-8: Building the Core Translation Endpoint
The translation endpoint seemed straightforward:
```typescript
// What I thought it would be:
const translated = await googleTranslate(text, targetLang);
return { translatedText: translated };
```

```typescript
// What it actually became:
async translateText(req: Request, res: Response) {
  // Input validation
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors });
  }

  const { text, targetLanguage, sourceLanguage } = req.body;

  // Rate limiting check
  const rateLimit = await checkRateLimit(req.user);
  if (rateLimit.exceeded) {
    return res.status(429).json({ error: 'Rate limit exceeded' });
  }

  // Language code validation
  if (!isValidLanguageCode(targetLanguage)) {
    return res.status(400).json({ error: 'Invalid language' });
  }

  // Actual translation
  try {
    const result = await googleTranslateService.translate(
      text,
      targetLanguage,
      sourceLanguage
    );

    // Usage tracking
    await trackUsage(req.user, 'translation');

    // Return with metadata
    return sendSuccess(res, {
      originalText: text,
      translatedText: result.translatedText,
      confidence: result.confidence,
      sourceLanguage: result.detectedSourceLanguage,
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    logger.error('Translation failed', error);
    return res.status(500).json({ error: 'Translation failed' });
  }
}
```
Lesson 2: The 80/20 rule is real. 20% of the code is the feature, 80% is error handling, validation, and logging.
Hour 9-12: The MongoDB Nightmare
"Let's add a database!" - Famous last words
MongoDB in serverless environments is... special.
The Problem:
```typescript
// This works locally
await mongoose.connect(MONGODB_URI);
const apiKey = await ApiKeyModel.findOne({ key });

// In production on Vercel:
// Error: Operation `api_keys.find()` buffering timed out after 10000ms
```
Mongoose tries to be helpful by "buffering" operations until connected. In serverless, this means every request tries to connect, queries buffer, timeouts happen, chaos ensues.
The Solution:
```typescript
// Call connect() explicitly BEFORE every database operation
export class Database {
  async connect() {
    // Skip if already connected
    if (mongoose.connection.readyState === 1) {
      return;
    }
    if (mongoose.connection.readyState === 2) {
      // Another caller is mid-connect: wait for it
      await new Promise(resolve => {
        mongoose.connection.once('connected', resolve);
      });
      return;
    }
    // Actually connect
    await mongoose.connect(process.env.MONGODB_URI, {
      serverSelectionTimeoutMS: 5000,
      socketTimeoutMS: 45000,
    });
  }
}

// Then before EVERY query:
await database.connect();
const apiKey = await ApiKeyModel.findOne({ key });
```
Lesson 3: Serverless means "stateless." Treat every request like it's the first request ever.
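The `readyState` checks above boil down to one idea: share a single in-flight connection promise per warm container. A generic sketch of that pattern (illustrative only, not the post's actual code; `connect` would be something like `() => mongoose.connect(uri)` in practice):

```typescript
// Wraps an async connect function so concurrent callers in the same warm
// serverless container share one connection attempt instead of racing.
function sharedConnection<T>(connect: () => Promise<T>): () => Promise<T> {
  let inflight: Promise<T> | null = null;
  return () => {
    if (!inflight) {
      // First caller kicks off the connection; reset on failure
      // so a later invocation can retry instead of caching the error.
      inflight = connect().catch((err) => {
        inflight = null;
        throw err;
      });
    }
    return inflight; // Everyone else awaits the same promise
  };
}
```

Usage would look like `const ensureDb = sharedConnection(() => mongoose.connect(uri));` followed by `await ensureDb();` before every query.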
Day 2: "AI Will Save Me"
Narrator: AI tried to kill the project.
Hour 13-18: Adding OpenAI for Tone Analysis
The tone analysis endpoint was supposed to be the "cool" feature. And it was!
Until I checked the response times.
First attempt:
```typescript
const analysis = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: "You are an expert at analyzing human communication..."
    },
    {
      role: "user",
      content: `Analyze this: "${text}"`
    }
  ]
});
```
Response time: 25-30 seconds
Vercel timeout: 10 seconds
My reaction: panic.
The Fix:
Switched to gpt-4o-mini:
- Response time: ~5 seconds (acceptable!)
- Cost: 60% cheaper
- Quality: Actually better for this use case (less verbose)
```typescript
const analysis = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [...],
  temperature: 0.3, // Lower = more consistent
  max_tokens: 500   // Limit response length
});
```
Lesson 4: The latest/greatest model isn't always the right model. Sometimes smaller and faster wins.
Hour 19-24: The Response Generation Feature
By this point, I was in deep. Too deep to quit.
Response generation took everything I learned and made it harder:
Requirements:
- Generate multiple response options
- Score them by quality
- Return the best one + alternatives
- Do it in under 10 seconds
- Support 7 different tones
My approach:
```typescript
async generateResponse(originalText: string, tone: string, context: string) {
  // Generate 5 candidate responses at different temperatures
  const candidates = await Promise.all([
    this.generateSingleResponse(originalText, tone, context, 0.7),
    this.generateSingleResponse(originalText, tone, context, 0.8),
    this.generateSingleResponse(originalText, tone, context, 0.6),
    this.generateSingleResponse(originalText, tone, context, 0.9),
    this.generateSingleResponse(originalText, tone, context, 0.5),
  ]);

  // Score each candidate
  const scored = candidates.map(c => ({
    text: c,
    score: this.scoreResponse(c, originalText, tone)
  }));

  // Sort by score, best first
  scored.sort((a, b) => b.score - a.score);

  return {
    response: scored[0].text,
    score: scored[0].score,
    alternatives: scored.slice(1, 3) // The two runners-up (top 3 overall)
  };
}
```
Response time: 12-16 seconds
Vercel's response: TIMEOUT
My response: despair.
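The snippet above leans on `scoreResponse` but never shows it. A plausible heuristic sketch — my guess at the shape of such a scorer, not the author's actual implementation; the marker lists are made up for illustration:

```typescript
// Hypothetical candidate scorer: rewards length proportional to the input
// and tone-appropriate phrasing, penalizes empty output.
const TONE_MARKERS: Record<string, string[]> = {
  professional: ["regarding", "please", "we will"],
  casual: ["hey", "no worries", "sure"],
  empathetic: ["understand", "sorry", "appreciate"],
};

function scoreResponse(candidate: string, original: string, tone: string): number {
  let score = 0;
  // Prefer answers in a reasonable length band relative to the input.
  const ratio = candidate.length / Math.max(original.length, 1);
  if (ratio >= 0.5 && ratio <= 4) score += 2;
  // Reward tone-appropriate phrasing.
  const markers = TONE_MARKERS[tone] ?? [];
  const lower = candidate.toLowerCase();
  score += markers.filter((m) => lower.includes(m)).length;
  // Penalize empty or whitespace-only output hard.
  if (candidate.trim().length === 0) score -= 5;
  return score;
}
```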
The "Process" Endpoint That Wasn't
I had this grand vision: One endpoint to rule them all!
```
POST /process
{
  "text": "¡Estoy muy enojado!",
  "targetLanguage": "en",
  "options": {
    "translate": true,
    "analyzeTone": true,
    "generateResponse": true
  }
}
```
Timeline:
- Translation: ~0.5s
- Tone analysis: ~5s
- Response generation: ~15s
- Total: ~20 seconds
Vercel free tier limit: 10 seconds
Math doesn't lie.
The Decision:
Ship it anyway, but document the limitation clearly:
Note: The /process endpoint may timeout when all three operations are enabled on Vercel's free tier. Recommended approach: Call /translate, /analyze-tone, and /generate-response separately for full control.
Lesson 5: Perfect is the enemy of shipped. Document limitations and move on.
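The documented workaround (calling the three endpoints separately) can be sketched as a small client-side pipeline. Paths, headers, and response shapes below follow the examples in this post, but treat them as assumptions; `fetchFn` is injected so the sketch stays testable without a network:

```typescript
// Minimal injected-fetch shape so the pipeline can be exercised offline.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ json(): Promise<any> }>;

// Chain /translate -> /analyze-tone -> /generate-response client-side.
// Each call stays well under the 10-second serverless limit on its own.
async function processSequentially(
  fetchFn: FetchLike,
  baseUrl: string,
  apiKey: string,
  text: string,
  targetLanguage: string
) {
  const call = async (path: string, body: object) => {
    const res = await fetchFn(`${baseUrl}${path}`, {
      method: "POST",
      headers: { "X-API-Key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    return res.json();
  };

  const translation = await call("/translate", { text, targetLanguage });
  const tone = await call("/analyze-tone", {
    text: translation.data.translatedText,
    language: targetLanguage,
  });
  const reply = await call("/generate-response", {
    originalText: translation.data.translatedText,
    tone: "professional", // hardcoded here for brevity
  });
  return { translation, tone, reply };
}
```

In real use you would pass the global `fetch` as `fetchFn`.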
Day 3: Authentication Difficulties
Hour 25-30: Building the API Key System
Needed a system that could:
- Generate secure API keys
- Track usage per key
- Enforce rate limits
- Support multiple tiers (FREE, PRO, ULTRA, MEGA)
- Work with RapidAPI's proxy
The key generation:
```typescript
function generateApiKey(): string {
  const prefix = 'ytu_'; // "Y Tu" branding
  const randomBytes = crypto.randomBytes(32);
  const base64 = randomBytes.toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=/g, '');
  return `${prefix}${base64}`;
}

// Example: ytu_u-6_r9eYPTsWMKf_SHBtHJUuY6pvorELgIimfJxCB_M
```
The rate limiter:
```typescript
async checkRateLimit(apiKey: string): Promise<RateLimitResult> {
  const key = await ApiKeyModel.findOne({ key: apiKey });
  if (!key) {
    throw new Error('Unknown API key'); // findOne returns null for bad keys
  }

  // Get today's usage
  const today = new Date().toISOString().split('T')[0];
  const usage = await UsageModel.countDocuments({
    apiKey: key._id,
    date: today
  });

  // Check against tier limit
  const limits = {
    FREE: 100,
    PRO: 10000,
    ULTRA: 20000,
    MEGA: 100000
  };
  const limit = limits[key.tier];
  const remaining = Math.max(0, limit - usage);

  return {
    limit,
    remaining,
    exceeded: remaining === 0
  };
}
```
Lesson 6: Rate limiting is not optional. Users will abuse any API without limits.
Day 4-5: The RapidAPI Integration Saga
Where everything that could go wrong, did
Attempt 1: Testing in RapidAPI's Interface
RapidAPI has a built-in testing interface. Should be simple, right?
What I expected:
- Upload OpenAPI spec
- Test endpoints
- Ship it
What actually happened:
- Uploaded OpenAPI spec ✅
- Testing interface doesn't work ❌
- Error: "Invalid API key" ❌
- Error: "X-RapidAPI-Host missing" ❌
- Headers not populating ❌
- Created test app ✅
- Still doesn't work ❌
After 6 hours of debugging, I realized: RapidAPI's testing UI is just buggy.
The Solution:
Test the actual gateway URL directly:
```powershell
$headers = @{
  "X-RapidAPI-Key"  = "your-rapidapi-key"
  "X-RapidAPI-Host" = "y-tu-api.p.rapidapi.com"
  "Content-Type"    = "application/json"
}

Invoke-RestMethod -Uri "https://y-tu-api.p.rapidapi.com/translate" `
  -Method Post `
  -Headers $headers `
  -Body '{"text":"Hello","targetLanguage":"es"}'
```
Result:
```json
{
  "success": true,
  "data": {
    "translatedText": "Hola",
    "confidence": 0.95
  }
}
```
It worked!
Lesson 7: When the UI is broken, go around it. Test with curl/PowerShell/Postman.
The Pricing Dilemma
How do you price something you built in a weekend?
My thought process:
Too cheap:
- Attracts tire-kickers
- Unsustainable if it actually works
- Signals low quality
Too expensive:
- Nobody will try it
- No feedback loop
- Wasted effort
The tiers I chose:
| Tier | Price | Requests/Day | Logic |
|---|---|---|---|
| FREE | $0 | 100 | Hook them with value |
| PRO | $49/mo | 10,000 | Sweet spot for small businesses |
| ULTRA | $99/mo | 20,000 | For growing companies |
| MEGA | $149/mo | 100,000 | Enterprise-lite |
Why these numbers?
- FREE: Big enough to be useful (100/day = 3,000/month), small enough to make them want more
- PRO at $49: Psychologically under $50, covers all my costs at scale
- ULTRA at $99: Double the capacity at just over double the price (slightly worse per-request value than PRO, which nudges serious users toward MEGA)
- MEGA at $149: Best value per request ($0.05 per 1K vs $0.16 for PRO)
Lesson 8: Pricing is psychology, not math. Make the tier you want people to choose the obvious "best value."
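The per-request claims above are easy to sanity-check. This helper mirrors the post's own tier numbers, under the assumption that the daily quota is fully used over a 30-day month:

```typescript
// Effective price per 1,000 requests for each paid tier,
// assuming the daily quota is maxed out over a 30-day month.
const TIERS = {
  PRO:   { price: 49,  perDay: 10_000 },
  ULTRA: { price: 99,  perDay: 20_000 },
  MEGA:  { price: 149, perDay: 100_000 },
} as const;

function pricePer1k(tier: keyof typeof TIERS): number {
  const { price, perDay } = TIERS[tier];
  return (price / (perDay * 30)) * 1000;
}
```

Running the numbers: PRO ≈ $0.163 per 1K, ULTRA ≈ $0.165 (slightly worse than PRO), MEGA ≈ $0.050 — consistent with the "$0.05 vs $0.16" comparison above, and with ULTRA deliberately being the tier nobody should pick on value alone.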
The Documentation Marathon
At this point, I had a working API. But would anyone actually use it?
The README Nobody Reads (But Everyone Judges)
Spent 4 hours writing a comprehensive README:
- ✅ What the API does (in plain English)
- ✅ Real-world use cases with examples
- ✅ Code snippets in multiple languages
- ✅ Step-by-step integration guides
- ✅ Troubleshooting section
- ✅ ROI calculator (show them the money!)
- ✅ Pricing comparison table
- ✅ Architecture diagrams
The secret: Write docs as if explaining to your non-technical friend. If they get it, developers will too.
The OpenAPI Spec Issue
OpenAPI specs are supposed to be easy with tools like Swagger.
Reality: Manually writing 500+ lines of YAML because my endpoints were too custom.
The pain points:
1. Tone validation mismatch:
   - OpenAPI said: `empathetic, concise`
   - Code accepted: `professional, casual, friendly, formal, creative`
   - Result: Users got validation errors
   - Fix: Manually synced everything
2. Security schemes:
   - Needed to support both `X-API-Key` and RapidAPI's headers
   - Required two separate security schemes
   - Took 3 attempts to get right
3. Example responses:
   - Had to manually craft realistic examples
   - Spent 2 hours making them look professional
Lesson 9: Your docs are your marketing. Invest the time.
Production Testing: When Reality Hits
Test 1: Direct API (✅ Perfect)

```bash
curl -X POST https://y-tu-api.vercel.app/api/v1/translate \
  -H "X-API-Key: ytu_..." \
  -d '{"text":"Hello","targetLanguage":"es"}'

# Response: 500ms, perfect!
```
Test 2: Tone Analysis (✅ Good Enough)

```bash
curl -X POST https://y-tu-api.vercel.app/api/v1/analyze-tone \
  -H "X-API-Key: ytu_..." \
  -d '{"text":"I am furious!","language":"en"}'

# Response: 5 seconds
# Result: Emotion: "negative", Urgency: "high" ✅
```
Test 3: Response Generation (⚠️ Close Call)

```bash
curl -X POST https://y-tu-api.vercel.app/api/v1/generate-response \
  -H "X-API-Key: ytu_..." \
  -d '{"originalText":"Where is my order?","tone":"professional"}'

# Response: 15 seconds
# Result: Generated 3 alternatives, scored properly ✅
# BUT: Vercel threw warnings about approaching timeout
```
Test 4: The Full Pipeline (❌ Timeout)

```bash
curl -X POST https://y-tu-api.vercel.app/api/v1/process \
  -H "X-API-Key: ytu_..." \
  -d '{
    "text":"¡Estoy enojado!",
    "targetLanguage":"en",
    "options":{"translate":true,"analyzeTone":true,"generateResponse":true}
  }'

# Response: ERROR - Function execution timeout
```
Decision time: Ship it anyway with documented limitations, or delay launch to fix?
I shipped it. Here's why:
- 3 out of 4 endpoints work perfectly
- Users can chain the working endpoints
- I can fix it later with Vercel Pro ($20/mo)
- Better to have real users than perfect features
Lesson 10: Ship the 80% solution. Perfect is a myth.
Launch Day: Making It Public
The Pre-Launch Checklist
- ✅ Remove test API key from Gateway settings
- ✅ Verify all endpoints work
- ✅ Double-check pricing is correct
- ✅ README is comprehensive
- ✅ OpenAPI spec matches actual code
- ✅ Rate limiting tested
- ✅ Error messages are helpful
- ✅ Monitoring set up (basic)
One last test through RapidAPI's actual gateway:
```powershell
# Created RapidAPI app to get real API key
# Subscribed to my own API (FREE tier)
# Tested through gateway:
Invoke-RestMethod -Uri "https://y-tu-api.p.rapidapi.com/translate" `
  -Method Post `
  -Headers @{
    "X-RapidAPI-Key"  = "my-rapidapi-key"
    "X-RapidAPI-Host" = "y-tu-api.p.rapidapi.com"
  } `
  -Body '{"text":"Final test!","targetLanguage":"es"}'

# ✅ SUCCESS!
# Response: "¡Prueba final!"
```
Clicked "Make Public"
My API was live.
What I Learned
Technical Lessons
1. Serverless is amazing until it's not
   - Perfect for 90% of use cases
   - That 10% will hurt
   - Plan for timeouts from day 1
2. MongoDB + Serverless = Effort
   - Explicit connection management
   - Connection pooling
   - Timeout configuration
   - Worth it for the flexibility
3. AI APIs are expensive and slow
   - GPT-4: Great quality, terrible speed/cost
   - GPT-4o-mini: Sweet spot for production
   - Always have a budget alert
4. Rate limiting is not optional
   - Someone WILL abuse your API
   - Implement before launch
   - Make it per-user, not global
5. Testing through proxies is ridiculously difficult
   - RapidAPI's UI is buggy
   - Always test with actual HTTP clients
   - Document your testing process
Business Lessons
1. Documentation > Features
   - People can't use what they don't understand
   - Good docs = less support time
   - Examples are worth 1,000 words
2. Ship imperfect products
   - Perfect is the enemy of done
   - Real users > perfect features
   - You can always iterate
3. Pricing is psychology
   - Make one tier obviously the "best value"
   - FREE tier hooks them
   - Price high enough to cover costs + profit
4. Integration is harder than building
   - Your API is 50% of the work
   - Integration with marketplaces is the other 50%
   - Budget time for debugging proxies
5. Limitations can be features
   - "/process endpoint requires separate calls"
   - Becomes: "Full control with granular endpoints"
   - Spin it right
Current Status
After 1 week live:
- API is searchable on RapidAPI ✅
- All core endpoints working ✅
- Rate limiting enforced ✅
- Zero downtime so far ✅
- Documentation complete ✅
Current costs: $0-50/month
Break-even: 1-2 paying customers
Profit margin: 70-80%
What's Next
Immediate (Week 1-4):
- [ ] Add error monitoring (Sentry)
- [ ] Create usage analytics dashboard
- [ ] Build Python SDK
- [ ] Add more code examples
- [ ] Set up status page
When profitable ($500/mo):
- [ ] Upgrade to Vercel Pro (fix `/process` timeout)
- [ ] Add webhook support
- [ ] Build batch processing endpoint
- [ ] Add caching layer (Redis)
Long-term ($5K/mo):
- [ ] Custom model training
- [ ] White-label options
- [ ] Regional deployments
- [ ] Enterprise features
Try It Yourself
Search "Y TU Translation" on RapidAPI
Free tier includes:
- 100 requests/day
- All endpoints
- Full documentation
- No credit card required
Want to build something similar?
The full stack:
- Node.js + TypeScript + Express
- MongoDB Atlas (free tier)
- OpenAI API
- Google Translate API
- Vercel (serverless)
- RapidAPI (marketplace)
Questions? Ask in the comments!
The Real Numbers
Total development time: ~40 hours over 5 days
Lines of code: ~3,000
API endpoints: 12
Languages supported: 195
Cost to build: $0 (free tiers everywhere)
Cost to run: $0-50/month
Was it worth it? Ask me in 3 months.
Final Thoughts
Building an API isn't hard.
Building a production-ready API that:
- Handles errors gracefully
- Scales automatically
- Integrates with marketplaces
- Has proper auth and rate limiting
- Is well-documented
- Actually solves a real problem
That's hard.
But it's also incredibly rewarding.
If you're thinking about building an API, my advice:
Start with a real problem - Don't build "another translation API". Build "translation with emotional intelligence for customer service."
Pick boring tech - Node.js, MongoDB, Vercel. Not exciting, but reliable.
Ship fast - Don't wait for perfect. Ship at 80% and iterate.
Document everything - Your docs are your marketing.
Charge money - Free APIs die. Paid APIs get improved.
Remember
The gap between who you are and the person you hope to become is crossed with a leap of faith.
Now go build something!
SENZEN
Building APIs for global communication
Contact: stainfluence@gmail.com
Try Y TU API: Search "Y TU Translation" on RapidAPI
Tags: #api #ai #nodejs #typescript #mongodb #openai #rapidapi #startup #buildinpublic