How I went from "wouldn't it be cool if..." to a production API serving 195 languages with AI-powered emotional intelligence

The "Aha!" Moment

Picture this: You're running a SaaS company. A customer emails you in Spanish, clearly frustrated. Your support team scrambles to:

  1. Copy-paste into Google Translate
  2. Read the translation
  3. Guess if they're angry or just asking a question
  4. Craft a response
  5. Translate it back to Spanish
  6. Hope it sounds natural

This takes 10-15 minutes per ticket.

What if an API could do all of this in 2 seconds?

That's what I built. And this is the story of my trials and tribulations.


What I Actually Built

Y TU API does three things that no other translation API does together:

🌍 Translation (The Easy Part)

  • 195+ languages
  • Powered by Google Translate
  • ~500ms response time
  • 95%+ confidence scores

🎭 Tone Analysis (The Hard Part)

Using OpenAI GPT-4, it detects:

  • Emotion: positive, negative, neutral, mixed
  • Urgency: low, medium, high
  • Intent: question, complaint, request, feedback
  • Cultural context: Western direct vs Asian indirect communication styles
  • Formality score: 1-10 scale
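For concreteness, here is a hypothetical TypeScript shape for that analysis result — the field names mirror the bullet list above, not the API's published schema:

```typescript
// Hypothetical shape of a tone-analysis result; the field names mirror
// the bullet list above, not the API's published schema.
interface ToneAnalysis {
  emotion: "positive" | "negative" | "neutral" | "mixed";
  urgency: "low" | "medium" | "high";
  intent: "question" | "complaint" | "request" | "feedback";
  culturalContext: string; // e.g. "Western direct"
  formalityScore: number;  // 1-10
}

const example: ToneAnalysis = {
  emotion: "negative",
  urgency: "high",
  intent: "complaint",
  culturalContext: "Western direct",
  formalityScore: 4,
};
```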

💬 Response Generation (The Magic Part)

Generates contextually appropriate responses in 7 different tones:

  • Professional, casual, friendly, formal
  • Empathetic, concise, creative

All wrapped in a RESTful API, live on RapidAPI.


The Tech Stack (And Why I Chose Each Piece)

Backend:     Node.js + Express + TypeScript
Database:    MongoDB Atlas
AI:          OpenAI GPT-4o-mini
Translation: Google Translate API
Deployment:  Vercel (serverless)
Auth:        Custom API key system
Marketplace: RapidAPI

Why Serverless?

I wanted zero devops. No servers to manage, no scaling headaches, no 3am wake-up calls.

The dream: Write code, git push, done.

The reality: Vercel's free tier has a 10-second execution limit.

Spoiler: This nearly killed the project.


Day 1: "This Will Be Easy"

Narrator: It was not easy.

Hour 1-3: Project Setup

Started with the typical setup dance:

npm init -y
npm install express typescript mongoose
# ... 47 more packages

Created the basic structure:

src/
├── routes/        # API endpoints
├── controllers/   # Business logic
├── middleware/    # Auth, rate limiting
├── models/        # MongoDB schemas
├── services/      # External API calls
└── utils/         # Helpers

Lesson 1: TypeScript setup takes longer than you think. Always does.


Hour 4-8: Building the Core Translation Endpoint

The translation endpoint seemed straightforward:

// What I thought it would be:
const translated = await googleTranslate(text, targetLang);
return { translatedText: translated };

// What it actually became:
async translateText(req: Request, res: Response) {
  // Input validation
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors });
  }

  // Request payload used throughout the handler
  const { text, targetLanguage, sourceLanguage } = req.body;

  // Rate limiting check
  const rateLimit = await checkRateLimit(req.user);
  if (rateLimit.exceeded) {
    return res.status(429).json({ error: 'Rate limit exceeded' });
  }

  // Language code validation
  if (!isValidLanguageCode(targetLanguage)) {
    return res.status(400).json({ error: 'Invalid language' });
  }

  // Actual translation
  try {
    const result = await googleTranslateService.translate(
      text, 
      targetLanguage,
      sourceLanguage
    );

    // Usage tracking
    await trackUsage(req.user, 'translation');

    // Return with metadata
    return sendSuccess(res, {
      originalText: text,
      translatedText: result.translatedText,
      confidence: result.confidence,
      sourceLanguage: result.detectedSourceLanguage,
      timestamp: new Date().toISOString()
    });
  } catch (error) {
    logger.error('Translation failed', error);
    return res.status(500).json({ error: 'Translation failed' });
  }
}

Lesson 2: The 80/20 rule is real. 20% of the code is the feature, 80% is error handling, validation, and logging.


Hour 9-12: The MongoDB Nightmare

"Let's add a database!" - Famous last words

MongoDB in serverless environments is... special.

The Problem:

// This works locally
await mongoose.connect(MONGODB_URI);
const apiKey = await ApiKeyModel.findOne({ key });

// In production on Vercel:
// Error: Operation `api_keys.find()` buffering timed out after 10000ms

Mongoose tries to be helpful by "buffering" operations until connected. In serverless, this means every request tries to connect, queries buffer, timeouts happen, chaos ensues.

The Solution:

// Call connect() explicitly BEFORE every database operation
export class Database {
  async connect() {
    // Check if already connected
    if (mongoose.connection.readyState === 1) {
      return; // Skip if connected
    }

    if (mongoose.connection.readyState === 2) {
      // Wait if connecting
      await new Promise(resolve => {
        mongoose.connection.once('connected', resolve);
      });
      return;
    }

    // Actually connect
    await mongoose.connect(process.env.MONGODB_URI, {
      serverSelectionTimeoutMS: 5000,
      socketTimeoutMS: 45000,
    });
  }
}

// Then before EVERY query:
await database.connect();
const apiKey = await ApiKeyModel.findOne({ key });
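A related pattern worth knowing (a generic sketch, not the actual Y TU code) is caching the in-flight connection promise at module scope, so concurrent requests hitting a warm container share one connection attempt instead of racing each other:

```typescript
// Generic sketch of the promise-caching pattern: the in-flight
// connection promise lives at module scope, so concurrent requests in
// a warm container share a single connection attempt. A failed attempt
// clears the cache so the next request can retry.
let cached: Promise<unknown> | null = null;

function connectOnce(connect: () => Promise<unknown>): Promise<unknown> {
  if (!cached) {
    cached = connect().catch((err) => {
      cached = null; // don't cache failures
      throw err;
    });
  }
  return cached;
}
```

In the real service, `connect` would wrap `mongoose.connect(...)`.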

Lesson 3: Serverless means "stateless." Treat every request like it's the first request ever.


Day 2: "AI Will Save Me"

Narrator: AI tried to kill the project

Hour 13-18: Adding OpenAI for Tone Analysis

The tone analysis endpoint was supposed to be the "cool" feature. And it was!

Until I checked the response times.

First attempt:

const analysis = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: "You are an expert at analyzing human communication..."
    },
    {
      role: "user", 
      content: `Analyze this: "${text}"`
    }
  ]
});

Response time: 25-30 seconds

Vercel timeout: 10 seconds

My reaction: 💀

The Fix:

Switched to gpt-4o-mini:

  • Response time: ~5 seconds (acceptable!)
  • Cost: 60% cheaper
  • Quality: Actually better for this use case (less verbose)

const analysis = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [...],
  temperature: 0.3, // Lower = more consistent
  max_tokens: 500   // Limit response length
});

Lesson 4: The latest/greatest model isn't always the right model. Sometimes smaller and faster wins.


Hour 19-24: The Response Generation Feature

By this point, I was in deep. Too deep to quit.

Response generation took everything I learned and made it harder:

Requirements:

  • Generate multiple response options
  • Score them by quality
  • Return the best one + alternatives
  • Do it in under 10 seconds
  • Support 7 different tones

My approach:

async generateResponse(originalText: string, tone: string, context: string) {
  // Generate 5 candidate responses
  const candidates = await Promise.all([
    this.generateSingleResponse(originalText, tone, context, 0.7),
    this.generateSingleResponse(originalText, tone, context, 0.8),
    this.generateSingleResponse(originalText, tone, context, 0.6),
    this.generateSingleResponse(originalText, tone, context, 0.9),
    this.generateSingleResponse(originalText, tone, context, 0.5),
  ]);

  // Score each candidate
  const scored = candidates.map(c => ({
    text: c,
    score: this.scoreResponse(c, originalText, tone)
  }));

  // Sort by score
  scored.sort((a, b) => b.score - a.score);

  return {
    response: scored[0].text,
    score: scored[0].score,
    alternatives: scored.slice(1, 3) // Return top 3
  };
}
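That `scoreResponse` call hides the interesting part. The scorer is a simple heuristic; a sketch of the idea (illustrative, not the exact production code) might look like:

```typescript
// Hypothetical scoring heuristic (illustrative, not the production
// code): reward moderate length, complete sentences, and candidates
// that don't just echo the input back.
function scoreResponse(candidate: string, originalText: string, tone: string): number {
  let score = 0;
  const words = candidate.trim().split(/\s+/).length;
  if (words >= 20 && words <= 120) score += 2; // not too short, not rambling
  if (!candidate.toLowerCase().includes(originalText.toLowerCase())) {
    score += 1; // avoid parroting the customer's message back
  }
  if (/[.!?]$/.test(candidate.trim())) score += 1; // ends like a sentence
  if (tone === "professional" && !/\b(gonna|wanna|can't)\b/i.test(candidate)) {
    score += 1; // professional tone should avoid casual phrasing
  }
  return score;
}
```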

Response time: 12-16 seconds

Vercel's response: 💥 TIMEOUT

My response: 😭


The "Process" Endpoint That Wasn't

I had this grand vision: One endpoint to rule them all!

POST /process
{
  "text": "¡Estoy muy enojado!",
  "targetLanguage": "en",
  "options": {
    "translate": true,
    "analyzeTone": true,
    "generateResponse": true
  }
}

Timeline:

  • Translation: ~0.5s
  • Tone analysis: ~5s
  • Response generation: ~15s
  • Total: ~20 seconds

Vercel free tier limit: 10 seconds

Math doesn't lie.

The Decision:

Ship it anyway, but document the limitation clearly:

Note: The /process endpoint may timeout when all three operations are enabled on Vercel's free tier. Recommended approach: Call /translate, /analyze-tone, and /generate-response separately for full control.
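Chaining the endpoints client-side is straightforward. Here's a sketch with the HTTP calls abstracted into injected step functions (so it runs without a network; in practice each step is a fetch() to /translate, /analyze-tone, or /generate-response):

```typescript
// Sketch of the recommended chaining. The HTTP calls are injected as
// step functions so the sketch runs without a network; in practice each
// step is a fetch() to /translate, /analyze-tone, or /generate-response.
type Step = (input: string) => Promise<string>;

async function processClientSide(
  text: string,
  translate: Step,
  analyzeTone: Step,
  generateResponse: Step
) {
  const translated = await translate(text);         // ~0.5s
  const tone = await analyzeTone(translated);       // ~5s
  const reply = await generateResponse(translated); // ~15s
  return { translated, tone, reply };
}
```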

Lesson 5: Perfect is the enemy of shipped. Document limitations and move on.


Day 3: Authentication Difficulties

Hour 25-30: Building the API Key System

Needed a system that could:

  1. Generate secure API keys
  2. Track usage per key
  3. Enforce rate limits
  4. Support multiple tiers (FREE, PRO, ULTRA, MEGA)
  5. Work with RapidAPI's proxy

The key generation:

function generateApiKey(): string {
  const prefix = 'ytu_'; // "Y Tu" branding
  const randomBytes = crypto.randomBytes(32);
  // URL-safe base64; on Node 15.7+, randomBytes.toString('base64url')
  // does this in one step
  const base64 = randomBytes.toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=/g, '');

  return `${prefix}${base64}`;
}

// Example: ytu_u-6_r9eYPTsWMKf_SHBtHJUuY6pvorELgIimfJxCB_M

The rate limiter:

async checkRateLimit(apiKey: string): Promise<RateLimitResult> {
  const key = await ApiKeyModel.findOne({ key: apiKey });
  if (!key) {
    // Unknown key: reject rather than crash on key._id below
    return { limit: 0, remaining: 0, exceeded: true };
  }

  // Get today's usage
  const today = new Date().toISOString().split('T')[0];
  const usage = await UsageModel.countDocuments({
    apiKey: key._id,
    date: today
  });

  // Check against tier limit
  const limits = {
    FREE: 100,
    PRO: 10000,
    ULTRA: 20000,
    MEGA: 100000
  };

  const limit = limits[key.tier];
  const remaining = Math.max(0, limit - usage);

  return {
    limit,
    remaining,
    exceeded: remaining === 0
  };
}

Lesson 6: Rate limiting is not optional. Users will abuse any API without limits.


Day 4-5: The RapidAPI Integration Saga

Where everything that could go wrong, did

Attempt 1: Testing in RapidAPI's Interface

RapidAPI has a built-in testing interface. Should be simple, right?

What I expected:

  1. Upload OpenAPI spec
  2. Test endpoints
  3. Ship it

What actually happened:

  1. Uploaded OpenAPI spec ✅
  2. Testing interface doesn't work ❌
  3. Error: "Invalid API key" ❌
  4. Error: "X-RapidAPI-Host missing" ❌
  5. Headers not populating ❌
  6. Created test app ❌
  7. Still doesn't work ❌

After 6 hours of debugging, I realized: RapidAPI's testing UI is just buggy.

The Solution:

Test the actual gateway URL directly:

$headers = @{
    "X-RapidAPI-Key" = "your-rapidapi-key"
    "X-RapidAPI-Host" = "y-tu-api.p.rapidapi.com"
    "Content-Type" = "application/json"
}

Invoke-RestMethod -Uri "https://y-tu-api.p.rapidapi.com/translate" `
  -Method Post `
  -Headers $headers `
  -Body '{"text":"Hello","targetLanguage":"es"}'

Result:

{
  "success": true,
  "data": {
    "translatedText": "Hola",
    "confidence": 0.95
  }
}

It worked! 🎉

Lesson 7: When the UI is broken, go around it. Test with curl/PowerShell/Postman.


The Pricing Dilemma

How do you price something you built in a weekend?

My thought process:

Too cheap:

  • Attracts tire-kickers
  • Unsustainable if it actually works
  • Signals low quality

Too expensive:

  • Nobody will try it
  • No feedback loop
  • Wasted effort

The tiers I chose:

Tier    Price     Requests/Day   Logic
FREE    $0        100            Hook them with value
PRO     $49/mo    10,000         Sweet spot for small businesses
ULTRA   $99/mo    20,000         For growing companies
MEGA    $149/mo   100,000        Enterprise-lite

Why these numbers?

  • FREE: Big enough to be useful (100/day = 3,000/month), small enough to make them want more
  • PRO at $49: Psychologically under $50, covers all my costs at scale
  • ULTRA at $99: Double the capacity at slightly worse value per request, which nudges heavy users toward MEGA
  • MEGA at $149: Best value per request ($0.05 per 1K vs $0.16 for PRO)
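A quick sanity check on those per-1K numbers (assuming a 30-day month, since the quotas are per day):

```typescript
// Sanity-check the per-1K-request numbers above, assuming a 30-day
// month (the tier quotas are per day).
function costPer1k(pricePerMonth: number, requestsPerDay: number): number {
  return (pricePerMonth / (requestsPerDay * 30)) * 1000;
}

costPer1k(49, 10_000);   // PRO  -> ~$0.16 per 1K requests
costPer1k(149, 100_000); // MEGA -> ~$0.05 per 1K requests
```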

Lesson 8: Pricing is psychology, not math. Make the tier you want people to choose the obvious "best value."


The Documentation Marathon

At this point, I had a working API. But would anyone actually use it?

The README Nobody Reads (But Everyone Judges)

Spent 4 hours writing a comprehensive README:

  • ✅ What the API does (in plain English)
  • ✅ Real-world use cases with examples
  • ✅ Code snippets in multiple languages
  • ✅ Step-by-step integration guides
  • ✅ Troubleshooting section
  • ✅ ROI calculator (show them the money!)
  • ✅ Pricing comparison table
  • ✅ Architecture diagrams

The secret: Write docs as if explaining to your non-technical friend. If they get it, developers will too.


The OpenAPI Spec Issue

OpenAPI specs are supposed to be easy with tools like Swagger.

Reality: Manually writing 500+ lines of YAML because my endpoints were too custom.

The pain points:

  1. Tone validation mismatch:

    • OpenAPI said: empathetic, concise
    • Code accepted: professional, casual, friendly, formal, creative
    • Result: Users got validation errors
    • Fix: Manually synced everything
  2. Security schemes:

    • Needed to support both X-API-Key and RapidAPI's headers
    • Required two separate security schemes
    • Took 3 attempts to get right
  3. Example responses:

    • Had to manually craft realistic examples
    • Spent 2 hours making them look professional
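One way to prevent that tone-enum drift (a sketch, not necessarily how Y TU does it today): define the tone list once and derive both the runtime validator and the OpenAPI enum from it.

```typescript
// Sketch: define the tone list once and derive both the runtime
// validator and the OpenAPI enum from it, so spec and code can't drift.
const TONES = [
  "professional", "casual", "friendly", "formal",
  "empathetic", "concise", "creative",
] as const;
type Tone = typeof TONES[number];

function isValidTone(value: string): value is Tone {
  return (TONES as readonly string[]).includes(value);
}

// Reuse the same array when emitting the OpenAPI schema:
const toneSchema = { type: "string", enum: [...TONES] };
```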

Lesson 9: Your docs are your marketing. Invest the time.


Production Testing: When Reality Hits

Test 1: Direct API (✅ Perfect)

curl -X POST https://y-tu-api.vercel.app/api/v1/translate \
  -H "X-API-Key: ytu_..." \
  -H "Content-Type: application/json" \
  -d '{"text":"Hello","targetLanguage":"es"}'

# Response: 500ms, perfect!

Test 2: Tone Analysis (✅ Good Enough)

curl -X POST https://y-tu-api.vercel.app/api/v1/analyze-tone \
  -H "X-API-Key: ytu_..." \
  -H "Content-Type: application/json" \
  -d '{"text":"I am furious!","language":"en"}'

# Response: 5 seconds
# Result: Emotion: "negative", Urgency: "high" ✅

Test 3: Response Generation (⚠️ Close Call)

curl -X POST https://y-tu-api.vercel.app/api/v1/generate-response \
  -H "X-API-Key: ytu_..." \
  -H "Content-Type: application/json" \
  -d '{"originalText":"Where is my order?","tone":"professional"}'

# Response: 15 seconds
# Result: Generated 3 alternatives, scored properly ✅
# BUT: Vercel threw warnings about approaching timeout

Test 4: The Full Pipeline (❌ Timeout)

curl -X POST https://y-tu-api.vercel.app/api/v1/process \
  -H "X-API-Key: ytu_..." \
  -H "Content-Type: application/json" \
  -d '{
    "text":"¡Estoy enojado!",
    "targetLanguage":"en",
    "options":{"translate":true,"analyzeTone":true,"generateResponse":true}
  }'

# Response: ERROR - Function execution timeout

Decision time: Ship it anyway with documented limitations, or delay launch to fix?

I shipped it. Here's why:

  1. 3 out of 4 endpoints work perfectly
  2. Users can chain the working endpoints
  3. I can fix it later with Vercel Pro ($20/mo)
  4. Better to have real users than perfect features

Lesson 10: Ship the 80% solution. Perfect is a myth.


Launch Day: Making It Public

The Pre-Launch Checklist

✅ Remove test API key from Gateway settings

✅ Verify all endpoints work

✅ Double-check pricing is correct

✅ README is comprehensive

✅ OpenAPI spec matches actual code

✅ Rate limiting tested

✅ Error messages are helpful

✅ Monitoring set up (basic)

One last test through RapidAPI's actual gateway:

# Created RapidAPI app to get real API key
# Subscribed to my own API (FREE tier)
# Tested through gateway:

Invoke-RestMethod -Uri "https://y-tu-api.p.rapidapi.com/translate" `
  -Method Post `
  -Headers @{
    "X-RapidAPI-Key" = "my-rapidapi-key"
    "X-RapidAPI-Host" = "y-tu-api.p.rapidapi.com"
    "Content-Type" = "application/json"
  } `
  -Body '{"text":"Final test!","targetLanguage":"es"}'

# ✅ SUCCESS!
# Response: "¡Prueba final!"

Clicked "Make Public"

My API was live. 🚀


What I Learned

Technical Lessons

  1. Serverless is amazing until it's not

    • Perfect for 90% of use cases
    • That 10% will hurt
    • Plan for timeouts from day 1
  2. MongoDB + Serverless = Effort

    • Explicit connection management
    • Connection pooling
    • Timeout configuration
    • Worth it for the flexibility
  3. AI APIs are expensive and slow

    • GPT-4: Great quality, terrible speed/cost
    • GPT-4o-mini: Sweet spot for production
    • Always have a budget alert
  4. Rate limiting is not optional

    • Someone WILL abuse your API
    • Implement before launch
    • Make it per-user, not global
  5. Testing through proxies is ridiculously difficult

    • RapidAPI's UI is buggy
    • Always test with actual HTTP clients
    • Document your testing process

Business Lessons

  1. Documentation > Features

    • People can't use what they don't understand
    • Good docs = less support time
    • Examples are worth 1000 words
  2. Ship imperfect products

    • Perfect is the enemy of done
    • Real users > perfect features
    • You can always iterate
  3. Pricing is psychology

    • Make one tier obviously the "best value"
    • FREE tier hooks them
    • Price high enough to cover costs + profit
  4. Integration is harder than building

    • Your API is 50% of the work
    • Integration with marketplaces is the other 50%
    • Budget time for debugging proxies
  5. Limitations can be features

    • "/process endpoint requires separate calls"
    • Becomes: "Full control with granular endpoints"
    • Spin it right

Current Status

After 1 week live:

  • API is searchable on RapidAPI ✅
  • All core endpoints working ✅
  • Rate limiting enforced ✅
  • Zero downtime so far ✅
  • Documentation complete ✅

Current costs: $0-50/month

Break-even: 1-2 paying customers

Profit margin: 70-80%
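The break-even figure holds under a simple model (a sketch; it ignores any marketplace commission, which would push break-even slightly higher):

```typescript
// Rough break-even model using the numbers above. Ignores any
// marketplace commission, which would push break-even slightly higher.
function customersToBreakEven(monthlyCost: number, pricePerCustomer: number): number {
  return Math.ceil(monthlyCost / pricePerCustomer);
}

customersToBreakEven(50, 49); // 2 PRO customers at the top of the cost range
```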


What's Next

Immediate (Week 1-4):

  • [ ] Add error monitoring (Sentry)
  • [ ] Create usage analytics dashboard
  • [ ] Build Python SDK
  • [ ] Add more code examples
  • [ ] Set up status page

When profitable ($500/mo):

  • [ ] Upgrade to Vercel Pro (fix /process timeout)
  • [ ] Add webhook support
  • [ ] Build batch processing endpoint
  • [ ] Add caching layer (Redis)

Long-term ($5K/mo):

  • [ ] Custom model training
  • [ ] White-label options
  • [ ] Regional deployments
  • [ ] Enterprise features

Try It Yourself

Search "Y TU Translation" on RapidAPI

Free tier includes:

  • 100 requests/day
  • All endpoints
  • Full documentation
  • No credit card required

Want to build something similar?

The full stack:

  • Node.js + TypeScript + Express
  • MongoDB Atlas (free tier)
  • OpenAI API
  • Google Translate API
  • Vercel (serverless)
  • RapidAPI (marketplace)

Questions? Ask in the comments!


The Real Numbers

Total development time: ~40 hours over 5 days

Lines of code: ~3,000

API endpoints: 12

Languages supported: 195

Cost to build: $0 (free tiers everywhere)

Cost to run: $0-50/month

Was it worth it? Ask me in 3 months. 😅


Final Thoughts

Building an API isn't hard.

Building a production-ready API that:

  • Handles errors gracefully
  • Scales automatically
  • Integrates with marketplaces
  • Has proper auth and rate limiting
  • Is well-documented
  • Actually solves a real problem

That's hard.

But it's also incredibly rewarding.

If you're thinking about building an API, my advice:

  1. Start with a real problem - Don't build "another translation API". Build "translation with emotional intelligence for customer service."

  2. Pick boring tech - Node.js, MongoDB, Vercel. Not exciting, but reliable.

  3. Ship fast - Don't wait for perfect. Ship at 80% and iterate.

  4. Document everything - Your docs are your marketing.

  5. Charge money - Free APIs die. Paid APIs get improved.

Remember

The gap between who you are today and the person you hope to become is crossed with a leap of faith.

Now go build something!


SENZEN

Building APIs for global communication

Contact: stainfluence@gmail.com

Try Y TU API: Search "Y TU Translation" on RapidAPI


Tags: #api #ai #nodejs #typescript #mongodb #openai #rapidapi #startup #buildinpublic
