Let me tell you about the 3 AM scroll. You know the one. You're lying in bed, anxiety through the roof, doom-scrolling Instagram. The algorithm keeps throwing travel influencers and hustle culture at you. Nobody asked, nobody cares, and you feel worse than when you started.
That's the moment MoodFeed was born. Not in some fancy accelerator pitch, but in my bedroom at 3 AM, feeling like garbage and thinking "there has to be a better way."
The Problem Is Obvious (Once You See It)
Every social app treats you like a data point. "User 47362 likes tech content." Cool. But what if User 47362 just got dumped? What if they're celebrating a promotion? What if they're grieving? The algorithm doesn't give a damn. Same feed, different day, zero empathy.
I spent four months fixing this. Built an app that actually asks "how are you feeling?" and then shows you content that matches. Revolutionary? Nah. Just basic human decency meets technology.
What Actually Makes This Thing Work
The concept is stupid simple: tell us your mood (or let our camera read it), get a feed curated for that exact emotional state. Stressed? Here's calming content. Hyped? Motivational videos. Lonely? Wholesome community stories.
But here's where it gets interesting. The AI doesn't just match your mood—it learns your patterns. After two weeks, it knows you cope with stress through dark humor, or that nature videos help your anxiety more than guided meditation. It becomes YOUR mood companion, not some generic wellness app.
Tech Stack: Why I Went All-In on React Native
I chose React Native with Expo because I wanted to move fast and I didn't want to maintain two codebases. Plus, the JavaScript ecosystem is unmatched for rapid prototyping. Fight me.
The Full Stack Breakdown
Frontend: React Native + Expo because life's too short for native development. TypeScript because I like my code to make sense six months later. React Navigation for routing, Redux Toolkit for state (yes, Redux is still alive and actually good now), and Axios for API calls.
Backend: Node.js with Express. MongoDB for data storage because JSON documents just make sense for user preferences and content metadata. Nothing fancy, just solid and scalable.
AI Layer: OpenAI API for text sentiment analysis and conversational mood detection. Hugging Face for the facial emotion recognition model (MediaPipe face mesh + custom emotion classifier). Content tagging uses a hybrid system—pre-labeled datasets plus real-time categorization.
Infrastructure: Everything's containerized with Docker. Deployed on AWS ECS with CloudFront CDN. Redis for caching frequently accessed content. WebSockets for real-time mood updates.
Push Notifications: Firebase Cloud Messaging because it just works.
Architecture: Simple, Not Stupid
I hate overengineered systems. This is a straightforward client-server setup with AI microservices:
┌────────────────────────────────────────┐
│     React Native App (iOS/Android)     │
│  • Expo Managed Workflow               │
│  • TypeScript + Redux Toolkit          │
│  • Expo Camera + Media Library         │
└───────────────┬────────────────────────┘
                │
                │ REST API / WebSocket
                │
┌───────────────▼────────────────────────┐
│         Node.js Express Server         │
│  ┌──────────────────────────────────┐  │
│  │        API Gateway Layer         │  │
│  ├──────────────────────────────────┤  │
│  │  • Auth Service (JWT)            │  │
│  │  • Mood Processing Pipeline      │  │
│  │  • Content Aggregation Engine    │  │
│  │  • Recommendation Algorithm     │  │
│  │  • Real-time Sync (Socket.io)    │  │
│  └──────────────────────────────────┘  │
└───────────────┬────────────────────────┘
                │
        ┌───────┴────────┐
        │                │
┌───────▼─────┐  ┌───────▼────────┐
│  MongoDB    │  │  AI Services   │
│  • Users    │  │ ┌────────────┐ │
│  • Content  │  │ │  OpenAI    │ │
│  • Moods    │  │ │  GPT-4     │ │
│  • Analytics│  │ └────────────┘ │
└─────────────┘  │ ┌────────────┐ │
                 │ │ HuggingFace│ │
┌─────────────┐  │ │ MediaPipe  │ │
│ Redis Cache │  │ └────────────┘ │
│ • Sessions  │  └────────────────┘
│ • Feed Data │
└─────────────┘
The Flow (code sketch below):
- User opens app → mood check prompt (manual/camera/voice)
- Mood data hits backend → AI analyzes emotion & context
- Content engine filters database by mood tags + user history
- Personalized feed streams back → user gets exactly what they need
- Engagement tracked → algorithm learns and improves
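To make that flow concrete, here's a minimal sketch of the round trip as a single Express handler. The route path, field names, and the analyzeMood/buildFeed helpers are illustrative stand-ins, not the actual MoodFeed source:

// Hypothetical mood-check endpoint tying the pipeline together
import express from "express";

const app = express();
app.use(express.json());

// Stand-ins for the real AI and content services
async function analyzeMood(input: string, source: string): Promise<string> {
  return "calm"; // real version calls the AI layer
}
async function buildFeed(userId: string, mood: string): Promise<object[]> {
  return []; // real version filters content by mood tags + user history
}

app.post("/api/mood/check", async (req, res) => {
  try {
    const { userId, input, source } = req.body; // source: 'manual' | 'camera' | 'voice'
    const mood = await analyzeMood(input, source); // AI reads emotion & context
    const feed = await buildFeed(userId, mood);    // content engine does the filtering
    res.json({ mood, feed });                      // personalized feed streams back
  } catch {
    res.status(500).json({ error: "mood pipeline failed" });
  }
});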
Project Structure: Clean Code Isn't Optional
I've seen too many React Native projects that look like a tornado hit a filing cabinet. Here's how I kept MoodFeed organized:
moodfeed/
├── src/
│   ├── api/
│   │   ├── client.ts
│   │   ├── endpoints.ts
│   │   └── interceptors.ts
│   ├── assets/
│   │   ├── images/
│   │   ├── icons/
│   │   ├── fonts/
│   │   └── animations/
│   ├── components/
│   │   ├── common/
│   │   │   ├── Button.tsx
│   │   │   ├── Input.tsx
│   │   │   ├── Card.tsx
│   │   │   └── LoadingSpinner.tsx
│   │   ├── mood/
│   │   │   ├── MoodSelector.tsx
│   │   │   ├── MoodCamera.tsx
│   │   │   ├── EmotionIndicator.tsx
│   │   │   └── MoodHistory.tsx
│   │   ├── feed/
│   │   │   ├── FeedCard.tsx
│   │   │   ├── VideoPlayer.tsx
│   │   │   ├── MemeViewer.tsx
│   │   │   └── ArticlePreview.tsx
│   │   └── profile/
│   │       ├── MoodChart.tsx
│   │       ├── StatsCard.tsx
│   │       └── SettingsPanel.tsx
│   ├── navigation/
│   │   ├── AppNavigator.tsx
│   │   ├── AuthNavigator.tsx
│   │   └── navigationTypes.ts
│   ├── screens/
│   │   ├── auth/
│   │   │   ├── LoginScreen.tsx
│   │   │   ├── SignupScreen.tsx
│   │   │   └── OnboardingScreen.tsx
│   │   ├── mood/
│   │   │   ├── MoodCheckScreen.tsx
│   │   │   └── MoodCameraScreen.tsx
│   │   ├── feed/
│   │   │   ├── HomeFeedScreen.tsx
│   │   │   ├── ContentDetailScreen.tsx
│   │   │   └── ExploreScreen.tsx
│   │   └── profile/
│   │       ├── ProfileScreen.tsx
│   │       ├── MoodStatsScreen.tsx
│   │       └── SettingsScreen.tsx
│   ├── store/
│   │   ├── index.ts
│   │   ├── slices/
│   │   │   ├── authSlice.ts
│   │   │   ├── moodSlice.ts
│   │   │   ├── feedSlice.ts
│   │   │   └── userSlice.ts
│   │   └── middleware/
│   │       └── apiMiddleware.ts
│   ├── services/
│   │   ├── authService.ts
│   │   ├── moodService.ts
│   │   ├── contentService.ts
│   │   ├── cameraService.ts
│   │   └── analyticsService.ts
│   ├── hooks/
│   │   ├── useAuth.ts
│   │   ├── useMood.ts
│   │   ├── useFeed.ts
│   │   └── useCamera.ts
│   ├── utils/
│   │   ├── constants.ts
│   │   ├── helpers.ts
│   │   ├── validators.ts
│   │   └── dateUtils.ts
│   ├── types/
│   │   ├── models.ts
│   │   ├── api.ts
│   │   └── navigation.ts
│   ├── theme/
│   │   ├── colors.ts
│   │   ├── typography.ts
│   │   ├── spacing.ts
│   │   └── theme.ts
│   └── App.tsx
├── backend/
│   ├── src/
│   │   ├── config/
│   │   │   ├── database.js
│   │   │   ├── redis.js
│   │   │   └── environment.js
│   │   ├── models/
│   │   │   ├── User.js
│   │   │   ├── Content.js
│   │   │   ├── MoodLog.js
│   │   │   └── Preference.js
│   │   ├── routes/
│   │   │   ├── index.js
│   │   │   ├── auth.routes.js
│   │   │   ├── mood.routes.js
│   │   │   ├── feed.routes.js
│   │   │   └── user.routes.js
│   │   ├── controllers/
│   │   │   ├── authController.js
│   │   │   ├── moodController.js
│   │   │   ├── feedController.js
│   │   │   └── userController.js
│   │   ├── services/
│   │   │   ├── aiService.js
│   │   │   ├── contentAggregator.js
│   │   │   ├── recommendationEngine.js
│   │   │   └── emotionDetector.js
│   │   ├── middleware/
│   │   │   ├── auth.middleware.js
│   │   │   ├── validation.middleware.js
│   │   │   ├── rateLimit.middleware.js
│   │   │   └── errorHandler.js
│   │   ├── utils/
│   │   │   ├── logger.js
│   │   │   ├── cache.js
│   │   │   └── validators.js
│   │   └── sockets/
│   │       └── moodSync.js
│   ├── tests/
│   ├── package.json
│   └── server.js
├── app.json
├── package.json
├── tsconfig.json
└── .env
The backend is equally clean: every file has one job, every folder is self-contained. No magic, no mess.
Design Philosophy: Brutalist Minimalism Meets Emotion
Most wellness apps look like a yoga studio exploded on your screen. Pastels, gradients, floating meditation gurus. I went the opposite direction.
Minimalist Brutalism: Clean lines, bold typography, generous whitespace. No unnecessary decoration. The content is the design.
Mood-Based Color Psychology: Each emotional state has a subtle color theme. Not overwhelming, just a gentle atmospheric shift. Calm uses deep blues and grays. Energetic gets warm oranges and yellows. Melancholic uses muted purples and soft pinks.
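Here's roughly how that maps to code. A minimal sketch of what theme/colors.ts could hold; the hex values and mood names are illustrative, not the shipped palette:

// One palette per emotional state, applied as a gentle atmospheric shift
export type Mood = "calm" | "energetic" | "melancholic";

export interface MoodPalette {
  background: string;
  accent: string;
  text: string;
}

export const moodPalettes: Record<Mood, MoodPalette> = {
  calm: { background: "#0E1A2B", accent: "#3B5BDB", text: "#CBD5E1" },        // deep blues and grays
  energetic: { background: "#1C1917", accent: "#F59E0B", text: "#FED7AA" },   // warm oranges and yellows
  melancholic: { background: "#1E1B2E", accent: "#7C6AAE", text: "#F3C6D8" }, // muted purples and soft pinks
};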
Micro-Interactions That Feel Right: Every tap has weight. Smooth 60fps animations. Haptic feedback that matches the emotional context. Swipe gestures that feel natural. No jank, no lag, no bullshit.
Typography-First: Using Inter for body text and Clash Display for headers. Readable, modern, doesn't try too hard.
Dark Mode as Default: Because most people use this late at night when they're feeling some type of way.
Features That Actually Matter
1. The Mood Camera (This Is The Viral Feature)
Open camera, look at your phone for 2 seconds, AI reads your face. It detects micro-expressions—the slight eyebrow furrow that indicates stress, the forced smile that hides sadness. Using MediaPipe's face mesh with a custom-trained emotion classifier.
Is it creepy? Maybe. Is it accurate? About 85% of the time. Can you correct it? Absolutely. The camera is strictly opt-in, all processing happens on-device first, and only anonymized mood data hits the server if you allow it.
2. Conversational Mood Check
Don't want to use the camera? Just type. "I'm feeling anxious about work" → AI understands context, intensity, and specific triggers. GPT-4 analyzes the sentiment and maps it to content categories.
Better than button-clicking because humans are nuanced. You're not just "happy" or "sad." You're "cautiously optimistic but slightly anxious about the presentation tomorrow."
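A hedged sketch of what that call could look like with the official openai Node SDK. The prompt, model choice, and response shape are my assumptions, not the production code:

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface MoodReading {
  mood: string;       // e.g. "anxious"
  intensity: number;  // 0 to 1
  triggers: string[]; // e.g. ["work", "presentation"]
}

async function classifyMood(text: string): Promise<MoodReading | null> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [
      {
        role: "system",
        content:
          'Classify the user\'s mood. Reply with JSON only: {"mood": string, "intensity": number, "triggers": string[]}',
      },
      { role: "user", content: text },
    ],
  });
  try {
    return JSON.parse(completion.choices[0].message.content ?? "");
  } catch {
    return null; // fallback logic kicks in (manual mood selection)
  }
}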
3. The Adaptive Feed Algorithm
This is where months of work went. The feed isn't just mood-matching. It's emotionally intelligent:
- Matching content (60%): Aligns with current mood
- Transitional content (30%): Gently shifts mood in healthy direction
- Discovery content (10%): Introduces new mood-content connections
If you're sad, we don't keep you sad. We show comforting content, then gradually introduce uplifting material. But we also don't force toxic positivity. Sometimes you just need to sit with your feelings.
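In code, the mixing step is simple once the three buckets are fetched. A sketch of the 60/30/10 split; the bucket queries and feed size are assumptions:

type ContentItem = { id: string; moodTags: string[] };

// Unbiased Fisher-Yates shuffle so the buckets aren't visibly grouped
function shuffle<T>(arr: T[]): T[] {
  const a = [...arr];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function mixFeed(
  matching: ContentItem[],     // aligns with current mood
  transitional: ContentItem[], // gently shifts mood in a healthy direction
  discovery: ContentItem[],    // new mood-content connections
  size = 50
): ContentItem[] {
  return shuffle([
    ...matching.slice(0, Math.round(size * 0.6)),
    ...transitional.slice(0, Math.round(size * 0.3)),
    ...discovery.slice(0, Math.round(size * 0.1)),
  ]);
}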
4. Mood Analytics Dashboard
Track your emotional patterns over time. See mood trends, identify triggers, understand what content actually helps. It's like having a mood journal that analyzes itself.
The stats page shows:
- Mood distribution (pie chart)
- Emotional timeline (line graph)
- Content effectiveness (what actually helped)
- Pattern recognition (you're stressed every Monday at 9 AM)
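The mood distribution chart, for instance, is one aggregation away. A sketch using the official MongoDB driver; the collection and field names are assumptions:

import { MongoClient } from "mongodb";

async function moodDistribution(userId: string) {
  const client = await MongoClient.connect(process.env.MONGO_URL!);
  try {
    const logs = client.db("moodfeed").collection("moodlogs");
    return await logs
      .aggregate([
        { $match: { userId } },                           // this user's logs only
        { $group: { _id: "$mood", count: { $sum: 1 } } }, // one bucket per mood label
        { $sort: { count: -1 } },
      ])
      .toArray(); // e.g. [{ _id: "calm", count: 42 }, ...]
  } finally {
    await client.close();
  }
}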
5. Social Sharing (Without the Toxicity)
You can share your daily mood stats—not individual pieces of content. It's a conversation starter without the performance pressure.
"Had a 70% calm day today" is more authentic than curated highlight reels. Friends can react with supportive messages, not just likes.
6. Offline Support
Downloaded content stays accessible offline. Because mental health crises don't wait for WiFi. The app caches your last 50 feed items and all your mood history locally.
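A sketch of that cache, assuming @react-native-async-storage/async-storage (the storage key is made up):

import AsyncStorage from "@react-native-async-storage/async-storage";

const FEED_CACHE_KEY = "moodfeed:lastFeed"; // illustrative key name

// Keep only the 50 most recent items, per the offline policy
export async function cacheFeed(items: unknown[]): Promise<void> {
  await AsyncStorage.setItem(FEED_CACHE_KEY, JSON.stringify(items.slice(0, 50)));
}

// Fall back to this when the network is unreachable
export async function loadCachedFeed(): Promise<unknown[]> {
  const raw = await AsyncStorage.getItem(FEED_CACHE_KEY);
  return raw ? JSON.parse(raw) : [];
}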
The Technical Challenges That Nearly Broke Me
Challenge 1: Real-Time Emotion Detection on Mobile
Running ML models on phones is hard. MediaPipe helped, but I still had to optimize the hell out of it. Reduced the model size by 60%, implemented frame skipping (analyzing every 3rd frame), and added a fallback to manual selection if the device is too slow.
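The frame-skipping part is the least glamorous and most effective optimization. A sketch of the gate; classifyEmotion stands in for the MediaPipe-based pipeline:

const ANALYZE_EVERY_N_FRAMES = 3;
let frameCount = 0;

// Stand-in for the on-device MediaPipe + classifier pipeline
declare function classifyEmotion(frame: ArrayBuffer): Promise<string>;

export function onCameraFrame(frame: ArrayBuffer): void {
  frameCount += 1;
  if (frameCount % ANALYZE_EVERY_N_FRAMES !== 0) return; // drop 2 of every 3 frames
  void classifyEmotion(frame); // fire-and-forget; the UI reads the latest result
}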
Challenge 2: Content Moderation at Scale
You CANNOT mess this up. Showing triggering content to someone vulnerable can cause real harm. Built a three-layer filter:
- Automated content tagging (AI)
- Community reporting system
- Human review for flagged items
All sensitive content has warnings. All graphic content is blurred by default.
Challenge 3: Privacy vs Personalization
Better personalization needs more data. But people (rightfully) don't trust apps with their emotions. Solution:
- All facial analysis happens on-device
- Only mood labels (not images) sent to server
- All data encrypted in transit and at rest
- Users can delete everything anytime
- Complete transparency in settings
Challenge 4: The Cold Start Problem
New users have no history. How do you personalize a feed for someone you don't know?
Built an onboarding flow that asks 10 questions about content preferences and emotional patterns. Enough to create a decent initial profile, not so much that people bail before finishing signup.
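A sketch of turning those answers into a starting profile; the categories and weights here are invented for illustration:

type OnboardingAnswer = { category: string; liked: boolean };

// Everyone starts neutral at 0.5; stated preferences nudge the weights,
// and real engagement data refines them from day one
export function initialProfile(answers: OnboardingAnswer[]): Record<string, number> {
  const weights: Record<string, number> = {};
  for (const { category, liked } of answers) {
    const current = weights[category] ?? 0.5;
    weights[category] = Math.min(1, Math.max(0, current + (liked ? 0.2 : -0.2)));
  }
  return weights;
}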
Challenge 5: Content Licensing
You can't just scrape Reddit and TikTok. I had to:
- Build partnerships with meme accounts (yes, really)
- Use API access from platforms that allow it
- Create original content
- Build a creator program
Legal stuff is boring but necessary.
Performance Optimizations (Because Users Are Impatient)
Lazy Loading Everything: FlatList with proper optimization. Only render visible items. Recycle item views. Never load everything at once.
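Concretely, "proper optimization" here means a handful of FlatList props. A sketch; FeedCard and the item shape are assumptions:

import React from "react";
import { FlatList } from "react-native";

type Item = { id: string };
const FeedCard = ({ item }: { item: Item }) => null; // stand-in for the real card

export function Feed({ items }: { items: Item[] }) {
  return (
    <FlatList
      data={items}
      renderItem={({ item }) => <FeedCard item={item} />}
      keyExtractor={(item) => item.id}
      initialNumToRender={5}  // cheap first paint
      maxToRenderPerBatch={5} // small render batches while scrolling
      windowSize={7}          // keep roughly 3 screens of items mounted
      removeClippedSubviews   // detach off-screen views (big Android win)
    />
  );
}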
Smart Image Caching: Using react-native-fast-image for aggressive caching. Images load once, cache forever. Placeholder blur-ups for perceived speed.
Video Optimization: Videos start at 480p, upgrade to 1080p as they buffer. Pre-load next 3 videos in feed. Nobody notices the quality shift, everyone thinks it's fast.
Code Splitting: Lazy-loaded screens. Only load authentication flow when needed. Only load camera components when user wants to use camera.
Redux Optimization: Memoized selectors everywhere. Normalized state shape. Only components that need updates re-render.
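What "memoized selectors everywhere" looks like in practice with Redux Toolkit's createSelector; the state shape here is a simplified stand-in:

import { createSelector } from "@reduxjs/toolkit";

type RootState = {
  feed: { items: { id: string; moodTags: string[] }[] };
  mood: { current: string };
};

const selectItems = (s: RootState) => s.feed.items;
const selectMood = (s: RootState) => s.mood.current;

// Recomputes only when items or the current mood change, so components
// subscribed to it don't re-render on unrelated state updates
export const selectMoodMatchedFeed = createSelector(
  [selectItems, selectMood],
  (items, mood) => items.filter((i) => i.moodTags.includes(mood))
);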
API Request Batching: Multiple requests combined into one. Reduced API calls by 60%.
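The batching idea in miniature: queue calls for a tick, then send one combined request. The /batch endpoint and the 25ms window are assumptions:

type Pending = { path: string; resolve: (data: unknown) => void };

let queue: Pending[] = [];
let timer: ReturnType<typeof setTimeout> | null = null;

export function batchedGet(path: string): Promise<unknown> {
  return new Promise((resolve) => {
    queue.push({ path, resolve });
    timer ??= setTimeout(flush, 25); // collect requests for 25ms, then fire
  });
}

async function flush(): Promise<void> {
  const batch = queue;
  queue = [];
  timer = null;
  const res = await fetch("/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ paths: batch.map((p) => p.path) }),
  });
  const results: unknown[] = await res.json(); // one result per path, in order
  batch.forEach((p, i) => p.resolve(results[i]));
}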
Background Sync: Mood logs sync in background. User never waits for network requests.
Monetization (Because Ramen Gets Old)
Free tier is generous:
- Unlimited mood checks
- 50 feed items per day
- Basic analytics
Premium ($5.99/month):
- Unlimited feed access
- Advanced analytics
- Custom mood categories
- No ads (yes, there are subtle ones in free)
- Priority content recommendations
- Early access to new features
7% conversion rate so far. Not bad for a side project.
What People Actually Say
The best feedback came from a user who DMed me: "I was having a panic attack at 2 AM. Opened MoodFeed, it showed me exactly what I needed. First app that's ever actually helped in the moment."
That's the whole point. Not engagement metrics. Not retention rates. Just helping people feel less alone when they're going through it.
Lessons From Building This
Ship fast, perfect later: My MVP had basic mood selection and a simple feed. Everything else came from user feedback.
AI is a tool, not magic: GPT-4 is powerful but you need good prompts, content context, and fallback logic. It's not going to solve everything.
Design is not decoration: I rebuilt the UI four times. Each iteration stripped away more unnecessary elements. Less really is more.
Mental health is serious: Added crisis resources, partnered with licensed therapists for review, put disclaimers everywhere. Don't be reckless with people's wellbeing.
Community feedback is gold: Half my features came from users. The voice mood check? Suggested by a user with mobility issues.
TypeScript saves your ass: Static typing caught so many bugs before production. Worth the initial setup time.
React Native is actually good now: Performance is solid, Expo makes life easy, and the ecosystem is mature. Stop using 2018 takes to criticize it.
The Roadmap (What's Next)
Q1 2025: Apple Watch app for quick mood logging
Q2 2025: Voice-only mood check (for accessibility)
Q3 2025: Group mood circles—share vibes with close friends
Q4 2025: AI-generated personalized content based on your patterns
Want to Build Something Similar?
The tech I used isn't the only option. You could swap:
- React Native → Flutter (if you prefer Dart)
- MongoDB → PostgreSQL (if you like relational)
- OpenAI → Claude or local Llama models
- Expo → bare React Native (if you need native modules)
The architecture matters more than the specific tools. Focus on:
- Clean separation of concerns
- Proper state management
- Optimistic UI updates (see the sketch after this list)
- Solid error handling
- User privacy by design
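To make one of those concrete, here's an optimistic update in Redux Toolkit terms; the slice, action, and endpoint names are all illustrative:

import { createSlice, PayloadAction } from "@reduxjs/toolkit";

const feedSlice = createSlice({
  name: "feed",
  initialState: { liked: {} as Record<string, boolean> },
  reducers: {
    setLiked(state, action: PayloadAction<{ id: string; liked: boolean }>) {
      state.liked[action.payload.id] = action.payload.liked;
    },
  },
});

// Flip the UI immediately, then roll back if the server says no
export async function likeOptimistically(
  dispatch: (a: unknown) => void,
  id: string
): Promise<void> {
  dispatch(feedSlice.actions.setLiked({ id, liked: true }));
  try {
    const res = await fetch(`/api/content/${id}/like`, { method: "POST" });
    if (!res.ok) throw new Error("like failed");
  } catch {
    dispatch(feedSlice.actions.setLiked({ id, liked: false })); // roll back
  }
}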
Final Real Talk
Building MoodFeed taught me that technology doesn't have to make people feel worse. Social media's default mode is exploitation—extracting attention and selling it. We can do better.
This isn't about disrupting an industry or being a unicorn startup. It's about building something useful that respects people's emotional state instead of weaponizing it.
The app's live on both App Store and Google Play. Go try it. Break it. Tell me what sucks. That's how we make it better.
And if you're building something in this space—mental health tech, emotion AI, content personalization—hit me up. Always down to talk architecture, swap war stories, or debate why Redux is still relevant in 2025.
That's a wrap 🎁
Now go touch some code 👨‍💻