Six months ago I was a solo developer with an idea: build a journaling app that uses AI to give people genuine reflections on what they write. Not generic motivational quotes, but responses that actually engage with their words.
Today that app is live on Google Play. It's called Eventide: Journal & Mood, and I built every screen, every animation, every backend function myself. Here's the full technical story, the decisions that worked, the ones I'd reconsider, and what I learned shipping a real product alone.
The stack
| Layer | Choice |
|---|---|
| Framework | Flutter (latest stable) |
| Language | Dart, strict null safety |
| State management | Riverpod + riverpod_generator |
| Navigation | go_router |
| Backend | Firebase (Auth + Firestore + Cloud Functions) |
| AI | Anthropic Claude API via Firebase Cloud Functions |
| Subscriptions | RevenueCat SDK |
| Local storage | Hive |
| Charts | fl_chart |
| Design | Material 3, Google Fonts (Lora + Inter) |
Why Flutter
I needed iOS and Android from a single codebase. React Native was the other contender, but Flutter's rendering engine won me over. When you're building something with custom animations (breathing exercises, mood selectors with radial pulse effects, a Year in Pixels grid with 365 tappable squares), having full control over every pixel matters.
The hot reload cycle is genuinely as good as people say. I could tweak an animation curve, save, and see the result in under a second. For a solo dev iterating on feel and polish, that speed compounds fast.
The tradeoff: platform-specific plugins can be rough. Speech-to-text, local notifications, and biometric auth all required platform channel debugging that ate more hours than I'd like to admit.
Why Riverpod over Bloc or Provider
I evaluated all three seriously. Provider is simpler but doesn't scale well when you have computed state that depends on multiple sources. Bloc is powerful but the boilerplate for events and states felt heavy for a solo project where velocity matters.
Riverpod with code generation hit the sweet spot. Providers are testable, composable, and the ref.watch pattern makes reactive UI straightforward. My insights screen computes a 30-day rolling mood average, weekly summaries, a Year in Pixels map, and entry mode breakdowns, all derived from a single list of journal entries through chained providers.
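To make the chained-provider idea concrete, here's a minimal sketch of how one of those derived values could be wired up. The model and provider names are illustrative, not Eventide's actual code; it uses plain Riverpod with a hand-written provider rather than riverpod_generator, to keep the example self-contained.

```dart
import 'package:riverpod/riverpod.dart';

class JournalEntry {
  final DateTime date;
  final int moodScore; // e.g. 1-5
  JournalEntry(this.date, this.moodScore);
}

// Single source of truth: the raw entry list.
final entriesProvider = Provider<List<JournalEntry>>((ref) => const []);

// Derived state: 30-day rolling mood average.
// Recomputed automatically whenever entriesProvider changes.
final rollingMoodAverageProvider = Provider<double?>((ref) {
  final cutoff = DateTime.now().subtract(const Duration(days: 30));
  final recent = ref
      .watch(entriesProvider)
      .where((e) => e.date.isAfter(cutoff))
      .toList();
  if (recent.isEmpty) return null;
  final total = recent.fold<int>(0, (sum, e) => sum + e.moodScore);
  return total / recent.length;
});
```

The weekly summaries and Year in Pixels map follow the same shape: each is just another provider that watches the entry list and derives its own view of it.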
One pattern I landed on: every screen handles three states explicitly. Loading, data, and error. No exceptions. This sounds obvious but it's easy to forget error states when you're moving fast, and users notice.
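With flutter_riverpod, that rule falls out of AsyncValue almost for free, since `when` forces you to supply all three branches. A hypothetical screen (widget and provider names are made up for illustration):

```dart
import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

final entriesProvider = FutureProvider<List<String>>((ref) async {
  // In the real app this would load from Hive or Firestore.
  return <String>[];
});

class EntriesScreen extends ConsumerWidget {
  const EntriesScreen({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    // AsyncValue.when requires loading, error, and data handlers,
    // so no screen can silently skip its error state.
    return ref.watch(entriesProvider).when(
          loading: () => const Center(child: CircularProgressIndicator()),
          error: (err, _) => Center(child: Text('Something went wrong: $err')),
          data: (entries) => ListView(
            children: [for (final e in entries) ListTile(title: Text(e))],
          ),
        );
  }
}
```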
The AI architecture (and why the API key never touches the client)
This was the most important architectural decision in the entire project.
Eventide uses Anthropic's Claude API to generate reflections on journal entries. The temptation as a solo dev is to call the API directly from the Flutter app. It's faster to implement, fewer moving parts, ship it and move on.
Do not do this.
Any API key embedded in a mobile app binary can be extracted. It doesn't matter how you obfuscate it. Someone will decompile your APK, find the key, and either abuse your quota or do something worse. I've seen indie devs get surprise bills in the thousands from this exact mistake.
Instead, I set up a Firebase Cloud Function as a proxy. The Flutter app calls the Cloud Function (authenticated via Firebase Auth), the Cloud Function reads the Anthropic API key from environment config, calls Claude, and returns the reflection. The key never leaves the server.
```
Flutter App -> Firebase Auth Token -> Cloud Function -> Claude API -> Response
```
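On the client, that flow is a single call through the cloud_functions package. The function name comes from the post; the payload and response shapes here are assumptions for the sake of the sketch. The callable attaches the Firebase Auth token automatically, so the server can reject unauthenticated requests.

```dart
import 'package:cloud_functions/cloud_functions.dart';

// Hypothetical client-side wrapper around the getAiReflection callable.
// Assumes Firebase has been initialized and the user is signed in.
Future<String> fetchReflection({
  required String entryText,
  required int moodScore,
  required String entryMode,
}) async {
  final callable = FirebaseFunctions.instance.httpsCallable('getAiReflection');
  final result = await callable.call({
    'entryText': entryText,
    'moodScore': moodScore,
    'entryMode': entryMode,
  });
  // Assumed response shape: { "reflection": "..." }.
  final data = Map<String, dynamic>.from(result.data as Map);
  return data['reflection'] as String;
}
```

Notice there is no API key anywhere in this file. The client only ever knows about the Cloud Function.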
Three callable functions handle the AI layer:
getAiReflection — takes entry text, mood score, and entry mode. Switches the system prompt based on whether it's a full entry or a quick check-in. Full entries get a three-sentence reflection (empathy, pattern recognition, open question). Quick check-ins get one to two sentences max.
getWeeklyInsights — fetches the last seven entries, generates a two-sentence pattern summary, caches it in Firestore for 24 hours so repeat loads don't burn API calls.
getDailyPrompt — returns a random prompt from a server-side bank of 60 prompts across six categories. No AI call needed, just a JSON lookup.
The system prompts took more iteration than any other part of the project. Early versions produced generic therapy-speak. The final prompts explicitly ban words like "journey," "validate," and "self-care," and instruct the model to mirror the user's emotional register rather than defaulting to cheerful encouragement.
Offline-first with Hive
Journal apps have to work offline. People write on planes, in bed at 2 AM with bad signal, in waiting rooms with one bar of service.
Every journal entry saves to Hive first, then syncs to Firestore when connectivity returns. The sync flag (isSynced) on each entry tracks what needs uploading. This means the app is fully functional with no internet connection. AI reflections queue for later, but the core writing and mood tracking experience is instant.
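A stripped-down sketch of that save-then-sync flow is below. It stores entries as plain maps to stay adapter-free for brevity; the real app would register a TypeAdapter for its entry model, and the upload callback stands in for a Firestore write.

```dart
import 'package:hive/hive.dart';

/// Local-first save: write to Hive immediately, flag as unsynced.
/// The app stays fully usable with no network at all.
Future<void> saveEntry(Box box, Map<String, dynamic> entry) async {
  entry['isSynced'] = false;
  await box.put(entry['id'], entry);
}

/// Called when connectivity returns: upload anything still unsynced,
/// then flip the flag so repeat runs skip it.
Future<void> syncPending(
    Box box, Future<void> Function(Map<String, dynamic>) upload) async {
  for (final key in box.keys) {
    final entry = Map<String, dynamic>.from(box.get(key));
    if (entry['isSynced'] == true) continue;
    await upload(entry); // e.g. a Firestore set() in the real app
    entry['isSynced'] = true;
    await box.put(key, entry);
  }
}
```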
I chose Hive over SQLite for simplicity. It's a key-value store that handles Dart objects natively through generated type adapters. For a data model as simple as journal entries, it's more than enough.
RevenueCat for subscriptions
Implementing in-app subscriptions from scratch across both platforms is a notorious time sink. RevenueCat abstracts the store-specific APIs into a single SDK. I defined two products: monthly at $4.99 and annual at $29.99, both with a three-day free trial.
The free tier gives users three journal entries with AI reflections. After that, the paywall appears with a blurred preview of premium features (insights, Year in Pixels, weekly summaries). The blur is intentional. People need to see what they're missing, not just be told about it.
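The gate itself can be a few lines: check the local counter first, and only ask RevenueCat about entitlements once the free allowance is spent. The entitlement identifier and counter below are assumptions, not Eventide's actual identifiers.

```dart
import 'package:purchases_flutter/purchases_flutter.dart';

const int freeReflectionLimit = 3;

// Pure rule, cheap to test: is the user still inside the free tier?
bool withinFreeTier(int reflectionsUsed) =>
    reflectionsUsed < freeReflectionLimit;

/// Returns true if the user may request another AI reflection;
/// false means it's time to show the paywall.
Future<bool> canRequestReflection(int reflectionsUsed) async {
  if (withinFreeTier(reflectionsUsed)) return true;
  final info = await Purchases.getCustomerInfo();
  // 'premium' is a hypothetical entitlement id configured in RevenueCat.
  return info.entitlements.active.containsKey('premium');
}
```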
Lessons from shipping solo
Scope is everything. My original feature list was twice as long. I cut printed book export, multiplayer shared journals, and three other features that sounded great but would have delayed launch by months. Ship the core, iterate from feedback.
Design polish is not optional. Users judge apps in the first five seconds. I spent meaningful time on the onboarding flow, the animated lotus logo, the glassmorphic card effects, and the completion celebration screen. These don't affect functionality. They absolutely affect whether someone keeps the app installed.
Test on real devices early. The speech-to-text feature worked perfectly in the emulator and crashed on three different Android phones. Platform channel issues are real and you won't find them in simulation.
Your build environment will fight you. My project path had spaces in it, which broke a native build hook. I ended up using subst drives on Windows to work around it. Embarrassing but true.
What's next
The app is in closed testing on Google Play now. Firebase Auth and Firestore sync are being wired up to replace the local stubs. The Cloud Functions are deployed and tested; they're just waiting to be called from the client.
If you're a solo dev considering a similar project, my honest advice: pick a stack you can debug alone at midnight, design for offline from day one, and never put an API key in your client code.
Eventide: Journal & Mood is live on Google Play.
Check out the landing page at reflektapp.net
I'd love to hear from other indie devs building in this space. Find me in the comments or on the site.