Solo dev, no funding, one app that needed to work offline and think online. Why the architecture ended up the way it did.
I spent the last several months building an AI-powered wardrobe app called Outfii. No cofounders, no funding, no team. Just me, too much chai, and a mass of decisions I wasn't qualified to make.
You photograph your clothes, the app organizes them, and AI helps you figure out what to wear. It's on Google Play now. Here's how it actually went.
The problem that wouldn't leave me alone
Every morning, same thing. Full closet, nothing to wear. I looked it up and apparently most people regularly use about 20% of what they own. The rest just hangs there.
I don't have a fashion background. But "help me combine clothes I already own" felt like something code could handle. Whether I was the right person to build it is still an open question.
Why the app needs two brains
This is the part that shaped every other decision.
Some things need to happen instantly. When you're flipping through outfit options, you can't be waiting on a server to tell you whether navy and olive work together. That feedback loop needs to be under 50ms or it feels broken.
Other things need actual intelligence. Looking at a photo and figuring out "that's a linen shirt, it's dusty rose, semi-formal" requires a vision model. Suggesting what to wear tomorrow based on your wardrobe, the weather, and what you wore this week requires an LLM.
So the app has two brains. One lives on your phone. One lives in the cloud. They do completely different jobs.
The on-device brain handles color analysis, harmony scoring, and outfit compatibility. I tried doing this in Dart first. It was too slow. Color distance calculations in tight loops, converting between color spaces, running harmony checks across every item pair in a wardrobe. Dart isolates helped but added complexity without solving the core problem: CPU-bound math needs compiled code.

I rewrote it in Rust, bridged to Flutter via flutter_rust_bridge. Scoring now runs in ~20-30ms on a mid-range Android phone. The Rust binary adds about 4MB to the APK, which felt worth it.
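The pairwise idea can be sketched in a few lines of Rust. To be clear, this is a minimal illustration, not Outfii's actual algorithm: each item is reduced to a single Lab color, and an outfit's score is the average over all item pairs. Every struct and function name here is made up for the example.

```rust
// Minimal sketch of pairwise outfit scoring (illustrative, not the
// real Outfii engine). Closer colors score as "matching", distant
// ones as "clashing"; the outfit score averages all pairs.

#[derive(Clone, Copy)]
struct Item {
    lab: (f64, f64, f64), // CIE L*a*b* color of the item's dominant swatch
}

/// Euclidean distance in Lab space (CIE76 Delta E).
fn delta_e(a: (f64, f64, f64), b: (f64, f64, f64)) -> f64 {
    ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2) + (a.2 - b.2).powi(2)).sqrt()
}

/// 1.0 for identical colors, falling linearly to 0.0 at a Delta E of
/// 100 (roughly black-to-white).
fn pair_score(a: &Item, b: &Item) -> f64 {
    (1.0 - delta_e(a.lab, b.lab) / 100.0).clamp(0.0, 1.0)
}

/// Average score over every item pair in a candidate outfit.
fn outfit_score(items: &[Item]) -> f64 {
    let mut total = 0.0;
    let mut pairs = 0u32;
    for i in 0..items.len() {
        for j in (i + 1)..items.len() {
            total += pair_score(&items[i], &items[j]);
            pairs += 1;
        }
    }
    if pairs == 0 { 0.0 } else { total / pairs as f64 }
}

fn main() {
    let navy = Item { lab: (13.0, 47.5, -64.8) };
    let olive = Item { lab: (52.0, -12.0, 56.0) };
    let white = Item { lab: (100.0, 0.0, 0.0) };
    println!("outfit score: {:.2}", outfit_score(&[navy, olive, white]));
}
```

The O(n²) pair loop is exactly the kind of tight CPU-bound math that was slow in Dart: scoring every candidate outfit against a few hundred items multiplies fast.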
The scoring algorithm itself went through three complete rewrites. Telling navy from black programmatically is genuinely hard. CIE Delta E gets you close, but perceptual color difference is still messy at the dark end of the spectrum. Your eyes handle this effortlessly. Code does not.
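For reference, here is what the sRGB-to-Lab conversion and CIE76 Delta E mentioned above look like. These are the standard D65 textbook formulas, not Outfii's production code, and the navy-vs-black comparison in `main` is just illustrative.

```rust
// Standard sRGB -> CIE L*a*b* conversion (D65 white point) plus
// CIE76 Delta E. Textbook formulas, not Outfii's production code.

fn srgb_to_linear(c: u8) -> f64 {
    let c = c as f64 / 255.0;
    if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
}

fn srgb_to_lab(r: u8, g: u8, b: u8) -> (f64, f64, f64) {
    let (rl, gl, bl) = (srgb_to_linear(r), srgb_to_linear(g), srgb_to_linear(b));
    // Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point.
    let x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047;
    let y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
    let z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883;
    let f = |t: f64| if t > 0.008856 { t.cbrt() } else { 7.787 * t + 16.0 / 116.0 };
    (116.0 * f(y) - 16.0, 500.0 * (f(x) - f(y)), 200.0 * (f(y) - f(z)))
}

/// CIE76 Delta E: Euclidean distance in Lab space.
fn delta_e(a: (f64, f64, f64), b: (f64, f64, f64)) -> f64 {
    ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2) + (a.2 - b.2).powi(2)).sqrt()
}

fn main() {
    let navy = srgb_to_lab(0, 0, 128);
    let black = srgb_to_lab(0, 0, 0);
    // The math gives a clean answer for ideal pixel values; the hard
    // part is that photo lighting shifts dark pixels all over Lab space.
    println!("navy vs black Delta E: {:.1}", delta_e(navy, black));
}
```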
The cloud brain handles understanding. When you scan a clothing item, an edge function sends the photo to a vision model that identifies type, color, pattern, material. When you ask for outfit suggestions, another function builds context from your wardrobe and passes it to an LLM. Different tasks, different models. Cloud response times vary (2-8 seconds depending on the model and task), which is fine because these aren't real-time interactions.
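The context-building step might look roughly like this. The post doesn't show the real edge function or prompt format, so everything here (the `Item` struct, `build_prompt`, the prompt wording) is an assumption for illustration.

```rust
// Hypothetical sketch of the "build context from the wardrobe, hand
// it to an LLM" step. All names and the prompt format are invented;
// the real edge function is not shown in the post.

struct Item {
    name: String,
    color: String,
    formality: String,
}

fn build_prompt(wardrobe: &[Item], weather: &str, worn_this_week: &[&str]) -> String {
    let mut p = String::from("You are a stylist. Wardrobe:\n");
    for item in wardrobe {
        p.push_str(&format!("- {} ({}, {})\n", item.name, item.color, item.formality));
    }
    p.push_str(&format!("Weather tomorrow: {}\n", weather));
    p.push_str(&format!("Worn this week (avoid repeats): {}\n", worn_this_week.join(", ")));
    p.push_str("Suggest one outfit using only the wardrobe above.");
    p
}

fn main() {
    let wardrobe = vec![Item {
        name: "linen shirt".into(),
        color: "dusty rose".into(),
        formality: "semi-formal".into(),
    }];
    println!("{}", build_prompt(&wardrobe, "31C, humid", &["white kurta"]));
}
```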
The two never overlap. Scoring is always local. Understanding is always cloud. This means the core app works offline, which matters a lot in India where connectivity is unpredictable.
The BYOK question
AI features cost money to run. I'm bootstrapped. Subsidizing API calls for every user isn't sustainable.
So I built a bring-your-own-key system. Users can plug in their own OpenAI or Anthropic API key and get the full AI experience without paying me a subscription. Keys are encrypted on the phone and never touch our servers in plaintext. There are also paid tiers for people who don't want to think about API keys.
This was controversial in my head for a while. "Asking users to get their own API key" sounds like terrible UX. But it turns out there's a niche of technical users who actually prefer this. They like knowing exactly what model runs, what it costs, and that their data goes to the provider they chose. It's not for everyone, but it's a real segment.
Everything lives on your phone first
The wardrobe is stored locally in SQLite. Not as a cache. As the source of truth.
I didn't want the app to break when you lose signal. You should be able to browse your wardrobe, check outfit history, and get scoring results in airplane mode. Cloud sync happens in the background when you're online.
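A local-first schema along these lines might look like the following. Table and column names are illustrative, not Outfii's actual schema; the point is that sync state lives next to the data, so background sync can find unpushed rows without a server round trip.

```sql
-- Illustrative sketch, not Outfii's real schema. The device is the
-- source of truth; synced_at marks rows the background sync has pushed.
CREATE TABLE wardrobe_items (
    id          TEXT PRIMARY KEY,   -- UUID, safe to generate offline
    name        TEXT NOT NULL,
    color_lab   TEXT NOT NULL,      -- serialized L*a*b* values for scoring
    category    TEXT NOT NULL,
    updated_at  INTEGER NOT NULL,   -- unix millis, drives last-write-wins
    synced_at   INTEGER             -- NULL = not yet pushed to the cloud
);
```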
The downside is sync conflicts. Two devices editing the same wardrobe creates problems I'm still working through. Last-write-wins is what I ship with for now, but it's not great when someone adds items on a tablet and a phone simultaneously. Solving this properly is on the list.
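Last-write-wins fits in a few lines, which also makes its failure mode obvious: whichever device wrote last silently wins, and the other edit disappears. `Record` and `merge_lww` are illustrative names, not the app's real sync code.

```rust
// Sketch of last-write-wins merging. Each record carries an
// updated_at timestamp; on sync, the newer copy of each id wins
// and the losing edit is silently discarded.

use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Record {
    id: u32,
    name: String,
    updated_at: u64, // unix millis of the last edit
}

fn merge_lww(local: Vec<Record>, remote: Vec<Record>) -> Vec<Record> {
    let mut by_id: HashMap<u32, Record> = HashMap::new();
    for r in local.into_iter().chain(remote) {
        match by_id.get(&r.id) {
            // Keep the existing copy if it is at least as new.
            Some(existing) if existing.updated_at >= r.updated_at => {}
            _ => {
                by_id.insert(r.id, r);
            }
        }
    }
    let mut out: Vec<Record> = by_id.into_values().collect();
    out.sort_by_key(|r| r.id);
    out
}

fn main() {
    let local = vec![Record { id: 1, name: "jacket (phone edit)".into(), updated_at: 200 }];
    let remote = vec![Record { id: 1, name: "jacket (tablet edit)".into(), updated_at: 100 }];
    // The tablet edit is dropped without any conflict surfaced to the user.
    println!("{:?}", merge_lww(local, remote));
}
```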
What went wrong
I shipped too many features at launch. Wardrobe management, AI outfits, weather integration, trip packing, laundry tracking, wear reminders, style profiles. That's three apps pretending to be one. Should've shipped wardrobe + AI outfits and added the rest over time.
My Play Store screenshots were raw app captures. Status bars visible. Timestamps. Battery icons. No marketing framing. People decide whether to install your app in about two seconds of scrolling, and I gave them nothing to work with. Still fixing this weeks later.
Debugging across the Rust bridge was also painful early on. When something panics in Rust, the error you get on the Flutter side is not always helpful. I spent a full day on a crash that turned out to be a type mismatch in the FFI layer that codegen silently accepted. Added a lot of defensive logging after that.
I also copy-pasted boilerplate across backend functions for months before building a shared utilities layer. Auth middleware, response helpers, error formatting, all duplicated. Embarrassing but honest.
What went right
The blog was a good early bet. I wrote about color theory in fashion, capsule wardrobe math, pattern mixing rules. Technical content at the intersection of fashion and algorithms. Five posts were already bringing in organic search traffic before anyone had even downloaded the app.
The on-device scoring engine was painful to set up but it's a genuine differentiator. Most wardrobe apps send every request to a server. Having instant, offline scoring on a 29MB app feels noticeably better. Users don't know it's Rust running on their phone. They just know it's fast.
Where it's going
Social features are rolling out. Users can share outfit combinations. After that, iOS and a web app.
The developer account is under Clarixo, my parent brand. Outfii is the first product. Bootstrapped, planning to stay that way.
If you want to try it: outfii.in
Play Store: Outfii - AI Wardrobe Stylist
If you're building solo, optimize for decisions you can live with for a while. The architecture won't be perfect. Ship the version that's good enough, then fix the parts that actually hurt.