This is a submission for the Google AI Studio Multimodal Challenge
What I Built
FitThat.Me is an AI-powered virtual dressing room that helps you try on clothes anytime, anywhere — no changing room required.
- Upload a full-body photo of yourself.
- Add images of clothing items.
- Instantly see how the outfit fits your style.
This app removes the uncertainty of online shopping by making it easy to visualize outfits before purchase. Whether you’re at home, commuting, or traveling, you can try on clothes on the go.
Demo
Web deploy: https://fitthatme.netlify.app/ (still being fixed — the free tier is out of quota)
GitHub repo: FitThat.Me repo
Google AI Studio: FitThat.Me
How I Used Google AI Studio
I leveraged Google AI Studio with the Gemini 2.5 multimodal APIs to build the try-on experience:
- Gemini 2.5 Flash Image Preview → for image editing & composition, merging clothing onto the user’s uploaded photo.
- Imagen 4.0 Generate 001 → for generating placeholder product images (in case users don’t have high-quality clothing photos).
These APIs made it possible to handle realistic image overlays while keeping the system lightweight and responsive.
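The composition step above can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: it assumes the `google-genai` Python SDK, a `GEMINI_API_KEY` environment variable, and hypothetical helper names (`build_tryon_prompt`, `try_on`).

```python
# Sketch of the try-on composition call using Gemini 2.5 Flash Image
# Preview. Assumptions: google-genai SDK installed, GEMINI_API_KEY set.
import os

TRYON_MODEL = "gemini-2.5-flash-image-preview"


def build_tryon_prompt(item_name: str) -> str:
    """Instruction sent alongside the two images (illustrative wording)."""
    return (
        f"Dress the person in the first image in the {item_name} "
        "from the second image. Keep the pose, body shape, and "
        "background unchanged; return a single edited photo."
    )


def try_on(person_path: str, clothing_path: str, item_name: str) -> bytes:
    """Sends the user photo + clothing photo to Gemini and returns the
    raw bytes of the first image part in the response."""
    # Imported lazily so the module loads even without the SDK installed.
    from google import genai   # assumption: google-genai SDK
    from PIL import Image      # assumption: Pillow for image loading

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model=TRYON_MODEL,
        contents=[
            Image.open(person_path),
            Image.open(clothing_path),
            build_tryon_prompt(item_name),
        ],
    )
    # The response mixes text and image parts; keep the first image.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            return part.inline_data.data
    raise RuntimeError("Model returned no image part")
```

Sending both images plus a text instruction in one `contents` list is what lets the multimodal model align the clothing to the body in a single call.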
Multimodal Features
- Image Understanding → Detects user body shape and clothing alignment.
- Image Editing & Composition → Fits clothes naturally onto uploaded photos.
- AI-Generated Clothing Previews → Fills in missing product visuals with AI-generated placeholders.
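The placeholder-generation feature can be sketched like this. The model name comes from the post; the prompt wording and function names are illustrative assumptions, not the app's real implementation.

```python
# Sketch of generating a placeholder product image with Imagen 4.0
# when the user has no clothing photo. Assumptions: google-genai SDK,
# GEMINI_API_KEY environment variable.
import os

IMAGEN_MODEL = "imagen-4.0-generate-001"


def placeholder_prompt(item_name: str) -> str:
    """Prompt for a clean product shot (illustrative wording)."""
    return (
        f"Studio product photo of a {item_name} on a plain white "
        "background, front view, soft even lighting, no model."
    )


def generate_placeholder(item_name: str) -> bytes:
    """Asks Imagen for one product image and returns its raw bytes."""
    # Imported lazily so the module loads even without the SDK installed.
    from google import genai  # assumption: google-genai SDK

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    result = client.models.generate_images(
        model=IMAGEEN_MODEL if False else IMAGEN_MODEL,  # see note below
        prompt=placeholder_prompt(item_name),
    )
    return result.generated_images[0].image.image_bytes
```

The generated bytes can then be fed straight into the try-on step, so users without a real product photo still get a preview.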
Together, these multimodal features create a personalized, interactive fitting experience that enhances confidence in styling choices and online shopping.
Team
Yusup Almadani
GitHub: https://github.com/splmdny
Website: https://splmdny.vercel.app/