Hemang Murugan
Building Drip: An AI-Powered Wardrobe and Outfit Recommender

The intersection of artificial intelligence and personal lifestyle applications is rapidly expanding. While large language models (LLMs) are often utilized for chat interfaces, their potential within programmatic, data-driven applications is vast. In our project, Drip, we sought to build an AI-powered wardrobe assistant that isn't just a gimmick, but a functional, comprehensive full-stack application. Drip allows users to upload their clothing items, automatically categorizes them using computer vision, and acts as a personalized stylist by generating outfits based on current weather, user mood, and past outfit history.

Building Drip required the integration of modern web frameworks, strict database architecture, and complex integrations with multiple AI models. In this post, we will dissect the architecture, explore the challenges we encountered, detail the AI modalities employed, and discuss the rigorous engineering practices we utilized to construct the final product.


1. Architectural Foundation and Tech Stack

When designing the foundational architecture, our goal was a robust, scalable, and type-safe ecosystem. We selected Next.js 16 (React 19) with the App Router for the front end, paired with Supabase (PostgreSQL) for the backend.

The Next.js Paradigm

Next.js allowed us to bridge the gap between frontend interactivity and backend logic seamlessly. Unlike traditional single-page applications (SPAs) that suffer from massive client-side JavaScript bundles and slow time-to-interactive (TTI) metrics, Next.js utilizes Server Components. For Drip, this meant we could securely process sensitive operations—such as calling the Google Gemini API or querying Supabase—entirely on the server before shipping a finalized HTML document to the client.

To handle data mutations, such as a user logging an outfit or adding a new shirt to their wardrobe, we leaned heavily into Next.js Server Actions. In our lib/actions/ directory, form submissions are naturally intercepted by asynchronous server functions. This negated the need for building intermediary REST APIs for our own app's internal state management. Server Actions, combined with Next.js's revalidatePath(), provided an elegant way to update the database and instantly reflect the new state in the user's dashboard without forcing full-page reloads.
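The Server Action pattern described above can be sketched in miniature. This is illustrative, not Drip's actual code: the in-memory array stands in for the clothing_items table, and the stubbed revalidatePath stands in for the real import from 'next/cache'.

```typescript
// Minimal sketch of a Server Action for adding a wardrobe item.
// In the real app, insertItem would be a Supabase insert and
// revalidatePath would come from 'next/cache'.
type NewClothingItem = { name: string; type: string; color: string };

const wardrobe: NewClothingItem[] = []; // stand-in for the clothing_items table

async function insertItem(item: NewClothingItem): Promise<void> {
  wardrobe.push(item);
}

function revalidatePath(_path: string): void {
  // Next.js would purge the cached dashboard route here.
}

// In Next.js this function would carry the 'use server' directive and be
// passed directly to a <form action={addClothingItem}> submission.
async function addClothingItem(formData: FormData): Promise<NewClothingItem> {
  const item: NewClothingItem = {
    name: String(formData.get('name') ?? ''),
    type: String(formData.get('type') ?? ''),
    color: String(formData.get('color') ?? ''),
  };
  if (!item.name) throw new Error('A clothing item needs a name');
  await insertItem(item);
  revalidatePath('/dashboard'); // refresh the dashboard without a full reload
  return item;
}
```

Because the form posts straight to the server function, no intermediate REST endpoint or client-side fetch wrapper is needed.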

Supabase and Row Level Security (RLS)

Security in a wardrobe app is surprisingly critical. Users do not want their personal clothing logs exposed to the public internet. Supabase, built on top of PostgreSQL, offered not only a high-performance relational database but also a sophisticated Auth system.

We designed three primary database tables:

  1. profiles: Storing user metadata, onboarding lifestyle preferences, location coordinates, and weather sensitivity constraints.
  2. clothing_items: A comprehensive catalog of a user’s garments, tracking type, color, warmth ratings, and seasons.
  3. outfit_logs: A transactional history table linking a timestamp and a mood specifically to an array of clothing item UUIDs.

To secure this, we implemented strict Row Level Security (RLS) policies. RLS operates at the database level rather than the application level. By writing policies directly in SQL, such as CREATE POLICY "Users can only view their own items" ON clothing_items FOR SELECT USING (auth.uid() = user_id);, we guaranteed that even if an API endpoint was compromised or poorly authored, the database engine itself would refuse to return data not explicitly owned by the requesting user.


2. Leveraging Artificial Intelligence Modalities

The assignment challenged us to integrate AI not just as a feature, but as an intrinsic part of the development lifecycle. We achieved this by utilizing three distinct modalities: Claude Web, Antigravity (IDE), and Google Gemini API.

Modality 1: Claude Web (Architectural Planning)

Before writing a single line of code, we engaged with Claude via a web chat interface to validate our system design. Designing a relational schema that could adequately support complex queries (e.g., "Find all outerwear items worn less than twice in the past month suitable for cold weather") required foresight. Claude Web acted as our senior database architect. We provided our initial Entity Relationship Diagram (ERD), and Claude refined our indexing strategies and helped untangle the logic required for the RLS policies, ensuring our data layer was bulletproof before we began implementation.

Modality 2: Antigravity IDE (Contextual Generation & Testing)

During rapid development, context-switching between a browser and an IDE severely degrades velocity. We utilized Antigravity, an AI agent natively integrated into our development environment, to operate as a hyper-capable pair programmer.

Antigravity excelled in three distinct areas:

  • Boilerplate Reduction: The Next.js App Router demands significant boilerplate when configuring custom caching strategies or creating complex Tailwind CSS layouts. Antigravity generated structural React components that adhered strictly to our predefined design tokens, allowing the human developers to focus on granular business logic.
  • Test Generation: Achieving high test coverage is notoriously tedious. We tasked Antigravity with analyzing our utility functions (like the filterWardrobe algorithm) and our backend server actions. It successfully drafted over 14 distinct test suites using Vitest, isolating boundary conditions and creating mocked Supabase database clients.
  • Refactoring: When migrating our database fetches to Next.js server actions, Antigravity swiftly refactored existing endpoints to direct function calls without breaking the surrounding architecture.

Modality 3: Gemini API (Programmatic Intelligence)

The hallmark features of Drip rely heavily on Google's Gemini models functioning as programmatic operators inside our application endpoints. We implemented two distinct AI-driven workflows:

1. Vision Analysis (/api/analyze-clothing)
When a user uploads a new piece of clothing, manually categorizing attributes like "warmth rating" or "formality" introduces heavy friction. We utilized the gemini-1.5-flash model specifically for its multi-modal vision capabilities.
Our server action intercepts the uploaded image, converts it into a Base64 payload, and forwards it to Gemini alongside an explicit, strict prompt:

"Analyze this clothing item image and return a JSON object with these exact fields: name, type, color, warmth_rating, seasons, formality, sub_type. Return ONLY the JSON object, no other text."

This allows Drip to function as a smart ingestion engine. A simple photo of a jacket becomes a deeply categorized database entry, heavily minimizing user churn during onboarding.
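In practice, models sometimes wrap their output in markdown code fences despite a "JSON only" instruction, so the response needs defensive parsing before it becomes a database row. A sketch of such a parser (the helper name is ours; the fields mirror the prompt above):

```typescript
// The shape requested from Gemini in the vision prompt.
interface ClothingAnalysis {
  name: string;
  type: string;
  color: string;
  warmth_rating: number;
  seasons: string[];
  formality: string;
  sub_type: string;
}

// Gemini occasionally wraps JSON in markdown fences (three backticks).
const FENCE = '`'.repeat(3);

function stripFences(raw: string): string {
  let s = raw.trim();
  if (s.startsWith(FENCE)) {
    s = s.slice(FENCE.length).replace(/^json\s*/i, '');
  }
  if (s.endsWith(FENCE)) {
    s = s.slice(0, -FENCE.length);
  }
  return s.trim();
}

function parseClothingAnalysis(raw: string): ClothingAnalysis {
  const data = JSON.parse(stripFences(raw)) as Record<string, unknown>;
  const required = [
    'name', 'type', 'color', 'warmth_rating', 'seasons', 'formality', 'sub_type',
  ];
  for (const field of required) {
    if (!(field in data)) {
      throw new Error(`Gemini response missing field: ${field}`);
    }
  }
  return data as unknown as ClothingAnalysis;
}
```

Validating the field list up front means a malformed model response fails loudly at ingestion rather than silently producing a half-populated wardrobe entry.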

2. The Stylist Engine (/api/generate-outfit)
The core functionality of Drip resides in its outfit recommendation engine. Suggesting an outfit requires synthesizing numerous dynamic parameters. Our backend executes a complex parallel data fetch: grabbing the user's profile (for lifestyle preferences), querying the clothing_items table (for available wardrobe), fetching outfit_logs (to prevent suggesting the exact same shirt worn yesterday), and querying the OpenWeather API (for immediate localized meteorological constraints).

Once synthesized, we run a pre-filtering algorithm locally. This was a critical software engineering decision: relying entirely on an LLM to filter 100+ clothing items is computationally expensive and prone to hallucination. Instead, our deterministic algorithm strips out "winter coats" if the OpenWeather API reports 30°C.
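The deterministic pre-filter can be illustrated as a pure function. The thresholds and warmth scale below are assumptions for the sketch, not Drip's actual values:

```typescript
// warmth_rating: 1 (light layer) through 5 (heavy winter coat) - illustrative scale.
interface WardrobeItem {
  id: string;
  type: string;
  warmth_rating: number;
}

// Deterministic pre-filter: drop items whose warmth is clearly wrong for the
// reported temperature before anything reaches the LLM.
function preFilterByTemperature(items: WardrobeItem[], tempC: number): WardrobeItem[] {
  return items.filter((item) => {
    if (tempC >= 25 && item.warmth_rating >= 4) return false; // no winter coats in the heat
    if (tempC <= 5 && item.warmth_rating <= 1) return false;  // no ultralight layers in the cold
    return true;
  });
}
```

Shrinking the candidate set this way also keeps the prompt payload small, which reduces both token cost and the surface area for hallucinated suggestions.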

The filtered subset is then packed into a payload and sent to the gemini-2.0-flash model. The prompt instructs the model to act as a personal stylist, strictly adhering to an output format that returns exact item UUIDs alongside a "reasoning" string explaining why the selected pieces work well together. By offloading the artistic combination logic to the LLM while retaining the deterministic filtering logic in our code, we achieved a highly resilient and reliable feature.
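Since the model is only ever shown the pre-filtered subset, its output can be checked against that subset before anything reaches the UI. A sketch of that guard (field names are assumptions):

```typescript
// Assumed shape of the stylist model's structured response.
interface StylistResponse {
  item_ids: string[];
  reasoning: string;
}

// Every UUID the model returns must exist in the subset we actually sent it,
// so a hallucinated ID can never surface in a recommended outfit.
function validateOutfit(resp: StylistResponse, allowedIds: Set<string>): string[] {
  const valid = resp.item_ids.filter((id) => allowedIds.has(id));
  if (valid.length === 0) {
    throw new Error('Stylist response contained no usable item IDs');
  }
  return valid;
}
```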


3. Integrating Third-Party Context: The Weather Engine

An outfit recommendation app that does not understand the weather is inherently flawed. Building a robust weather pipeline was one of the most mechanically challenging aspects of this project.

Our /api/weather endpoint had to serve three distinct operational states:

  1. Real-time current conditions: Required to tell the user what it feels like outside now.
  2. Forecast arrays: Required for the UI dashboard to display upcoming weather in an interactive carousel.
  3. Historical lookbacks: To accurately log an outfit from three days ago, the app needs to know the weather from three days ago.

We initially integrated strictly with the OpenWeather API. However, OpenWeather places historical data lookbacks behind an expensive enterprise paywall. Instead of compromising the feature, we engineered a hybrid API architecture. If the endpoint receives a request for the present day, it interfaces with OpenWeather, utilizing robust edge-caching (stale-while-revalidate) headers to avoid rate limits. If the endpoint receives a specific UNIX timestamp (dt), we intercept the request and dynamically route it to the Open-Meteo Archive API, which provides free historical data.
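The routing decision itself reduces to a small pure function. The one-day freshness window below is an illustrative assumption, not necessarily Drip's exact cutoff:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Present-day requests go to OpenWeather; requests carrying a historical
// UNIX timestamp (dt, in seconds) are diverted to the Open-Meteo archive.
function pickWeatherProvider(
  dt?: number,
  now: number = Date.now(),
): 'openweather' | 'open-meteo' {
  if (dt === undefined) return 'openweather'; // no timestamp: current conditions
  const ageMs = now - dt * 1000;
  return ageMs > DAY_MS ? 'open-meteo' : 'openweather';
}
```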

We then wrote intermediate transformation layers to normalize the WMO weather codes used by Open-Meteo into the standard icon strings consumed by our frontend components, ensuring a seamless user experience regardless of which upstream data provider fulfilled the request.
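That normalization layer can be sketched as a simple mapping. The buckets follow the published WMO code groups, but the icon names and exact groupings here are illustrative:

```typescript
// Collapse Open-Meteo WMO weather codes into the icon strings the frontend
// already renders for OpenWeather responses.
function wmoToIcon(code: number): string {
  if (code === 0) return 'clear';
  if (code >= 1 && code <= 3) return 'clouds';          // partly cloudy to overcast
  if (code === 45 || code === 48) return 'fog';
  if ((code >= 51 && code <= 67) || (code >= 80 && code <= 82)) return 'rain'; // drizzle, rain, showers
  if ((code >= 71 && code <= 77) || code === 85 || code === 86) return 'snow';
  if (code >= 95) return 'thunderstorm';
  return 'clouds'; // safe default for unmapped codes
}
```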


4. Engineering Practices: CI/CD, Testing, and Quality Assurance

Deploying an application is a trivial task; maintaining one requires discipline. To ensure Drip conforms to professional engineering standards, we established a rigorous Continuous Integration (CI) and Deployment (CD) pipeline.

The CI Pipeline

We orchestrated our CI pipeline via GitHub Actions (.github/workflows/ci.yml). This pipeline intercepts every push and pull_request aimed at our primary branches, enforcing a strict gated requirement before code is allowed to deploy. The CI runner executes:

  • Linting & Typechecking: ESLint and TypeScript compilation ensure that dead code, loosely typed variables, and syntactical bugs are caught immediately.
  • Security Scanning: npm audit combined with the Snyk Security Action parses our dependency tree to flag high-severity vulnerabilities in our third-party Node modules.
  • Automated Testing: Finally, the pipeline runs our Vitest and Playwright suites, capturing the coverage data.
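A workflow implementing those gates might look like the following. This is a sketch, not Drip's exact ci.yml; job names, action versions, and script names are assumptions:

```yaml
name: CI
on:
  push:
    branches: [main, staging]
  pull_request:

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint            # ESLint
      - run: npx tsc --noEmit        # typechecking only, no build output
      - run: npm audit --audit-level=high
      - run: npm run test -- --coverage
```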

Test Architecture

Manually QA-ing a complex state machine is impractical. Drip employs a testing architecture that spans the testing pyramid, hitting 99.35% line coverage.

  • Unit Tests: By mocking out the Supabase network layer entirely using Vitest's vi.mock(), we validated every branch of our localized deterministic algorithms, particularly testing how the outfit-engine.ts mathematically processes consecutive wear days.
  • Integration Tests: The API routes connecting to Gemini and OpenWeather were explicitly tested against mock upstream failures. We explicitly wrote tests asserting that if Gemini goes offline or returns a 500 Internal Server Error, Drip gracefully degrades to a localized rule-based fallback algorithm rather than crashing the user interface.
  • E2E Tests: Finally, we utilized Playwright to spin up a headless Chromium browser instance. Playwright clicks through the app and simulates a user typing into authentication fields, ensuring that the final DOM renders correctly under real-world conditions.
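The graceful-degradation path the integration tests assert on can be expressed as a small wrapper. Names here are illustrative; in the real app the first argument would be the Gemini call and the second the rule-based picker:

```typescript
// If the LLM call rejects (network failure, 500, malformed output), fall back
// to a deterministic rule-based picker instead of crashing the UI.
async function generateOutfitSafe(
  callGemini: () => Promise<string[]>,
  ruleBasedFallback: () => string[],
): Promise<{ itemIds: string[]; source: 'gemini' | 'fallback' }> {
  try {
    return { itemIds: await callGemini(), source: 'gemini' };
  } catch {
    return { itemIds: ruleBasedFallback(), source: 'fallback' };
  }
}
```

A test for this behavior only needs to inject a rejecting stub and assert that the fallback result, not an exception, reaches the caller.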

The Staging Pipeline

Testing in production is a cardinal sin. We configured Vercel alongside our GitHub repository to create a Multi-Stage Pipeline. The main branch acts as production, natively hooking into our primary drip-azure.vercel.app URL and interacting directly with our live Supabase databases.

However, we designated a standalone staging branch. In Vercel, we utilized Environment Variable Overrides mapped strictly to the Preview target for this specific branch. When a developer creates a Pull Request originating from or merging into staging, Vercel intercepts the webhook, spins up an isolated, ephemeral web server, and injects staging-specific database URLs. This creates a secure, sandboxed testing environment where the entire application, including Gemini integrations and Supabase operations, can be evaluated extensively without risking the destruction or corruption of real user data.


5. Conclusion

Building Drip elevated our engineering toolkit far beyond simple web development. It required harmonizing relational database architecture, reactive user interfaces, and external data services. Most profoundly, it demonstrated that integrating Artificial Intelligence requires thoughtful architectural constraints. By offloading generic deterministic tasks to our Next.js backend and dedicating the AI exclusively to specialized, complex operations such as multi-modal vision parsing and artistic outfit synthesis, we successfully built a resilient, professional, full-stack application. Drip is not simply an interface wrapper around an LLM; it is a meticulously engineered software system augmented by AI.
