<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashmeet</title>
    <description>The latest articles on DEV Community by Ashmeet (@beep_boop).</description>
    <link>https://dev.to/beep_boop</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833823%2Fd774ceef-3e68-4179-9cb6-9e1f3fb116f0.jpg</url>
      <title>DEV Community: Ashmeet</title>
      <link>https://dev.to/beep_boop</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/beep_boop"/>
    <language>en</language>
    <item>
      <title>The Missing Layer Between AI and Design Consistency</title>
      <dc:creator>Ashmeet</dc:creator>
      <pubDate>Fri, 01 May 2026 00:31:37 +0000</pubDate>
      <link>https://dev.to/beep_boop/the-rise-of-vibe-designing-2g6p</link>
      <guid>https://dev.to/beep_boop/the-rise-of-vibe-designing-2g6p</guid>
      <description>&lt;p&gt;By now, "vibe coding" is completely normalized. You describe a thing, AI builds it, you nudge it until it gets it right. Nobody bats an eye.&lt;/p&gt;

&lt;p&gt;But what about &lt;em&gt;vibe designing&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;The idea has come up in cycles. AI-generated UI, prompt-driven mockups, no-Figma workflows. Every time it almost gets traction, it fizzles. The outputs are too generic, you lose control of consistency, or it only works for throwaway prototypes you'd never ship.&lt;/p&gt;

&lt;p&gt;I've been poking at Google Stitch lately, and something about it finally made the concept feel workable. Specifically &lt;code&gt;DESIGN.md&lt;/code&gt;, a spec format that quietly reframes how AI and design systems talk to each other. Google open-sourced it last week under Apache 2.0, so any agent that writes UI code can use it, not just Stitch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Even Is Google Stitch?
&lt;/h2&gt;

&lt;p&gt;Stitch is Google's AI-native design tool. You describe an app (vibe, colors, features), it generates screens, you iterate on them with natural language. There's code export, a component system, an MCP server, and agent skills that plug into your existing coding setup.&lt;/p&gt;

&lt;p&gt;Earlier this year they shipped a significant update: an AI-native infinite canvas, a smarter design agent that reasons across entire projects, voice input, multi-screen generation, and design system support including &lt;code&gt;DESIGN.md&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Markdown File as Your Design Contract
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;DESIGN.md&lt;/code&gt; isn't merely a style guide you write once and forget. It's a machine-readable contract between your design intent and whatever AI is building your UI.&lt;/p&gt;

&lt;p&gt;You export it from Stitch, and it contains your color tokens, spacing values, rounding rules, typography - everything. The tokens live in YAML front matter, followed by prose sections, all inside one markdown file. It's readable by humans &lt;em&gt;and&lt;/em&gt; by agents, and it works with Claude Code, Cursor, Copilot, Gemini CLI - anything that writes UI code.&lt;/p&gt;

&lt;p&gt;The practical upside: changes to your design propagate automatically. Non-developers can update the design in Stitch without touching the codebase. It's also useful for catching drift, e.g., components that have wandered from the source of truth show up clearly when you have a spec to check against.&lt;/p&gt;
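&lt;p&gt;Since the tokens sit in YAML front matter, that drift check is easy to script yourself. Here's a minimal sketch - the token names mirror the export from my test project, but the parsing and the check are my own, not an official Stitch API, and the hand-rolled parser only handles flat key-value pairs:&lt;/p&gt;

```javascript
// Minimal drift check: flatten the DESIGN.md front matter's color tokens
// and flag any hardcoded hex values in source that aren't in the palette.
// (Hand-rolled parsing for flat "key: value" pairs only, not full YAML.)
function parseFrontMatter(md) {
  const m = md.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return {};
  const tokens = {};
  let section = null;
  for (const line of m[1].split("\n")) {
    const kv = line.match(/^(\s*)([\w-]+):\s*(.*)$/);
    if (!kv) continue;
    const [, indent, key, value] = kv;
    if (value === "") { section = key; continue; }
    tokens[indent ? `${section}.${key}` : key] = value.replace(/^'|'$/g, "");
  }
  return tokens;
}

function findDrift(tokens, sourceCode) {
  const palette = new Set(
    Object.entries(tokens)
      .filter(([k]) => k.startsWith("colors."))
      .map(([, v]) => v.toLowerCase())
  );
  // Any hex literal in the code that isn't a known token is "drift".
  return (sourceCode.match(/#[0-9a-fA-F]{6}/g) || [])
    .filter(hex => !palette.has(hex.toLowerCase()));
}
```

&lt;p&gt;Run it over your components in CI and components that have wandered from the spec surface as a list of rogue hex values.&lt;/p&gt;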

&lt;h2&gt;
  
  
  Part 1: The Static Export
&lt;/h2&gt;

&lt;p&gt;I started with the simplest path. Export &lt;code&gt;DESIGN.md&lt;/code&gt; from a Stitch project (a toy music player) and hand it to Gemini CLI as context in the project root.&lt;/p&gt;

&lt;p&gt;A trimmed look at the generated DESIGN.md:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Lumina Audio&lt;/span&gt;
&lt;span class="na"&gt;colors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;surface&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#12121d'&lt;/span&gt;
  &lt;span class="na"&gt;surface-dim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#12121d'&lt;/span&gt;
&lt;span class="na"&gt;typography&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;display-lg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;fontFamily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Be Vietnam Pro&lt;/span&gt;
    &lt;span class="na"&gt;fontSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;48px&lt;/span&gt;
&lt;span class="na"&gt;rounded&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;sm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.25rem&lt;/span&gt;
  &lt;span class="na"&gt;DEFAULT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.5rem&lt;/span&gt;
&lt;span class="na"&gt;spacing&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;unit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8px&lt;/span&gt;
  &lt;span class="na"&gt;container-padding&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;32px&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="c1"&gt;## Brand &amp;amp; Style&lt;/span&gt;

&lt;span class="s"&gt;This design system is built for an immersive, high-fidelity desktop music experience. It leverages **Glassmorphism** to create a sense of depth and airiness, making the interface feel like a digital lens over a living, breathing soundscape.&lt;/span&gt; 

&lt;span class="c1"&gt;## Colors&lt;/span&gt;

&lt;span class="s"&gt;The palette is rooted in a deep, nocturnal neutral to allow vibrant accents to pop. The primary, secondary, and tertiary colors are designed to be used within mesh gradients for the application background, creating a "lava lamp" effect that shifts behind the frosted glass panels.&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="nv"&gt;*Display&lt;/span&gt; &lt;span class="s"&gt;Type:** Large headlines use a tighter letter spacing and heavy weights to anchor the layout against the soft glass backgrounds.&lt;/span&gt;

&lt;span class="c1"&gt;## Layout &amp;amp; Spacing&lt;/span&gt;

&lt;span class="s"&gt;The layout follows a **Fluid Grid** model with high-margin "safe zones" to allow the background gradients to frame the content.&lt;/span&gt; 

&lt;span class="c1"&gt;## Elevation &amp;amp; Depth&lt;/span&gt;

&lt;span class="s"&gt;Depth is not communicated through traditional shadows, but through **cumulative backdrop blurring** and **border luminosity**.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The question: could the agent generate a UI just from the spec file, without seeing the actual Stitch screens?&lt;/p&gt;

&lt;p&gt;Short answer: kind of.&lt;/p&gt;

&lt;p&gt;The agent respected the design system. Colors, spacing, and typography all came through correctly. But it didn't reproduce the screens. It's like handing someone the bricks from your house and expecting them to rebuild the house without ever having seen it.&lt;/p&gt;

&lt;p&gt;There's a gap between "follows the rules" and "knows what the layout looks like." &lt;code&gt;DESIGN.md&lt;/code&gt; tells the agent &lt;em&gt;how things should look&lt;/em&gt;, not &lt;em&gt;what things should exist&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uyn45e6yuus4rvuze2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uyn45e6yuus4rvuze2q.png" alt="Gemini CLI's attempt at recreation with DESIGN.md alone" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if you're hoping the file alone bridges design and code, it gets you maybe 60% of the way there. The tokens are right. The vibe is right. The actual layout structure? That's not in the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2: Adding the MCP
&lt;/h2&gt;

&lt;p&gt;To close that gap, I connected Stitch directly to Gemini CLI via MCP. This is the difference between handing the agent a style guide and giving it actual eyes on your project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Get a Stitch API key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://stitch.withgoogle.com/" rel="noopener noreferrer"&gt;stitch.withgoogle.com&lt;/a&gt;, sign in, open profile settings, and create a key under the API Keys section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Add the MCP server&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gemini mcp add stitch &lt;span class="nt"&gt;--transport&lt;/span&gt; http https://stitch.googleapis.com/mcp &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s2"&gt;"X-Goog-Api-Key: YOUR_API_KEY"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Verify it connected&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Restart Gemini CLI. Inside your session, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/mcp list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the Stitch server listed with its available tools. From there, you can prompt it like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stitch, extract the design context from my 'Lumina' project into DESIGN.md.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stitch, give me the React code for the sidebar component.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the MCP was connected, the agent wasn't just following rules from a file. It could query actual layout data from my Stitch project. That's when the generated screens started matching what I'd designed in the Stitch canvas. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv4n92zhk46cpn2um6gu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv4n92zhk46cpn2um6gu.png" alt="Exactly matching the Stitch canvas" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Came Out of It
&lt;/h2&gt;

&lt;p&gt;The refinement loop is where this got genuinely interesting. I changed &lt;code&gt;primary&lt;/code&gt; in &lt;code&gt;DESIGN.md&lt;/code&gt; from &lt;code&gt;#ecb2ff&lt;/code&gt; to &lt;code&gt;#00ffcc&lt;/code&gt;, told the agent to sync, and it updated &lt;code&gt;tailwind.config.js&lt;/code&gt; and the components together. One instruction, consistent everywhere.&lt;/p&gt;
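&lt;p&gt;To give a sense of what that sync amounts to, here's an illustrative sketch of mapping the DESIGN.md tokens into a Tailwind theme extension. The token values mirror my export; the mapping function is my own illustration, not the agent's actual edit:&lt;/p&gt;

```javascript
// Illustrative mapping of DESIGN.md tokens into a Tailwind theme extension.
// Token values mirror my Lumina export; changing `primary` here is the
// single edit the agent then propagated into the components.
const designTokens = {
  colors: { primary: "#00ffcc", surface: "#12121d", "surface-dim": "#12121d" },
  rounded: { sm: "0.25rem", DEFAULT: "0.5rem" },
  spacing: { "container-padding": "32px" },
};

function toTailwindTheme(tokens) {
  return {
    extend: {
      colors: tokens.colors,
      borderRadius: tokens.rounded,
      spacing: tokens.spacing,
    },
  };
}
```

&lt;p&gt;Because the config is derived from the tokens rather than hand-edited, there's exactly one place for a color to live, which is what keeps the agent's regenerations consistent.&lt;/p&gt;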

&lt;p&gt;Here's the result of adding a fresh page that still convincingly respects the app's design system:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwebr8vzcl1j946bzntxz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwebr8vzcl1j946bzntxz.png" alt="The result of adding a wrapped-like page that still convincingly respects the design system of the app" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The outputs were coherent in a way that AI-generated UI usually isn't, because there was an actual spec anchoring everything. I prompted, it generated, I nudged, it regenerated. Nothing drifted. The agent always had something to check itself against.&lt;/p&gt;

&lt;p&gt;That's the thing that's been missing from vibe designing. Not better generation, but &lt;em&gt;something to keep the generation consistent&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Couple of Honest Caveats
&lt;/h2&gt;

&lt;p&gt;Stitch works best when you're incremental. "Make the primary button larger and use the brand blue" lands better than "redesign the login screen." One thing at a time, especially early in a project.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DESIGN.md&lt;/code&gt; is also still in a sort of public beta. The spec and token schema are under active development, so things might change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Think This Is Interesting Anyway
&lt;/h2&gt;

&lt;p&gt;Vibe coding caught on because it lowered the floor for building. You didn't need to know every pattern; you could describe what you wanted and iterate toward it.&lt;/p&gt;

&lt;p&gt;Vibe designing always had the same promise but kept stumbling on the same problem: consistency. One-off mockups are easy. A &lt;em&gt;coherent&lt;/em&gt; design system, maintained across an entire app, updated by prompts without breaking things, is hard.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DESIGN.md&lt;/code&gt; is a direct answer to that problem. It gives the AI something to stay consistent against, not just &lt;strong&gt;vibes in, vibes out&lt;/strong&gt;. And because the spec is agent-agnostic, it's not tied to any one tool in your workflow.&lt;/p&gt;

&lt;p&gt;The static export gets you the spec. The MCP gives the agent eyes on your actual design. Together they're the most credible version of vibe designing I've seen so far.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fair warning&lt;/strong&gt;: I'm not a designer or a senior dev; I'm still figuring this out too. If you've tried Stitch, I'd genuinely love to hear how you're using it and what else you're pairing it with!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>design</category>
      <category>googleai</category>
      <category>stitch</category>
    </item>
    <item>
      <title>Are you ready to take the Happy Pill? 💊</title>
      <dc:creator>Ashmeet</dc:creator>
      <pubDate>Sat, 11 Apr 2026 08:02:08 +0000</pubDate>
      <link>https://dev.to/beep_boop/are-you-ready-to-take-the-happy-pill-4nh</link>
      <guid>https://dev.to/beep_boop/are-you-ready-to-take-the-happy-pill-4nh</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Let’s face it. Developers are a moody bunch. You can wallow in your &lt;code&gt;undefined is not a function&lt;/code&gt; errors for only so long before that frown holds you back from providing real shareholder value.&lt;/p&gt;

&lt;p&gt;This is why I built &lt;strong&gt;Happy Pill&lt;/strong&gt;, an enterprise-grade wellness solution that ensures your engineering team maintains peak performance.&lt;/p&gt;

&lt;p&gt;This April, sadness is a privilege, not a right. &lt;strong&gt;KEEP GRINNING&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/2U_Hy7hoVsI"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;It plays exactly one song - &lt;em&gt;Happy&lt;/em&gt; by &lt;em&gt;Pharrell Williams&lt;/em&gt;. This isn't a bug, &lt;strong&gt;seriously&lt;/strong&gt;. Stop smiling? Your listening privilege is immediately revoked until you smile again. That’s your new performance metric. There’s a grace period of 5 seconds at the beginning for a smooth onboarding, so don’t worry!&lt;/p&gt;
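&lt;p&gt;Those rules boil down to a tiny state machine. Here's a sketch of the logic - the function and field names are illustrative, not the actual Happy Pill source, but the timings (5-second grace, instant pause, 15 seconds to coffee) are the app's:&lt;/p&gt;

```javascript
// Compliance rules as a pure state machine (illustrative names): a
// 5-second onboarding grace, playback pauses the moment you stop
// smiling, and 15 seconds of sustained frowning earns you "coffee".
function complianceState({ startedAt, frownSince }, smiling, now) {
  const graceOver = now - startedAt >= 5000; // 5-second onboarding grace
  if (smiling || !graceOver) {
    return { startedAt, frownSince: null, playing: true, coffee: false };
  }
  const since = frownSince ?? now;
  return {
    startedAt,
    frownSince: since,
    playing: false,                // listening privilege revoked
    coffee: now - since >= 15000,  // 15s of non-compliance: coffee time
  };
}
```

&lt;p&gt;Keeping it pure (state in, state out) means the same rules drive both the real smile detector and the manual toggle fallback.&lt;/p&gt;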

&lt;p&gt;👉🏼 &lt;a href="https://happy-pill.pages.dev/" rel="noopener noreferrer"&gt;Live project&lt;/a&gt; (fallback no-api-key version with smile simulated through a toggle)&lt;/p&gt;

&lt;p&gt;And because misery loves company, more than 15 seconds of non-compliance triggers the offer of a cup of coffee. Of course, you're then served a &lt;code&gt;418 I’m a Teapot&lt;/code&gt; error instead. Because even the server is judging you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5i955v3ocd55qwbkv1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5i955v3ocd55qwbkv1b.png" alt="wellness app offering coffee after 15 seconds of no smile detection" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ashmeet-chhabra" rel="noopener noreferrer"&gt;
        ashmeet-chhabra
      &lt;/a&gt; / &lt;a href="https://github.com/ashmeet-chhabra/happy-pill" rel="noopener noreferrer"&gt;
        happy-pill
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Happy Pill&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;This is a submission for the &lt;a href="https://dev.to/beep_boop/are-you-ready-to-take-the-happy-pill-4nh" rel="nofollow"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Getting Started&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Prerequisites&lt;/h3&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Node.js (v18 or later recommended)&lt;/li&gt;
&lt;li&gt;A modern web browser with webcam support&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Installation&lt;/h3&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Clone the repository:&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;git clone https://github.com/ashmeet-chhabra/happy-pill
cd happy-pill
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install dependencies:&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;npm install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set up a Gemini API key for automatic smile detection:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;.env&lt;/code&gt; file and add:
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;VITE_GEMINI_API_KEY=your_api_key_here
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;If you skip this step, the app will automatically enable the &lt;strong&gt;manual smile toggle&lt;/strong&gt; as a fallback, allowing you to control the smile state manually.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;Running the App&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;Start the development server:&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Open your browser and navigate to the local server address provided in the terminal.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Project Structure&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;src/components/&lt;/code&gt; – UI components (Header, WebcamPreview, StatusPanel, ControlsBar, YouTubePlayerPanel, FrownPopup)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/hooks/&lt;/code&gt; – Custom React hooks for camera, smile analysis, and YouTube player logic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/services/&lt;/code&gt; – Service modules for camera, Gemini API, and YouTube integration&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/types/&lt;/code&gt; – TypeScript type definitions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;public/418.html&lt;/code&gt; – Custom…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ashmeet-chhabra/happy-pill" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built This &lt;em&gt;Utopian&lt;/em&gt; Smile Police
&lt;/h2&gt;

&lt;p&gt;The whole system runs on a carefully calibrated feedback loop: camera detects face → AI analyzes smile → playback responds instantly → UI updates to reflect your current compliance status.&lt;/p&gt;
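&lt;p&gt;One tick of that loop looks roughly like this - the dependency names are illustrative stand-ins for the hooks in &lt;code&gt;src/hooks/&lt;/code&gt;, though &lt;code&gt;playVideo&lt;/code&gt;/&lt;code&gt;pauseVideo&lt;/code&gt; are the real YouTube IFrame API calls:&lt;/p&gt;

```javascript
// One tick of the feedback loop, dependency-injected so each stage is
// swappable (illustrative names; the real hooks live in src/hooks/).
async function complianceTick({ captureFrame, analyzeSmile, player, render }) {
  const frame = captureFrame();               // camera detects face
  const smiling = await analyzeSmile(frame);  // AI analyzes the smile
  if (smiling) player.playVideo();            // playback responds instantly
  else player.pauseVideo();
  render(smiling);                            // UI reflects compliance status
  return smiling;
}
```

&lt;p&gt;Injecting the dependencies is also what makes the no-api-key fallback trivial: swap &lt;code&gt;analyzeSmile&lt;/code&gt; for the manual toggle and nothing else changes.&lt;/p&gt;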

&lt;p&gt;&lt;strong&gt;React + TypeScript:&lt;/strong&gt; Because every project deserves type safety, even this one. The types ensure the app fails gracefully, with proper error messages, when it inevitably breaks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YouTube IFrame API:&lt;/strong&gt; To help hold you hostage to Pharrell Williams' &lt;em&gt;Happy&lt;/em&gt; on loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tailwind CSS:&lt;/strong&gt; For that sleek, &lt;em&gt;surveillance-state aesthetic&lt;/em&gt; &lt;strong&gt;uwu&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemini Integration&lt;/strong&gt;: More on that in the next section. (face-api.js who 👎🏼🍅)&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt;: duh&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oo9ep7vz3x7smonb3f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6oo9ep7vz3x7smonb3f9.png" alt="418 I'm a Teapot Error when asked for coffee" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Google AI Usage&lt;/strong&gt;: The heart of this innovation! I used &lt;code&gt;Gemini 3.1 Flash-Lite&lt;/code&gt;, Google AI’s fastest low-latency, high-volume model.&lt;/p&gt;

&lt;p&gt;Using cutting-edge, state-of-the-art AI to monitor your smile is most agreeably the need of the hour, just as it is to scaffold and overengineer an app for a zero-tolerance risk project like this. For that I turned to &lt;code&gt;Gemini Code Assist&lt;/code&gt;, which came in handy right there in my IDE, building this revolution with me.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
