<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Corporeal</title>
    <description>The latest articles on DEV Community by Corporeal (@corporeal).</description>
    <link>https://dev.to/corporeal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819218%2F74acbc2a-02ba-474b-8e0d-171836b05aec.png</url>
      <title>DEV Community: Corporeal</title>
      <link>https://dev.to/corporeal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/corporeal"/>
    <language>en</language>
    <item>
      <title>How I Built a Multimodal AI Virtual Stager with the Gemini API and Cloud Run</title>
      <dc:creator>Corporeal</dc:creator>
      <pubDate>Thu, 12 Mar 2026 03:50:10 +0000</pubDate>
      <link>https://dev.to/corporeal/how-i-built-a-multimodal-ai-virtual-stager-with-the-gemini-api-and-cloud-run-1cjg</link>
      <guid>https://dev.to/corporeal/how-i-built-a-multimodal-ai-virtual-stager-with-the-gemini-api-and-cloud-run-1cjg</guid>
      <description>&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/t02YKAJ_APc"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h3&gt;
  The Problem with Empty Rooms
&lt;/h3&gt;

&lt;p&gt;If you have ever tried to sell a house, you know the golden rule: empty homes sit on the market longer and sell for less. Buyers struggle to visualize the potential of a blank space. &lt;/p&gt;

&lt;p&gt;As a Senior Data Scientist who is &lt;em&gt;also&lt;/em&gt; fully licensed in real estate and insurance (yes, I can build the AI to stage your house, legally sell it to you, and write the insurance policy all in one go 😂), I knew there had to be a better, cheaper way than paying thousands of dollars for physical furniture staging.&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;Gemini Live Agent Challenge&lt;/strong&gt;, I decided to build the &lt;strong&gt;Open House AI Storyteller&lt;/strong&gt;—a full-stack, multimodal AI agent that takes a simple photo of an empty room and instantly generates a photorealistic staged image, a compelling marketing narrative (with a built-in Feng Shui expert mode!), and a studio-quality audio voiceover.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of the architecture, the tech stack, and the biggest roadblock I hit while orchestrating multiple AI models in Node.js.&lt;/p&gt;




&lt;h3&gt;
  🛠️ The Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; React, Vite, Tailwind CSS, Framer Motion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js, Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; Supabase (for lead capture and deduplication)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; Google Cloud Run&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI &amp;amp; APIs:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Gemini 3.1 Flash Image (for pixel-perfect virtual staging)&lt;/li&gt;
&lt;li&gt;Gemini 2.5 Flash (for rapid text generation and Feng Shui analysis)&lt;/li&gt;
&lt;li&gt;Google Cloud Text-to-Speech (for the audio tour)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  🏗️ The Architecture: Splitting the Brain
&lt;/h3&gt;

&lt;p&gt;My initial approach was to send a single, massive prompt to the &lt;code&gt;gemini-3.1-flash-image-preview&lt;/code&gt; model, asking it to both draw the staged room &lt;em&gt;and&lt;/em&gt; write the marketing copy. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The result?&lt;/strong&gt; A frozen server and an infinite loading spinner on my frontend. &lt;/p&gt;

&lt;p&gt;I quickly learned a crucial lesson about multimodal AI: &lt;strong&gt;dedicated image models only output pixels.&lt;/strong&gt; They will completely ignore text-generation instructions, and if your code is &lt;code&gt;await&lt;/code&gt;-ing a text response that never comes, your API call will hang until the server crashes.&lt;/p&gt;
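&lt;p&gt;One defensive pattern for this failure mode (a sketch of the idea, not the post's exact code) is to scan the response for an image part and throw immediately when none is present, instead of awaiting text that will never arrive. The field names below follow the Gemini &lt;code&gt;generateContent&lt;/code&gt; response shape (&lt;code&gt;candidates&lt;/code&gt;, &lt;code&gt;content.parts&lt;/code&gt;, &lt;code&gt;inlineData&lt;/code&gt;):&lt;/p&gt;

```javascript
// Hypothetical helper: pull the generated image out of a Gemini response.
// Adapt the field access to whatever your SDK actually returns.
function extractImagePart(response) {
  const candidates = response.candidates || [];
  for (const candidate of candidates) {
    const parts = (candidate.content || {}).parts || [];
    for (const part of parts) {
      if (part.inlineData) {
        // Base64 image bytes plus a MIME type, e.g. "image/png"
        return part.inlineData;
      }
    }
  }
  // Fail fast instead of hanging on a text response that never comes
  throw new Error('Image model returned no image part');
}
```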

&lt;p&gt;To fix this, I implemented a "split-brain" architecture in my Express backend. I separated the tasks so each model did exactly what it was best at:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: The Vision Request&lt;/strong&gt;&lt;br&gt;
I used &lt;code&gt;sharp&lt;/code&gt; to compress the user's uploaded image to an optimal size, then sent it to the &lt;code&gt;gemini-3.1-flash-image&lt;/code&gt; model with a strict visual prompt. To protect my Cloud Run instance from hanging on transient network spikes, I wrapped the API call in a custom 60-second timeout function with a smart retry loop.&lt;/p&gt;
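&lt;p&gt;The timeout-plus-retry wrapper can be sketched with &lt;code&gt;Promise.race&lt;/code&gt;. The 60-second limit and the retry loop are from the description above; &lt;code&gt;fn&lt;/code&gt; stands in for the actual Gemini call:&lt;/p&gt;

```javascript
// Reject the wrapped promise if it takes longer than ms milliseconds,
// and always clear the timer so Cloud Run is not kept alive needlessly.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('Request timed out')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Retry the call a few times so a transient network spike does not
// surface as a user-facing failure.
async function callWithRetry(fn, retries = 3, timeoutMs = 60000) {
  let lastError;
  for (let attempt = 0; attempt !== retries; attempt++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err; // transient failure: loop and try again
    }
  }
  throw lastError;
}
```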

&lt;p&gt;&lt;strong&gt;Step 2: The Narrative Request&lt;/strong&gt;&lt;br&gt;
The second the image finished generating, I immediately triggered &lt;code&gt;gemini-2.5-flash&lt;/code&gt;. I passed it the &lt;em&gt;same&lt;/em&gt; base64 image along with a dedicated copywriter prompt. Because 2.5 Flash is multimodal, it could actually "see" the empty room layout and write a highly accurate description of how the new furniture layout maximized Qi flow and adhered to Feng Shui principles.&lt;/p&gt;
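&lt;p&gt;The narrative request pairs the same base64 image with a text prompt in one &lt;code&gt;contents&lt;/code&gt; array, following the Gemini &lt;code&gt;generateContent&lt;/code&gt; format. The prompt wording here is illustrative, not the post's actual copywriter prompt:&lt;/p&gt;

```javascript
// Build the multimodal request for the text model: image part first,
// then the copywriter instructions.
function buildNarrativeRequest(base64Image, mimeType) {
  return {
    model: 'gemini-2.5-flash',
    contents: [
      { inlineData: { mimeType: mimeType, data: base64Image } },
      { text: 'You are a real-estate copywriter and Feng Shui expert. ' +
              'Describe how the staged furniture layout maximizes Qi flow.' }
    ]
  };
}
// With the @google/genai SDK this would be sent roughly as:
//   const response = await ai.models.generateContent(
//     buildNarrativeRequest(imageBase64, 'image/png'));
```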

&lt;p&gt;&lt;strong&gt;Step 3: The Voiceover&lt;/strong&gt;&lt;br&gt;
Finally, the generated text was passed to the Google Cloud TTS API to generate an MP3 buffer, which was sent back to the React frontend alongside the image and the story.&lt;/p&gt;
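&lt;p&gt;The voiceover step maps to a single &lt;code&gt;synthesizeSpeech&lt;/code&gt; call with the &lt;code&gt;@google-cloud/text-to-speech&lt;/code&gt; client. A sketch of the request (the voice name is my assumption; any Neural2 or Studio voice works):&lt;/p&gt;

```javascript
// Build the Text-to-Speech request: plain text in, MP3 bytes out.
function buildTtsRequest(storyText) {
  return {
    input: { text: storyText },
    voice: { languageCode: 'en-US', name: 'en-US-Neural2-F' },
    audioConfig: { audioEncoding: 'MP3' }
  };
}
// With an authenticated TextToSpeechClient the MP3 buffer comes back as:
//   const [response] = await ttsClient.synthesizeSpeech(buildTtsRequest(story));
//   res.json({ audio: response.audioContent.toString('base64') });
```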

</description>
      <category>ai</category>
      <category>googlecloud</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
