<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Keble Zhu</title>
    <description>The latest articles on DEV Community by Keble Zhu (@keble_zhu).</description>
    <link>https://dev.to/keble_zhu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887360%2Fd49caee2-e548-4455-ba08-1e0ea5c082b6.png</url>
      <title>DEV Community: Keble Zhu</title>
      <link>https://dev.to/keble_zhu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/keble_zhu"/>
    <language>en</language>
    <item>
      <title>Beyond the Crop: Automating "Ghost Mannequin" Effects with Depth-Aware Inpainting</title>
      <dc:creator>Keble Zhu</dc:creator>
      <pubDate>Mon, 20 Apr 2026 14:19:34 +0000</pubDate>
      <link>https://dev.to/keble_zhu/beyond-the-crop-automating-ghost-mannequin-effects-with-depth-aware-inpainting-362n</link>
      <guid>https://dev.to/keble_zhu/beyond-the-crop-automating-ghost-mannequin-effects-with-depth-aware-inpainting-362n</guid>
<description>&lt;p&gt;&lt;strong&gt;The Struggle of E-commerce Apparel 👕&lt;/strong&gt;&lt;br&gt;
In professional apparel photography, the "Ghost Mannequin" (or "hollow man") effect is the gold standard. It makes clothing look 3D and "worn" without a visible model. Traditionally, this requires hours of manual clipping and compositing of two separate photos (one with the model, one with the garment inside-out).&lt;/p&gt;

&lt;p&gt;At Rewarx Studio AI, we decided this was a perfect problem for Generative AI to solve—but it’s a lot harder than just hitting "Generate."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Technical Challenge: Depth &amp;amp; Occlusion&lt;/strong&gt;&lt;br&gt;
The hardest part isn't removing the model; it's reconstructing what was behind them. Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The inner back of the collar.&lt;/li&gt;
&lt;li&gt;The curvature of the sleeve openings.&lt;/li&gt;
&lt;li&gt;Lighting consistency inside the "hollow" areas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Our 3-Step Pipeline 🛠️&lt;/strong&gt;&lt;br&gt;
To solve this, we moved away from generic inpainting and built a specialized pipeline:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic Masking (SAM):&lt;/strong&gt; We use the Segment Anything Model to precisely isolate the garment. But we don't just mask the model; we predict the "inner" bounds where the mannequin would logically end.&lt;/p&gt;
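
&lt;p&gt;&lt;em&gt;A toy sketch of the "inner bounds" idea, assuming a plain NumPy dilation stands in for the real prediction step (the function name and radius here are made up for illustration; in the actual pipeline this region is predicted, not dilated):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

def expand_mask(mask, radius=2):
    """Grow a binary garment mask by `radius` pixels (4-neighborhood).

    Illustrative stand-in for the 'inner bounds' step: we dilate the
    SAM mask so the inpainter also covers the region where the model
    or mannequin would logically end behind the fabric.
    """
    grown = mask.astype(bool)
    for _ in range(radius):
        shifted = grown.copy()
        for axis in (0, 1):
            for step in (1, -1):
                shifted = np.logical_or(shifted, np.roll(grown, step, axis=axis))
        grown = shifted
    return grown

# Toy example: a single garment pixel grows into a diamond-shaped region.
m = np.zeros((7, 7), dtype=np.uint8)
m[3, 3] = 1
inner = expand_mask(m, radius=2)
```

&lt;p&gt;The expanded region (13 pixels here) becomes the inpainting mask, so the generator reconstructs slightly more than the visible garment boundary.&lt;/p&gt;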

&lt;p&gt;&lt;strong&gt;Depth Estimation (Depth Anything):&lt;/strong&gt; To make the clothing look 3D and not like a flat sticker, we generate a depth map. This tells the AI: "this collar area is 5cm behind the front zipper," which guides the shading.&lt;/p&gt;
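
&lt;p&gt;&lt;em&gt;A minimal sketch of the shading cue, assuming a normalized relative depth map (in the real pipeline the depth map is fed to the model as conditioning rather than baked into pixel values; the function and its strength parameter are illustrative):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

def depth_to_shading(depth, strength=0.6):
    """Turn a relative depth map into a per-pixel darkening factor.

    Normalizes depth to [0, 1] and darkens the farthest (most
    recessed) regions, roughly the cue described above: the collar
    interior sits behind the zipper, so it receives less light.
    """
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    if span == 0:
        return np.ones_like(d)
    norm = (d - d.min()) / span      # 0 = nearest, 1 = farthest
    return 1.0 - strength * norm     # multiply into the image

# Front zipper (depth 0) keeps full brightness; recessed collar darkens.
depth = np.array([[0.0, 0.0], [5.0, 5.0]])   # cm behind the front plane
shade = depth_to_shading(depth, strength=0.6)
```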

&lt;p&gt;&lt;strong&gt;Context-Aware Inpainting:&lt;/strong&gt; This is where the magic happens. We use a fine-tuned SDXL Inpainting model that understands apparel structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s Talk Prompts (The Precision Part) 🔍&lt;/strong&gt;&lt;br&gt;
For the model to understand an "invisible interior," generic terms fail, so we inject weighted technical descriptors into the text conditioning to guide the texture:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Internal Prompt Logic:&lt;/em&gt;&lt;br&gt;
&lt;code&gt;(3D hollow effect:1.2), (inner garment texture:1.3), invisible mannequin, detailed fabric weave inside collar, consistent studio lighting, photorealistic, 8k, --no floating limbs, --no distorted seams&lt;/code&gt;&lt;/p&gt;
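
&lt;p&gt;&lt;em&gt;A sketch of how such a prompt pair can be assembled programmatically. Note the &lt;code&gt;(term:weight)&lt;/code&gt; syntax follows the common WebUI/Compel convention, and most diffusers pipelines take the &lt;code&gt;--no&lt;/code&gt; terms as a separate negative prompt instead of inline flags; the helper below is illustrative, not our production code:&lt;/em&gt;&lt;/p&gt;

```python
def build_prompts(positives, negatives):
    """Assemble an SDXL-style prompt pair from (term, weight) tuples.

    Returns a positive prompt using '(term:weight)' emphasis syntax
    and a separate negative prompt string, since inpainting pipelines
    typically accept the two as distinct arguments.
    """
    parts = []
    for term, weight in positives:
        if weight == 1.0:
            parts.append(term)
        else:
            parts.append(f"({term}:{weight})")
    return ", ".join(parts), ", ".join(negatives)

prompt, negative = build_prompts(
    [("3D hollow effect", 1.2), ("inner garment texture", 1.3),
     ("invisible mannequin", 1.0), ("detailed fabric weave inside collar", 1.0),
     ("consistent studio lighting", 1.0), ("photorealistic", 1.0), ("8k", 1.0)],
    ["floating limbs", "distorted seams"],
)
```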

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;&lt;br&gt;
We’ve managed to reduce a process that usually takes a senior retoucher 20-30 minutes per image down to under 15 seconds. For a brand with 500 SKUs, that’s a game-changer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s your take?&lt;/strong&gt;&lt;br&gt;
I’m curious: has anyone else in the community experimented with combining depth maps and inpainting for industrial use cases? I’d love to hear your thoughts on maintaining material texture during high-strength inpainting.&lt;/p&gt;

&lt;p&gt;Cheers,&lt;br&gt;
Keble&lt;br&gt;
Founder &lt;a href="https://www.rewarx.com"&gt;@ Rewarx Studio AI&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ghostmannequin</category>
      <category>ai</category>
      <category>productphotography</category>
      <category>imagegenerator</category>
    </item>
    <item>
      <title>Beyond Generic Prompts: How we’re solving CMF Precision in AI Product Photography</title>
      <dc:creator>Keble Zhu</dc:creator>
      <pubDate>Sun, 19 Apr 2026 13:06:31 +0000</pubDate>
      <link>https://dev.to/keble_zhu/beyond-generic-prompts-how-were-solving-cmf-precision-in-ai-product-photography-2g8b</link>
      <guid>https://dev.to/keble_zhu/beyond-generic-prompts-how-were-solving-cmf-precision-in-ai-product-photography-2g8b</guid>
      <description>&lt;p&gt;Hello Dev.to! 👋&lt;br&gt;
I’m Keble, and I’ve spent the last few years navigating the intersection of product management and AI-driven design. After working on mobile utility apps like iPurevia, I’ve recently shifted my focus to a challenge that’s been bugging the e-commerce world: The "hallucination" problem in commercial photography.&lt;/p&gt;

&lt;p&gt;I’m currently building Rewarx Studio AI, and I wanted to share some of the technical hurdles we’re clearing.&lt;/p&gt;

&lt;p&gt;The Problem: When "Good Enough" Isn't Enough&lt;br&gt;
Most generative models are great at making "pretty" pictures, but they fail miserably at CMF (Color, Material, and Finish) consistency. For a brand, a "slightly different" shade of blue or a distorted metallic texture isn't a minor bug; it's a dealbreaker.&lt;/p&gt;

&lt;p&gt;Our Tech Stack &amp;amp; Approach&lt;br&gt;
To bridge the gap between creative AI and industrial-grade precision, we’re moving beyond simple text-to-image workflows:&lt;/p&gt;

&lt;p&gt;Geometric Anchoring: We use ControlNet (with Canny edge and depth maps) to ensure the product’s physical structure stays pixel-for-pixel unchanged.&lt;/p&gt;
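
&lt;p&gt;&lt;em&gt;For intuition, here is a minimal NumPy sketch of the kind of edge image ControlNet's Canny variant consumes. This is a simple gradient-magnitude stand-in, not the Canny algorithm itself (production code would use OpenCV's Canny with hysteresis thresholding):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

def edge_map(gray, threshold=0.25):
    """A minimal gradient-magnitude edge map (stand-in for cv2.Canny).

    Produces the binary edge conditioning image that a Canny-style
    ControlNet takes as input, via simple finite differences.
    """
    g = gray.astype(np.float64)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    mag = np.hypot(gx, gy)
    return np.greater(mag, threshold * mag.max()).astype(np.uint8) * 255

# A synthetic image with one vertical boundary yields one edge column.
img = np.zeros((4, 6))
img[:, 3:] = 1.0
edges = edge_map(img)
```

&lt;p&gt;Feeding this edge map (plus a depth map) as conditioning is what lets generation restyle the background while the product's silhouette stays locked.&lt;/p&gt;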

&lt;p&gt;Segment Anything (SAM) Integration: Automating the isolation of products from messy backgrounds to allow for seamless environment synthesis.&lt;/p&gt;

&lt;p&gt;The "Ghost Mannequin" Challenge: One of our coolest features is automating the hollow, 3D look for apparel. This involves a mix of depth estimation and context-aware inpainting to reconstruct the interior of a garment where a physical model used to be.&lt;/p&gt;

&lt;p&gt;Why I'm Here&lt;br&gt;
I’m a big believer in Apple-style minimalist aesthetics and precision in design. I’m here on Dev.to to:&lt;/p&gt;

&lt;p&gt;Share our learnings in fine-tuning diffusion models for commercial use.&lt;/p&gt;

&lt;p&gt;Connect with fellow devs working on CV (Computer Vision) and generative pipelines.&lt;/p&gt;

&lt;p&gt;Get roasted (constructively!) on our architectural choices as we scale Rewarx.&lt;/p&gt;

&lt;p&gt;I’m also a marine reef hobbyist, so if you want to talk about NO3/PO4 nutrient management in saltwater tanks alongside latent diffusion, I’m your guy.&lt;/p&gt;

&lt;p&gt;Looking forward to learning from this community!&lt;/p&gt;

&lt;p&gt;Cheers,&lt;br&gt;
Keble&lt;br&gt;
Founder &lt;a href="https://www.rewarx.com" rel="noopener noreferrer"&gt;@ Rewarx Studio AI&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nanobanana</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
