<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kaiwei Li</title>
    <description>The latest articles on DEV Community by Kaiwei Li (@likaiwei99).</description>
    <link>https://dev.to/likaiwei99</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3899437%2F375dc84f-d701-46b3-8f3d-b7153546a395.jpg</url>
      <title>DEV Community: Kaiwei Li</title>
      <link>https://dev.to/likaiwei99</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/likaiwei99"/>
    <language>en</language>
    <item>
      <title>Pixshop — consistent portraits from a single selfie</title>
      <dc:creator>Kaiwei Li</dc:creator>
      <pubDate>Sun, 26 Apr 2026 23:51:02 +0000</pubDate>
      <link>https://dev.to/likaiwei99/pixshop-consistent-portraits-from-a-single-selfie-33ch</link>
      <guid>https://dev.to/likaiwei99/pixshop-consistent-portraits-from-a-single-selfie-33ch</guid>
      <description>&lt;p&gt;I built &lt;a href="https://www.pixshop.art" rel="noopener noreferrer"&gt;Pixshop&lt;/a&gt; after running into the same issue with most AI photo tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;they either need a bunch of photos + a training step, or they drift your face across outputs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The goal:&lt;/strong&gt;&lt;br&gt;
use a single selfie, skip training, and still keep identity consistent while changing the setting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt; (request shape sketched after the list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload one selfie&lt;/li&gt;
&lt;li&gt;Pick a “Look” (headshot, dating photo, travel, etc.)&lt;/li&gt;
&lt;li&gt;Each look runs on a fixed “recipe” (structured prompt + tuned config)&lt;/li&gt;
&lt;li&gt;Output is a small batch (4–6 images) with identity preserved&lt;/li&gt;
&lt;/ul&gt;
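
&lt;p&gt;A minimal sketch of what that request could look like in TypeScript. The type and field names are illustrative assumptions, not Pixshop’s actual API:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical request shape for one run (illustrative only)
type GenerateRequest = {
  selfieUrl: string;                        // the single uploaded selfie
  lookId: "headshot" | "dating" | "travel"; // which fixed recipe to run
  batchSize: number;                        // small batch, 4-6 images
};
&lt;/code&gt;&lt;/pre&gt;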

&lt;p&gt;The interesting part is the recipe layer.&lt;/p&gt;

&lt;p&gt;Instead of open-ended prompting, each look encodes (rough example after the list):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;camera distance + framing&lt;/li&gt;
&lt;li&gt;lighting direction and intensity&lt;/li&gt;
&lt;li&gt;background constraints&lt;/li&gt;
&lt;li&gt;facial anchoring to reduce drift&lt;/li&gt;
&lt;/ul&gt;
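
&lt;p&gt;To make that concrete, here’s a rough sketch of one recipe. Every field name and value is an assumption for illustration, not the real schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative recipe for a "headshot" look (hypothetical fields)
const headshotRecipe = {
  framing: "chest-up, centered, 85mm portrait distance",   // camera distance + framing
  lighting: "soft key from camera left, medium intensity", // lighting direction + intensity
  background: "plain studio gray, no props or text",       // background constraints
  identityStrength: 0.85, // facial anchoring: weight the reference selfie to reduce drift
  basePrompt: "professional headshot of the person in the reference photo, same face, photorealistic",
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Pinning all of this per look means the selfie is the only thing that changes between runs, which is where the consistency comes from.&lt;/p&gt;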

&lt;p&gt;In practice, this mattered more than model choice for consistency.&lt;/p&gt;

&lt;p&gt;There’s no per-user training or fine-tuning step — generation runs directly on image models, so results come back quickly.&lt;/p&gt;

&lt;p&gt;Stack: Next.js, Vercel Blob, Neon + Drizzle, QStash for async jobs, Clerk + Stripe.&lt;/p&gt;
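
&lt;p&gt;For the async side, here’s a minimal sketch of how a QStash-backed pipeline can be wired. The worker route and payload are assumptions; the &lt;code&gt;@upstash/qstash&lt;/code&gt; client calls themselves are real:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { Client } from "@upstash/qstash";

const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

// Enqueue a generation job; QStash delivers it to the worker route with
// retries, so the upload request can return to the user immediately.
export async function enqueueGeneration(jobId: string, selfieUrl: string, lookId: string) {
  await qstash.publishJSON({
    url: "https://example.com/api/jobs/generate", // hypothetical worker route
    body: { jobId, selfieUrl, lookId },
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The worker would load the look’s recipe, call the image model with the selfie as the reference, write the finished batch to Blob storage, and mark the job done.&lt;/p&gt;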

&lt;p&gt;Free tier: 3 credits, no card required.&lt;/p&gt;

&lt;p&gt;Happy to answer anything about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how we keep identity stable across different looks&lt;/li&gt;
&lt;li&gt;what failed before landing on the recipe approach&lt;/li&gt;
&lt;li&gt;async generation pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You can try Pixshop here:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.pixshop.art" rel="noopener noreferrer"&gt;Pixshop&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>typescript</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
