I built Pixshop after running into the same issue with most AI photo tools: they either need a bunch of photos plus a training step, or your face drifts across outputs.
The goal:
use a single selfie, skip training, and still keep identity consistent while changing the setting.
How it works:
- Upload one selfie
- Pick a “Look” (headshot, dating photo, travel, etc.)
- Each look runs on a fixed “recipe” (structured prompt + tuned config)
- Output is a small batch (4–6 images) with identity preserved
The interesting part is the recipe layer.
Instead of open-ended prompting, each look encodes:
- camera distance + framing
- lighting direction and intensity
- background constraints
- facial anchoring to reduce drift
In practice, this mattered more than model choice for consistency.
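The recipe layer above can be sketched as a typed config that compiles into a structured prompt. This is a hypothetical illustration; the field names, values, and prompt format are my assumptions, not Pixshop's actual schema:

```typescript
// Hypothetical sketch of a "look" recipe; field names and values
// are assumptions, not Pixshop's actual schema.
interface LookRecipe {
  name: string;
  framing: string;        // camera distance + framing
  lighting: string;       // lighting direction and intensity
  background: string;     // background constraints
  identityWeight: number; // facial anchoring strength, 0..1
}

const headshot: LookRecipe = {
  name: "headshot",
  framing: "head-and-shoulders crop, eye level, 85mm look",
  lighting: "soft key light from camera left, low contrast",
  background: "plain neutral studio backdrop, slightly defocused",
  identityWeight: 0.9,
};

// Compile a recipe into a structured prompt string for the image model.
function buildPrompt(recipe: LookRecipe): string {
  return [
    `Framing: ${recipe.framing}`,
    `Lighting: ${recipe.lighting}`,
    `Background: ${recipe.background}`,
    `Preserve the subject's facial identity (strength ${recipe.identityWeight})`,
  ].join(". ");
}
```

The point of fixing the recipe per look is that every generation for that look hits the model with the same constraints, so variation comes only from the model's sampling, not from prompt wording.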
There’s no per-user training or fine-tuning step — generation runs directly on image models, so results come back quickly.
Stack: Next.js, Vercel Blob, Neon + Drizzle, QStash for async jobs, Clerk + Stripe.
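The async side is roughly enqueue-then-poll: the upload handler records a job, a worker (triggered by QStash in the real stack) generates the batch, and the client polls for status. Here is a minimal in-memory sketch under those assumptions; the names, states, and blob URLs are illustrative, not Pixshop's actual code:

```typescript
// Minimal in-memory sketch of the enqueue-then-poll flow.
// In production this would be QStash plus a database row per job;
// names and states here are assumptions, not Pixshop's actual code.
type JobStatus = "pending" | "running" | "done";

interface Job {
  id: string;
  look: string;
  status: JobStatus;
  images: string[]; // URLs of generated images once done
}

const jobs = new Map<string, Job>();

// Upload handler body: record the job so the worker can pick it up.
function enqueue(id: string, look: string): Job {
  const job: Job = { id, look, status: "pending", images: [] };
  jobs.set(id, job);
  return job;
}

// Worker body: in the real stack, QStash would POST here with the job id.
function runJob(id: string): void {
  const job = jobs.get(id);
  if (!job) return;
  job.status = "running";
  // Stand-in for the actual image-model call.
  job.images = Array.from({ length: 4 }, (_, i) => `blob://job-${id}/img-${i}.png`);
  job.status = "done";
}

// Polling endpoint body: the client checks this until the batch is ready.
function getStatus(id: string): JobStatus | undefined {
  return jobs.get(id)?.status;
}
```

Pushing generation behind a queue keeps the upload request fast and lets slow model calls retry independently of the user's session.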
Free tier: 3 credits, no card required.
Happy to answer anything about:
- how we keep identity stable across different looks
- what failed before landing on the recipe approach
- async generation pipeline
You can try Pixshop here:
Pixshop