The problem nobody wants to admit
Gift-giving is a search problem with terrible inputs. You know your sister is 34, into pottery, allergic to cilantro (somehow relevant), and hates anything monogrammed. What you don't know is what to buy her that won't end up in a drawer.
Existing "gift finder" sites are SEO farms with affiliate links and a quiz that asks if the recipient is "fun" or "practical." Cool, thanks.
So I built The Gift Whisperer. You describe someone you love in plain English, and it returns 12 real gift ideas, an illustrated card, and a message that actually sounds like you wrote it (but better).
Why it's a separate service
I run a few small apps under one umbrella. My first instinct was to bolt this onto the monolith as another route. I didn't.
The Gift Whisperer does three things that the rest of my stack doesn't:
- Long-running LLM calls (sometimes 15–30 seconds for the full payload)
- Image generation that spikes memory
- Bursty traffic around holidays
Sharing a process with my normal CRUD apps means one memory-spiking image job or a holiday burst of slow gift requests drags everything else down. So it lives in its own Railway service with its own scaling rules and its own crash blast radius. If it dies on Mother's Day, the rest of my stuff keeps humming.
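On Railway, that isolation mostly comes down to per-service config. A hypothetical railway.json for the service (the fields come from Railway's config-as-code; the values and the healthcheck path are my placeholders, not what I actually run):

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "deploy": {
    "numReplicas": 1,
    "healthcheckPath": "/api/health",
    "restartPolicyType": "ON_FAILURE"
  }
}
```

The point is that these knobs exist per service. The monolith never sees them.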
Isolation is underrated. Microservices are overrated. The truth is boring: put the weird workload in its own box.
The prompt is the product
The actual interesting engineering here isn't the framework choice (Next.js, shocker). It's the prompt pipeline.
A single "give me 12 gifts" prompt produces garbage — either generic Amazon bestseller slop or bizarre hallucinations ("a handcrafted ferret hammock"). So I split it into stages:
- Extract traits from the free-text description into a structured profile
- Generate candidates across price tiers and categories in parallel
- Re-rank and dedupe against the original description
- Generate the card + message using the refined profile, not the raw input
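Stage 1's whole job is turning free text into something the later stages can trust. A minimal sketch of the profile shape and a defensive parser for the model's JSON (the field names and helper are my guesses for illustration, not the real schema):

```typescript
// Hypothetical shape of the stage-1 output. Field names are assumptions.
interface GiftProfile {
  age?: number;
  interests: string[];
  avoid: string[];        // allergies, pet peeves, "no monograms"
  relationship?: string;
}

// Guard against missing or mistyped fields before the profile
// feeds the downstream stages. Bad fields degrade to empty, not crash.
function parseProfile(raw: string): GiftProfile {
  const data = JSON.parse(raw) as Partial<GiftProfile>;
  return {
    age: typeof data.age === 'number' ? data.age : undefined,
    interests: Array.isArray(data.interests) ? data.interests : [],
    avoid: Array.isArray(data.avoid) ? data.avoid : [],
    relationship:
      typeof data.relationship === 'string' ? data.relationship : undefined,
  };
}
```

The defensiveness matters more than it looks: every later stage consumes this object, so a single malformed extraction would otherwise poison all twelve ideas.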
Stage 2 looks roughly like this:
const tiers = ['under_25', 'under_75', 'splurge'];
const candidates = await Promise.all(
  tiers.map((tier) =>
    openai.chat.completions.create({
      model: 'gpt-4o-mini',
      response_format: { type: 'json_object' },
      messages: [
        { role: 'system', content: GIFT_SYSTEM_PROMPT },
        { role: 'user', content: JSON.stringify({ profile, tier, count: 4 }) }
      ]
    })
  )
);
const ideas = candidates.flatMap(
  (c) => JSON.parse(c.choices[0].message.content ?? '{}').gifts ?? []
);
Parallelizing per-tier cut total latency roughly in half and — more importantly — forces variety. A single call to "give me 12 ideas across budgets" consistently clusters around the middle tier. Splitting the constraint into separate calls fixes that without any clever reranking.
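Stage 3 doesn't need a model call for the obvious collisions, though. Before the LLM re-rank, a cheap token-overlap pass catches near-duplicates like "ceramic glazing kit" vs. "glazing kit, ceramic". A sketch (my own helper and threshold, not the production code):

```typescript
interface GiftIdea {
  name: string;
  tier: string;
  reason: string;
}

// Collapse ideas whose normalized names share most of their tokens.
// First occurrence wins; later near-duplicates are dropped.
function dedupe(ideas: GiftIdea[], threshold = 0.6): GiftIdea[] {
  const tokens = (s: string) =>
    new Set(
      s.toLowerCase().replace(/[^a-z0-9 ]/g, '').split(/\s+/).filter(Boolean)
    );
  const overlap = (a: Set<string>, b: Set<string>) => {
    let shared = 0;
    for (const t of a) if (b.has(t)) shared++;
    return shared / Math.min(a.size, b.size);
  };
  const kept: GiftIdea[] = [];
  for (const idea of ideas) {
    const t = tokens(idea.name);
    if (!kept.some((k) => overlap(tokens(k.name), t) >= threshold)) {
      kept.push(idea);
    }
  }
  return kept;
}
```

Anything that survives this filter goes to the model-based re-rank, which is the expensive judgment call about fit, not uniqueness.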
The illustrated card was the hard part
LLMs are great at text. Image models are great at images. Getting them to agree on what the gift actually looks like is where the seams show.
I pass the top-ranked gift idea plus a stripped-down style prompt to the image model. No recipient details, no names, no "make it feel warm and personal" — image models interpret that as cursed stock photography. Constraints beat vibes.
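In code, that means the image prompt is built mechanically: a fixed style block plus the gift's noun phrase, with sentiment words scrubbed. A sketch (the style string and word list are illustrative, not my exact prompt):

```typescript
// Fixed style constraints; deliberately no mood or recipient language.
const STYLE =
  'flat illustration, soft pastel palette, single object, plain background';

// Words the upstream text model likes to sneak in; the image model
// turns them into cursed stock photography, so they get stripped.
const SENTIMENT = /\b(lovely|heartfelt|personal|special|thoughtful)\b/g;

function buildCardPrompt(giftName: string): string {
  const subject = giftName
    .toLowerCase()
    .replace(SENTIMENT, '')
    .replace(/\s+/g, ' ')
    .trim();
  return `${subject}, ${STYLE}`;
}
```

Constraints beat vibes, concretely: the image model only ever sees a noun phrase and a fixed style, never adjectives about the recipient.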
Why "a message that slaps" matters
The gift ideas are table stakes. The note is the moat. I spent more time tuning the message prompt than anything else, because a good gift with a generic card is still forgettable. The message prompt explicitly avoids words like journey, cherish, and blessed. You're welcome.
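The banned-word list isn't just a line in the prompt; it's cheap to enforce in code after generation. A sketch of the gate (word list abridged from the post, retry loop omitted; on a hit, the pipeline would re-prompt):

```typescript
// Abridged; the real list is longer. Whole-word matches only, so
// "blessing" and "blessed" need separate entries.
const BANNED = ['journey', 'cherish', 'cherished', 'blessed', 'blessing'];

// Return every banned word found in the generated note.
// An empty array means the message passes.
function violations(message: string): string[] {
  const lower = message.toLowerCase();
  return BANNED.filter((w) => new RegExp(`\\b${w}\\b`).test(lower));
}
```

A prompt instruction is a suggestion; a post-generation check plus one retry is a guarantee, and it costs almost nothing.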
Try it
Got someone hard to shop for? Go describe them:
👉 gift-whisperer.edgecasefactory.com
Free to try. Takes about 30 seconds. Bring receipts if it nails it — I want to know what worked and what didn't.