I added a single sentence to my AI prompt for indie game recommendations: "include one specific 'avoid if' caveat per game that mentions a real limitation." The output went from blandly positive to advice I'd actually trust — the kind of thing a friend who'd played the game would say.
This is a 1,000-word post about why that one-line change matters more than I expected, and where it falls apart.
The marketing-speak problem
Pull up almost any indie game directory site and you'll find the same shape of copy: every game is "breathtaking", "innovative", "unforgettable". The third-party recommendation engines built on top of Steam are particularly bad at this — they default to the marketing voice of the developer's own store page.
I noticed it as soon as I started building Find Games Like, the indie-games half of a three-site experiment I'm running. I was generating recommendations for ~140 indie titles using Claude Haiku 4.5, and the first batch read like the first page of a Steam keyword search: lots of adjectives, zero specific information about who shouldn't play.
Generic AI tone matches generic marketing tone, because both are trying to maximize the chance that any given reader thinks the product is for them. That's exactly the wrong objective for a recommendation engine. A good recommendation is most useful when it filters out the wrong audience.
The actual prompt change
Here's what I changed. The original prompt asked Claude to produce three reasons to recommend a game. The new version asks for the same plus one specific caveat:
For each game, write:
- 3 specific reasons someone might love it
- 1 "avoid if" caveat naming a concrete audience trait or game element
that would make this game frustrating
That's the entire change. Not even a system prompt edit — just two more lines in the user-turn instruction.
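Concretely, the change is just string assembly around the per-game instruction. A minimal sketch of what that looks like in a pipeline — the function and variable names here are mine, not the site's actual code:

```python
# Base instruction, per game.
BASE = (
    "For {title}, write:\n"
    "- 3 specific reasons someone might love it\n"
)

# The entire "avoid if" change: two more lines appended to the instruction.
CAVEAT = (
    '- 1 "avoid if" caveat naming a concrete audience trait or game element\n'
    "  that would make this game frustrating\n"
)

def build_prompt(title: str, with_caveat: bool = True) -> str:
    """Assemble the user-turn instruction for one game."""
    prompt = BASE.format(title=title)
    return prompt + CAVEAT if with_caveat else prompt

print(build_prompt("Hollow Knight"))
```

The `with_caveat` flag is there so the same pipeline can generate both variants, which is what you'd want for the engagement comparison later in this post.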
The shift in model behavior was bigger than the size of the edit suggests. Having to name who shouldn't play forces Claude to engage with the game's actual properties rather than generate flattery. The caveat sentence then anchors the three positive sentences around it in something that feels real.
What it produces
Three real outputs from the current pipeline:
Celeste — avoid if you're uncomfortable with themes of anxiety and panic attacks. The game's narrative is explicitly about the protagonist's mental health, and several mechanics tie difficulty spikes to representations of intrusive thoughts.
Hades — avoid if you dislike permadeath roguelikes or need a single linear story you can finish in one sitting. Each run takes 30-50 minutes, and the narrative deliberately rewards repeated failure.
Hollow Knight — avoid if you find punishing platforming and unforgiving boss fights frustrating. The map is intentionally cryptic and progression often requires returning to old areas after acquiring abilities you didn't know existed.
Each one is specific. Each one names an actual game property a hypothetical player could check against their own preferences. None of them say "if you don't like indie games" or "if you don't like Metroidvanias", which is what a lazy version of this prompt would produce.
Why this works
There's a credibility mechanic here that's older than directory sites: admitting weakness signals integrity. A recommender that only ever says good things either has bad taste or is hiding something. A recommender that says "this is great, but it's not for you if X" is making a calibrated claim — which makes the positive part of the claim more believable.
The same instinct shows up in product copy when it's done well. Patagonia's "Don't Buy This Jacket" Black Friday ad is the canonical example. App Store reviews that lead with "I had to stop using this because…" are the most useful ones in the entire reviews section. Restaurant guides that explicitly note service quirks alongside food quality feel more trustworthy than ones that only star-rate.
The "avoid if" pattern is just the directory-listing version of that instinct, applied at the prompt layer instead of relying on individual writers to remember to do it.
Where it breaks down
Not every game has a meaningful caveat. Some titles are genuinely broad-appeal — short narrative games, cozy farming sims, party games for groups. I noticed the problem early in the pipeline: Claude started inventing caveats for games that didn't need one. The first version of the Stardew Valley entry in my data read "avoid if you don't enjoy slow-paced games", which is approximately useless.
I added a fallback rule to the prompt: "if no honest caveat exists, return null for the avoid_if field rather than fabricating one." Compliance is around 80% — Claude still occasionally invents weak caveats — but that's better than forcing one on every entry.
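Since prompt compliance isn't perfect, it helps to enforce the null rule on the receiving side too: post-process the `avoid_if` field and drop caveats that match known-generic phrasings. A rough sketch — the pattern list and function name are illustrative, not from the real pipeline:

```python
import re
from typing import Optional

# Phrasings that signal a fabricated, content-free caveat.
# A real list would grow as new filler patterns show up in the data.
GENERIC_PATTERNS = [
    r"avoid if you don't (like|enjoy) (slow-paced|indie|long) games",
    r"avoid if you prefer other genres",
]

def clean_caveat(avoid_if: Optional[str]) -> Optional[str]:
    """Return the caveat if it looks substantive, else None."""
    if not avoid_if:
        return None
    text = avoid_if.strip().lower()
    if any(re.search(p, text) for p in GENERIC_PATTERNS):
        return None  # treat generic filler the same as a missing caveat
    return avoid_if.strip()
```

This turns the 80% prompt compliance into a hard guarantee for the worst offenders, at the cost of maintaining the pattern list by hand.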
Two other places this technique fails:
Taste-driven domains where there's no objective basis for caveats. Recommending books for a particular mood, or recommending restaurants for a particular cuisine — the caveat would be tautological ("avoid if you don't like mystery novels"). The technique needs a domain where some audience traits genuinely conflict with the product.
When the underlying knowledge is wrong. Claude once generated a caveat for Outer Wilds about "puzzle complexity" when the game's actual stress point is the time-loop mechanic. Wrong-axis caveats are arguably worse than no caveats — they look authoritative while pointing the reader at a fictional concern. I haven't fully solved this; the best mitigation so far is grounding the prompt with the game's actual Steam tags and a one-paragraph manual summary before asking for the caveat.
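The grounding step is mechanically simple: prepend the real metadata before the instruction, and tell the model to use only that. A sketch under those assumptions (argument names are mine):

```python
def grounded_prompt(title: str, steam_tags: list[str], summary: str) -> str:
    """Anchor the caveat to real game properties instead of model priors."""
    return (
        f"Game: {title}\n"
        f"Steam tags: {', '.join(steam_tags)}\n"
        f"Summary: {summary}\n\n"
        'Using only the information above, write one "avoid if" caveat\n'
        "naming a concrete audience trait or game element. If no honest\n"
        "caveat exists, answer null.\n"
    )

p = grounded_prompt(
    "Outer Wilds",
    ["Exploration", "Mystery", "Time Loop"],
    "A space-archaeology mystery played inside a repeating time loop.",
)
```

It doesn't eliminate wrong-axis caveats — the model can still pick the least important tag — but it at least bounds the failure to properties the game actually has.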
Where to apply it beyond games
I think the same pattern applies to almost any directory or recommendation site:
- OSS alternatives to SaaS: "Stay on the SaaS if your team has fewer than three engineers and no DevOps capacity."
- AI tools directories: "Avoid this model if your workload is latency-sensitive at scale."
- Restaurant guides: "Skip on a date — the lighting is industrial and tables are loud."
- Book recommendations: "Skip if you've already read X — the thesis overlaps significantly."
What these share is a mental model where the audience has a specific trait, the product has a specific limitation, and a useful directory makes that mapping explicit instead of leaving every reader to discover it post-purchase.
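That trait-to-limitation mapping can be made explicit in the entry schema itself, with the caveat as a nullable field rather than free text bolted onto the copy. A sketch of what each directory record might carry (field names hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DirectoryEntry:
    title: str
    reasons: list[str]        # exactly three specific positives
    avoid_if: Optional[str]   # None when no honest caveat exists

entry = DirectoryEntry(
    title="Hades",
    reasons=[
        "tight, responsive combat",
        "narrative that rewards repeated failure",
        "strong voice acting",
    ],
    avoid_if="you dislike permadeath roguelikes",
)
```

Keeping `avoid_if` nullable at the schema level is what makes the "return null rather than fabricating" rule enforceable downstream.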
What I'm watching
I don't have data yet on whether "avoid if" caveats actually translate into more reader trust. Find Games Like is eight days old and traffic is essentially zero — I'll publish real engagement numbers (comments, time on page, return visits) at the 30-day mark.
The falsifiable claim is: pages with caveats should outperform pages without them on engagement metrics. If that turns out to be wrong by month two, I'll need to revisit the assumption that honesty-as-trust-signal works in directory contexts the way it works in book reviews.
Until then: every AI-generated recommendation I add to that site has a caveat. The prompt is two extra lines. The output reads like advice from someone who's actually played the game.
Part of an ongoing 6-month experiment running three AI-curated directory sites. The technical claims here are real; this article was AI-assisted.