The bug report
A customer wrote in:
Why is the tool drafting a SaaS sales pitch to my aunt?
I didn't know whether to laugh or hide. We'd just shipped warm-market draft-generation in our Chrome extension — a tool that scrapes a user's Facebook + LinkedIn connections and drafts personalized outreach messages they can edit and send. The AI was supposed to read each contact's profile, infer the relationship, and tune the message accordingly.
The customer was an MLM team leader. We'd correctly inferred their aunt ran a small business. So we drafted a polished, professional pitch about how our customer's MLM product could help her growing operation.
The aunt thought our customer had been hacked.
What we shipped first (and why it didn't work)
The first version had ONE classification field on each contact: segment. Values like family, friend, coworker, biz_owner, influencer. The drafter prompt would adjust tone based on whichever segment was set.
The bug above happened because the AI fit-assessment step looked at the aunt's profile bio ("Owner @ Lake View Florals") and set segment = 'biz_owner'. The drafter then dutifully produced a pitch that would land fine with a stranger who owned Lake View Florals — but read as deeply weird coming from a family member.
Our first patch was to add a "family override": if the operator had explicitly tagged a contact as family, force segment = 'family'. This worked! ...for the contacts the operator had pre-tagged.
The deeper bug remained: the AI was inferring relationship from the wrong data. A profile bio tells you what someone DOES, not who they ARE to YOU.
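To make the conflation concrete, here's a minimal sketch of what a one-field scheme plus the family-override patch looks like. All names (`SEGMENT_TONE`, `draft_tone`, `operator_tag`) are hypothetical, not the actual extension code:

```python
# Hypothetical sketch of the original one-segment scheme.
SEGMENT_TONE = {
    "family": "no_pitch",
    "friend": "no_pitch",
    "coworker": "neutral",
    "biz_owner": "sales",
    "influencer": "collab",
}

def draft_tone(contact: dict) -> str:
    # The patch: an explicit operator tag wins over the AI's guess...
    if contact.get("operator_tag") == "family":
        return SEGMENT_TONE["family"]
    # ...but untagged contacts still get whatever the AI inferred
    # from the profile bio -- which is exactly the aunt bug.
    return SEGMENT_TONE.get(contact.get("segment"), "neutral")
```

An untagged aunt with `segment = "biz_owner"` still resolves to `"sales"` — the override only protects contacts the operator thought to pre-tag.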
The three axes
The fix took two more iterations to land. Here's what we ended up with:
Each contact gets three independent classification dimensions, never conflated:
Relationship — who they ARE to you. Family, close friend, friend, coworker, former coworker, acquaintance, business contact. This can ONLY come from operator input. AI cannot guess this from a profile.
Approach — what pitch angle to weave in if any. None, MLM, biz_owner, partner, playbook. The AI fit-assessment step CAN seed this from a profile bio. But it's ONLY about the content of the pitch, not whether to pitch at all.
Goal — close-pressure dial. Build the relationship long-term, pure relationship (never pitch even if approach is set), or sell close now. This is operator-set per contact and modulates output independently of approach.
The hard rule: family or close friend hard-overrides approach to no-pitch tone, REGARDLESS of what approach the AI assessed from the profile. The aunt scenario triggers this: even if approach=biz_owner, relationship=family means the drafter generates a no-pitch family-tone message.
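The three axes and the hard rule can be sketched roughly like this. Field and function names are illustrative, not the production schema:

```python
from dataclasses import dataclass
from typing import Optional

# Relationships that hard-override any AI-assessed approach.
NO_PITCH_RELATIONSHIPS = {"family", "close_friend"}

@dataclass
class Contact:
    relationship: str          # operator-set ONLY: family, close_friend, coworker, ...
    approach: Optional[str]    # AI may seed from the profile: mlm, biz_owner, partner, ...
    goal: str = "build_close"  # operator-set dial: build_close, pure_relationship, sell_now

def pitch_mode(c: Contact) -> str:
    # Hard rule first: family / close friend wins regardless of approach.
    if c.relationship in NO_PITCH_RELATIONSHIPS:
        return "no_pitch"
    # Goal can independently veto a pitch, even with an approach set.
    if c.goal == "pure_relationship" or c.approach is None:
        return "no_pitch"
    return f"pitch:{c.approach}:{c.goal}"
```

The ordering matters: relationship is checked before approach or goal are ever consulted, so an AI misread of the profile can never upgrade a family member into a sales target.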
Why three axes, not one
The original "one segment field" assumption smuggled in a bunch of tacit equivalences:
- segment = family → pitch tone = no_pitch
- segment = biz_owner → pitch tone = sales
Those are usually right but break in edge cases. A family member who runs a business. A close friend who wants you to recruit them. A business contact you've decided to keep purely social.
Three axes make those edge cases expressible. Family aunt who runs a business: relationship=family, approach=biz_owner (irrelevant, hard-overridden), goal=pure_relationship (extra protection). The drafter sees those three values and writes "hey, hope the shop is doing well!" instead of "I'd love to introduce you to a tool that's grown my MLM team 3x."
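As a compact check, here's a hypothetical resolver mirroring the precedence just described (names are illustrative), run against two of the edge cases:

```python
# Hypothetical resolver: relationship override first, then the goal veto.
def tone(relationship, approach, goal):
    if relationship in ("family", "close_friend"):
        return "family_tone_no_pitch"   # hard override wins first
    if goal == "pure_relationship" or approach is None:
        return "friendly_no_pitch"      # goal vetoes independently of approach
    return f"{approach}_pitch"

# The aunt: approach is set, but it never gets consulted.
assert tone("family", "biz_owner", "pure_relationship") == "family_tone_no_pitch"
# A business contact you've decided to keep purely social.
assert tone("business_contact", "biz_owner", "pure_relationship") == "friendly_no_pitch"
```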
Production data after the fix
Customer-facing complaint rate on warm-market drafts went from ~7% to under 1%. The 1% that remain are mostly the AI guessing wrong on approach (e.g., flagging a hobbyist's portfolio site as a real business). The three-axis structure makes those failures recoverable — the operator flips one field and the next draft is clean.
Two months in, the most common operator action is changing goal from build_close to pure_relationship for contacts they decided to keep as friends. That's a feature: the tool exposed an explicit dial for something operators were doing implicitly anyway, and now they can do it deliberately.
The takeaway
When you're modeling a thing the AI is going to act on, ask whether your classification scheme is letting it conflate facts with intentions. "Family member" and "no-pitch tone" are correlated, not identical. Same with "business owner" and "OK to pitch." Build the schema so the AI can express the awkward middle cases — even if you think they're rare. They aren't.
I'm building Iron Front Digital — an AI marketing team for solo operators and small businesses. The Recon extension that learned this lesson is part of the stack.