Igor Ganapolsky
How my QSR review-triage workflow decides what hits a manager's inbox (open source n8n + tag taxonomy)

Restaurant review triage is one of the easiest places for AI to embarrass the brand.

If the AI auto-responds to a customer mentioning an allergic reaction, you have a legal incident, not a workflow. If it ignores a sustained complaint pattern, you lose customers without anyone noticing.

I open-sourced the review-triage workflow I'm using as the reference implementation for the QSR AI Ops Pack. The full JSON is here:

https://github.com/IgorGanapolsky/qsr-ai-preview/blob/main/workflows/n8n/qsr-review-triage-desk.json

This post walks through the classification logic — the part that actually decides whether a draft response goes out or whether a human gets paged.

The five-tag classification

Each incoming review gets tagged on five independent axes before any AI drafting happens:

  1. Severe terms — does the text contain language that requires a human, full stop?
  2. Service terms — is this an operational complaint that has standard playbooks?
  3. Priority — how fast does this need a response?
  4. Owner — who in the org is on the hook?
  5. Posture — what tone should the draft take if one is allowed?

No single AI step gets to override the severe-terms axis. That is the safety property the whole workflow is built around.
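The five axes can be sketched as one tagging pass. This is a minimal sketch with keyword lists and field names of my own choosing (the repo JSON is the source of truth):

```javascript
// Illustrative subsets — the real lists live in the workflow JSON.
const SEVERE_TERMS = ["allergy", "allergic", "food poisoning", "lawsuit", "health department"];
const SERVICE_TERMS = { wait: "wait_time", "wrong order": "order_accuracy", cold: "temperature" };

function tagReview(text) {
  const t = text.toLowerCase();
  const severe = SEVERE_TERMS.some((w) => t.includes(w));
  const serviceKey = Object.keys(SERVICE_TERMS).find((w) => t.includes(w));
  const service = serviceKey ? SERVICE_TERMS[serviceKey] : null;
  return {
    severe,                                          // axis 1: hard human gate
    service,                                         // axis 2: operational category
    priority: severe ? "P0" : service ? "P1" : "P2", // axis 3: queue speed
    owner: severe ? "legal+brand" : service ? "store_gm" : "brand_queue", // axis 4
    posture: severe ? null : service,                // axis 5: null posture = no draft
  };
}
```

Note that `severe` is computed first and nothing downstream reads AI output to change it.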

Severe terms — what triggers an instant human escalation

The severe-terms list is intentionally narrow. It catches the categories where a chatty AI response is worse than no response at all:

  • allergy / allergic reaction
  • illness / food poisoning / hospital / ER
  • legal / lawyer / lawsuit / sue
  • refund disputes with named amounts
  • regulatory mentions (health department, inspector)
  • discrimination claims
  • injury on premises

When any severe term is present, the workflow:

  • routes the review to a human inbox
  • attaches a one-line summary and the raw text
  • does not draft a public response
  • logs the decision so the brand can later prove the review was reviewed
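Those four steps can be sketched as a single gate function. The pattern list below is an illustrative subset with word-boundary regexes (to avoid substring false positives like "issue" matching "sue"); the full list is in the repo JSON:

```javascript
// Subset of severe-term patterns, anchored on word boundaries.
const SEVERE_PATTERNS = [
  /\ballerg(y|ic|en)\b/i,
  /\bfood poisoning\b/i, /\bhospital\b/i, /\bER\b/,
  /\blaw(yer|suit)\b/i, /\bsue\b/i,
  /\bhealth department\b/i, /\binspector\b/i,
];

function severeGate(review) {
  const hit = SEVERE_PATTERNS.find((p) => p.test(review.text));
  if (!hit) return { escalate: false };
  return {
    escalate: true,
    route: "human_inbox",
    summary: review.text.slice(0, 120), // stand-in for the one-line summary
    rawText: review.text,
    draftPublicResponse: false,         // never draft when a severe term is present
    loggedAt: new Date().toISOString(), // proof the review was reviewed
  };
}
```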

Service terms — the cases where a draft is allowed

For non-severe complaints, the workflow draws from a service-terms list to pick a posture:

  • wait time → acknowledge + concrete improvement detail
  • order accuracy → apology + invitation to DM order number
  • temperature / freshness → acknowledge + reassurance about prep standards
  • staff behavior → acknowledge + escalation to GM, no specifics in public
  • cleanliness → acknowledge + state inspection cadence, do not promise specifics

The AI step then drafts a response constrained to the playbook for that posture — not free-form. That is the difference between a draft a manager can ship in 30 seconds and a draft that needs to be fully rewritten.
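A sketch of the posture-to-playbook constraint step. The playbook text here is illustrative and the AI drafting call itself is omitted; the point is that the model only ever sees a constrained instruction, never "write a reply":

```javascript
// Illustrative playbook constraints per posture (not the repo's exact wording).
const PLAYBOOKS = {
  wait_time:      { tone: "acknowledge", must: ["concrete improvement detail"], ban: [] },
  order_accuracy: { tone: "apology",     must: ["invite DM with order number"], ban: [] },
  staff_behavior: { tone: "acknowledge", must: ["note GM escalation"], ban: ["specifics in public"] },
  cleanliness:    { tone: "acknowledge", must: ["state inspection cadence"], ban: ["specific promises"] },
};

function draftConstraints(posture) {
  const pb = PLAYBOOKS[posture];
  if (!pb) return null; // unknown posture → no auto-draft
  return `Tone: ${pb.tone}. Must include: ${pb.must.join("; ")}.` +
         (pb.ban.length ? ` Never include: ${pb.ban.join("; ")}.` : "");
}
```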

Priority and owner

Priority isn't just about response speed — it's about which queue the review lands in.

  • P0 (severe terms present): goes to legal + brand inbox
  • P1 (operational complaint, recent visit): goes to store GM
  • P2 (older review, mild): goes to brand response queue
  • P3 (positive but mentions a problem): goes to brand for thank-you variant

The owner tag maps to specific people in your org once you wire it up. The default mapping in the repo uses placeholders so you can adapt it without leaking real names into your fork.
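The queue mapping can be sketched like this, using placeholder owner IDs in the same spirit as the repo defaults. The 7-day recency cutoff is my assumption, not a value from the workflow:

```javascript
// Placeholder owner IDs — swap in real inboxes when you wire it up.
function routeReview({ severe, service, daysOld, positive }) {
  if (severe) return { priority: "P0", owner: "LEGAL_INBOX+BRAND_INBOX" };
  if (positive && service) return { priority: "P3", owner: "BRAND_QUEUE" }; // thank-you variant
  if (service && daysOld <= 7) return { priority: "P1", owner: "STORE_GM" }; // recent visit
  return { priority: "P2", owner: "BRAND_QUEUE" }; // older / mild
}
```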

What I learned shipping this

Three non-obvious things:

  1. The severe-terms list belongs in code, not in a prompt. Prompt-based classification drifts. A regex/keyword pass is auditable and a regulator can read it.
  2. A two-step draft is better than a one-step draft. First step picks posture from a constrained list. Second step writes copy within that posture. One-step drafts wander.
  3. Always log the decision, not just the action. When something goes wrong, you need to be able to show why the workflow did what it did. The logging node writes severity + posture + priority + owner + final action + draft text, every time.
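The third lesson as a sketch: the shape of one log row, written whether or not a draft went out. Field names are mine, not the repo's:

```javascript
// One decision-log row per review (illustrative field names).
function logDecision(tags, finalAction, draftText) {
  return {
    ts: new Date().toISOString(),
    severe: tags.severe,
    posture: tags.posture,
    priority: tags.priority,
    owner: tags.owner,
    finalAction,                    // e.g. "escalated" | "draft_queued"
    draftText: draftText ?? null,   // null when no draft was allowed
  };
}
```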

If you want to ship this

The full importable n8n JSON is in the open-source preview:

https://github.com/IgorGanapolsky/qsr-ai-preview

If you want the packaged version with the OpenClaw agent specs, test fixtures, POS compatibility map, and demo walkthrough script (so a consultant can sell setup work), the $29 pack:

https://iganapolsky.gumroad.com/l/qsr-ai-ops-pack

If you are an operator or consultant who wants one of your own workflows mapped — POS export, integration path, approval gate, smallest paid pilot — there is a $499 48-hour written diagnostic:

https://iganapolsky.gumroad.com/l/qsr-ai-automation-diagnostic

If you build review-triage workflows for restaurants and your taxonomy is different from mine, I'd genuinely like to compare notes — drop a comment with what you classify on.
