TechPulse Lab

Originally published at aiarmory.shop

How to brief an AI strategy audit so the recommendations are actually useful

If you're paying someone for an "AI strategy session" and you walk in cold, you're going to walk out with generic advice. Not because the consultant is bad. Because the consultant has nothing real to react to.

The hour gets spent reconstructing what your business actually does. By the time you're past the basics, the call's over and the recommendations are some flavour of "you could probably automate your support inbox" — which you already suspected.

The way to make these calls useful is to bring artefacts. Not in a polished, deck-shaped way. In a quick-and-dirty, "here's the actual mess" way. Below are the seven prep prompts I tell buyers to run before any AI strategy or audit call (mine or anyone else's) so the hour spent on the call goes into specifics, not background.

Each prompt assumes you have access to ChatGPT, Claude, or any decent model. The output is for you, not for the consultant — they shouldn't get a polished deck, they should get raw material. Print it, scribble on it, hand them the marked-up version on the call.
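
If you'd rather script these than paste them into a chat window, here's a minimal sketch. It assumes the OpenAI Python SDK and a placeholder model name; the same pattern works with Claude or any other provider. The useful move is feeding the model your unstructured braindump and letting it impose the prompt's structure.

```python
# Minimal sketch: use a model to structure your raw braindump into one of the
# prep artefacts. Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

PROMPT = "..."  # paste the full text of any of the seven prompts below

with open("braindump.txt") as f:
    braindump = f.read()  # your unsorted, unfiltered notes

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any decent model works
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": braindump},
    ],
)

# Raw material, not a deck: save it somewhere you can print and mark up.
print(response.choices[0].message.content)
```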

1. The workflow inventory

```
List every recurring task you or your team does at least weekly. For each one,
include:
- Who does it
- Roughly how long it takes per occurrence
- How often it happens
- Whether the output is sent somewhere external (client, customer, vendor) or
  stays internal
- Whether it has a deadline or time-of-day constraint

Don't sort, don't prioritise, don't filter. Dump everything you can think of
in one pass. Aim for 20-50 items even if some feel trivial.
```

Why this is the first prompt: the single most common audit failure mode is the buyer pitching one task — usually the noisiest one — and the consultant treating it as the whole problem. A 40-item list lets you both see the shape of the work, not just the loudest part of it. The "boring" tasks are usually where the highest-leverage automations hide, because they're the ones nobody has complained about loudly enough to get fixed.

Don't try to be clever about it. The unstructured dump is the point.
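
If it helps to see the shape of one item, here's one hypothetical way to capture the five fields per task. The field names are mine, not a required schema; the per-month property is only there to make the quiet time sinks visible later.

```python
# One possible shape for an inventory item. Field names are illustrative,
# not a schema the audit requires; the point is capturing all five dimensions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    owner: str               # who does it
    minutes_each: int        # roughly how long per occurrence
    times_per_week: float    # how often it happens
    external: bool           # output goes to a client/customer/vendor
    deadline: Optional[str]  # deadline or time-of-day constraint, if any

    @property
    def hours_per_month(self) -> float:
        # ~4.33 weeks per month; handy later for spotting quiet time sinks
        return self.minutes_each * self.times_per_week * 4.33 / 60

tasks = [
    Task("Friday status report", "me", 45, 1, external=False, deadline="Fri 5pm"),
    Task("Invoice chasing", "ops", 10, 8, external=True, deadline=None),
]
```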

2. The friction inventory

```
List the three things in your work right now that consistently feel like they
take longer than they should, drain the most energy, or make you mutter "this
is stupid" while doing them. For each one, write:
- What the task is
- What part specifically is the friction (the typing? the deciding? the
  context-switching? the waiting on someone else?)
- What you'd ideally want it to look like instead
- Whether you've tried to fix it before, and if so, what happened
```

This one matters because "AI" is the wrong noun ninety percent of the time. The actual underlying need is "I want this to take less of my attention." Sometimes that's a model. Often it's a checklist, a template, a Zapier zap, or just the realisation that nobody is asking you to do this and you can stop.

A good auditor will spend half the call separating "this needs a model" from "this needs a process" from "this needs a hard no, you should drop it." You can't have that conversation without the friction list.
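
To make that triage concrete, here's an illustrative tagging of made-up friction items. The three categories are the ones above; the items and verdicts are invented.

```python
# Illustrative triage of friction items. The three categories come from the
# paragraph above; the items and verdicts are made-up examples.
from enum import Enum

class Fix(Enum):
    MODEL = "needs a model"
    PROCESS = "needs a checklist, template, or zap"
    DROP = "needs a hard no: stop doing it"

triage = {
    "drafting near-identical proposal sections": Fix.MODEL,
    "chasing sign-off over email": Fix.PROCESS,
    "weekly metrics report nobody has opened since March": Fix.DROP,
}
```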

3. The technical reality check

```
Describe in 5-10 bullet points the actual technical environment our work lives
in:
- What tools we're using (with names, not categories — "Notion" not
  "knowledge base", "HubSpot" not "CRM")
- Which of those tools have working API access we already use
- Where data lives that an automation might need to read or write (databases?
  spreadsheets? email? a CRM?)
- What's hosted where (cloud, on-prem, mix)
- Any compliance or data-handling constraints (HIPAA, SOC2, GDPR, client
  contracts, anything that limits where data can flow)
- Anything in our stack that doesn't have an API, or has a bad one
```

Why this matters: half of "this won't work" answers in an AI strategy call come from learning, on the call, that the buyer's CRM is a Google Sheet someone manually updates from screenshots, or that their core data lives in a tool that was acquired and end-of-lifed three years ago.

You want this discovered before the call, not during. If the limitation is real, it shapes what's recommendable. If it's solvable, the call can spend time on which solution.

The "doesn't have an API" point is the one to be honest about. Tools that require a human to be in the loop for everything aren't automatable, and pretending otherwise wastes everyone's hour.

4. The budget reality (yes, the actual number)

```
Write down the honest answer to:
- What's the most I'd spend on a one-off setup if it would obviously save the
  team time
- What's the most I'd spend per month on ongoing AI tooling and API costs
- What would I want the payback period to be (in months) before I'd consider
  a setup "worth it"
- Is this my own money, the company's money, a budget I have signing
  authority on, or something I'd need to justify upward — and if upward, what
  story would land with that person
```

This is the one buyers tell me they hate doing. Do it anyway. Three reasons:

  1. It calibrates which solutions are even on the table. Recommendations differ wildly between "I have $500 and a weekend" and "I have $50k and three months."
  2. It catches the case where you don't actually have decision-making authority. If you'd need someone else to approve, the audit needs to produce a document you can hand them, not just a list you'll act on yourself.
  3. It surfaces the budget conversation early enough that nobody pretends to be more flexible than they are. Most AI projects that die six weeks in die because the budget was "TBD" at scoping time.
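
The payback arithmetic itself is one line, and worth doing with real numbers rather than gut feel. The figures below are made up:

```python
# Payback period = one-off setup cost / monthly savings. All numbers here
# are made-up examples; plug in your own answers from the prompt above.
setup_cost = 2000            # one-off build, in dollars
hours_saved_per_month = 10
hourly_value = 60            # what an hour of the team's time is worth

monthly_savings = hours_saved_per_month * hourly_value  # $600/month
payback_months = setup_cost / monthly_savings           # ~3.3 months

print(f"Payback in {payback_months:.1f} months")
# If that number exceeds the payback period you wrote down, the setup
# fails your own "worth it" test before anyone even pitches it.
```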

5. Success criteria, falsifiable

```
For the top 3 problems you identified in prompts 1 and 2, write down what
"this got solved" would look like in measurable terms.

Avoid: "saves time", "feels easier", "is more efficient"

Aim for: "the support inbox response time drops from 16 hours to under 4
hours by the end of month 2" or "I stop doing the Friday status report
manually and the team gets the auto-generated version every Friday at 5pm
without me touching it."

If you can't write a measurable version, that's the answer — that problem
isn't ready to automate yet, and any audit recommendation on it will be
hand-wavy.
```

Falsifiability is what separates a useful AI engagement from a vibes-based one. If "did it work?" is unanswerable, "should we keep paying for it?" becomes unanswerable too, and that's how organisations end up with three half-deployed AI tools, none of which anyone's willing to turn off.

The flip side: a problem you can't write a measurable success criterion for is a problem the audit shouldn't try to solve yet. Better to leave it explicitly out of scope than to have it haunt the engagement.
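
One way to force falsifiability, purely illustrative, is to write each criterion as data that a yes/no check can run against later:

```python
# A criterion is falsifiable when "did it work?" reduces to a comparison.
# Field names and the example values are illustrative.
from dataclasses import dataclass

@dataclass
class Criterion:
    metric: str
    baseline: float
    target: float
    deadline: str

    def met(self, current: float) -> bool:
        # This example treats lower as better (response times);
        # flip the comparison for metrics where higher is better.
        return current <= self.target

inbox = Criterion(
    metric="support inbox first-response time (hours)",
    baseline=16.0,
    target=4.0,
    deadline="end of month 2",
)

print(inbox.met(3.5))  # True, so "should we keep paying?" stays answerable
```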

6. The constraint and integration map

```
For the top 3 problems, list:
- Who else needs to use, monitor, or maintain the solution besides me
- Anyone whose approval would be needed to roll it out (legal, security,
  IT, manager, partner)
- Existing systems the solution would need to plug into
- Existing systems the solution must NOT touch (because of compliance,
  trust, or political reasons)
- Whether the solution needs to keep working if I'm on holiday, or if I'm
  the only person who'd ever interact with it
```

Most "AI fails" aren't model failures. They're rollout failures. The model writes a perfectly good draft response to a customer ticket; the system has no path to get it in front of the human who has to send it. The auto-generated weekly report runs flawlessly; nobody on the leadership team can be persuaded to look at a sixth dashboard.

The integration map is what turns "build it" into "build it and have it actually used." If you don't bring this, the auditor has to invent it on the call, and inventions made under time pressure tend to be optimistic.
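
If plain bullets feel too loose, the map can be as simple as a few lists per problem. Everything named below is invented:

```python
# An integration map as plain data. All names are invented examples;
# the "must_not_touch" list is the one people forget to write down.
integration_map = {
    "auto-drafted ticket replies": {
        "users": ["support team", "me"],
        "approvers": ["IT", "support lead"],
        "plugs_into": ["Zendesk", "shared support inbox"],
        "must_not_touch": ["billing system"],  # compliance, trust, politics
        "survives_my_holiday": True,
    },
}
```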

7. The prior-attempts archive

```
List every attempt — yours or someone else's at this company — to solve any
of the top 3 problems with software, AI, automation, or process changes.
For each one:
- What was tried
- What worked
- What didn't
- Why you think it failed (or stalled, or just got dropped)
- What would have to be different this time to not repeat the same outcome
```

Most non-trivial automation problems have already been attempted at least once. The reason to surface this isn't to make anyone feel bad about it — it's to avoid spending the audit hour re-pitching a solution that's already been ruled out for a non-obvious reason.

Sometimes the prior attempt failed because the tool didn't exist yet. Great, that's a real reason to retry. Sometimes it failed because there's a stakeholder who blocks it for political reasons. Knowing that going in changes the entire shape of what gets recommended.

How to use the output

Print all seven outputs. Not on screen — print them. Mark them up with a pen. Cross out things that became obvious as you wrote them. Star the friction items that the workflow inventory confirmed are large recurring time sinks.

The marked-up version is what you bring to the call. The unmarked version is overrated.

What this gets you is roughly an hour of consulting time spent on your specifics rather than on the consultant reconstructing your business out loud. Even if the engagement is short, even if the consultant is mediocre, the prep work makes the recommendations meaningfully more usable.

And in the case where you do all this and realise you don't actually need an audit — the workflow inventory plus the friction list plus the success criteria already point at the answer — congratulations, you just saved yourself the fee.

Going further

The other reason to do all seven of these is that they're the same artefacts I produce during a paid audit, just with my framing on top. If you'd rather hand the marked-up output to someone who runs multi-agent systems daily and wants to turn it into a prioritised roadmap with named tools and time-savings estimates, that's what the AI Strategy & Audit Session is — a 60-minute call, a written strategy document delivered within 24 hours, top 5 automations ranked by ROI, a 20% discount code for any follow-on work. $299 flat.

But you don't strictly need me. The seven prompts above are the actual asset. If you fill them in honestly, the next move is usually obvious whether you bring me, someone else, or nobody.
