Your L&D Team Is About to Own the AI Adoption Problem. Here's What They Need.

Six months ago, IT rolled out GitHub Copilot. They sent an email. They created a SharePoint page with three tutorial links.

Utilization is now at 22%.

Somebody in Finance just asked for the ROI numbers.

That's how L&D ends up owning the AI adoption problem. Not because they caused it — but because they're the ones with the mandate to fix it.

If you're a Learning & Development manager about to inherit this, here's what you actually need to know.


The core problem isn't the tool

When AI tool adoption fails, people assume it's a technology problem. Wrong interface, wrong workflow, too many hallucinations.

But the teams that consistently hit 65–75% utilization within 30 days of training have one thing in common: they got structured, role-specific guidance on what the tool is actually for in their specific job.

The teams stuck at 20–35%? They got a license and a YouTube link.

This is a training problem. Which means it's your problem now.


What makes AI tool training different from other software rollouts

Standard enterprise software training (Salesforce, ServiceNow, Workday) teaches people a process. "Click here to submit an expense. Fill in these fields."

AI tool training is different. There's no fixed process. The value comes from developing judgment about when and how to use the tool — and that judgment is role-specific.

What a senior engineer does with Claude Code is completely different from what a junior developer does. What a business analyst does with Microsoft Copilot is completely different from what a product manager does.

Generic training fails because it tries to teach "using AI" as a single skill. It isn't.


The three things that actually move utilization numbers

1. Anchor it to a real job task they already hate

Don't train people on "prompting techniques." Train them on "how to use Claude Code to write the first draft of your test suite" or "how to use Copilot to prep your weekly status report in five minutes instead of forty-five."

Pick the most-hated recurring task in their workflow. Make that the anchor. Utilization follows.

2. Live demonstration beats self-paced video every time

Self-paced AI training has a specific failure mode: the person watches the demo, thinks "okay, that looks useful," closes the tab, and never opens the tool again.

Live training — even a 90-minute session with their actual team, on their actual work — has a completely different retention curve. People see a colleague get a real result in real time. That's the moment something clicks.

3. Give them permission to be slow at first

Most engineers who give up on AI coding tools quit during week one, when they're still slower with the tool than without it. That's normal. The learning curve is real.

If nobody tells them this, they conclude the tool doesn't work. If a manager or trainer explicitly frames it — "you'll feel slower for 5–7 days, that's expected, here's the 30-day arc" — retention is dramatically higher.


What you need before you design any training

Baseline measurement. Before you run a single session, document current utilization by team and role. Usage dashboards exist for GitHub Copilot and Microsoft Copilot — pull them. If you don't have a baseline, you can't prove improvement, and you can't tell Finance whether the training worked.

This is the mistake most companies make. They measure utilization after training, see a number, and have no idea if it went up.
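
If your rollout is GitHub Copilot, you don't have to stop at the dashboard: seat-level data is also available over the REST API, which makes it easy to snapshot a baseline and re-run the same query later. Here's a minimal sketch. It assumes a token with Copilot billing access for your org, and the endpoint path and field names follow GitHub's Copilot seats API as I understand it, so check the current docs before you rely on it:

```python
# Sketch: snapshot a GitHub Copilot utilization baseline for an org.
# Assumes a token with Copilot billing access for the org; the endpoint
# and field names follow GitHub's Copilot seats API, so verify against
# the current docs before relying on this.
import os
from datetime import datetime, timedelta, timezone

import requests

ORG = "your-org"  # placeholder: your GitHub org slug
TOKEN = os.environ["GITHUB_TOKEN"]
SEATS_URL = f"https://api.github.com/orgs/{ORG}/copilot/billing/seats"
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}
ACTIVE_WINDOW_DAYS = 14  # "active" = any Copilot activity in this window


def fetch_seats():
    """Page through every Copilot seat assignment in the org."""
    seats, page = [], 1
    while True:
        resp = requests.get(SEATS_URL, headers=HEADERS,
                            params={"per_page": 100, "page": page})
        resp.raise_for_status()
        batch = resp.json().get("seats", [])
        if not batch:
            return seats
        seats.extend(batch)
        page += 1


def baseline(window_days):
    """Count seats with activity inside the window vs. total assigned seats."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    seats = fetch_seats()
    active = sum(
        1 for seat in seats
        if seat.get("last_activity_at")
        and datetime.fromisoformat(seat["last_activity_at"].replace("Z", "+00:00")) >= cutoff
    )
    return active, len(seats)


if __name__ == "__main__":
    active, total = baseline(ACTIVE_WINDOW_DAYS)
    if total:
        print(f"Baseline: {active}/{total} seats active in the last "
              f"{ACTIVE_WINDOW_DAYS} days ({active / total:.0%})")
    else:
        print("No Copilot seats assigned.")
```

Run it before the first training session and again at 30, 60, and 90 days; the deltas are the before/after story.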

Role segmentation. "Developer training" isn't a plan. Are these frontend engineers? Backend? Data engineers? DevOps? The anchor tasks and prompt patterns are different for each. At minimum, segment by role before you build curriculum.

Manager buy-in. The biggest predictor of post-training utilization isn't the quality of the training. It's whether the direct manager reinforces it in the two weeks after. If managers aren't briefed on what good looks like and asked to check in, teams regress.


The measurement framework that justifies renewal

Six months after a Copilot rollout, Finance will ask: "Are we getting ROI?"

If you can't answer, the renewal is at risk.

What you need to have measured:

  • Utilization rate by team at 30, 60, 90 days post-training
  • Self-reported time savings per role (5-minute monthly survey)
  • PR review cycle time (before vs. after)
  • Optional: deployment frequency, if your team tracks it

You don't need all of these. You need one that Finance will believe.
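
Of those four, PR review cycle time is the easiest to pull without surveying anyone. Here's a minimal sketch of the before/after comparison; the PR timestamps are made up, and in practice you'd export created/merged pairs for the two windows from your Git host:

```python
# Sketch: compare median PR cycle time before vs. after training.
# The PR records below are made up; export real created_at / merged_at
# pairs from your Git host for the two windows you care about.
from datetime import datetime
from statistics import median

# (created_at, merged_at) pairs, ISO 8601
BEFORE = [
    ("2025-03-03T09:00:00", "2025-03-05T16:30:00"),
    ("2025-03-10T11:15:00", "2025-03-12T10:00:00"),
    ("2025-03-17T14:00:00", "2025-03-21T09:45:00"),
]
AFTER = [
    ("2025-05-05T09:30:00", "2025-05-06T08:00:00"),
    ("2025-05-12T10:00:00", "2025-05-13T15:20:00"),
    ("2025-05-19T13:45:00", "2025-05-20T11:00:00"),
]


def median_cycle_hours(prs):
    """Median hours from PR opened to PR merged."""
    hours = [
        (datetime.fromisoformat(merged) - datetime.fromisoformat(opened)).total_seconds() / 3600
        for opened, merged in prs
    ]
    return median(hours)


before, after = median_cycle_hours(BEFORE), median_cycle_hours(AFTER)
print(f"Median cycle time: {before:.1f}h before, {after:.1f}h after "
      f"({(before - after) / before:.0%} faster)")
```

One chart of that number by month, annotated with the training date, is usually enough for a renewal conversation.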

The ROI math isn't complicated: a 10-person dev team saving 20 minutes per developer per day works out to roughly 67 hours a month (assuming ~20 working days). At a $100/hr loaded cost, that's about $6,700/month. A $2,500 flat-fee workshop pays back in under two weeks, and still inside a month even if you halve every assumption.

You need that math ready when the question comes.
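
If it helps to have the assumptions written down where anyone can argue with them, the whole thing fits in a few lines. The inputs below mirror the example above; they're placeholders, so swap in your own team size and loaded cost:

```python
# Sketch: the payback math from the paragraph above, with every
# assumption as a named variable so Finance can argue with the inputs
# instead of the conclusion.
TEAM_SIZE = 10               # developers trained
MINUTES_SAVED_PER_DAY = 20   # per developer, post-training estimate
WORKING_DAYS_PER_MONTH = 20
LOADED_COST_PER_HOUR = 100   # fully loaded $/hr
WORKSHOP_FEE = 2500          # flat-fee training cost

hours_saved_per_month = TEAM_SIZE * MINUTES_SAVED_PER_DAY * WORKING_DAYS_PER_MONTH / 60
dollars_saved_per_month = hours_saved_per_month * LOADED_COST_PER_HOUR
payback_days = WORKSHOP_FEE / dollars_saved_per_month * WORKING_DAYS_PER_MONTH

print(f"{hours_saved_per_month:.0f} hours/month saved, "
      f"worth about ${dollars_saved_per_month:,.0f}/month")
print(f"Payback: about {payback_days:.1f} working days")
```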


Where to start

If you're inheriting a failed AI rollout, don't start with more training. Start with diagnosis.

  1. Pull utilization data — who's using it, who isn't, which teams
  2. Have three conversations with people who stopped using it — what specifically made them give up?
  3. Find the two or three people who are using it effectively — what are they doing differently?

The answer is usually right there. Then build training around it.

If you want a benchmark for what good utilization looks like — and what moves teams from stuck to productive — we put together the numbers here: askpatrick.co/ai-adoption-benchmark.html

And if you want an outside set of eyes on your specific team's readiness before you design anything, we do a $500 async assessment: askpatrick.co/assessment.html


Ask Patrick helps engineering teams actually use the AI tools they've already paid for. Flat-fee training, starting at $2,500 for a full team.
