You bought the Claude Code seats. The announcement went out. A few engineers tried it the first week.
Now it's been a month. Most of them have gone back to their old workflow.
This is the pattern we see most often — and it has almost nothing to do with Claude Code itself.
What's Actually Happening
Engineers are pragmatists. If a tool doesn't immediately save them time, they stop using it. And Claude Code — used wrong — doesn't immediately save time.
The most common complaints we hear:
"It's slower than just writing it myself."
For small, well-understood tasks, this is sometimes true. If you already know exactly what code you want to write, a 20-word prompt plus a review of the output can take longer than typing the function yourself. Engineers hit this early and conclude: "Not for me."
What they haven't discovered: Claude Code's value isn't in replacing typing. It's in the tasks you've been avoiding — writing tests, documenting functions, reviewing your own PRs before submitting, untangling legacy code.
"I don't know how to prompt it well."
This is the most common blocker. Engineers who never learned structured prompting get inconsistent results, attribute it to the tool, and give up. They're not wrong that results are inconsistent — they just diagnosed the wrong cause.
"It's not part of our workflow."
Nobody talks about it in standups. There's no #claude-wins channel. Nobody shared a prompt that worked. It exists in isolation, and isolated tools die.
The One-Afternoon Fix
You don't need a six-week training program. You need a focused 2-3 hour session with your team that hits three things:
1. Pick one anchor workflow per role
The mistake is "use it for everything." The fix is "use it for one specific thing, repeatedly, until it becomes automatic."
Examples that work immediately:
- Pre-PR review: Before submitting, paste your diff and ask Claude Code to find issues a reviewer would catch. Reduce back-and-forth.
- Test generation: Describe what a function does, ask for edge case tests. Not perfect, but 70% of the way there in 30 seconds.
- Documentation: Select an undocumented function, ask for a docstring. Ship it.
One workflow. High repetition. The habit forms fast.
2. Do a live before/after prompt comparison
Take a task your team does weekly. Show what a bad prompt produces. Show what a structured prompt produces. The gap is usually obvious enough that engineers immediately want to try it themselves.
This is worth an hour of everyone's time. It resets expectations — from "chatbot" to "pair programmer."
3. Build social proof fast
In the first week: create a #claude-wins Slack channel. You seed it. Share one win per day for five days. Ask two other people to share.
Social proof is the fastest adoption accelerator there is. Once engineers see their peers saving time with something, they try it themselves.
The Real Metric to Watch
It's not "did anyone use it." It's: what percentage of PRs mention Claude Code in the description?
If that number climbs from 0% to 30% in 30 days, you're winning. If it stays at 0%, the tool isn't embedded in the workflow yet.
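This metric takes one pipeline to compute. A minimal sketch below works from a plain-text file of PR descriptions, one per line; the fixture data, the filename, and the GitHub CLI command mentioned in the comments are assumptions, so adapt the grep pattern to however your team tags AI-assisted PRs:

```shell
# Count what share of recent PR descriptions mention Claude Code.
# This reads a local file of PR bodies; one assumed way to produce a real
# one is the GitHub CLI, e.g. `gh pr list --state merged --limit 100 --json body`,
# flattened to one description per line. Sample data for illustration:
cat > pr_bodies.txt <<'EOF'
Fix login race condition. Tests generated with Claude Code.
Bump dependency versions.
Refactor billing module (pre-PR review done in Claude Code).
Add dark mode toggle.
EOF

total=$(wc -l < pr_bodies.txt)                  # all PRs in the window
mentions=$(grep -ci "claude code" pr_bodies.txt) # case-insensitive line count
awk -v m="$mentions" -v t="$total" \
    'BEGIN { printf "%d%% of PRs mention Claude Code\n", (m / t) * 100 }'
```

Run weekly and chart the trend; the direction matters more than the absolute number.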
What Good Looks Like
Teams with structured onboarding typically hit 60-70% active utilization at 30 days. Industry average without training: 20-35%.
That 35-40 percentage-point gap represents hours of productivity per engineer, per week, that your team is currently not capturing.
We put together a free playbook sample with the first 3 modules of our Claude Code team onboarding: askpatrick.co/playbook-sample.html
If you want to measure where you actually stand first: askpatrick.co/roi-calculator.html
The engineers who resist AI tools aren't lazy or stubborn. They're responding rationally to a bad rollout experience. Fix the rollout, and the adoption follows.
Ask Patrick trains engineering teams on Claude Code and Microsoft Copilot. Flat-fee engagements, no per-seat licensing. askpatrick.co