Patrick
The 90-Day AI Retrospective: How to Know If Your Copilot Rollout Actually Worked

Six months after your Copilot deployment, your CTO asks: "Is it working?"

If your answer is a shrug, you are not alone. Most engineering orgs have no structured way to evaluate AI tool adoption. They know how many seats were purchased and how many licenses are active, but they have no idea whether the tools are changing how engineers work.

Here is a 90-day retrospective format we run with teams after a training engagement. It takes 90 minutes and produces a clear read on where you stand.

Why 90 Days?

  • Weeks 1-2: Novelty phase. Engineers try it, get inconsistent results, and form early opinions.
  • Weeks 3-6: Plateau. Enthusiasm drops. Usage falls unless engineers found specific workflows that stuck.
  • Weeks 7-12: Either adoption is happening or it is not. At 90 days, the pattern is set.

This is when the data is meaningful.

The 4 Questions Your Retrospective Should Answer

1. What is our actual utilization rate — and how does it compare to our baseline?

If you did not set a baseline before rollout, you cannot answer this. If you did: compare completion rates, feature usage, and time-in-tool from your analytics dashboard.

Benchmark: Teams with structured training typically hit 60-75% weekly active usage at 90 days. Teams with email-only rollouts typically sit at 20-35%.
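If your analytics dashboard only gives you a raw event export, the weekly-active math is easy to script yourself. Here is a minimal sketch in Python, assuming a hypothetical CSV export with `engineer_id` and ISO-format `date` columns (rename the fields to match whatever your dashboard actually emits):

```python
import csv
from datetime import date

def weekly_active_rate(csv_path: str, team_size: int,
                       week_start: date, week_end: date) -> float:
    """Fraction of the team with at least one Copilot event in the given week."""
    active = set()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["date"])
            if week_start <= d <= week_end:
                active.add(row["engineer_id"])
    return len(active) / team_size
```

Run it for the same week before and after rollout and you have your baseline comparison in one number per week.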

2. Which workflows changed — and which did not?

Survey your team (5 questions, anonymous, 3 minutes):

  • What tasks do you now use Copilot for regularly?
  • Where did you try it and stop? Why?
  • What is one thing you wish it did better?
  • Did your PR review time change?
  • Would you recommend it to a peer at another company?

The answers tell you where the training gaps are.

3. Are the skeptics converting or cementing?

In most teams there is a 30/50/20 split at day 1: 30% enthusiastic adopters, 50% neutral, 20% resistant. At day 90, check where that 50% landed. If they moved toward adoption, your rollout worked. If they drifted toward resistance, you have a training problem that is getting harder to fix.

4. What is the ROI story you could tell leadership?

You do not need perfect data. You need a story. Hours saved per engineer per week × loaded hourly cost × team size × weeks per month = monthly value. Compare that to licensing plus training cost. If you cannot construct this story, you cannot justify renewal.
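That arithmetic fits in a few lines of Python. A sketch, with placeholder figures you would replace with your own numbers:

```python
def monthly_roi(hours_saved_per_week: float, loaded_hourly_cost: float,
                team_size: int, monthly_license_cost_per_seat: float,
                monthly_training_cost: float = 0.0,
                weeks_per_month: float = 4.33) -> dict:
    """Rough monthly value of AI tooling versus its monthly cost."""
    value = hours_saved_per_week * loaded_hourly_cost * team_size * weeks_per_month
    cost = monthly_license_cost_per_seat * team_size + monthly_training_cost
    return {"monthly_value": value, "monthly_cost": cost, "net": value - cost}

# Illustrative only: 2 hrs/week saved, $100/hr loaded cost, 20 engineers, $19/seat
print(monthly_roi(2, 100, 20, 19))
```

Even with conservative inputs, the value side usually dwarfs the license line, which is why the real question is whether the hours-saved estimate survives scrutiny.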

The 30-Minute Exercise

Split your team into groups of 3-4. Give each group 15 minutes to answer:

  • "What is one workflow that is genuinely faster now?"
  • "What is one thing we tried and abandoned?"
  • "If we were starting over, what would we do differently in week 1?"

Debrief as a full group. Record the answers. This surfaces the actual state of adoption faster than any survey.

What to Do With the Results

If utilization is 60%+: You are in good shape. Focus on deepening skill — find the 3-4 prompting patterns that are working and document them for the whole team.

If utilization is 30-60%: The plateau is real. The gap is almost always training — engineers learned basic prompting but never learned the workflows that make the tool essential. A targeted 3-hour session focused on their actual use cases typically breaks through this.

If utilization is below 30%: You have an adoption failure. The tool is not integrated into workflows. Engineers have reverted to defaults. This is recoverable but it requires a structured re-launch, not just another nudge email.
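If you want the retrospective to spit out a recommendation automatically, the three bands above are trivial to encode (cutoffs taken from this article; a sketch, not a standard):

```python
def adoption_read(utilization: float) -> str:
    """Map 90-day weekly active utilization (0.0-1.0) to a recommended next step."""
    if utilization >= 0.60:
        return "deepen: document the 3-4 prompting patterns that are working"
    if utilization >= 0.30:
        return "train: run a targeted session on the team's actual workflows"
    return "relaunch: structured re-launch, not another nudge email"
```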


Wondering where your team actually stands? The free ROI calculator at askpatrick.co/roi-calculator.html runs the utilization math for your team in about 2 minutes. If you want to run this retrospective with a facilitator, that is what we do: askpatrick.co/assessment.html
