
Your client read the performance PDF. Then nothing happened.

You sent the report on Friday, and the client replied on Monday: "Looks good, thanks." Then nothing moved.

If you work in an agency, this is familiar. The report was not wrong. The metrics were there. You probably even explained LCP, INP, and CLS clearly. Still, no owner picked up the work, no ticket landed in sprint, and the same issues appeared again next month.

The problem is usually not the PDF itself. The problem is the operating model around it. Templates you can reuse directly are a useful starting point; this post is the practical layer between those templates and day-to-day delivery.

Why reports stall after "looks good"

In our experience, one of these four gaps is present:

  1. No clear decision. The report presents data, but not the decision required this week.
  2. No named owner. "Dev team" is not an owner. One person needs the task on a board with a due date.
  3. No business impact wording. Numbers are shown, but the business impact is not spelled out as lost leads, wasted paid traffic, or checkout risk.
  4. No review rhythm. The report is delivered ad hoc, so teams treat it as commentary instead of an operating input.

Until those gaps close, performance reporting becomes a polite monthly ritual.

A report should produce three outputs, every time

Before a report leaves your team, check that it will produce:

  • One decision (what gets prioritised now)
  • One owner (who ships it)
  • One date (when it will be reviewed or completed)

If any of those are missing, the document is not done yet.

The agenda we use in client review calls

You do not need a long meeting. Thirty minutes is enough when the agenda is strict.

1) Baseline in one minute

Open with trend, not raw noise:

  • What changed since last cycle
  • Which template or route worsened
  • Which metric crossed an agreed threshold

Keep it short. The point is alignment, not narration.
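
If you pull field data for that baseline, a small script keeps the threshold check consistent from cycle to cycle. Here is a minimal sketch against the Chrome UX Report API; the thresholds below are the standard Core Web Vitals "good" boundaries, so swap in whatever you agreed with the client, and `CRUX_API_KEY` plus the example origin are placeholders.

```typescript
// Minimal sketch: pull p75 field data from the CrUX API and flag which
// metrics crossed the agreed thresholds. Thresholds here are illustrative.
const THRESHOLDS: Record<string, number> = {
  largest_contentful_paint: 2500, // ms
  interaction_to_next_paint: 200, // ms
  cumulative_layout_shift: 0.1,   // unitless
};

async function baselineCheck(origin: string): Promise<void> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        origin,
        formFactor: "PHONE",
        metrics: Object.keys(THRESHOLDS),
      }),
    },
  );
  const { record } = await res.json();

  for (const [metric, limit] of Object.entries(THRESHOLDS)) {
    const p75 = Number(record?.metrics?.[metric]?.percentiles?.p75);
    if (Number.isNaN(p75)) continue; // not enough field data for this metric
    const flag = p75 > limit ? "crossed threshold" : "ok";
    console.log(`${metric}: p75=${p75} (limit ${limit}) -> ${flag}`);
  }
}

baselineCheck("https://example.com").catch(console.error);
```

Run it before the call and paste only the lines that crossed.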

2) Top three regressions only

Do not bring ten issues to a team that can ship three.

For each item, include:

  • affected URL group (not only a single page)
  • likely root cause class (image payload, third-party script, render-blocking path, etc.)
  • size estimate (small, medium, large)

This reduces the common "we need more analysis first" loop.
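
If you carry those three fields as structured data rather than prose, each item fits a small record like this. The shape and the root-cause classes are illustrative, not a taxonomy you have to adopt:

```typescript
// Hypothetical shape for a regression brought to the review call.
type RootCauseClass =
  | "image-payload"
  | "third-party-script"
  | "render-blocking-path"
  | "other";

interface RegressionItem {
  urlGroup: string;                          // template or route, not a single page
  metric: "LCP" | "INP" | "CLS";             // which metric regressed
  likelyRootCause: RootCauseClass;           // a class, not a finished diagnosis
  sizeEstimate: "small" | "medium" | "large";
}
```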

3) One decision per regression

For each of the top three, pick one:

  • fix this sprint
  • defer with reason
  • monitor for one more cycle

Write the decision live during the call.

4) Confirm owner and due date before closing

No owner means no outcome, and no date means no accountability.

End the call only after each accepted action has:

  • owner name
  • board/ticket reference
  • expected review date
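
Written down, the decision log from the call can stay this small. Building on the regression sketch above (field names are hypothetical), the useful property is that an action missing any of the three closing fields is visibly incomplete:

```typescript
// Hypothetical decision-log entry captured live during the call.
type Decision = "fix-this-sprint" | "defer-with-reason" | "monitor-one-cycle";

interface AcceptedAction {
  regression: RegressionItem;  // from the triage sketch above
  decision: Decision;
  reason?: string;             // required when deferring
  owner: string;               // a named person, not "dev team"
  ticket: string;              // board/ticket reference
  reviewDate: string;          // ISO date of the expected review
}

// Do not end the call while this returns false for any accepted action.
const isClosed = (a: AcceptedAction): boolean =>
  Boolean(a.owner && a.ticket && a.reviewDate);
```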

Turn metrics into client language

Most clients do not need another lecture on how Lighthouse works. They need to know what changes if they fund or delay the fix.

Instead of:
"INP p75 moved from 180ms to 270ms."

Use:
"Checkout interaction delay moved into a zone where users feel lag on mobile. If paid traffic keeps scaling, this increases drop-off risk during high-intent clicks."

Same data, different consequence.
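
If you want that translation to be repeatable rather than improvised, a small helper can map the p75 reading onto the published INP bands (good up to 200 ms, needs improvement up to 500 ms) and return the client-facing sentence. A sketch; the wording should be your own:

```typescript
// Turn an INP p75 reading into client language instead of a raw number.
// Bands follow the published Core Web Vitals thresholds for INP.
function describeInp(p75Ms: number, context = "key interactions"): string {
  if (p75Ms <= 200) {
    return `${context} respond within the range users perceive as instant.`;
  }
  if (p75Ms <= 500) {
    return `${context} have moved into a zone where users feel lag on mobile, ` +
      `which raises drop-off risk during high-intent clicks.`;
  }
  return `${context} feel visibly sluggish; this is likely costing conversions now.`;
}

console.log(describeInp(270, "Checkout interactions"));
```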

Build a monthly cadence teams can trust

A reliable rhythm beats heroic one-off audits.

We suggest:

  • Week 1: collect baseline and annotate anomalies
  • Week 2: client review call and decisions
  • Week 3: implementation check-in with owners
  • Week 4: verify shipped fixes and update next cycle backlog

When this cadence runs for 2-3 months, reporting stops being "extra work" and starts functioning like a delivery control loop.

Where agencies lose margin (quietly)

Many agencies absorb performance analysis time without noticing:

  • repeated explanation of the same issue each month
  • rescue debugging before stakeholder calls
  • context switching because ownership was unclear

A tighter report-to-decision workflow protects margin because teams stop re-diagnosing issues that were already known.

A simple scorecard to track report quality

Add this to your internal QA before sending any client report:

  • Did we state one clear decision?
  • Did each accepted action get one owner and one date?
  • Did we translate at least one technical metric into business risk?
  • Did we limit priorities to what can ship this cycle?
  • Did we schedule the next review before ending the call?

If three or fewer are true, expect "thanks, looks good" and little follow-through.
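
If it helps to make that gate mechanical, the five questions fit in a small pre-send check (field names are hypothetical):

```typescript
// Hypothetical internal QA gate mirroring the five scorecard questions.
interface ReportScorecard {
  statesOneDecision: boolean;
  everyActionHasOwnerAndDate: boolean;
  translatesMetricToBusinessRisk: boolean;
  prioritiesFitThisCycle: boolean;
  nextReviewScheduled: boolean;
}

// Three or fewer "yes" answers predicts "thanks, looks good" and no follow-through.
function readyToSend(card: ReportScorecard): boolean {
  return Object.values(card).filter(Boolean).length >= 4;
}
```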

What to do next

Use your next client report as a test:

  1. Limit it to top three decisions.
  2. Assign one owner per accepted action during the call.
  3. Book a verification checkpoint before the meeting ends.

Then compare completion rate against your previous cycle.

Better reporting is not about prettier PDFs. It is about whether work gets shipped after the PDF lands.
