
Apogee Watcher

Posted on • Originally published at apogeewatcher.hashnode.dev

Performance rituals that fail (and one that stuck)

Every agency accumulates performance rituals. Some look serious on a slide deck; few survive contact with a packed sprint board.

Below are three habits we see fail repeatedly, and one lightweight rhythm that kept teams honest after the novelty wore off. This is not a manifesto against meetings. It is a simple split between meetings that earn their place on the calendar and work that needs steady measurement.

For structure around reviews and cadence we link to:

The first spells out what belongs in a review; the second lists what to monitor on an ongoing basis. This piece focuses on why rituals stall and what replaced hollow encouragement on teams that actually shipped.

Ritual one: the monthly Lighthouse export nobody reads

Someone runs Lighthouse on five URLs, saves PDFs to a shared drive, and posts "FYI" in Slack. The folder fills up. Nobody attaches those PDFs to tickets. Three months later the same templates regress, and nobody can say whether it drifted slowly or broke in one deploy.

The ritual signals diligence. It still produces no owners, no deadlines, and no trend line anyone trusts.

Ritual two: the performance agenda item with no exit criteria

"We should discuss performance" appears on a recurring call. Ten minutes pass. Everyone agrees it matters. The conversation drifts into tooling opinions. No decision, no assignee, no date.

Without one measurable threshold or route-level signal to anchor the discussion, performance stays abstract. Abstract topics rarely survive sprint planning.

Ritual three: hero fixes that never become policy

A developer spends a weekend shrinking hero images on one flagship site. Numbers improve. Leadership cheers. The pattern never becomes a component rule, a budget line, or a check in CI. The next client site repeats the same mistake.

Hero work without guardrails is theatre. The audience leaves after one act.

The one that stuck: short signal, one named owner

The habit that survived was smaller than any workshop. Each week, one person (rotate if you like) spent fifteen minutes on three questions only:

  1. Which monitored URL or template worsened against our agreed threshold?
  2. Is that regression tied to a deploy, a content change, or unknown noise?
  3. Who owns the next action, and when do we check again?
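
Questions one and three reduce to a comparison a script can draft for the stand-up note. A minimal sketch, assuming you already collect one current value per monitored template; the template names, thresholds, and owner below are illustrative:

```python
def weekly_regressions(current: dict, thresholds: dict, owner: str) -> list:
    """Which monitored templates crossed their agreed threshold, and who
    follows up. Assumes higher is worse (e.g. LCP in seconds)."""
    notes = []
    for template, value in sorted(current.items()):
        limit = thresholds.get(template)
        if limit is not None and value > limit:
            notes.append(f"{template}: {value} exceeds {limit} (owner: {owner})")
    # Say plainly when nothing crossed; that honesty is the ritual.
    return notes or ["Nothing crossed a threshold this week."]
```

Question two, tying the regression to a deploy or a content change, still needs a human; the script only keeps the fifteen minutes pointed at the right templates.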

No slide deck and no sermon on Core Web Vitals. When nothing crossed the threshold, the stand-up note said so plainly. That honesty kept the ritual from turning into theatre.

The difference was not lack of ambition. It was signal frequency and named accountability. Weekly beats monthly when you want habits instead of guilt.

What the failures shared

Failed rituals usually celebrate activity instead of closing loops. The ritual that lasted treated performance like any other operational metric: visible trend, clear threshold, single owner, predictable review.

If you already run monthly reviews, add something weekly and dull between those meetings to tighten the loop. Dull is easier to keep than heroics.

If you are deciding what to monitor first, use the checklist linked above, then decide who owns the follow-up before you buy another tool.

Try this next week

Pick one high-traffic template. Agree on one metric threshold you will not reopen in the meeting (debate the number beforehand if you need to). Assign one owner for seven days. Review once. If nothing regresses, you still learn whether your alerting noise is low enough to trust.

Small ritual, clearer ownership: it beats another PDF graveyard.
