If your platform team only tracks "services onboarded" or "deployments per week," you are measuring compliance, not value. The single metric that predicts whether your Internal Developer Platform (IDP) will deliver return on investment is voluntary adoption rate of the golden path — the percentage of new work that chooses the paved road when an off-road option still exists.
This article shows three ways to measure it concretely, using Backstage, GitHub, Argo CD, and Prometheus. It is the technical companion to the broader Platform Engineering business case — that piece argues why voluntary adoption matters; this one shows how to compute it.
## Why activity metrics mislead platform teams
Most platform dashboards report on activity:
- Number of templates run
- Services in the catalog
- Pipeline executions per day
- Onboarded teams
These numbers go up regardless of whether developers actually like the platform. A mandated platform produces the same activity graph as a beloved one — until attrition spikes and the post-mortem reveals that nobody was self-serving anything; they were filing tickets to comply with a policy.
Voluntary adoption asks a harder, more honest question:
When developers had a real choice, what did they pick?
If the answer trends toward the golden path over time, the platform is genuinely removing friction. If it trends away — or if there was never an off-road option to reject — you do not have signal. You have theatre.
## The three measurement layers
| Layer | What it measures | Source | Cadence |
|---|---|---|---|
| 1. Path-of-least-resistance rate | % of new services created via the golden-path template vs. ad-hoc | Backstage Scaffolder + GitHub repo creation events | Weekly |
| 2. Stickiness rate | % of services still on the golden path 90 days after creation | Catalog metadata + drift detection | Monthly |
| 3. Re-entry rate | % of legacy services voluntarily migrating onto the platform without a mandate | Catalog + GitOps PR activity | Quarterly |
You want all three trending up. Activity metrics — deploys, builds, pipeline runs — are downstream of these and noisy on their own.
## Layer 1: path-of-least-resistance rate
The Backstage Scaffolder logs every template execution. Cross-reference that against all new repositories created in your GitHub organisation during the same window: Scaffolder-created repositories divided by all new repositories is your weekly voluntary adoption rate for new work.
```sql
-- Pseudocode against your data warehouse
SELECT
  date_trunc('week', created_at) AS week,
  COUNT(*) FILTER (WHERE source = 'backstage_scaffolder') AS golden_path,
  COUNT(*) FILTER (WHERE source = 'manual_repo') AS off_road,
  ROUND(
    COUNT(*) FILTER (WHERE source = 'backstage_scaffolder')::numeric
      / NULLIF(COUNT(*), 0) * 100, 1
  ) AS voluntary_adoption_pct
FROM new_services
GROUP BY 1
ORDER BY 1;
```
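If you do not have a `new_services` warehouse table yet, the same ratio can be computed directly from two feeds most teams already have: repository-creation events from the GitHub API and the Scaffolder's task log. A minimal Python sketch — the input shapes here are assumptions for illustration, not actual Backstage or GitHub API types:

```python
from collections import defaultdict
from datetime import date

def weekly_adoption(new_repos, scaffolder_repos):
    """Weekly voluntary adoption rate for new work.

    new_repos: iterable of (repo_name, created_date) pairs, e.g. from
               GitHub repository-creation events (shape assumed here)
    scaffolder_repos: set of repo names the Backstage Scaffolder created
    """
    weekly = defaultdict(lambda: {"golden_path": 0, "off_road": 0})
    for name, created in new_repos:
        iso = created.isocalendar()
        week = f"{iso[0]}-W{iso[1]:02d}"  # ISO week key, e.g. "2026-W17"
        bucket = "golden_path" if name in scaffolder_repos else "off_road"
        weekly[week][bucket] += 1
    # Percentage of each week's new repos that took the paved road
    return {
        w: round(100 * c["golden_path"] / (c["golden_path"] + c["off_road"]), 1)
        for w, c in weekly.items()
    }
```

Feeding it three repos from one week, two of them scaffolded, yields 66.7% for that week — the same number the SQL above would produce.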
Healthy trend: 60% → 80%+ over six months. Stalled below 40%? Your golden path is not actually the easiest path. Find out why before adding more features. The most common culprits are template rigidity, slow scaffold time, and missing escape hatches for legitimate edge cases.
## Layer 2: stickiness rate
Services that get scaffolded from a template often drift off the paved road within weeks. Teams bypass the CI pipeline, stop publishing to the catalog, hand-edit the Helm chart instead of using the platform-provided one. Stickiness measures how many services are still genuinely platform-managed 90 days after creation.
Detect drift with a periodic reconciliation job and an annotation contract:
```yaml
# Argo CD Application — every service should carry these annotations
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    platform.bz/golden-path-version: "2.4"
    platform.bz/last-reconciled: "2026-04-24T08:12:00Z"
```
Then ask: of services scaffolded 90 days ago, how many still carry a current golden-path-version annotation and pass policy admission without exemption?
That ratio is your stickiness rate. Sub-70% means your golden path is too rigid — developers leave because the road does not bend where it needs to. The fix is rarely more enforcement; it is usually more flexibility within the paved road, so teams do not need to step off it for legitimate variations.
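The reconciliation job itself reduces to one filter and one count over a catalog export. A sketch under assumed shapes — the dict layout and the set of "current" versions are illustrative, not a Backstage API:

```python
from datetime import date

# Golden-path versions still considered "on the paved road" (assumption)
CURRENT_VERSIONS = {"2.4", "2.5"}

def stickiness_rate(services, today, current_versions=CURRENT_VERSIONS):
    """Share of services scaffolded at least 90 days before `today`
    that still carry a current golden-path-version annotation.

    services: list of dicts with 'created' (date) and 'annotations'
              (dict) -- the shape of a catalog export is assumed here.
    Returns None while no service is old enough to judge.
    """
    cohort = [s for s in services if (today - s["created"]).days >= 90]
    if not cohort:
        return None
    sticky = sum(
        1 for s in cohort
        if s["annotations"].get("platform.bz/golden-path-version")
        in current_versions
    )
    return round(100 * sticky / len(cohort), 1)
```

Services younger than 90 days are deliberately excluded so a wave of fresh scaffolds cannot inflate the number.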
## Layer 3: re-entry rate
The hardest test, and the most informative one. Of services that existed before the platform, how many have voluntarily migrated onto it in the last quarter — without a top-down mandate forcing the move?
```promql
# Prometheus — services emitting platform-managed telemetry that were not managed 90 days ago
sum(
  count by (service) (
      platform_managed_service_info{managed="true"}
    unless on(service)
      platform_managed_service_info{managed="true"} offset 90d
  )
)
```
This is the return-on-investment signal a CFO can defend in a budget review. A voluntary re-entry means the platform earned the developer's trust enough that they ported a working production service onto it — at their own initiative, on their own schedule, against the inertia of "if it works, do not touch it."
If you are seeing zero quarterly re-entries despite a healthy Layer 1 number, the migration cost is too high. Build a one-command migration template for the most common service shape and watch the rate move.
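To put that PromQL on a dashboard or in a quarterly report, run it through the Prometheus HTTP instant-query endpoint (`/api/v1/query`). A hedged Python sketch — the server URL is an assumption, and the parsing helper is split out so it can be tested without a live Prometheus:

```python
import json
import urllib.parse
import urllib.request

# The re-entry query from above, as a string
REENTRY_QUERY = """sum(count by (service) (
  platform_managed_service_info{managed="true"}
  unless on(service)
  platform_managed_service_info{managed="true"} offset 90d
))"""

def parse_instant_scalar(payload):
    """Pull the single numeric value out of a Prometheus instant-query
    response body; 0 when the result set is empty."""
    result = payload["data"]["result"]
    return int(float(result[0]["value"][1])) if result else 0

def quarterly_reentries(prom_url="http://prometheus:9090"):
    """Run the re-entry query against the Prometheus HTTP API.
    The URL is a placeholder -- point it at your own server."""
    url = f"{prom_url}/api/v1/query?" + urllib.parse.urlencode(
        {"query": REENTRY_QUERY}
    )
    with urllib.request.urlopen(url) as resp:
        return parse_instant_scalar(json.load(resp))
```

Instant-query responses carry values as strings, hence the `float` before `int` — a raw `int("7.0")` would raise.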
## Reading the three numbers together
Each layer in isolation lies. Read them as a system:
| Pattern | Diagnosis |
|---|---|
| L1 high, L2 low | Templates are good; the runtime experience drifts. Invest in policy automation and reconciliation. |
| L1 low, L2 high | The few who adopt love it; discoverability is broken. Invest in developer experience and template marketing, not features. |
| L1 high, L3 zero | New work uses the platform; old work will not migrate. Build a migration template. |
| All three flat | You probably have a mandate hiding the truth. Remove it for one team and remeasure honestly. |
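The table above is mechanical enough to encode directly, which is useful if you want the diagnosis printed next to the raw numbers on a dashboard. The thresholds here (60% for "high" L1, 70% for "high" L2) are illustrative assumptions, not canon — tune them to your own baselines:

```python
def diagnose(l1_pct, l2_pct, l3_count):
    """Map the three layer readings to a diagnosis, mirroring the
    pattern table above. Thresholds are assumptions, not canon."""
    if l1_pct >= 60 and l2_pct < 70:
        return "runtime drift: invest in policy automation and reconciliation"
    if l1_pct < 60 and l2_pct >= 70:
        return "discoverability broken: invest in DX and template marketing"
    if l1_pct >= 60 and l3_count == 0:
        return "migration cost too high: build a migration template"
    return "flat across layers: check for a hidden mandate and remeasure"
```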
## The one anti-pattern to avoid
Do not report voluntary adoption as a single headline percentage to leadership without the three layers underneath. A 92% headline with 30% stickiness is materially worse than a 60% headline with 85% stickiness — the first is a compliance illusion that will collapse when the mandate lifts; the second is a working product with room to grow.
Platform engineering is a product discipline. Measure it the way a product team measures retention, not the way an ops team measures uptime.
## Closing
The mandate question is not ideological — it is a measurement question. If you can prove voluntary adoption is rising, you do not need a mandate. If you cannot measure it, no mandate will save the platform when budget season arrives and the CFO asks what changed.
What is your platform's voluntary adoption rate right now? And, more importantly: does anyone on your team know how to compute it?
For the broader strategic case behind this metric — Forrester ROI numbers, developer attrition costs, and why mandated adoption destroys returns — see the Platform Engineering business case.
For the reference architecture and tooling choices that make these measurements possible, see the Platform Engineering technology deep dive.