Most teams are having the wrong argument. The debate isn't "should we rebuild"—it's whether you've looked at the right data first.
I've pulled more than a few product teams out of this exact trap. One burned a quarter rebuilding their onboarding only to see activation barely move. Another iterated for six months on a flow that had a structural problem from the start, adding tooltips and reorganizing steps while the actual issue sat untouched two layers deeper.
Here's what both teams had in common: they made the rebuild vs. improve call based on instinct and internal pressure, not signal.
That's fixable.
The real problem with this decision
When activation flatlines, the conversation usually goes one of two ways.
The first: "We've been incrementally improving this for months. It's time to rebuild." This feels decisive. But it's often a reaction to frustration, not evidence. Rebuilding satisfies the internal need to feel like you're taking bold action. It doesn't automatically address the root cause.
The second: "A full rebuild is too expensive right now. Let's keep iterating." This feels prudent. But if the current flow has a structural problem—a wrong assumption about what developers need to experience first, or a sequence that doesn't match how they actually think—iterating around it will only produce marginal gains. You're polishing something that needs to be torn down.
Neither instinct is wrong, exactly. They just aren't grounded in what the data can actually tell you.
There are three signals that cut through the argument. I've seen them hold up across enough developer products to trust them. And the time it takes to check all three is probably less than your next sprint planning meeting.
Signal 1: Where in the journey is the drop-off happening?
This one requires you to be honest about what "activation" actually means for your product. Not sign-up. Not email confirmation. The moment a developer has experienced enough of your product to understand whether it solves their problem.
Now look at your drop-off data. Where are developers leaving?
If developers are dropping off before they've had the chance to reach that value moment—before the first API call, before the first integration runs, before they've gotten any output at all—that's usually a rebuild signal. The problem isn't that your steps are confusing or your copy is unclear. The problem is that the path to value is too long, or the framing at the front end is creating the wrong expectations, or developers aren't self-selecting correctly before they start.
The instinct in this situation is to improve the steps right before the drop-off. But drop-off location is telling you something different: it's showing you where the gap between developer expectation and product reality became too wide to cross. If that gap opens before developers have experienced core value at all, improving the surrounding steps won't close it. The architecture is wrong—the flow is asking developers to commit before they understand what they're committing to.
That's not an iteration problem. That's a structural one.
If your drop-off is concentrated in the middle or later stages—after developers have had initial contact with the core value, but before they've completed the full flow—that's more often an improvement opportunity. The structure is right. There's friction you haven't removed yet.
The signal: Drop-off concentrated before value exposure means rebuild. Drop-off concentrated after value exposure means improve.
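One rough way to operationalize this check, sketched in pandas. The `events` table, the step numbering, and `VALUE_STEP` are hypothetical stand-ins for your own instrumentation; the point is simply to measure what share of developers leave before ever touching the value moment.

```python
import pandas as pd

# Hypothetical: the step at which a developer first sees real output.
VALUE_STEP = 4

# Illustrative event log: one row per (user, onboarding step reached).
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3, 3],
    "step":    [1, 2, 3, 1, 2, 1, 2, 3, 4, 5],
})

# Furthest step each developer reached before leaving.
last_step = events.groupby("user_id")["step"].max()

# Share of developers who left before the value moment.
dropped_before_value = (last_step < VALUE_STEP).mean()
print(f"Dropped before value exposure: {dropped_before_value:.0%}")
```

If that share dominates your drop-off, the structural reading above applies. If most departures happen at steps past `VALUE_STEP`, you're looking at friction, not architecture.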
Signal 2: Does completing onboarding predict activation?
This one surprises teams every time they run it. Pull two cohorts: developers who completed your onboarding flow, and developers who skipped it and went straight to your docs. Compare activation rates.
If developers who completed onboarding activate at significantly higher rates than those who skipped it, your flow is doing something useful. The signal-to-noise ratio is there. You have a structure worth improving.
If the difference is small—or if developers who skipped onboarding activate at similar or better rates—that's a different story. It means your onboarding isn't providing something developers couldn't find themselves. They're working around it. That's a structural indictment, not an execution one.
When developers route around your designed path and still succeed, it usually means the flow is optimized around internal assumptions about what developers need—not around what they actually do to reach value. The flow reflects how your team thinks about the product, not how developers experience it. You can improve that flow indefinitely. You're improving the wrong thing.
The variation worth watching: if high-ICP developers skip onboarding and succeed, but lower-ICP developers complete it and still don't activate, your onboarding might be serving the wrong segment. That's a rebuild with a different diagnosis—you're building onboarding for developers who don't need it, instead of for the ones who do.
The signal: Completion predicts activation means improve. Completion and activation are uncorrelated means rebuild.
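A minimal sketch of the cohort comparison, again in pandas. The `users` table and its column names are illustrative, not from any specific analytics schema:

```python
import pandas as pd

# Illustrative cohort data: did each developer complete onboarding,
# and did they eventually activate?
users = pd.DataFrame({
    "user_id":              [1, 2, 3, 4, 5, 6, 7, 8],
    "completed_onboarding": [True, True, True, True, False, False, False, False],
    "activated":            [True, True, False, True, True, False, False, True],
})

# Activation rate per cohort, and the lift from completing onboarding.
rates = users.groupby("completed_onboarding")["activated"].mean()
lift = rates[True] - rates[False]

print(rates)
print(f"Lift from completing onboarding: {lift:+.0%}")
```

One caveat worth stating: developers who skip onboarding self-select, so before reading a small lift as a structural indictment, check that the two cohorts are comparable on segment and intent, and that the sample is large enough for the difference to mean anything.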
Signal 3: Has friction migrated without completion rates improving?
This is the one that tells you whether you've been iterating in circles.
Pull the last three iterations of your onboarding. For each one: where was the primary drop-off point, and what was the overall completion rate?
If the friction point has moved between iterations—step 4 became step 5 became step 7—but the completion rate hasn't meaningfully improved, you've been relocating friction, not removing it. Developers are hitting the same resistance; it's just showing up in a different place now.
This is the clearest rebuild signal in the data. It tells you there's a load-bearing structural issue that iteration hasn't touched. The friction isn't in the steps. It's in the assumption the steps are built on.
When the problem location moves but completion doesn't improve, you haven't solved anything—you've relocated the resistance. Eventually teams stop noticing that the friction point keeps shifting because each iteration feels like progress. It's not. You're rebuilding the same wall in a different spot.
If friction is migrating but completion is improving—say, from 30% to 45% to 55%—that's a healthy iteration signal. Keep going. The structure is sound and the changes are working.
The signal: Friction migrates, completion stays flat means rebuild. Friction migrates, completion improves means improve.
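To make the check concrete, here's a small sketch. The iteration summaries and the five-point "meaningful improvement" threshold are illustrative assumptions, not fixed rules:

```python
# Hypothetical per-iteration summaries: where the biggest drop-off was,
# and what fraction of developers finished the flow.
iterations = [
    {"name": "v1", "primary_dropoff_step": 4, "completion_rate": 0.31},
    {"name": "v2", "primary_dropoff_step": 5, "completion_rate": 0.33},
    {"name": "v3", "primary_dropoff_step": 7, "completion_rate": 0.32},
]

dropoffs = [it["primary_dropoff_step"] for it in iterations]
rates = [it["completion_rate"] for it in iterations]

# Has the friction point moved between iterations?
friction_migrated = len(set(dropoffs)) > 1
# "Meaningful" is a judgment call; 5 points is an illustrative threshold.
completion_improved = (rates[-1] - rates[0]) >= 0.05

if friction_migrated and not completion_improved:
    print("Rebuild signal: friction is moving but completion is flat.")
elif friction_migrated and completion_improved:
    print("Healthy iteration: structure is sound, keep improving.")
else:
    print("Inconclusive: same friction point across iterations.")
```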
How the three signals combine
You rarely need all three to point the same direction to get clarity. Here's how I use them:
Two or more signals point to rebuild: Stop iterating. You're spending engineering time on the wrong problem. Rebuild means rethinking the journey architecture—what the flow is trying to accomplish, in what order, for which developer. Not just a new UI.
One signal points to rebuild, others are inconclusive: Dig deeper before committing. One anomaly is worth investigating; it might be segment-specific or explained by another variable. But don't dismiss it.
All signals point to improve: Good news—you have a structural foundation worth building on. The work ahead is meaningful friction removal, clearer copy, better error handling, tighter time-to-value within the existing architecture.
No signals point to rebuild and activation is still flat: Look upstream. The problem might not be in your onboarding at all. It might be in how developers arrive—the expectations your marketing and content are creating before they sign up. Fixing the flow won't fix a positioning mismatch.
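The decision rules above can be encoded literally as a small function. The signal labels, the two-vote threshold, and the `activation_still_flat` flag are just a direct translation of the heuristics in this section, assuming each signal has already been reduced to a verdict:

```python
def recommend(signals: dict, activation_still_flat: bool = False) -> str:
    """signals maps each signal name to 'rebuild', 'improve', or 'inconclusive'."""
    rebuild_votes = sum(verdict == "rebuild" for verdict in signals.values())
    if rebuild_votes >= 2:
        return "rebuild"        # two or more signals agree: stop iterating
    if rebuild_votes == 1:
        return "investigate"    # one anomaly: dig deeper before committing
    if activation_still_flat:
        return "look upstream"  # no rebuild signal, activation still flat
    return "improve"            # structure is sound; remove friction

print(recommend({"dropoff": "rebuild", "cohort": "rebuild", "friction": "improve"}))
```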
One more thing before you start
The instinct call usually costs you a quarter. Occasionally it costs two. The three signals cost an afternoon with your analytics.
That's worth doing before you walk into the sprint planning meeting and announce a rebuild that might be the wrong call—or commit to another round of improvements on a flow that needs to be torn down.
If you want to know where your developer journey is actually breaking, the Developer Adoption Score is a free tool that evaluates your current experience across all five stages. It takes about ten minutes and gives you a starting point for this kind of diagnosis.
The debate teams have about rebuild vs. improve is usually the right debate. They're just having it without the data that would make it productive.