+73% revenue. Not from a new feature. Not from a redesign. Not from a growth hack some PM spent three quarters planning.
From a bug. A config error that sat in production for sixteen days, silently funnelling every new user in one of our European markets into the most expensive plan.
When someone finally noticed, the team wanted a hotfix and a post-mortem. I wanted to see what was in the database.
5% vs 43%
Here's what I found.
Before the bug, 5% of new users in that market picked the premium plan. That was the normal baseline. Everyone else scrolled down to the cheapest option and hit Continue.
During the bug - when premium was the default - 43% kept it.
Forty-three percent.
The onboarding screen wasn't hiding anything. The price was right there. "Change plan" was one click away. Nobody was forced into anything. Almost half the users just looked at the premium plan and thought: yeah, this works for me.
I didn't believe it at first. Classic survivorship bias, right? They selected it, but did they actually pay?
They did.
- 38% opened and activated their accounts
- 48% made real payments within the first month
- Only 16% downgraded later
The funnel shape was identical to the control group: same activation rate, same payment rate. The only thing that changed was how many people entered the premium funnel. And that number was roughly 9x higher because of a bug.
The number no one expected
I pulled the revenue numbers for both cohorts over the same period.
Normal users: ~€12,000/month.
Bug users: ~€21,000/month.
+73%. Same product. The only difference was which plan showed up first.
Let that sink in. A config error - an actual production defect - generated more incremental revenue than entire features that took months to build.
I didn't file a post-mortem. I filed a proposal.
This is where most stories end. Bug found, bug fixed, regression test added, everyone moves on. That's what a responsible engineer does.
I did the irresponsible thing. I went to the product team and said: don't fix this. Let me turn it into a controlled experiment.
The bug had accidentally run an uncontrolled experiment - no control group, no tracking, no consent framework, but a screaming signal. If we could reproduce it properly - with feature flags, per-country segmentation, and real cohort tracking - we'd know whether this was noise or a genuine insight about how users make decisions.
They said yes.
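The post doesn't show how "real cohort tracking" was wired up, so here's a minimal sketch of one common approach, with every name assumed rather than taken from the team's actual code: deterministic bucketing by hashing the user ID together with the experiment name, so a user lands in the same cohort on every request without storing any extra state.

```python
import hashlib

def assign_cohort(user_id: str, experiment: str = "premium-default") -> str:
    """Deterministically bucket a user into treatment or control (50/50).

    Hypothetical sketch - the experiment name and split are assumptions.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 100 < 50 else "control"

# Same user always gets the same cohort - stable across sessions and deploys.
print(assign_cohort("user-42"))
```

Deterministic assignment matters here because the comparison is real payments over a full billing cycle: a user who flips cohorts mid-cycle would poison both groups.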
The implementation
The whole thing was embarrassingly simple.
A feature-toggled tariff resolver that runs at registration time. Two guard conditions checked in sequence:
function resolveTariff(user):
    if not experiment.isEnabled(user.country):
        return defaultPlan()
    if user.type not in experiment.targetSegments:
        return defaultPlan()
    return experiment.plan  // premium

If any condition fails - default plan. No impact on anyone outside the experiment. No added latency - just a few in-memory lookups against cached config.
Each country had its own toggle. Kill one market without touching another. Add a new country without a deploy - just a config change.
That's it. That's the whole feature. A senior engineer could review this in ten minutes. A junior could build it in a day.
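As a concrete sketch of the resolver above - written in Python, with all class, field, and plan names assumed, since the article only gives pseudocode - the per-country toggles and target segments live in a cached config object, and resolution is a couple of in-memory set lookups:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentConfig:
    # Per-country kill switches: flipping a flag is a config change, not a deploy.
    enabled_countries: set = field(default_factory=set)
    target_segments: set = field(default_factory=set)
    experiment_plan: str = "premium"  # the plan shown as the default

@dataclass
class User:
    country: str
    type: str

DEFAULT_PLAN = "basic"  # hypothetical name for the normal default

def resolve_tariff(user: User, config: ExperimentConfig) -> str:
    """Runs at registration time; pure in-memory lookups, no I/O."""
    if user.country not in config.enabled_countries:
        return DEFAULT_PLAN
    if user.type not in config.target_segments:
        return DEFAULT_PLAN
    return config.experiment_plan

config = ExperimentConfig(enabled_countries={"DE"}, target_segments={"individual"})
print(resolve_tariff(User("DE", "individual"), config))  # premium
print(resolve_tariff(User("FR", "individual"), config))  # basic
```

Disabling a market is just removing its country code from `enabled_countries` - the same mechanism that later served as the kill switch once the experiment became the default.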
The hard part was never the code. The hard part was not fixing the bug.
It worked. Again.
The original market went first - we already had the accidental baseline. The experiment ran for a full billing cycle: real payments, not just plan selections.
The 43% selection rate from the bug period reproduced almost exactly under controlled conditions. Revenue uplift held.
We expanded to a second European market. Same results.
The product team's conclusion:
"The experiment was successful. We're making the premium plan the recommended default during onboarding, and we'll start an A/B experiment in the next market to check whether we get the same effect."
The "experiment" became the default. The feature flag stayed - as a kill switch, not an experiment.
What this actually taught me
I've shipped features with months of work behind them that moved metrics by low single digits. This one was a three-condition if statement that moved revenue by 73%.
There's a lesson here that most backend engineers never learn, because we're trained to think our value is in the complexity of what we build. Distributed systems, event sourcing, saga patterns, microservice choreography - that's the hard stuff, and the hard stuff is what matters. Right?
Wrong. The most impactful thing I did that year was stare at a SQL query for twenty minutes. The code I wrote afterward was trivial. A junior could do it. What a junior couldn't do - and what most seniors don't do - is pause before the fix and ask: what is the bug actually telling us?
Every production incident has a signal in it. Most of the time the signal is: something is broken, fix it. But occasionally - rarely - the signal is: your assumptions about user behaviour are wrong, and the bug just proved it.
No product manager would have proposed "show every user the most expensive plan by default." It sounds predatory. It sounds like a dark pattern. Except the data showed the opposite: users weren't tricked. They made informed decisions. They just needed better defaults.
The uncomfortable takeaway
If you're a backend engineer and you've never looked at the business impact of your code, you're flying blind.
Not because revenue is your job. It's not. But because understanding what your code does to the business changes the kind of decisions you make. It's the difference between "I fixed the bug" and "I found a pricing insight worth six figures a year."
I could have closed the ticket in an hour. Instead, I spent a day in the data, wrote a proposal, and changed the default pricing strategy across multiple markets.
The code was the easiest part. The observation was the whole thing.