This is a submission for the 2026 WeCoded Challenge: Echoes of Experience
When people talk about inclusion in tech, the conversation usually starts with access.
Who gets hired.
Who gets funded.
Who gets invited into the room.
That matters.
But there is another question that matters just as much, and it gets asked far less often:
Who was the software built to survive?
Because a lot of software feels inclusive only as long as the user is calm, connected, housed, charged, focused, and safe.
It works beautifully right up until reality enters the interface.
Right up until the person using it is in pain. Or exhausted. Or scared. Or offline. Or displaced. Or trying to make important decisions on a dying phone battery with weak signal and nowhere private to think.
That is where the truth of a system shows itself.
Not in the pitch deck.
Not in the mission statement.
Not in the polished design file.
In the failure mode.
I did not learn that lesson from a conference talk or a polished sprint retrospective.
I learned it the hard way.
I learned it while dealing with pain, stress, housing instability, weak connectivity, low battery, legal pressure, and the humiliating experience of needing a system most at the exact moment it was least capable of meeting me where I was.
There were nights in winter in British Columbia when I sat in a McDonald’s for as long as I could, nursing a single coffee because one more hour indoors mattered. I stayed until they closed the seating area, and then I was outside again. More than once, I slept near the building under a tarp, with an extension cord hooked to an outlet high up near the roofline so I could charge my scooter while I slept. All night, I could hear every car rolling through the drive-thru.
That changes how you understand a loading spinner.
That changes how you understand a recovery flow.
That changes how you understand the phrase “just try again later.”
I have looked at “we sent you a code” differently when signal kept cutting out.
I have looked at password recovery differently when the recovery path assumed uninterrupted attention, stable device access, and enough calm to troubleshoot like nothing else in life was on fire.
I have looked at cloud dependency differently when battery life, connectivity, and personal safety were all unstable at the same time.
Those experiences changed the way I understand technology.
A cloud dashboard stops sounding advanced when you know what it means to depend on a network that might vanish.
A beautiful onboarding flow stops sounding thoughtful when it assumes a quiet room, emotional surplus, and the luxury of making mistakes.
“Sync it later” stops sounding harmless when you know that, for some people, later is where things disappear.
That is the part of inclusion I think tech still struggles to name.
We are getting better at asking who is represented in the industry. That matters deeply. But we are still not honest enough about how many products are built around a hidden assumption of stability:
Stable housing.
Reliable internet.
Consistent power.
Private device access.
Cognitive bandwidth.
Predictable energy.
Institutional trust.
Enough spare calm to recover gracefully when something breaks.
Those are not neutral defaults.
They are privileges disguised as design assumptions.
And because they are rarely named, they quietly shape everything. They shape what gets called intuitive. They shape which failures are tolerated. They shape who gets blamed when the system collapses.
Tech loves the phrase edge case.
But for millions of people, the so-called edge case is not an exception.
It is the baseline.
It is pain.
It is displacement.
It is low battery.
It is device sharing.
It is trying to hold your life together through an interface that was designed as if your life would already be holding.
For a long time, I thought technical excellence mostly meant making systems faster, smoother, smarter, and more automated.
Some of it does. Performance matters. Clarity matters. Good tooling matters.
But I no longer believe speed is the highest proof of care.
A system is not humane because it is frictionless.
A system is not trustworthy because the landing page says “secure.”
A system is not inclusive because it works beautifully for users whose lives already match its assumptions.
Real trust shows up in architecture.
It shows up in whether the tool can still function when the network fails.
It shows up in whether recovery is possible under stress.
It shows up in whether privacy is structural instead of optional.
It shows up in whether usefulness can exist without quietly demanding surrender.
That realization changed how I build.
I stopped thinking about privacy as a settings page and started thinking about it as a boundary the system has no right to cross.
I stopped treating offline support as a feature and started treating it as respect.
I stopped treating reliability as convenience and started seeing it for what it often is:
dignity under pressure.
That shift changes engineering decisions.
Local-first storage stops looking niche.
Graceful degradation stops looking secondary.
Shorter recovery paths stop looking like polish.
Data minimization stops sounding paranoid.
Lower cognitive load stops being a UX preference and becomes a survival requirement.
These are not decorative improvements.
They are moral decisions expressed through technical structure.
Because if software is meant to support human beings under pain, fear, coercion, instability, or exhaustion, then it should not quietly punish them for being human.
And if a product claims to care about trust, then trust should be visible in the system itself, not outsourced to branding, legal language, and hope.
That is a large part of what pushed me toward local-first and privacy-first thinking.
Not because it was trendy.
Because it felt necessary.
I wanted to build software that did not treat unstable people as defective versions of ideal users.
I wanted to build tools that did not require exposure as the cost of usefulness.
I wanted to build software that could still hold its shape when life no longer looked like a product demo.
That may sound philosophical.
It is not.
It is brutally practical.
A person in pain may not be able to navigate a dense form.
A person in crisis may not remember six recovery steps.
A person in an unsafe environment may not be able to risk their data living on someone else’s server.
A person under stress may not need a smarter experience. They may need one that fails less cruelly.
Yet so much of the industry still treats those realities like peripheral accommodations instead of first-order engineering constraints.
To me, that is one of the deepest forms of exclusion tech still struggles to name.
Not just exclusion from opportunity.
Exclusion from usability.
Exclusion from safety.
Exclusion from recoverability.
Exclusion from the basic assumption that your life deserves to remain survivable inside the system itself.
I do not think every developer needs to have lived through instability to understand this.
But I do think the industry improves when more of us take seriously the people who have.
Not as inspiration.
Not as branding.
Not as a resilience anecdote pasted over product ambition.
As sources of design truth.
Because lived experience exposes architectural lies faster than strategy ever will.
It shows you where the defaults break.
It shows you which “best practices” were only best for people with surplus.
It shows you that some systems do not merely inconvenience vulnerable users.
They abandon them exactly when they are most needed.
So yes, inclusion in tech matters at the hiring level. Deeply.
But if we stop there, we leave the harder question untouched:
What kind of life does this system assume is normal?
Because every product embeds an answer.
Every workflow.
Every dependency.
Every default.
Every recovery path.
And if the answer is a life with stable housing, strong signal, private device access, spare focus, emotional bandwidth, institutional trust, and enough calm to troubleshoot on demand, then a lot of what we call good software is only good software for the already protected.
That is the lesson I keep returning to.
A lot of software is built for users at their best.
Very little is built for users at their most fragile.
And the distance between those two choices is often the distance between support and abandonment.
If we want a technology industry worthy of the word inclusive, then we cannot stop at asking who gets to build the future.
We also have to ask:
Who is allowed to remain intact inside the systems we ship?
I wrote this from a question I keep coming back to: what happens when software meets a life that is not stable, private, charged, connected, or calm?
I’d genuinely like to hear from people: what product or system made you realize its default user assumptions were much narrower than they first appeared?