For a while, I have been writing about Protective Computing as a discipline.
That matters. But doctrine without contact is just theory wearing armor. If a framework cannot survive a collision with a real product, it is still only language.
So I stopped speaking in abstractions and ran the first public walkthrough.
I published PLS Walkthrough 0001: MyFitnessPal Public Surface Audit as a formal report.
The audit was intentionally scoped to publicly observable product surfaces and public documentation only. No packet capture. No authenticated runtime instrumentation. No reverse engineering. Just the visible architecture, the public promises, the exposed controls, and the policies the product asks people to trust.
Final result: 7.50 out of 100.
Hard fail triggered.
That number is bad enough on its own. The harder truth is what it represents.
Why I chose MyFitnessPal
I did not want an obscure target. I did not want a throwaway app nobody depends on. I wanted a mainstream platform that sits close to the body, close to behavior, and close to the quiet pressure people live under every day.
A food and fitness tracker is not neutral software. It collects routines, measurements, habits, and intimate forms of self-observation. It enters the part of life where people are tired, ashamed, hopeful, depleted, recovering, spiraling, trying again, or simply trying to hold a pattern together long enough to function.
That is exactly where software should be judged more harshly, not less.
What this audit was actually measuring
The Protective Legitimacy Score is not a vibe check. It is not a trust badge. It is not a branding exercise.
It is a structured way of asking whether a system deserves to be trusted under real human conditions.
Not ideal conditions. Not demo conditions. Not investor deck conditions.
Real conditions.
What happens when the user is cognitively overloaded. What happens when they are in pain. What happens when they are being watched. What happens when they do not have perfect energy, perfect privacy, perfect connectivity, or perfect control over their environment.
A lot of software looks acceptable until you introduce reality.
Then the seams start showing.
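To make the mechanics concrete: here is a minimal sketch of how a score like this one can arise, assuming a weighted-criteria rubric where certain criteria can trigger a hard fail on their own. Every criterion name, weight, and value below is a hypothetical illustration, not the published PLS rubric or the actual MyFitnessPal findings; the linked report defines the real method.

```python
# Minimal sketch of hard-fail gated rubric scoring.
# All names, weights, and scores are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float            # share of the 100-point total
    score: float             # 0.0-1.0, judged from the public surface
    hard_fail: bool          # can this criterion alone sink the audit?
    threshold: float = 0.2   # below this, a hard-fail criterion triggers

def evaluate(criteria: list[Criterion]) -> tuple[float, bool]:
    """Return (total out of 100, whether any hard fail triggered)."""
    total = sum(c.weight * c.score for c in criteria)
    failed = any(c.hard_fail and c.score < c.threshold for c in criteria)
    return total, failed

# Hypothetical criteria echoing the public-surface signals discussed later.
criteria = [
    Criterion("data-minimal defaults", 25, 0.05, hard_fail=True),
    Criterion("account independence", 25, 0.10, hard_fail=False),
    Criterion("recovery and deletion paths", 25, 0.10, hard_fail=False),
    Criterion("coercion-aware safety framing", 25, 0.05, hard_fail=True),
]

score, hard_fail = evaluate(criteria)
print(f"{score:.2f} / 100, hard fail: {hard_fail}")  # 7.50 / 100, hard fail: True
```

The gating is the point: a product can accumulate partial credit everywhere and still fail outright, because some failures are not offset by polish elsewhere.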
What the public surface already reveals
One of the more dangerous habits in software criticism is pretending you need full internal access before you are allowed to make a serious judgment.
Sometimes the system tells on itself immediately.
Sometimes the public surface is already enough.
If the visible product posture depends on tracking, account dependence, unclear recovery, exposure-prone defaults, or missing coercion-aware safety framing, that is not a minor detail. That is the architecture speaking in plain sight.
And that is the point of this walkthrough.
Not to claim omniscience. Not to pretend a public-surface audit is the whole story. But to prove something much simpler and much more uncomfortable:
You can often detect structural failure before touching the internals.
Why the fail matters
This is not about theatrics. It is not about making a number sound dramatic. It is about what it means when a mainstream health-adjacent platform can be evaluated from its own public posture and still land at 7.50 out of 100 with a hard fail already triggered.
That should bother people.
Not because the score is sacred. Not because one report ends the conversation. But because the visible layer of the product is already telling you that the burden is being placed in the wrong place.
On the user.
On the tired person. On the sick person. On the person who is expected to navigate settings, disclosures, permissions, exports, deletion paths, and trust boundaries while also trying to live.
That is what the industry keeps getting away with.
Software keeps presenting itself as helpful while quietly assuming stable conditions that many people do not have.
Stable attention. Stable privacy. Stable bandwidth. Stable housing. Stable emotional regulation. Stable safety.
Those assumptions are not neutral. They are load-bearing. And when they collapse, the product’s true design philosophy becomes visible.
The larger problem
Too much writing about privacy and trust still collapses into theater.
A company says it cares about privacy. A product adds a settings menu. A policy page grows longer. A dashboard gains one more toggle. Everyone acts like this is maturity.
It is not maturity if the underlying posture is still fragile. It is not care if the user is still carrying the cognitive burden alone. It is not protection if the design only works for people who are already safe.
That is why I care about Protective Computing.
Because I am not interested in whether a system sounds ethical. I am interested in whether it remains defensible when life stops being clean.
Can the system preserve agency under stress.
Can it reduce exposure instead of merely disclosing it.
Can it degrade honestly.
Can it avoid turning confusion, urgency, or dependency into a coercive condition.
Those are engineering questions.
They deserve engineering answers.
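For illustration, here is one hypothetical way those four questions could be reduced to checks against a product's public surface. The `PublicSurface` fields and pass conditions are my own stand-ins, not the PLS rubric; the point is only that each question becomes evidence you can record and test.

```python
# Hypothetical sketch: the four questions above as testable checks.
# Field names and pass conditions are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class PublicSurface:
    usable_without_account: bool          # core features work logged out
    works_offline: bool                   # degrades to local operation
    sharing_off_by_default: bool          # exposure is opt-in, not opt-out
    failure_modes_documented: bool        # the product names what breaks
    export_and_deletion_documented: bool  # leaving is as clear as joining

def preserves_agency_under_stress(s: PublicSurface) -> bool:
    # Agency: core use does not hinge on an account or a connection.
    return s.usable_without_account and s.works_offline

def reduces_exposure(s: PublicSurface) -> bool:
    # Exposure: quiet by default beats disclosure after the fact.
    return s.sharing_off_by_default

def degrades_honestly(s: PublicSurface) -> bool:
    # Honesty: failure modes are stated in public documentation.
    return s.failure_modes_documented

def avoids_coercive_dependency(s: PublicSurface) -> bool:
    # Dependency: documented export and deletion paths keep exit real.
    return s.export_and_deletion_documented

CHECKS = [preserves_agency_under_stress, reduces_exposure,
          degrades_honestly, avoids_coercive_dependency]

def report(surface: PublicSurface) -> None:
    for check in CHECKS:
        print(f"{check.__name__}: {'pass' if check(surface) else 'fail'}")
```

None of these proxies is perfect, and none requires internal access. That is the trade: coarse checks, but checks anyone can run from the outside.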
Why I published this as a formal report
I did not want this to live as another opinion post floating through the feed. I wanted a real artifact. Something citable. Something stable. Something that can be examined, challenged, reused, and built on.
That is why the first walkthrough exists as a DOI-backed report instead of a loose thread of claims.
If I am going to argue that software should be judged against human vulnerability instead of convenience theater, then I need to be willing to make that judgment in public, under my own name, with a method and a paper trail.
So that is what this is.
The beginning of a series that takes Protective Computing out of the realm of doctrine and puts it under load against real software.
In public. With receipts.
Read the audit
PLS Walkthrough 0001: MyFitnessPal Public Surface Audit
Framework basis: Protective Legitimacy Score (PLS) rubric
Target policy reviewed: MyFitnessPal Privacy Policy
This is the first walkthrough.
It will not be the last.
Because if a framework cannot survive contact with real software, it does not deserve to exist.
And if software cannot withstand evaluation under real human conditions, it does not deserve blind trust.