The Seven F12s
I hit F12 seven times trying to figure out why appointmentTimeUTC was missing from an API response.
Each jump took me somewhere new:
- Component receiving the prop
- Parent passing it down
- Grid selecting the row
- Async fetch populating the grid
- Response mapping logic
- Service making the API call
- Endpoint definition
Seven hops.
The code compiled.
Types existed.
“Go to Definition” worked.
And I still couldn’t answer a simple question:
Where does this value actually come from?
So I gave up and asked Claude to trace the data flow.
Two minutes.
Nothing was broken. Nothing was deprecated. It just… drifted. The field had been renamed to appointmentDateTimeUTC two hours before my git pull, and the rename was never mentioned on Slack.
The problem wasn’t tooling.
The system resisted understanding.
And that resistance is a performance characteristic we don’t measure.
Coherence Is Compression
A coherent system is compressible.
You can describe it with small rules:
- All external data is validated and normalized at the boundary
- One canonical representation per domain concept
- Components never consume raw DTOs
- Timestamps are always UTC ISO strings
Four rules.
Hold them in working memory.
Apply everywhere.
That’s compression.
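In code, the four rules collapse into one choke point. A minimal TypeScript sketch; the names (AppointmentDto, Appointment) are hypothetical, echoing the opening story:

```typescript
// Sketch: validate and normalize at the boundary so everything
// past this function sees one canonical shape.

// Raw DTO, as the API happens to send it today.
interface AppointmentDto {
  id: string;
  appointmentDateTimeUTC?: string; // upstream renames land here...
  appointmentTimeUTC?: string;     // ...and only here
}

// The single canonical representation the rest of the app consumes.
interface Appointment {
  id: string;
  startUtc: string; // always a UTC ISO-8601 string
}

function mapAppointmentDto(dto: AppointmentDto): Appointment {
  const raw = dto.appointmentDateTimeUTC ?? dto.appointmentTimeUTC;
  if (!raw) {
    throw new Error(`Appointment ${dto.id}: no UTC timestamp on the wire`);
  }
  return { id: dto.id, startUtc: new Date(raw).toISOString() };
}
```

Drift still happens. But it happens in one file, not in seven.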
An incoherent system is... well. We've all seen them.
And coherence debt compounds.
This isn’t aesthetics.
It’s throughput.
Cognitive Latency
In a coherent system, most questions resolve within one layer.
"What is TripLeg?" → open TripLeg.ts "What does the API return?" → open TripLegDto.ts "What transforms it?" → open mapTripLegDto.ts
Three hops. Predictable. Stable.
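On disk, that's one file per question. A sketch using the names above (the contents are assumed, not from a real codebase):

```typescript
// TripLeg.ts: the canonical domain shape ("What is TripLeg?")
export interface TripLeg {
  id: string;
  departureUtc: string; // UTC ISO string, per the boundary rule
}

// TripLegDto.ts: whatever the wire actually carries ("What does the API return?")
export interface TripLegDto {
  leg_id: string;
  departure_time: string;
}

// mapTripLegDto.ts: the only place the two shapes meet ("What transforms it?")
export function mapTripLegDto(dto: TripLegDto): TripLeg {
  return {
    id: dto.leg_id,
    departureUtc: new Date(dto.departure_time).toISOString(),
  };
}
```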
In an incoherent system, the same question becomes a distributed trace:
Component → hook → thunk → service → config → endpoint A → endpoint B → conditional mapping → implicit null semantics → scattered date math
That's not runtime latency.
That's cognitive latency.
You can approximate it:
Cognitive Latency = hops × context load per hop
3 hops × 30 seconds = 90 seconds
10 hops × 3 minutes = 30 minutes
And that's assuming by hop 5 you still remember why you started.
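The model fits in a function. A toy sketch; the inputs are the illustrative numbers above, not measurements:

```typescript
// Toy model: cognitive latency = hops × context load per hop.
function cognitiveLatencyMinutes(hops: number, minutesPerHop: number): number {
  return hops * minutesPerHop;
}

const coherent = cognitiveLatencyMinutes(3, 0.5); // 1.5 minutes
const incoherent = cognitiveLatencyMinutes(10, 3); // 30 minutes
```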
The bottleneck in modern software isn't CPU cycles.
It's time-to-understanding.
And here's what makes 2026 different: we now have something that measures cognitive latency empirically.
AI Is a Coherence Stress Test
Velocity in 2026 isn't raw tokens/sec. It's how few clarification loops an agent needs before it ships a safe change.
LLMs are compression engines.
They thrive on:
- Stable shapes
- Consistent naming
- Predictable layering
When a system requires narrative explanation instead of structural inference, that's not an AI limitation.
That's architectural entropy.
AI doesn't fix spaghetti code; it just navigates it faster—until it hallucinates because your naming was inconsistent, or your boundaries were implicit, or your "one canonical shape" rule had seventeen unstated exceptions.
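What that entropy looks like at the type level, in a hypothetical example (the field names echo the opening story; everything else is invented):

```typescript
// Three files, three "truths" about the same instant in time.
// An agent, or a new hire, has to guess which one is canonical.

interface AppointmentRow {
  appointmentTimeUTC: string;     // the old name, still alive in the grid
}

interface AppointmentResponse {
  appointmentDateTimeUTC: string; // the new name, renamed upstream
}

interface AppointmentViewModel {
  apptTime: Date;                 // a third shape: local time, undocumented
}
```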
If your system only works when explained verbally by its original author, it isn't architecture. It's folklore.
And folklore doesn't compress.
The Economics of Friction
Let’s make it concrete.
In a 300K-line system:
- 7 F12s instead of 3 = ~10 extra minutes per investigation
- 5 investigations per day
- 5 developers
That’s ~4 hours per day lost to cognitive overhead.
Over a month?
~80 developer-hours.
At $95/hour:
~$7,600/month (assuming 20 work days and uninterrupted investigation time—realistically, context-switching makes this conservative).
Over $90k/year in cognitive tax.
That’s a senior engineer.
Or the feature you didn’t ship.
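Multiplied out, with every input being an assumption from the list above:

```typescript
// Cognitive tax, using the article's illustrative numbers.
const extraMinutesPerInvestigation = 10; // 7 F12s instead of 3
const investigationsPerDay = 5;
const developers = 5;
const workDaysPerMonth = 20;
const hourlyRate = 95; // USD

const hoursPerDay =
  (extraMinutesPerInvestigation * investigationsPerDay * developers) / 60; // ≈ 4.2
const hoursPerMonth = hoursPerDay * workDaysPerMonth; // ≈ 83
const dollarsPerMonth = hoursPerMonth * hourlyRate;   // ≈ $7,900
const dollarsPerYear = dollarsPerMonth * 12;          // ≈ $95,000

// The prose rounds these down to ~4 h/day, ~80 h/month, ~$7,600/month.
```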
We measure bundle size in kilobytes.
We don’t measure confusion in hours.
The Performance Shift
2015 performance meant flame graphs and Lighthouse scores.
We optimized machines.
2026 performance means:
- Time-to-understanding
- Time-to-safe-change
- Time-to-confidence
Coherence isn’t about eliminating hops. It’s about making them predictable and cheap to reason about. A clean service mesh with explicit contracts can have 12 physical hops and still feel like 3 cognitive ones. The old monolith with implicit shared state and date-math roulette? Eight hops and three hours of dread.
If you care about velocity, optimize for compression:
- Normalize and validate at the boundary.
- One concept, one canonical shape.
- Make invariants explicit in types (see the sketch below).
- Minimize hop distance.
- Count the jumps required to answer simple questions.
If it takes seven jumps to find the source of truth, your system is slow — even if it runs at 60fps.
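One way to make the type-level rule concrete is a branded type, so the "UTC ISO string" invariant is enforced by the compiler rather than by convention. A minimal sketch:

```typescript
// A string the compiler treats as proof of normalization.
type UtcIsoString = string & { readonly __brand: 'UtcIsoString' };

// The boundary is the only way to mint one.
function toUtcIso(input: string | Date): UtcIsoString {
  return new Date(input).toISOString() as UtcIsoString;
}

interface Appointment {
  id: string;
  startUtc: UtcIsoString; // a raw string won't type-check here
}

// ✗ startUtc: response.appointmentDateTimeUTC         (compile error)
// ✓ startUtc: toUtcIso(response.appointmentDateTimeUTC)
```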
The Uncomfortable Truth
Many codebases survive on:
- Institutional knowledge
- AI assistance
- Developer endurance
Remove the institutional knowledge, and onboarding collapses.
Remove the AI, and the cognitive tax becomes unbearable.
We’ve been compensating for incoherence with better tools instead of better boundaries.
That works.
Until it doesn’t.
Coherence isn’t polish.
It’s a performance primitive.
The fastest code isn’t the code that executes quickest.
It’s the code you can understand in one pass.
How many F12s to source of truth in your codebase?
If you can’t answer that number, you’re not measuring performance.
The number exists. You just haven’t been counting it.