If you ever feel like your computer is simultaneously incredible and fragile, you’re not imagining it.
We run 2026 workloads—always-online apps, sprawling dependency trees, continuous updates, remote collaboration—on operating system foundations that were largely shaped by late-80s and 90s priorities: single-machine ownership, clear “inside vs. outside” boundaries, and a world where software mostly came from a handful of vendors on physical media.
That mismatch matters because security isn’t just about tools (antivirus, EDR, “hardening guides”). It’s also about what the platform assumes is normal.
Why the old design made sense (back then)
In the 90s, the “typical” personal computer model looked like this:
- One person used one machine.
- The machine lived on a desk, not on hostile networks 24/7.
- Physical access often implied trust.
- Software arrived in boxes, from brands you recognized.
- The risk model was closer to “don’t break the system” than “someone is actively trying to subvert the system.”
So operating systems optimized for:
- Compatibility with existing applications
- Performance on limited hardware
- Centralized control for a local user/admin
- A relatively stable set of device drivers and peripherals
Even Windows NT, first released in 1993, was designed in a world where compatibility was existential. What mattered was that systems could keep running older Windows and DOS-era software, not that they treat all code as potentially hostile by default.
Those priorities weren’t “bad.” They were rational for the era.
The inherited defaults that still shape today’s OSes
Fast forward. The internet became the default environment. Software became a supply chain. Attackers became professionalized. Yet many core OS patterns stayed familiar.
Here are a few “1995 assumptions” that still show up everywhere:
1) Long-lived accounts and ambient authority
Most mainstream OSes still revolve around user accounts that persist indefinitely, with a “superuser” or admin tier that can do almost anything. If an attacker gets a foothold in that context, the system’s own design makes escalation and discovery easier.
It’s like giving someone a visitor badge that quietly turns into a master key if they wander into the right hallway.
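To make that concrete, here is a minimal sketch in C (the file path is hypothetical, and the point is the model, not the specific call): any code running in the user's context can try to reach anything the user can name, and no per-resource grant is ever asked for.

```c
/* Minimal sketch of ambient authority -- the path below is hypothetical.
 * Whatever this process can name, it can try to open, because the only
 * "permission" consulted is the user context it happens to run in. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("/home/alice/.ssh/id_ed25519", "r"); /* hypothetical secret */
    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    char buf[256];
    size_t n = fread(buf, 1, sizeof buf, f);
    printf("read %zu bytes without any explicit, per-file grant\n", n);

    fclose(f);
    return EXIT_SUCCESS;
}
```

A malicious dependency or a phished-in utility inherits exactly that reach the moment it runs.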
2) Permanent state everywhere
Home directories, caches, application support folders, registries, logs, temporary files that aren’t truly temporary—state accumulates. Over months or years, machines become layered with residue from normal use.
Set aside the broader data-retention debate; the architectural point stands on its own: permanence becomes complexity, and complexity becomes opportunity.
3) The OS as a giant all-knowing manager
Traditional OS design concentrates a lot of responsibility in the kernel and core services: storage, drivers, permissions, networking, process management, security decisions. That centrality is efficient—but it also creates big blast radii when something goes wrong.
Backward compatibility: the ice that froze progress
Operating systems don’t get to reinvent themselves on a whim. Backward compatibility is one of the main reasons we still carry old assumptions forward.
Breaking compatibility means:
- enterprise software fails
- hardware drivers fail
- entire workflows collapse
- customers don’t upgrade
So vendors evolve systems in layers. New security controls often arrive as add-ons: sandboxing, permission prompts, code signing, endpoint monitoring, virtualization-based security, etc. They help—but they’re often built on top of an architecture whose baseline assumptions never changed.
And it’s not just Windows. Linux traces its roots to early-90s Unix-like design, and macOS descends from Unix as well, by way of NeXTSTEP and BSD, with deep heritage and compatibility commitments of its own.
The modern threat model the OS didn’t grow up with
Modern attackers don’t need your computer to be “broken.” They just need to exploit trust.
A few realities today’s OSes have to live with:
Software supply chains can be weaponized
The SolarWinds incident is a textbook example of why “just trust updates” is not a complete strategy. SolarWinds reported that as many as 18,000 customers may have downloaded the impacted Orion versions.
When malicious code arrives through a trusted update mechanism, the OS is doing what it was designed to do: accept legitimate software and run it.
Humans still get targeted, but the system amplifies mistakes
The Verizon 2024 Data Breach Investigations Report notes that the “human element” was a component of 68% of breaches in its dataset.
This isn’t “humans are dumb.” It’s that systems are often built so one mistake (a login, a download, a permission grant) can snowball.
“Perimeter thinking” aged out
We used to design around a strong boundary: inside the network = trusted. Outside = untrusted.
But modern work is everywhere—home Wi-Fi, coffee shops, cloud services, contractors, BYOD, SaaS sprawl. Security frameworks have responded by pushing “assume breach” thinking.
NIST’s Zero Trust Architecture guidance (SP 800-207) is explicit on this point: no implicit trust should be granted based on network location or asset ownership.
Microsoft distills this into the phrase “never trust, always verify.”
That’s a modern posture—but most operating systems still have many “trusted by default” behaviors baked into their bones.
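Here is a deliberately simplified contrast in C (every name below is invented for illustration; real policy engines are far richer): the perimeter check infers trust from where a request arrived, while the zero-trust check evaluates each request on verified identity and authorization for the specific resource.

```c
/* Simplified contrast between perimeter-style and zero-trust-style checks.
 * All names here are illustrative; real policy engines are far richer. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct request {
    const char *source_network;  /* where the request arrived from        */
    const char *resource;        /* what it is asking to touch            */
    bool credential_verified;    /* identity proven for *this* request    */
    bool identity_authorized;    /* that identity may access the resource */
};

/* 1995-style perimeter thinking: being "inside" is the permission. */
static bool allow_perimeter(const struct request *r)
{
    return strcmp(r->source_network, "corp-lan") == 0;
}

/* Zero-trust-style: network location is never consulted; every request
 * must carry its own verified identity and authorization. */
static bool allow_zero_trust(const struct request *r)
{
    return r->credential_verified && r->identity_authorized;
}

int main(void)
{
    struct request stolen_laptop_on_lan = {
        .source_network = "corp-lan",
        .resource = "payroll-db",
        .credential_verified = false,
        .identity_authorized = false,
    };

    printf("perimeter check:  %s\n",
           allow_perimeter(&stolen_laptop_on_lan) ? "allowed" : "denied");
    printf("zero-trust check: %s\n",
           allow_zero_trust(&stolen_laptop_on_lan) ? "allowed" : "denied");
    return 0;
}
```

Run against a stolen-but-on-the-LAN device, the first check says “allowed” and the second says “denied”, which is the whole point of “never trust, always verify.”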
Why patching and hardening didn’t fix the fundamentals
Patching is necessary. Hardening is helpful. But neither changes the inherited assumptions that create systemic risk.
Security bolted onto legacy architecture can feel like reinforcing an old house:
- You can add locks (EDR, sandboxing, permission prompts).
- You can install cameras (logging, monitoring).
- You can upgrade the doors (secure boot, code signing).
But if the floor plan still routes everything through one hallway, you haven’t changed the core fragility—you’ve just improved your chances of detecting trouble.
What a “modern default” assumption set might look like
If we were designing OS defaults for today—without dragging 1995 behind us—we’d likely flip the assumptions:
- Software may be hostile even if it looks legitimate.
- Compromise is plausible; containment matters more than optimism.
- Authority should be granular, not ambient.
- Verification should be continuous, not occasional.
That leads to design directions like:
- Smaller trusted computing bases (microkernel approaches, tighter core)
- Capability-based security (explicit tokens for specific actions rather than broad “admin” power; see the file-descriptor sketch after this list)
- Session-based or ephemeral execution models that reduce long-lived exposure windows
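One way to feel the difference between ambient and granular authority on a conventional Unix-like system: an open file descriptor already behaves a lot like a capability. The sketch below is illustrative only (the file is just an example, and a real design would also sandbox the worker so it cannot open new paths on its own): the parent grants a worker exactly one handle rather than the whole user context.

```c
/* Sketch of capability-style delegation using a file descriptor.
 * The parent decides which single file the worker may read and passes
 * only that handle; the worker is never handed a path or broad "admin"
 * power over the rest of the system. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* The worker's entire filesystem authority is this one descriptor. */
static void worker(int fd)
{
    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n >= 0)
        printf("worker read %zd bytes from the one file it was granted\n", n);
}

int main(void)
{
    /* Example file standing in for "the one thing this task needs". */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    worker(fd);   /* delegate the capability, not the user's whole context */
    close(fd);
    return 0;
}
```

Capability-based designs push this pattern all the way down: a component’s authority is the set of handles it was explicitly given, nothing more.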
There isn’t one “correct” architecture. There are tradeoffs. But the key is whether the OS is still anchored to assumptions that no longer match reality.
Practical examples: New systems that challenge inherited OS assumptions
Several operating systems and research projects intentionally question the long-standing defaults around trust, persistence, and authority. They differ in goals and tradeoffs, but each illustrates how OS design changes when you stop assuming the environment is benign.
NØNOS
NØNOS explores a zero-trust, microkernel-based design where trust is minimized and verification is emphasized. Instead of assuming the operating system or its components are inherently safe, it focuses on capability-based controls and constrained execution environments.
Tails (The Amnesic Incognito Live System)
Tails is designed around the idea that persistence is a liability. It runs as a live system, routes traffic through Tor by default, and leaves no traces on the host machine unless explicitly configured to do so. The underlying assumption is simple: the safest state is one that doesn’t last.
Qubes OS
Qubes OS applies strong isolation as a first principle. Applications and tasks are separated into virtualized “qubes,” so a compromise in one domain does not automatically endanger the rest of the system. Its core assumption is that breaches are expected—and containment matters more than prevention.
OpenBSD
OpenBSD emphasizes correctness, code auditing, and secure defaults. While more traditional in architecture than some experimental systems, it demonstrates how conservative design, minimalism, and explicit security posture can significantly reduce risk without relying on complex add-ons.
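A concrete flavor of that posture is the pledge(2) and unveil(2) pair, which lets a program voluntarily shrink what it may do after startup. The sketch below is illustrative rather than taken from any real program (the unveiled path is made up), and it compiles only on OpenBSD.

```c
/* Sketch of OpenBSD-style self-restriction with unveil(2) and pledge(2).
 * The unveiled directory is hypothetical; the pattern is what matters. */
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Expose only one directory, read-only; all other paths disappear. */
    if (unveil("/var/db/myapp", "r") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)   /* lock the unveil list */
        err(1, "unveil lock");

    /* Promise to use only stdio and read-only file access from here on.
     * Violating the promise terminates the process. */
    if (pledge("stdio rpath", NULL) == -1)
        err(1, "pledge");

    puts("running with a sharply reduced kernel attack surface");
    return 0;
}
```

The design choice is telling: instead of bolting monitoring on from the outside, the program itself declares the narrow set of behaviors that should ever be considered normal.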
seL4
The seL4 microkernel focuses on formal verification: mathematically proving that the kernel enforces its security properties as designed. It challenges the assumption that kernel behavior must be trusted on faith, showing that parts of an OS can be proven correct rather than assumed correct.
None of these systems claim to be a universal replacement for mainstream operating systems. What they demonstrate instead is that many “unchangeable” OS assumptions—about trust, persistence, isolation, and authority—are design choices, not laws of nature.
Inertia isn’t neutral
Operating systems aren’t “stuck in the past” because engineers are lazy. They’re stuck because compatibility is powerful, users hate breakage, and ecosystems are enormous.
But the threat model moved anyway.
So when we keep old assumptions—long-lived trust, ambient authority, permanent state everywhere—we’re not preserving stability. We’re accepting a mismatch between how systems behave and how adversaries operate.
The real question isn’t “why aren’t OSes perfect?” It’s:
Which assumptions are we still carrying that no longer deserve to be default?
Top comments (1)
I agree - the issue isn’t that these OS kernels have been around so long; it’s that their original trust assumptions persist. Modern security comes from constraining behavior and limiting agency. Tools like Tails show that security now comes from enforced constraints: containment, sandboxing, mandatory access controls, immutable systems, and disposable sessions.