Khaleel musleh
The Future of AI Agents Is Not the Web. It’s the Operating System.

AI Agents Should Not Live in the Browser

For the last two years, most AI products have followed the same architecture.

A web interface.

A chat box.

An agent running somewhere in the cloud.

But something about this model feels fundamentally wrong.

Because if you look at the history of computing, the real intelligence layer of a machine has never lived in the browser.

It has always lived in the operating system.


The Web Is the Wrong Layer for AI

Most AI agents today operate inside web applications.

You open a browser, type a prompt, and the system calls tools through APIs.

Everything runs in the cloud.

This works for demos, but it creates a strange limitation:

The AI has almost no direct relationship with the machine you actually use.

It cannot naturally:

  • observe your filesystem
  • monitor processes
  • automate your terminal
  • react to system events
  • interact with local workflows

Instead, it operates through HTTP requests.

That is an extremely narrow abstraction layer for something that is supposed to be intelligent.


The Real Intelligence Layer of a Computer

If you look at where real computing power lives, it isn't the browser.

It's the operating system.

The OS controls:

  • processes
  • files
  • networking
  • scheduling
  • permissions
  • hardware

Unix systems exposed this power through something remarkably simple.

Small tools.

Clear interfaces.

Composability.

Examples:

```bash
ps
grep
awk
sed
curl
```

Each program does one thing well.

Together they form a programmable machine.

This is the Unix philosophy.
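
That composability is easiest to see in a pipeline. The toy query below is my own example using only standard tools: `grep` filters rows, `awk` projects a column, `sort` orders the result, and the pipe operator chains them into something none of them does alone.

```shell
# three single-purpose tools composed by pipes:
# filter lines starting with "alice", keep the second column, sort numerically
printf 'alice 42\nbob 7\nalice 9\n' |
  grep '^alice' |
  awk '{print $2}' |
  sort -n
```

Swap any stage for another tool and the rest of the pipeline is unaffected; that interchangeability is the whole point.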

Modern AI Agents Ignore This

Most AI agent frameworks today attempt to orchestrate tasks using:

  • massive prompts
  • complex agent loops
  • hidden execution engines
  • large runtime environments

Everything becomes a monolithic system.

Ironically, Unix solved this orchestration problem decades ago.

Instead of giant programs, it used small composable tools connected by pipes.

Modern AI systems often reinvent this idea poorly inside giant frameworks.

The CodeAct Insight

Recent research such as CodeAct hints at a more interesting direction.

Instead of treating models as systems that generate text, CodeAct treats them as systems that execute actions through code.

The model becomes a reasoning layer.

The environment becomes the machine.

This aligns perfectly with Unix.

Instead of simulating work, the model can interact with real system primitives.

For example:

  • run terminal commands
  • manipulate files
  • automate browsers
  • call APIs
  • orchestrate processes

The operating system becomes the execution environment.

The model becomes the controller.
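
The shape of that split can be sketched in shell terms: treat the model as a process that emits code, and the OS as the executor. Here `model` is a stub function standing in for an LLM call (an assumption for illustration, not any real API); a real system would also sandbox the generated code before running it.

```shell
# CodeAct in miniature: the "model" emits a program, the machine runs it.
# `model` is a stub standing in for an LLM call (illustrative only).
model() {
  echo 'expr 1 + 2'
}

# the environment, not the model, does the actual work
model | sh
```

The controller/executor boundary is just a pipe, which is exactly the Unix-shaped version of the idea.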

AI Agents Should Be OS-Native

If you follow this logic, the architecture changes dramatically.

AI agents should not live inside websites.

They should live inside the operating system.

Just like other system services.

The agent becomes a daemon that can:

  • observe the system
  • react to events
  • run commands
  • orchestrate tools

Instead of a chat interface controlling everything, the AI becomes a system capability.

Your computer gains a reasoning layer.
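
Stripped to a sketch, such a daemon is a loop that reads events and dispatches on them. The event names and responses below are invented for illustration; a real implementation would subscribe to real sources (inotify, the systemd journal, launchd) and hand unfamiliar events to the model.

```shell
# event-loop skeleton for an OS-native agent
# (events simulated on stdin; names and responses are illustrative)
handle_event() {
  case "$1" in
    file_created)  echo "agent: classify and index the new file" ;;
    process_died)  echo "agent: inspect logs, decide whether to restart" ;;
    *)             echo "agent: no policy for '$1', escalate to the model" ;;
  esac
}

printf 'file_created\nprocess_died\n' |
while read -r event; do
  handle_event "$event"
done
```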

The Unix Way to Build AI Agents

The Unix philosophy provides a blueprint.

Programs should:

  • do one thing well
  • accept simple inputs
  • produce predictable outputs
  • compose with other tools

If AI agents follow this model, the architecture becomes much simpler.

Instead of a giant agent, the model orchestrates small capabilities.

Example workflow:

watch downloads folder
→ classify files
→ rename documents
→ generate summary

Or:

research topic
→ gather sources
→ summarize
→ generate markdown report

Each step is just another tool.

The agent orchestrates them.
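
Each arrow in those workflows can be a separate program. As a sketch of the "classify files" step: a tiny tool that does one job and prints a label another tool can consume. The categories are arbitrary placeholders; a smarter version could ask the model instead of matching extensions.

```shell
# one workflow step as a standalone tool: classify a file by extension
# (categories are placeholders, chosen for illustration)
classify() {
  case "$1" in
    *.pdf|*.docx)  echo document ;;
    *.png|*.jpg)   echo image ;;
    *)             echo other ;;
  esac
}

# downstream steps consume the label, e.g. to pick a destination folder
classify report.pdf
classify photo.png
```

Because the step speaks plain text on stdout, the orchestrating agent can replace it, test it, or pipe it anywhere without touching the rest of the workflow.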

The Operating System Is the Real AI Platform

The more you examine this architecture, the clearer it becomes.

The operating system already provides everything AI agents need:

  • process management
  • event systems
  • filesystem access
  • networking
  • scheduling
  • permission boundaries

All the AI needs to do is use those primitives intelligently.
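
Two of those primitives, the filesystem and the process table, are one command away. The check below is a toy, but the point stands: an agent needs no new infrastructure to observe the machine, only a policy for acting on what it sees.

```shell
# observing the machine with existing primitives, no web layer involved
# filesystem: how full is the root volume?
usage=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')

# process table: how many processes are running right now?
procs=$(ps -e | wc -l)

echo "disk ${usage}% full, ${procs} processes"

# a policy layer (the "agent") decides what to do with the numbers
if [ "$usage" -gt 90 ]; then
  echo "agent: disk pressure, consider cleaning caches"
fi
```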

The web layer becomes unnecessary.

A Small Experiment

I've been experimenting with this idea in a small project called Jeriko.

Jeriko runs as a local daemon that connects language models to system primitives like:

  • terminal commands
  • the filesystem
  • browser automation
  • APIs
  • background workflows

The goal isn't to build another chatbot.

The goal is to explore a different architecture:

AI agents that are native to the operating system.

Not web apps.

Not SaaS platforms.

Just another process running on your machine.

Where This Could Go

If AI agents become OS-native, something interesting happens.

Computers stop being passive tools.

They become active systems that understand and execute workflows.

Automation becomes natural.

And the machine itself becomes programmable through language.

But the deeper idea is this:

AI might return computing to something familiar.

Not giant platforms.

Just small tools.

Connected together.

The Unix way.

If you're curious about the experiment:

https://jeriko.ai

I'm especially interested in hearing from people thinking about:

  • AI agents
  • operating systems
  • Unix philosophy
  • CodeAct architectures

Because the real question might be:

Should AI live on the web, or inside the machine itself?

#ai
#machinelearning
#unix
#programming
#opensource