Soon Seah Toh

The Entire AI Agent Industry Might Be Building Plumbing Nobody Needs

The Billion-Dollar Question

Hot take: the entire AI agent industry might be building plumbing nobody needs. And we could be looking at billions of dollars wasted.

Thousands of startups. Tens of thousands of engineers. Billions in VC funding. All building agentic frameworks, tool definitions, orchestration logic, retry handlers, context managers, custom loops. Thousands of lines of code just to make AI do things.

Meanwhile, Claude Code and Codex already exist. They represent some of the largest AI engineering efforts on the planet. Anthropic and OpenAI have poured billions into making these agents work reliably. They can read files, run commands, search codebases, reason across complex systems, and take action.

The entire agentic AI ecosystem is collectively spending billions rebuilding capabilities that already exist in two products.

The Real Hard Part

After two decades of building enterprise software, here's what I've realized: the hard part of AI agents was never the loop. It was never the orchestration. It was always the data.

What if the entire industry has the problem backwards?

What if instead of building custom agentic frameworks, you just structure your data well into a filesystem, point an existing world-class agent at it, and let it reason?

Filesystem as API

No tool schemas. No orchestration code. No custom framework. No billion-dollar rebuild.

Want to add a new capability? Drop a SKILL.md file. New data source? Write to a directory. No redeployment. No schema changes.

The concept is simple: treat your filesystem as the API layer between your data and an existing agent that already knows how to reason, explore, and take action.
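Concretely, a layout might look something like this. The names are illustrative, not a convention any tool requires; the point is that each directory answers a question an agent would otherwise need a custom tool for:

```
data/
├── logs/            # recent application logs, one file per service
├── metrics/         # exported metric snapshots (JSON or CSV)
├── configs/         # service configuration files
├── runbooks/        # operational runbooks in markdown
├── topology.json    # service dependency graph
└── SKILL.md         # describes a capability the agent can pick up
```

An agent that can read files and list directories needs nothing more to start exploring.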

Testing It

I tested this with our observability platform. Dumped system state (logs, metrics, configs, topology, runbooks) into a well-structured directory. Pointed Claude Code at it.

It explored the directory. Read the files. Correlated across data sources. Identified root causes. Answered questions like a senior engineer who'd been on the team for 10 years.

No custom tools. No framework. Just well-structured data and a world-class agent that already knows how to reason.
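A minimal sketch of the export step, assuming a layout like the one above (function name, paths, and fields are all hypothetical, not part of any product):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def export_snapshot(root: str, logs: str, metrics: dict, topology: dict) -> Path:
    """Write one point-in-time snapshot of system state into a
    directory layout an agent can explore with plain file reads."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    snap = Path(root) / stamp
    (snap / "logs").mkdir(parents=True)
    (snap / "metrics").mkdir()
    (snap / "logs" / "app.log").write_text(logs)
    (snap / "metrics" / "latest.json").write_text(json.dumps(metrics, indent=2))
    (snap / "topology.json").write_text(json.dumps(topology, indent=2))
    # A short README orients the agent before it starts exploring.
    (snap / "README.md").write_text(
        "# Snapshot\n\n"
        "- logs/: raw application logs\n"
        "- metrics/: latest metric values\n"
        "- topology.json: service dependency graph\n"
    )
    return snap
```

From there, "pointing Claude Code at it" is just launching it with that snapshot directory as the working directory.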

Where It Falls Short

Let's be honest about the limitations:

  • Latency: Sub-second responses aren't possible yet. Agent loops take 30-60 seconds for complex queries.
  • Freshness: Filesystem is a snapshot, not real-time streaming data.
  • Determinism: Purpose-built tools give more predictable behavior for narrow tasks.
  • Security: Directory access means broader data exposure than fine-grained API scoping.

The Pragmatic Answer

The pragmatic answer is hybrid. Use world-class agents like Claude Code for open-ended reasoning: troubleshooting, root cause analysis, architecture questions. Use purpose-built tools for narrow, fast, deterministic tasks: health checks, alerting, metric queries.
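The split can be sketched in a few lines. This is a toy router under assumed names (the keyword table, stub handlers, and `agent` callback are all hypothetical): narrow, deterministic tasks hit purpose-built handlers, and everything open-ended falls through to a reasoning agent.

```python
from typing import Callable

# Hypothetical fast paths: narrow tasks with stubbed deterministic handlers.
FAST_PATHS: dict[str, Callable[[str], str]] = {
    "health": lambda q: "all services healthy",   # stub health check
    "metric": lambda q: "p99 latency: 412ms",     # stub metric query
}

def route(query: str, agent: Callable[[str], str]) -> str:
    """Keyword-matched fast paths first; open-ended queries go to the agent."""
    for keyword, handler in FAST_PATHS.items():
        if keyword in query.lower():
            return handler(query)
    # e.g. shell out to an agent CLI, or call a hosted model API
    return agent(query)
```

The design choice is that the deterministic path never touches the agent, so health checks and metric lookups stay fast and predictable while troubleshooting keeps its full reasoning power.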

The Controversial Part

90% of what the "agentic AI" industry is building right now is open-ended reasoning dressed up as custom tooling. Teams are spending months and burning millions on orchestration that Claude Code or Codex already handles better out of the box.

We might look back at this era the way we look back at companies that built their own databases in the 90s, or their own cloud infrastructure in the 2000s. A massive, industry-wide misallocation of engineering talent and capital, building commodity infrastructure instead of focusing on what actually matters.

The real moat was never the agent. It was always the data underneath it.

Stop building plumbing. Start structuring data. The world's best agents are already built. Use them.


Cloud Vista v15 is our take on this. After two decades of building enterprise observability, we focused on structuring the data right and letting AI reason over it.
