
Olivier Miossec

You're Already in AI Control Debt. You Just Haven't Seen the Bill Yet

Tight deadlines, unrealistic timelines, pressure to ship a project on time, or a lack of knowledge often result in what we call technical debt: a future cost, a debt that must be repaid by refactoring or redeploying a system, with interest paid in extra maintenance work and reduced reliability.

This concept was coined in 1992 by Ward Cunningham and has since become a widely used framework to argue for software quality, system sustainability, and long-term organizational efficiency.

Looking at AI adoption today in cloud and software engineering, the parallel is hard to miss. The developer shipping features without review, the office worker using OpenClaw to speed up their tasks, the cloud engineer letting an agent generate deployment documentation, the architect wiring up an unknown MCP server to get a job done. Worse still, many employees use AI with personal accounts and no validation from their IT department; enter the new world of shadow AI.

These situations are commonplace today, and sometimes the results go well beyond what was intended. We're not just delegating tasks; we're delegating decisions.

Think of a critical system failing because agent-generated code wasn't production-grade, a presentation riddled with hallucinated references, secrets or API tokens pushed to a public repository, personal data sent to an LLM, or a rogue MCP plugin taking control of a machine. The thread connecting all these examples is a lack of control, diligence, and discernment.

This raises a real question: is agentic AI as safe as we assume? There are three systemic risks to be aware of: security risk, compliance risk, and quality risk.

In the early days of the cloud, people started using file transfer services without authorization. Strategic files, client data, and confidential information were sent outside company networks with no visibility. It wasn't that people were reckless; they knew it wasn't allowed, but the value proposition was compelling enough: instant file sharing.

The value proposition of AI is well known to anyone in a corporate role. People are already sending files and data to AI systems for proofreading, analysis, and document drafting. That alone is a breach. Agentic AI is simply the next step.

When the OpenClaw project launched under the name Clawbot, it quickly became one of the most popular repositories on GitHub. It's an open-source AI agent that connects to LLMs. It can run workflows, connect to your calendar, CRM, and other applications, a genuine game-changer for anyone who had been waiting for a real personal assistant.

But OpenClaw has a dark side. Between the project's rename on GitHub and the explosion of third-party plugins, malicious elements have polluted the ecosystem. You think you've deployed a plugin to manage your CRM — and you have, but it's also leaking your data to a third party. That email-reading plugin? It may be handing access to your mailbox to a threat actor. Check what the CVE database says about OpenClaw.

OpenClaw isn't the only risk vector here — think about MCP servers. Almost nobody had heard of MCP a few months ago; today there are more than 15,000 available. How many are misconfigured or outright malicious? Far from just helping you to complete tasks, they drastically expand your attack surface. You can check this website.

The same applies to plugins and skills: they can contain malicious scripts or prompt injection payloads. Not vetting what you add to your AI agent can turn it into a liability.

AI agents are also vulnerable to prompt injection when processing external documents, and the broader the permissions granted, the larger the blast radius.
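One practical way to shrink that blast radius is to grant an agent only an explicit allowlist of tools, so a prompt-injected instruction cannot reach anything destructive. A minimal sketch, in Python; the tool names and handlers here are hypothetical, not any real agent framework's API:

```python
from typing import Callable

# Least-privilege tool dispatch: the agent can only call what is listed here.
# Tool names and handlers are illustrative placeholders.
ALLOWED_TOOLS: dict[str, Callable[[str], str]] = {
    "read_calendar": lambda arg: f"calendar entries matching {arg!r}",
    "summarize_doc": lambda arg: f"summary of {arg!r}",
    # Deliberately absent: "send_email", "delete_file", "run_shell", ...
}

def dispatch(tool: str, arg: str) -> str:
    """Execute a tool call only if it appears on the explicit allowlist."""
    handler = ALLOWED_TOOLS.get(tool)
    if handler is None:
        return f"DENIED: tool {tool!r} is not allowlisted"
    return handler(arg)
```

A denied call fails closed rather than silently succeeding, which is the point: an injected "delete everything" instruction has no handler to reach.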

This is the essence of shadow AI, analogous to shadow IT, where employees used unauthorized Dropbox accounts to share data. Here they're using AI agents with the same mindset, but with a far greater potential impact on the company's infrastructure.

But shadow AI isn't limited to agents running on local machines. Anyone in a company can open an AI chatbot; it's just a webpage. People paste in Excel tables, internal documents, or email threads without a second thought. It's nearly invisible from a monitoring perspective, but company data is out the door.

Company data is now flowing to servers outside your control and may be used to train models. Code, medical, insurance, and legal data are already inside these systems. Data ingested by an LLM cannot be truly erased; it should be treated as a permanent data leak. Shadow AI makes this worse: by definition, none of it will ever be audited.

Data leaked to an LLM can include sensitive coding assets: tokens, API keys, and sometimes passwords. But there's a bigger problem. LLMs are incredibly effective at generating content that looks correct. We've recently seen multiple examples of consulting firms and law practices submitting documents filled with entirely fabricated citations.
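A lightweight scan before any text leaves the machine can catch the most obvious secrets. A minimal sketch, assuming regex-based detection; the patterns below cover only a few well-known token formats, where real scanners ship far larger rule sets:

```python
import re

# Illustrative patterns only: AWS access key IDs, GitHub personal access
# tokens, and PEM private key headers.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def contains_secret(text: str) -> bool:
    """Return True if any known secret pattern appears in the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)

# Block the prompt before it reaches the LLM if a secret is detected.
prompt = "Please review this config: AKIAIOSFODNN7EXAMPLE"
safe_to_send = not contains_secret(prompt)
```

This only stops the accidental paste, not a determined user, but it is the kind of guardrail that turns "permanent data leak" into a near miss.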

Code generation isn't exempt from hallucinations either: non-existent APIs, invented function parameters, bogus CLI commands, non-functional logic. At least in this case, the verdict is fast and unambiguous: the code doesn't run.
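That fast verdict can even be automated. A minimal sketch of a pre-merge sanity check that asks whether an API an agent referenced actually exists, using only the standard library (`json.parse` is a classic hallucination, borrowed from JavaScript's `JSON.parse`):

```python
import importlib

def api_exists(module_name: str, attr: str) -> bool:
    """Cheap guard against hallucinated APIs: does this attribute really exist?"""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

api_exists("json", "loads")  # True: a real function
api_exists("json", "parse")  # False: a hallucinated one
```

A check like this doesn't prove the code is correct, only that it can run, which is exactly the distinction the next paragraph turns on.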

More insidious is the sub-optimal code that does run. Security gaps, misconfigurations, systems that can't scale under load, all pushed to production without a second look. The result: incidents at scale.

The root cause is a loss of control over pace. With traditional software engineering, writing and reviewing code operated on a human timeline. With AI, the same output is generated in minutes instead of hours. A single coding agent can produce more in a day than a team can meaningfully review, forcing engineers to ship code they don't fully understand.

Even worse, AI agents will take the most efficient path to complete a task, which is a problem when the intent or context is malformed. The result can be quietly catastrophic. Cases of agents dropping databases or silently erasing data are more common than most teams realize.

AI is relatively new, and agentic AI even more so. The evolution is rapid, and adoption is outpacing it, moving too fast for enterprises to put a governance framework in place. If even the largest tech players have suffered AI failures, it tells us this governance still needs to be invented, and that the AI control debt is already accumulating.
