In the early 2010s, cloud services became accessible enough that individual employees and small teams could sign up for business software without any involvement from central IT. The resulting pattern was given a name: Shadow IT. Software was being used in the shadows, out of sight of the people formally responsible for it.
Shadow AI is the same pattern, repeated.
An employee who needs to summarize a long document pastes it into a public chatbot. A product manager drafting requirements does the same. A software engineer debugging a subtle problem pastes in the relevant code.
None of these actions feel like a governance event. They feel like productivity. They often are productivity.
And they are also, in aggregate, one of the largest data exfiltration channels enterprises have seen in years.
Why this version is worse than Shadow IT
Shadow IT was bounded by the nature of SaaS: signing up for a new service required an email, a password, sometimes a credit card. There was a moment of decision, however brief, that left a footprint. Shadow AI has no such moment. Pasting text into a chatbot is indistinguishable from pasting text into a notepad, from the perspective of every monitoring tool the organization already owns.
The data loss prevention systems that flag source code being uploaded to Dropbox do not flag it being pasted into a browser tab. The security information and event management platforms that detect anomalous login patterns do not detect an employee spending three hours a day in a chatbot interface. The application monitoring stack that catches latency spikes in microservices does not catch hallucinated answers being returned to customers.
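The visibility gap is not unbridgeable, though. Most organizations already collect egress proxy logs, and even a crude pass over them surfaces how much traffic reaches public chatbot endpoints. The sketch below assumes a simplified log format and an illustrative domain list; neither reflects any particular product's configuration.

```python
# Minimal sketch: surface proxy-log entries whose destination is a known
# public chatbot domain. The log format and domain list are assumptions
# for illustration, not a real DLP rule set.

# Illustrative set of public generative-AI endpoints to watch.
CHATBOT_DOMAINS = {
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_ai_egress(log_entries):
    """Return entries that sent data to a watched domain.

    Each entry is a dict like:
        {"user": "...", "host": "...", "bytes_sent": int}
    Nonzero bytes_sent on a watched host suggests content was submitted,
    not merely that the page was viewed.
    """
    return [
        e for e in log_entries
        if e["host"] in CHATBOT_DOMAINS and e.get("bytes_sent", 0) > 0
    ]

logs = [
    {"user": "alice", "host": "chatgpt.com", "bytes_sent": 48210},
    {"user": "bob", "host": "dropbox.com", "bytes_sent": 1024},
    {"user": "carol", "host": "claude.ai", "bytes_sent": 0},
]
flagged = flag_ai_egress(logs)  # only alice's entry is flagged
```

This gives visibility, not control: it tells you that something was sent, not what. But it is the same first step that file-transfer governance took a decade earlier.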
The infrastructure for governing Shadow AI does not exist in most organizations — not because the technology is missing, but because the problem is new enough that the category hasn't been named, funded, or staffed.
What the data suggests
Surveys of enterprise employees conducted between 2023 and 2025 converge on a rough consensus. The share of knowledge workers using generative AI tools in their jobs is between 60 and 80 percent, depending on the industry and the survey. The share doing so with explicit permission from their employer is much lower, often around 20 to 30 percent. The gap is the Shadow AI problem, quantified.
The same surveys suggest that the categories of data most commonly pasted into public chatbots include customer communications, internal documents marked confidential, source code, financial data, and in some industries, protected health information. This is not a fringe problem. It is the central pattern of generative AI adoption inside enterprises.
The response is not a ban
The initial response of many organizations, in 2023 and early 2024, was to block public chatbots at the network layer. This proved counterproductive for reasons that were predictable in retrospect. Employees found workarounds, usually by using the tools on personal devices or via mobile networks. Productivity gains that had been silently accruing to the organization were re-routed through channels that were not just ungoverned but actively hidden. The visibility problem got worse, not better.
The more effective responses have involved providing sanctioned alternatives — internal deployments, approved SaaS relationships with data handling guarantees — combined with monitoring and policy infrastructure that treats AI usage the way previous generations of infrastructure treated file transfers and email.
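One way to operationalize the sanctioned-alternative approach is a routing policy rather than a blanket block: sanctioned endpoints pass, known public tools redirect the user to the internal deployment, and everything else is logged for visibility. The tiers, hostnames, and function below are hypothetical, a sketch of the policy shape rather than any vendor's implementation.

```python
# Hypothetical egress policy: treat AI endpoints the way mature egress
# policy treats file transfers. Hostnames here are placeholders.

SANCTIONED = {"ai.internal.example.com"}   # assumed internal deployment
PUBLIC_AI = {"chatgpt.com", "claude.ai"}   # illustrative public tools

def route_request(host):
    """Return the policy action for an outbound AI request."""
    if host in SANCTIONED:
        return "allow"
    if host in PUBLIC_AI:
        # Redirect, don't block: keep the productivity, move it into
        # a governed channel instead of driving it underground.
        return "redirect_to_sanctioned"
    # Default to visibility first, blocking last.
    return "log_only"
```

The notable design choice is the default: unknown destinations are logged, not denied, because the failure mode of over-blocking is exactly the workaround behavior described above.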
This is the work of AI governance. It is not about whether to allow AI. That decision has already been made by the workforce. It is about what allowing AI looks like when it is done with the same operational discipline that is routine in other critical systems.
This post is adapted from Chapter 2 of "AI Governance: The Foundation for Organized AI in Production — A Field Guide", available free under CC BY 4.0 at thinkneo.ai/book.
Disclosure: I run ThinkNEO, which builds the control plane infrastructure discussed in the book. The book is vendor-neutral and the full PDF is preserved with a permanent DOI on Zenodo (CERN).