Richard Ketelsen

78% of AI Users Bring Unapproved Tools to Work — There's a Structural Fix

The Data

Two independent studies, published in 2024 and 2025, converged on the same number:

  • Microsoft/LinkedIn 2024 Work Trend Index: 31,000 knowledge workers, 31 countries, conducted by Edelman Data & Intelligence. Finding: 75% use AI at work. Of those, 78% use unapproved tools (BYOAI).
  • WalkMe 2025 AI in the Workplace Survey: 1,000 U.S. working adults who use AI, conducted by Propeller Insights. Finding: 78% admit to using AI tools their employer never approved.

Same percentage, independent methodologies, different sample populations. That convergence is what makes the figure credible: two unrelated research efforts arrived at the same result.

Why It Matters for Developers

Every unapproved AI interaction is a data handling question. When a developer pastes proprietary code into an unvetted AI chatbot, that code hits someone else's infrastructure under someone else's data retention policy. When a PM uploads a roadmap to a free AI tool, that document exists on servers your organization never evaluated.

For anyone working in regulated industries or with sensitive intellectual property, the risk compounds. Most free AI tools have terms of service that permit using inputs for model training. Some retain data indefinitely. The specific policies vary, but the common problem is clear: you don't control what happens to the data after it leaves your machine.

IBM's 2025 Cost of a Data Breach Report found shadow AI adds an average of $670,000 to breach costs. That's not the cost of the breach itself; it's the additional cost attributable to the unapproved AI component alone.

The Structural Problem

The pattern repeats across organizations:

1. Organization approves AI Tool X
2. Tool X handles ~60% of workflows
3. Employees find Tools Y, Z for the rest
4. Tools Y, Z are SaaS platforms with their own data policies
5. IT has no visibility into Tools Y, Z
6. Data leaks through channels that don't exist in the threat model

Enforcement doesn't scale when 78% of AI users are finding alternatives. You can't write policies faster than employees can find workarounds, and the underlying incentive, getting work done efficiently, always wins.

The problem isn't people. It's architecture. Centralized AI platforms create a gap between approved capability and actual need. Employees fill that gap with whatever's available.

A File-Based Alternative

CRAFT Framework approaches this differently. Instead of another platform to approve, evaluate, and monitor, CRAFT stores AI workflows as plain text files on your own machine:

```
~/craft-projects/
├── recipes/          # Reusable prompt templates
├── cookbooks/        # Organized recipe collections
├── projects/
│   ├── project-001/
│   │   ├── chat-history.txt
│   │   ├── project-config.txt
│   │   └── handoffs/
│   └── project-002/
└── backups/          # cp -r. That's it.
```

No platform. No installation. No vendor to vet. No approval process. Files work with any AI chat tool — Claude, ChatGPT, Gemini, or the next one. You're not adopting software. You're organizing text files.

The key insight: CRAFT separates the workflow from the tool. Your prompts, templates, and project context are just files. The AI tool is the execution layer. Since the files never leave your machine, there's no shadow IT category to manage and no data going anywhere you don't control.
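Because the layout is just directories and plain text files, setting it up takes a few shell commands. A minimal sketch (the `CRAFT_ROOT` variable and the `project-001` names are illustrative, not part of any official tooling):

```shell
# Base directory for CRAFT files; override by exporting CRAFT_ROOT first
CRAFT_ROOT="${CRAFT_ROOT:-$HOME/craft-projects}"

# Top-level layout: recipes, cookbooks, backups
mkdir -p "$CRAFT_ROOT/recipes" "$CRAFT_ROOT/cookbooks" "$CRAFT_ROOT/backups"

# One example project with the per-project files from the tree above
mkdir -p "$CRAFT_ROOT/projects/project-001/handoffs"
touch "$CRAFT_ROOT/projects/project-001/chat-history.txt" \
      "$CRAFT_ROOT/projects/project-001/project-config.txt"
```

Everything here is standard POSIX tooling, which is the point: there's nothing to install and nothing to approve.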

Backup strategy? `cp -r craft-projects/ /backup/drive/`. That's the entire disaster recovery plan.
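If you want slightly more than a bare `cp -r`, a dated copy gives you point-in-time snapshots. A sketch, assuming hypothetical `CRAFT_ROOT` and `BACKUP_DIR` locations:

```shell
# Where the CRAFT files live and where backups go (both paths illustrative)
CRAFT_ROOT="${CRAFT_ROOT:-$HOME/craft-projects}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/craft-backups}"

# Make sure both ends exist, then copy the whole tree into a dated folder
mkdir -p "$CRAFT_ROOT" "$BACKUP_DIR"
cp -r "$CRAFT_ROOT" "$BACKUP_DIR/craft-projects-$(date +%Y-%m-%d)"
```

Run it from cron or a shell alias; restoring is just copying a dated folder back.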

The 78% BYOAI problem exists because centralized platforms create a gap between what's approved and what people need. CRAFT removes the platform from the equation entirely.

Beta is open: craftframework.ai


Sources: Microsoft/LinkedIn 2024 Work Trend Index; WalkMe 2025 AI in the Workplace Survey; IBM 2025 Cost of a Data Breach Report
