You have probably shipped an AI feature or enabled an AI tool for your team in the last year. Maybe both.
What you probably did not do — and what most teams skip — is audit where your data actually goes once it enters that tool.
A recent post from Questa AI on LinkedIn asked the question plainly:
what are the hidden risks of using AI in enterprises? It did not get the engagement it deserved. This post is an attempt to fix that — with a developer-first lens.
The quick mental model
Think of every enterprise AI integration as having three layers of risk:
Layer 1: Data transit → Where does your input go?
Layer 2: Data retention → Is it stored? For how long? By whom?
Layer 3: Data use → Is it used to train a model you don't own?
Most teams audit Layer 1 (sometimes). Layers 2 and 3 are almost never checked before deployment. By the time they are, the tools are already embedded.
Agentic AI raises the stakes
Basic RAG pipelines are relatively contained. An agentic system is not.
When your AI assistant can plan multi-step tasks, pull from multiple data sources, and take actions autonomously, the attack surface expands to include everything it reads and everything it touches. This is not theoretical.
The Questa AI team published a solid technical breakdown of why this matters architecturally: Agentic RAG — Why Your Enterprise Assistant Needs a Planning Layer. Worth reading if you are building or evaluating any agentic tooling.
The indirect prompt injection problem
This is the one most devs have heard of but few have stress-tested in their own systems:
User uploads a PDF → PDF contains hidden instruction
→ Agent processes PDF as context
→ Agent executes hidden instruction
→ Data exfiltration / privilege escalation
A simple chatbot errors out and stops. An agentic system attempts recovery — and in doing so, often exposes more than it should. NVIDIA and Lakera AI documented this cascade failure pattern in a 2025 red-team exercise on an agentic RAG blueprint.
The three enterprise risks in plain terms
1.Untracked data egress. Employees using external AI tools are making data transfer decisions every time they upload a file. Most vendor ToS permit retention. Most employees have not read the ToS.
2.Hallucination in high-stakes contexts. LLMs generate confident output regardless of correctness. In contracts, compliance, and finance, a fluent wrong answer is worse than no answer.
3.Governance that lives in a doc, not in the system. Written AI policies are not enforced AI policies. Shadow AI is the rule, not the exception.
What the fix looks like architecturally
Questa AI's approach — documented on their solutions page — is built on one principle: redact before you send.
Their local redaction layer anonymizes PII, confidential business data, and client information on your infrastructure before any external model sees the document. The model receives a clean version. You get the insight. The raw data never leaves your perimeter.
Raw doc → [Local Redaction Engine] → Anonymized doc → LLM
← Insight mapped back ←
This is privacy-by-architecture, not privacy-by-policy. The difference is that one is enforceable and one is not.
Where to go deeper
Three pieces worth reading if you want the full picture:
•Hashnode: The Enterprise AI Risk No One Puts in the Slide Deck — concise technical overview
•Substack: Your Enterprise AI Assistant Has a Dangerous Blind Spot — the full long-form argument
•Questa AI Solutions — what privacy-first enterprise AI looks like in practice
TL;DR
Your AI tools are probably transferring more data than your team realizes
Agentic systems expand the attack surface significantly — indirect prompt injection is real
The fix is architectural: redact before sending, not after the breach
Ask your vendor five questions in writing before you sign anything
If this was useful, drop it in your team Slack before the next AI vendor demo. The five minutes it saves in bad contract negotiation is worth it.

Top comments (0)