Varonis Threat Labs reports a new Copilot vulnerability called “Reprompt” that can enable silent data exfiltration from Microsoft Copilot via a single click. The research highlights that a specially crafted URL parameter can inject prompts, and an attacker-controlled server can then issue follow-up requests to harvest sensitive information from a user’s active Copilot session.
The report outlines three core techniques used in the attack: parameter-to-prompt (injecting instructions via the q URL parameter), a double-request method that circumvents initial leak protections, and a chain-request flow that sustains and automates staged exfiltration. Because the follow-up commands originate server-side, client-side monitoring can be blind to the theft.
Microsoft has confirmed a patch for the issue and advises affected users to apply updates; Varonis notes enterprise Microsoft 365 Copilot customers were not impacted. The disclosure underscores two practical points for organisations:
(1) treat AI assistants and their URL/connector surfaces as distinct security boundaries, and (2) require defensive controls (least-privilege connectors, prompt-sanitization, and monitoring that covers server-side request chains).
Top comments (0)