Researchers at Miggo Security have demonstrated a novel prompt injection attack against Google’s Gemini AI assistant that allows for the exfiltration of private Google Calendar data. By crafting a malicious calendar event with natural language instructions in the description, an attacker can trick Gemini into summarizing private schedule details and leaking them through new, attacker-visible events.
The attack is triggered when a victim asks the AI assistant a routine question about their daily schedule, causing Gemini to parse and execute the dormant payload. While Google utilizes an isolated model to detect malicious prompts, this specific technique bypassed defenses because the instructions appeared benign to filters. Google has since implemented mitigations following the disclosure of these findings.
Top comments (0)