Google Threat Intelligence Group (GTIG) has identified a zero-day exploit for an unnamed open-source web administration tool that was likely developed using AI. The exploit, designed to bypass two-factor authentication (2FA), featured Python code with characteristics typical of large language models (LLMs), such as educational docstrings and hallucinated CVSS scores. This discovery highlights a shift in threat actor methodology, moving towards AI-assisted vulnerability discovery for complex logic-based flaws.
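As a rough illustration (not from the GTIG report), the stylistic fingerprints described here can be checked for mechanically. The sketch below is a hypothetical heuristic that scans source text for two of the markers GTIG mentions: tutorial-style docstrings and CVSS scores embedded in the code itself, which real-world exploit code rarely self-documents and which LLMs sometimes hallucinate outright. The function name and regexes are illustrative assumptions, not anything from the report.

```python
import re

def llm_style_markers(source: str) -> list[str]:
    """Flag stylistic traits often associated with LLM-generated code.

    Illustrative heuristic only; the patterns are assumptions,
    not GTIG's actual detection logic.
    """
    markers = []
    # Tutorial-style docstrings: triple-quoted blocks containing
    # step-by-step or explanatory phrasing
    if re.search(r'"""[\s\S]*?(?:Step \d|This function)[\s\S]*?"""', source):
        markers.append("educational docstring")
    # CVSS scores cited inline; an LLM may fabricate the value entirely
    if re.search(r'CVSS[:\s]*\d+\.\d', source, re.IGNORECASE):
        markers.append("embedded CVSS score")
    return markers

sample = '''
def check(token):
    """This function validates the token.

    Step 1: parse the token.
    Related CVE has CVSS: 9.8 (critical).
    """
    return token is not None
'''
print(llm_style_markers(sample))
# ['educational docstring', 'embedded CVSS score']
```

Heuristics like this produce false positives (plenty of human-written code has verbose docstrings), so they are a triage signal rather than proof of AI authorship.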
Beyond this specific incident, Google's report notes that state-sponsored actors from China, North Korea, and Russia are increasingly industrializing their use of AI. This includes generating decoy code to obfuscate malware like CANFAIL, using voice cloning for social engineering, and integrating Gemini APIs into Android malware like PromptSpy for autonomous device interaction. To scale these operations, attackers are building automated infrastructure to manage access to premium AI models.