Unit 42 researchers have analyzed the evolving landscape of AI-integrated malware, identifying two distinct approaches in the wild: "AI theater" and "AI-gated execution." The first case involves a .NET information stealer that incorporates OpenAI's GPT-3.5-Turbo to generate evasion technique names and obfuscation protocols. Analysis reveals, however, that these features are largely superficial and not functionally implemented, serving more as an experimental or deceptive layer than as a practical offensive capability.
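The "AI theater" pattern can be sketched as follows. This is an illustrative reconstruction, not the actual stealer's code: `queryModel` is a hypothetical stand-in for the GPT-3.5-Turbo chat-completion call (returning canned output so the sketch runs offline), and the technique names shown are examples. The point is that the model's output is only stored and printed; no code path ever acts on it, which is what makes the feature cosmetic.

```go
package main

import "fmt"

// queryModel stands in for the stealer's GPT-3.5-Turbo request. In the
// real sample this would be an HTTPS chat-completion call; here it
// returns canned output so the sketch runs without network access.
func queryModel(prompt string) []string {
	_ = prompt // prompt is unused in this offline stand-in
	return []string{"ProcessHollowing", "AMSIPatch", "ETWBypass"}
}

func main() {
	// The model returns a list of evasion-technique names...
	techniques := queryModel("List Windows evasion technique names")

	// ...but they are merely embedded as strings: there is no
	// corresponding implementation, so the "AI feature" is theater.
	for _, t := range techniques {
		fmt.Println("advertised (not implemented):", t)
	}
}
```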
The second case highlights a more sophisticated Golang-based malware dropper that leverages GPT-4 for remote decision-making. By sending host telemetry—such as process lists, AV presence, and system uptime—to the LLM, the malware offloads the environment "safety" assessment to AI before deploying a Sliver payload. This shift from hard-coded heuristics to AI-driven verdicts suggests a trend where threat actors use LLMs to enhance operational security and bypass traditional sandbox detections.